WorldWideScience

Sample records for contribution analysis algorithm

  1. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    International Nuclear Information System (INIS)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    2008-01-01

Speech emotion recognition uses many features. If all of them are employed to recognize emotions, redundant features may exist, the recognition result is unsatisfactory, and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on a contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features using the contribution analysis algorithm of the NN. Cluster analysis is applied to assess the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
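
The abstract does not reproduce the contribution formula, so the following is only a hedged sketch of one common weight-based contribution measure (Garson-style) for a trained single-hidden-layer network. The 95 input features, six emotion classes, and top-24 cut come from the abstract; the hidden size and the random weights are placeholders:

```python
import numpy as np

def contribution_scores(W_in, W_out):
    """Garson-style contribution of each input feature, computed from the
    weights of a trained single-hidden-layer network.

    W_in  : (n_features, n_hidden) input-to-hidden weights
    W_out : (n_hidden, n_classes)  hidden-to-output weights
    """
    # Share of each input in each hidden unit, weighted by that unit's
    # overall influence on the outputs.
    hidden_influence = np.abs(W_out).sum(axis=1)         # (n_hidden,)
    shares = np.abs(W_in) / np.abs(W_in).sum(axis=0)     # (n_features, n_hidden)
    scores = (shares * hidden_influence).sum(axis=1)     # (n_features,)
    return scores / scores.sum()

# Rank the 95 extracted features and keep the top 24, as in the abstract.
rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(95, 30)), rng.normal(size=(30, 6))
selected = np.argsort(contribution_scores(W_in, W_out))[::-1][:24]
```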

  2. Panel acoustic contribution analysis.

    Science.gov (United States)

    Wu, Sean F; Natarajan, Logesh Kumar

    2013-02-01

Formulations are derived to analyze the relative panel acoustic contributions of a vibrating structure. The essence of this analysis is to correlate the acoustic power flow from each panel to the radiated acoustic pressure at any field point. The acoustic power is obtained by integrating, over each panel, the normal component of the surface acoustic intensity, i.e., the product of the surface acoustic pressure and the normal surface velocity, both reconstructed using Helmholtz equation least-squares based nearfield acoustical holography. The significance of this methodology is that it enables one to analyze and rank the relative acoustic contributions of individual panels of a complex vibrating structure to acoustic radiation anywhere in the field based on a single set of acoustic pressures measured in the near field. Moreover, this approach is valid for both interior and exterior regions. Examples of using this method to analyze and rank the relative acoustic contributions of a scaled vehicle cabin are demonstrated.
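
In symbols, the per-panel power described above takes the standard surface-intensity form (notation assumed here, not quoted from the paper):

```latex
% Time-averaged acoustic power radiated by panel j: integral over the panel
% surface S_j of the normal surface intensity, i.e. the product of the
% reconstructed surface pressure p and normal surface velocity v_n.
P_j = \int_{S_j} \bar{I}_n \, \mathrm{d}S
    = \int_{S_j} \tfrac{1}{2}\,\mathrm{Re}\{\, p \, v_n^{*} \,\} \, \mathrm{d}S
```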

  3. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit-cost random-access memory machine model. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  4. Methods in algorithmic analysis

    CERN Document Server

    Dobrushkin, Vladimir A

    2009-01-01

    …helpful to any mathematics student who wishes to acquire a background in classical probability and analysis … This is a remarkably beautiful book that would be a pleasure for a student to read, or for a teacher to make into a year's course.-Harvey Cohn, Computing Reviews, May 2010

  5. Automatic differentiation algorithms in model analysis

    NARCIS (Netherlands)

    Huiskes, M.J.

    2002-01-01


    In this thesis automatic differentiation algorithms and derivative-based methods

  6. Stochastic simulation algorithms and analysis

    CERN Document Server

    Asmussen, Soren

    2007-01-01

Sampling-based computational methods have become a fundamental part of the numerical toolset of practitioners and researchers across an enormous number of applied domains and academic disciplines. This book provides a broad treatment of such sampling-based methods, together with mathematical analysis of the convergence properties of the methods discussed. The reach of the ideas is illustrated by discussing a wide range of applications and the models that have found wide usage. The first half of the book focuses on general methods, whereas the second half discusses model-specific algorithms. Given the wide range of examples, exercises and applications, students, practitioners and researchers in probability, statistics, operations research, economics, finance and engineering, as well as in biology, chemistry and physics, will find the book of value.

  7. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  8. Smoothed Analysis of Local Search Algorithms

    NARCIS (Netherlands)

    Manthey, Bodo; Dehne, Frank; Sack, Jörg-Rüdiger; Stege, Ulrike

    2015-01-01

Smoothed analysis is a method for analyzing the performance of algorithms for which classical worst-case analysis fails to explain the performance observed in practice. Smoothed analysis has been applied to explain the performance of a variety of algorithms in recent years. One particular class of

  9. New algorithm for risk analysis in radiotherapy

    International Nuclear Information System (INIS)

    Torres, Antonio; Montes de Oca, Joe

    2015-01-01

Risk analyses applied to radiotherapy treatments have become an undeniable necessity, considering the dangers generated by the combination of powerful radiation fields applied to patients and the occurrence of human errors and equipment failures during these treatments. The technique par excellence for executing these analyses has been the risk matrix. This paper presents the development of a new algorithm to execute the task with broad graphic and analytic potentialities, transforming it into a very useful option for risk monitoring and the optimization of quality assurance. The system SECURE-MR, the basic software implementation of this algorithm, has been successfully used in risk analyses of different kinds of radiotherapy. Compared to previous methods, it offers new possibilities of analysis, considering risk-controlling factors such as the robustness of reducers of initiator frequency and of consequences. Its analytic capacities and graphs allow novel developments to classify risk-contributing factors and to represent information processes as well as accident sequences. The paper shows the application of the proposed system to a generic radiotherapy treatment process using a linear accelerator. (author)

  10. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, and neural-based SVD algorithms. It also discusses in detail various analysis methods for the convergence, stabilizing, and self-stabilizing properties of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm; therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.

  11. Data structures and algorithm analysis in C++

    CERN Document Server

    Shaffer, Clifford A

    2011-01-01

With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Microsoft C++ as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis. Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, f

  12. Data structures and algorithm analysis in Java

    CERN Document Server

    Shaffer, Clifford A

    2011-01-01

    With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Java as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis. Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, familiari

  13. Analysis and Improvement of Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Xi-Guang Li

    2017-02-01

The Fireworks Algorithm (FWA) is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each FWA operator, this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm further boosts performance and pursues global optimization mainly through the following strategies. First, the population is initialized using opposition-based learning. Second, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation is used for non-optimal individuals and elite opposition-based learning for the optimal individual. Finally, a new selection strategy, Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.
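
Of the strategies listed, opposition-based learning initialization is simple to sketch. A minimal version under the usual bounds-based definition of opposition (the paper's exact variant may differ):

```python
import numpy as np

def obl_init(fitness, n_pop, lb, ub, rng=None):
    """Opposition-based learning initialization: draw a random population,
    form the opposite population lb + ub - x, and keep the n_pop fittest
    individuals of the union (minimization assumed)."""
    rng = rng or np.random.default_rng()
    X = rng.uniform(lb, ub, size=(n_pop, len(lb)))
    union = np.vstack([X, lb + ub - X])   # candidates plus their opposites
    f = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(f)[:n_pop]]

# Example on the sphere function over [-5, 5]^10.
lb, ub = np.full(10, -5.0), np.full(10, 5.0)
pop = obl_init(lambda x: float(np.sum(x**2)), n_pop=20, lb=lb, ub=ub)
```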

  14. Distributed Algorithms for Time Optimal Reachability Analysis

    DEFF Research Database (Denmark)

    Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand

    2016-01-01

Time optimal reachability analysis is a novel model-based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in UPPAAL, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that distributed algorithms work much faster than sequential algorithms and have good speedup in general.

  15. Geometric Algorithms for Trajectory Analysis

    NARCIS (Netherlands)

    Staals, Frank

    2015-01-01

Technology such as the Global Positioning System (GPS) has made tracking moving entities easy and cheap. As a result there is a large amount of trajectory data available, and an increasing demand for tools and techniques to analyze such data. We consider several analysis tasks for trajectory data,

  16. Sparse canonical correlation analysis: new formulation and algorithm.

    Science.gov (United States)

    Chu, Delin; Liao, Li-Zhi; Ng, Michael K; Zhang, Xiaowei

    2013-12-01

    In this paper, we study canonical correlation analysis (CCA), which is a powerful tool in multivariate data analysis for finding the correlation between two sets of multidimensional variables. The main contributions of the paper are: 1) to reveal the equivalent relationship between a recursive formula and a trace formula for the multiple CCA problem, 2) to obtain the explicit characterization for all solutions of the multiple CCA problem even when the corresponding covariance matrices are singular, 3) to develop a new sparse CCA algorithm, and 4) to establish the equivalent relationship between the uncorrelated linear discriminant analysis and the CCA problem. We test several simulated and real-world datasets in gene classification and cross-language document retrieval to demonstrate the effectiveness of the proposed algorithm. The performance of the proposed method is competitive with the state-of-the-art sparse CCA algorithms.
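
For reference, the classical CCA objective that the paper builds on, in standard notation (assumed here); sparse CCA variants typically add a cardinality or l1-type penalty on the projection vectors a and b:

```latex
% Canonical correlation analysis: find projection vectors a, b maximizing the
% correlation between Xa and Yb, with \Sigma_{xx}, \Sigma_{yy}, \Sigma_{xy}
% the (cross-)covariance matrices of the two variable sets.
\max_{a,\,b}\;
\rho(a,b) =
\frac{a^{\top}\Sigma_{xy}\, b}
     {\sqrt{a^{\top}\Sigma_{xx}\, a}\;\sqrt{b^{\top}\Sigma_{yy}\, b}}
```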

  17. Trends in physics data analysis algorithms

    International Nuclear Information System (INIS)

    Denby, B.

    2004-01-01

    The paper provides a new look at algorithmic trends in modern physics experiments. Based on recently presented material, it attempts to draw conclusions in order to form a coherent historical picture of the past, present, and possible future of the field of data analysis techniques in physics. The importance of cross disciplinary approaches is stressed

  18. Discrete Fourier analysis of multigrid algorithms

    NARCIS (Netherlands)

    van der Vegt, Jacobus J.W.; Rhebergen, Sander

    2011-01-01

    The main topic of this report is a detailed discussion of the discrete Fourier multilevel analysis of multigrid algorithms. First, a brief overview of multigrid methods is given for discretizations of both linear and nonlinear partial differential equations. Special attention is given to the

  19. Interior point algorithms theory and analysis

    CERN Document Server

    Ye, Yinyu

    2011-01-01

    The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject.

  20. Smoothed analysis: analysis of algorithms beyond worst case

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko

    2011-01-01

Many algorithms perform very well in practice but have poor worst-case performance. The reason for this discrepancy is that worst-case analysis is often far too pessimistic a measure of the performance of an algorithm. In order to provide a more realistic performance measure that can explain the

  1. Noise elimination algorithm for modal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bao, X. X., E-mail: baoxingxian@upc.edu.cn [Department of Naval Architecture and Ocean Engineering, China University of Petroleum (East China), Qingdao 266580 (China); Li, C. L. [Key Laboratory of Marine Geology and Environment, Institute of Oceanology, Chinese Academy of Sciences, Qingdao 266071 (China); Xiong, C. B. [The First Institute of Oceanography, State Oceanic Administration, Qingdao 266061 (China)

    2015-07-27

Modal analysis is an ongoing interdisciplinary physical issue. Modal parameter estimation is applied to determine the dynamic characteristics of structures under vibration excitation. Modal analysis is more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm of structured low-rank approximation combined with the complex exponential method to estimate the modal parameters. Physical experiments using a steel cantilever beam with ten mounted accelerometers, excited by an impulse load, demonstrate that this method can significantly eliminate noise from measured signals and accurately identify the modal frequencies and damping ratios. This study provides a fundamental mechanism of noise elimination using structured low-rank approximation in physical fields.
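
A minimal sketch of the structured low-rank step in the spirit of the abstract, using the common Cadzow iteration on a Hankel matrix (the rank, iteration count, and window are assumptions, not the paper's values):

```python
import numpy as np
from scipy.linalg import hankel, svd

def cadzow_denoise(y, rank, n_iter=20):
    """Structured low-rank (Cadzow-style) denoising: alternate a rank-r
    truncation of the signal's Hankel matrix with a projection back to
    Hankel structure by anti-diagonal averaging."""
    n = len(y)
    m = n // 2 + 1                        # number of Hankel rows
    x = np.asarray(y, dtype=complex)
    for _ in range(n_iter):
        H = hankel(x[:m], x[m - 1:])      # H[i, j] = x[i + j]
        U, s, Vh = svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vh[:rank]   # rank-r truncation
        # Average anti-diagonals (constant i + j) back into a signal.
        x = np.array([np.diag(H[::-1], k - (m - 1)).mean() for k in range(n)])
    return x
```

The complex exponential method would then be applied to the denoised signal to extract the modal frequencies and damping ratios.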

  2. A cluster analysis on road traffic accidents using genetic algorithms

    Science.gov (United States)

    Saharan, Sabariah; Baragona, Roberto

    2017-04-01

    The analysis of traffic road accidents is increasingly important because of the accidents cost and public road safety. The availability or large data sets makes the study of factors that affect the frequency and severity accidents are viable. However, the data are often highly unbalanced and overlapped. We deal with the data set of the road traffic accidents recorded in Christchurch, New Zealand, from 2000-2009 with a total of 26440 accidents. The data is in a binary set and there are 50 factors road traffic accidents with four level of severity. We used genetic algorithm for the analysis because we are in the presence of a large unbalanced data set and standard clustering like k-means algorithm may not be suitable for the task. The genetic algorithm based on clustering for unknown K, (GCUK) has been used to identify the factors associated with accidents of different levels of severity. The results provided us with an interesting insight into the relationship between factors and accidents severity level and suggest that the two main factors that contributes to fatal accidents are "Speed greater than 60 km h" and "Did not see other people until it was too late". A comparison with the k-means algorithm and the independent component analysis is performed to validate the results.

  3. Framelets and wavelets algorithms, analysis, and applications

    CERN Document Server

    Han, Bin

    2017-01-01

    Marking a distinct departure from the perspectives of frame theory and discrete transforms, this book provides a comprehensive mathematical and algorithmic introduction to wavelet theory. As such, it can be used as either a textbook or reference guide. As a textbook for graduate mathematics students and beginning researchers, it offers detailed information on the basic theory of framelets and wavelets, complemented by self-contained elementary proofs, illustrative examples/figures, and supplementary exercises. Further, as an advanced reference guide for experienced researchers and practitioners in mathematics, physics, and engineering, the book addresses in detail a wide range of basic and advanced topics (such as multiwavelets/multiframelets in Sobolev spaces and directional framelets) in wavelet theory, together with systematic mathematical analysis, concrete algorithms, and recent developments in and applications of framelets and wavelets. Lastly, the book can also be used to teach on or study selected spe...

  4. Discrete geometric analysis of message passing algorithm on graphs

    Science.gov (United States)

    Watanabe, Yusuke

    2010-04-01

We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required, but these are computationally intractable in general. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e., has no cycles, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and may exhibit oscillatory and non-convergent behavior. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for the analysis of the LBP algorithm, connecting properties of the graph with those of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems including the (non-)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate theoretical properties of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
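
For reference, the standard LBP message updates on a factor graph (notation assumed; f_a are the factors, N(.) denotes graph neighborhoods):

```latex
% Loopy belief propagation: messages between variable nodes i and factor
% nodes a are updated iteratively until (possible) convergence.
\mu_{i \to a}(x_i) \propto \prod_{b \in N(i) \setminus \{a\}} \mu_{b \to i}(x_i),
\qquad
\mu_{a \to i}(x_i) \propto \sum_{x_{N(a) \setminus \{i\}}} f_a(x_{N(a)})
\prod_{j \in N(a) \setminus \{i\}} \mu_{j \to a}(x_j)
```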

  5. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.
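
The abstract names the four DoC stages but not their formulas; the sketch below is therefore a clearly hypothetical reading in which features are ranked once, nested subsets are cross-validated, and the features of every subset clearing Acc_Thresh accumulate credit. The ranking method, classifier, and crediting rule are illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def doc_scores(X, y, acc_thresh=0.7):
    """Hypothetical four-stage DoC sketch: feature ranking, nested subset
    generation, subset evaluation, and DoC scoring."""
    # Stage 1: rank features (random-forest importance as one ensemble member).
    rank = np.argsort(RandomForestClassifier(n_estimators=200, random_state=0)
                      .fit(X, y).feature_importances_)[::-1]
    doc = np.zeros(X.shape[1])
    for k in range(1, len(rank) + 1):       # Stage 2: nested subsets
        subset = rank[:k]
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, subset], y, cv=5).mean()   # Stage 3
        if acc >= acc_thresh:               # Stage 4: credit contributing features
            doc[subset] += acc
    return doc
```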

  6. Convergence analysis of canonical genetic algorithms.

    Science.gov (United States)

    Rudolph, G

    1994-01-01

This paper analyzes the convergence properties of the canonical genetic algorithm (CGA) with mutation, crossover and proportional reproduction applied to static optimization problems. It is proved by means of homogeneous finite Markov chain analysis that a CGA will never converge to the global optimum regardless of the initialization, crossover operator and objective function. But variants of CGAs that always maintain the best solution in the population, either before or after selection, are shown to converge to the global optimum due to the irreducibility property of the underlying original nonconvergent CGA. These results are discussed with respect to the schema theorem.

  7. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  8. Novel mathematical algorithm for pupillometric data analysis.

    Science.gov (United States)

    Canver, Matthew C; Canver, Adam C; Revere, Karen E; Amado, Defne; Bennett, Jean; Chung, Daniel C

    2014-01-01

Pupillometry is used clinically to evaluate retinal and optic nerve function by measuring the pupillary response to light stimuli. We have developed a mathematical algorithm to automate and expedite the analysis of non-filtered, non-calculated pupillometric data obtained from mouse pupillary light reflex recordings, i.e., dynamic pupillary diameter recordings following exposure to varying light intensities. The non-filtered, non-calculated pupillometric data are passed through a low-pass finite impulse response (FIR) filter. Thresholding is used to remove data caused by eye blinking, loss of pupil tracking, and/or head movement. Twelve physiologically relevant parameters were extracted from the collected data: (1) baseline diameter, (2) minimum diameter, (3) response amplitude, (4) re-dilation amplitude, (5) percent of baseline diameter, (6) response time, (7) re-dilation time, (8) average constriction velocity, (9) average re-dilation velocity, (10) maximum constriction velocity, (11) maximum re-dilation velocity, and (12) onset latency. No significant differences were noted between parameters derived from algorithm-calculated values and manually derived results (p ≥ 0.05). This mathematical algorithm will expedite endpoint data derivation and eliminate human error in the manual calculation of pupillometric parameters from non-filtered, non-calculated pupillometric values. Subsequently, these values can be used as reference metrics for characterizing the natural history of retinal disease. Furthermore, it will be instrumental in the assessment of functional visual recovery in humans and pre-clinical models of retinal degeneration and optic nerve disease following pharmacological or gene-based therapies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
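
A hedged sketch of the described pipeline with scipy; the cutoff frequency, filter length, baseline window, and blink-detection rule are assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def pupil_parameters(trace, fs, cutoff_hz=4.0, numtaps=51):
    """Low-pass FIR filtering plus threshold-based artifact removal, then
    extraction of a few of the twelve parameters listed in the abstract."""
    trace = np.asarray(trace, dtype=float)
    # Threshold out blink/tracking-loss artifacts (implausibly small samples)
    # and bridge them by linear interpolation.
    bad = trace < 0.2 * np.nanmedian(trace)
    clean = trace.copy()
    clean[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), trace[~bad])
    # Zero-phase low-pass FIR filtering.
    smoothed = filtfilt(firwin(numtaps, cutoff_hz, fs=fs), [1.0], clean)
    baseline = smoothed[: int(0.5 * fs)].mean()   # assume 0.5 s pre-stimulus
    t_min = int(np.argmin(smoothed))
    return {
        "baseline_diameter": baseline,
        "minimum_diameter": smoothed[t_min],
        "response_amplitude": baseline - smoothed[t_min],
        "response_time_s": t_min / fs,
    }
```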

  9. Editorial: Special Issue on Algorithms for Sequence Analysis and Storage

    Directory of Open Access Journals (Sweden)

    Veli Mäkinen

    2014-03-01

This special issue of Algorithms is dedicated to approaches to biological sequence analysis that have algorithmic novelty and potential for fundamental impact in methods used for genome research.

  10. A sampling algorithm for segregation analysis

    Directory of Open Access Journals (Sweden)

    Henshall John

    2001-11-01

Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Markov chain Monte Carlo (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or found, its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated.
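
For reference, the Metropolis-Hastings acceptance rule driving the descent-graph sampler (standard form; notation assumed here):

```latex
% Acceptance probability for a proposed descent graph G' given the current
% graph G, with q the proposal distribution and \pi the target distribution:
\alpha(G \to G') = \min\!\left(1,\;
\frac{\pi(G')\, q(G \mid G')}{\pi(G)\, q(G' \mid G)}\right)
```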

  11. Algorithms

    Indian Academy of Sciences (India)

polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  12. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

We introduce parameterized-style analysis to online algorithms, based on Denning's working set model, and express the performance of well-known algorithms in terms of this parameter. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.

  13. Genetic algorithms and supernovae type Ia analysis

    International Nuclear Information System (INIS)

    Bogdanos, Charalampos; Nesseris, Savvas

    2009-01-01

We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages, and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model

  14. Analysis of a parallel multigrid algorithm

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.

  15. Evolutionary Algorithms Application Analysis in Biometric Systems

    Directory of Open Access Journals (Sweden)

    N. Goranin

    2010-01-01

Wide usage of biometric information for person identity verification purposes, terrorist act prevention measures, and authentication process simplification in computer systems has raised significant attention to the reliability and efficiency of biometric systems. Modern biometric systems still face many reliability and efficiency related issues, such as reference database search speed, errors in recognizing biometric information, or automating biometric feature extraction. Current scientific investigations show that application of evolutionary algorithms may significantly improve biometric systems. In this article we provide a comprehensive review of the main scientific research done in the sphere of evolutionary algorithm application for biometric system parameter improvement.

  16. Novel algorithms for protein sequence analysis

    NARCIS (Netherlands)

    Ye, Kai

    2008-01-01

Each protein is characterized by its unique sequential order of amino acids, the so-called protein sequence. Biology's paradigm is that this order of amino acids determines the protein's architecture and function. In this thesis, we introduce novel algorithms to analyze protein sequences. Chapter 1

  17. Analysis and enumeration algorithms for biological graphs

    CERN Document Server

    Marino, Andrea

    2015-01-01

In this work we review the main techniques for enumeration algorithms and show four examples of enumeration algorithms that can be applied to efficiently deal with some biological problems modelled using biological networks: enumerating central and peripheral nodes of a network, enumerating stories, enumerating paths or cycles, and enumerating bubbles. Notice that the corresponding computational problems we define are of more general interest and our results hold in the case of arbitrary graphs. Enumerating all the most and least central vertices in a network according to their eccentricity is an example of an enumeration problem whose solutions are polynomial and can be listed in polynomial time, very often in linear or almost linear time in practice. Enumerating stories, i.e. all maximal directed acyclic subgraphs of a graph G whose sources and targets belong to a predefined subset of the vertices, is on the other hand an example of an enumeration problem with an exponential number of solutions...

  18. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  19. An improved algorithm for connectivity analysis of distribution networks

    International Nuclear Information System (INIS)

    Kansal, M.L.; Devi, Sunita

    2007-01-01

In the present paper, an efficient algorithm for connectivity analysis of moderately sized distribution networks is suggested. The algorithm is based on the generation of all possible minimal system cutsets. It is efficient because it identifies only the necessary and sufficient system failure conditions in n-out-of-n type distribution networks. The proposed algorithm is demonstrated with the help of saturated and unsaturated distribution networks. Its computational efficiency is justified by comparing the computational effort with that of the previously suggested appended spanning tree (AST) algorithm. The proposed technique has the added advantage that it can be utilized to generate system inequalities, which is useful in reliability estimation of capacitated networks.

  20. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract feature points from the two images. Secondly, a Normalized Cross Correlation (NCC) function performs approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm optimizes the feature points to obtain the final matching result, and fast, accurate image registration is realized. The experimental results show that the proposed algorithm improves matching accuracy while ensuring the real-time performance of the algorithm.
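
A compact sketch of the pipeline with OpenCV and numpy; the patch size, NCC threshold, and RANSAC tolerance are assumptions, and the paper's parallax-constrained K-means pruning step is omitted here:

```python
import cv2
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def register(img1, img2, half=7, min_ncc=0.8):
    # 1) Harris corners in both grayscale images.
    pts1 = cv2.goodFeaturesToTrack(img1, 500, 0.01, 10, useHarrisDetector=True)
    pts2 = cv2.goodFeaturesToTrack(img2, 500, 0.01, 10, useHarrisDetector=True)
    pts1 = pts1.reshape(-1, 2).astype(int)
    pts2 = pts2.reshape(-1, 2).astype(int)

    def patch(img, p):
        x, y = p
        return img[y - half:y + half + 1, x - half:x + half + 1]

    # 2) Approximate matching of feature points with NCC.
    pairs = []
    for p in pts1:
        pa = patch(img1, p)
        if pa.shape != (2 * half + 1, 2 * half + 1):
            continue
        scores = [(ncc(pa, patch(img2, q)), q) for q in pts2
                  if patch(img2, q).shape == pa.shape]
        if scores:
            best, q = max(scores, key=lambda t: t[0])
            if best >= min_ncc:
                pairs.append((p, q))

    # 3) Outlier rejection with RANSAC (requires at least 4 pairs).
    src = np.float32([p for p, _ in pairs])
    dst = np.float32([q for _, q in pairs])
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    inliers = mask.ravel() == 1
    return H, src[inliers], dst[inliers]
```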

  1. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, i.e., a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  2. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
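
In standard notation (assumed here, not quoted from the paper), the Widrow-Hoff-type update analyzed is of the form below; the modified version lets the reduction factor vary with time:

```latex
% Widrow-Hoff (LMS-type) weight update with reduction factor (step size)
% \mu_k, fixed input pattern x, and noisy error e_k = d_k - w_k^{\top} x:
w_{k+1} = w_k + \mu_k \, e_k \, x
```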

  3. Forex markets: technical analysis and algorithmic trading

    OpenAIRE

    Klepić, Igor

    2013-01-01

Technical analysis is an efficient way of analysing financial markets such as forex, where currencies are traded. The forex market is the biggest market of all and is closed only during weekends. The analysis uses past behaviour of the market and, with different mathematical methods, tries to predict its future movement. With the help of technical analysis, the forex market allows individuals as well as large financial corporations to efficiently manage their wealth while th...

  4. DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Tewfik Ahmed H

    2006-01-01

Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data, for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.

  5. Quantum algorithms for topological and geometric analysis of data

    Science.gov (United States)

    Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo

    2016-01-01

    Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491

  6. A theoretical analysis of the median LMF adaptive algorithm

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen; Rusu, C.

    1999-01-01

Higher order adaptive algorithms are sensitive to impulse interference. In the case of the LMF (Least Mean Fourth) algorithm, an easy and effective way to reduce this is to median filter the instantaneous gradient of the LMF algorithm. Although previously published simulations have indicated that this reduces the speed of convergence, no analytical studies have yet been made to prove this. In order to enhance the usability, this paper presents a convergence and steady-state analysis of the median LMF adaptive algorithm. As expected, this proves that the median LMF has a slower convergence and a lower steady...
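
A hedged sketch of the idea for an adaptive FIR filter: the instantaneous LMF gradient e^3 * u is median filtered over a short window before the weight update (step size and window length are assumptions):

```python
import numpy as np

def median_lmf(x, d, order=8, mu=1e-4, window=3):
    """Median LMF: median filter the instantaneous LMF gradient, per
    coefficient, over the last `window` iterations before updating."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    history = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # most recent samples first
        e[n] = d[n] - w @ u
        history = (history + [e[n] ** 3 * u])[-window:]
        w += mu * np.median(np.array(history), axis=0)
    return w, e

# Toy usage: identify a short FIR channel from noisy observations.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
d = np.convolve(x, [0.5, -0.3, 0.1])[: len(x)] + 0.01 * rng.normal(size=len(x))
w, e = median_lmf(x, d)
```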

  7. Discrete Hadamard transformation algorithm's parallelism analysis and achievement

    Science.gov (United States)

    Hu, Hui

    2009-07-01

The Discrete Hadamard Transform (DHT) is widely applied in real-time signal processing, but the operation speed of DSPs is limited. This article investigates parallelization of the DHT and analyzes its parallel performance. Based on the programming structure of the multiprocessor platform TMS320C80, two kinds of parallel DHT algorithms are implemented. Several experiments demonstrated the effectiveness of the proposed algorithms.
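
For concreteness, a standard fast Walsh-Hadamard transform; the independent butterfly updates within each stage are what make the DHT amenable to the parallelization the article studies (sequential reference version, not the TMS320C80 implementation):

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (input length must be a power of two).
    Within each stage, the block updates are independent of one another,
    which is what a parallel implementation exploits."""
    a = np.array(a, dtype=float)
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```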

  8. Convergence Analysis for the Multiplicative Schwarz Preconditioned Inexact Newton Algorithm

    KAUST Repository

    Liu, Lulu

    2016-10-26

    The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm, based on decomposition by field type rather than by subdomain, was recently introduced to improve the convergence of systems with unbalanced nonlinearities. This paper provides a convergence analysis of the MSPIN algorithm. Under reasonable assumptions, it is shown that MSPIN is locally convergent, and desired superlinear or even quadratic convergence can be obtained when the forcing terms are picked suitably.


  10. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    Science.gov (United States)

    2018-02-15

Report under contract FA8750-14-2-0072, program element 62788F. Figures: the 3D processing pipeline flowchart showing key modules; overall view (data flow) of the proposed pipeline. The pipeline builds on a structure-from-motion and bundle adjustment algorithm, with fusion of depth masks of the scene obtained from 3D

  11. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  12. Parameter determination for quantitative PIXE analysis using genetic algorithms

    International Nuclear Information System (INIS)

    Aspiazu, J.; Belmont-Moreno, E.

    1996-01-01

For biological and environmental samples, the PIXE technique is particularly advantageous for elemental analysis, but quantitative analysis entails complex calculations that require knowledge of more than a dozen parameters. Using a genetic algorithm, the authors give here an account of the procedure to obtain the best values of the parameters necessary to fit the efficiency of an X-ray detector. The values of some variables involved in quantitative PIXE analysis were manipulated in a way similar to how genetic information is treated in a biological process. The authors ran the algorithm until they reproduced, within the confidence interval, the elemental concentrations corresponding to a reference material.
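
A minimal real-coded genetic algorithm in the spirit of the abstract; the operators, rates, and the toy efficiency model in the usage example are illustrative assumptions, not the paper's:

```python
import numpy as np

def fit_parameters(residual, bounds, pop_size=60, generations=200,
                   mut_sigma=0.05, seed=0):
    """Minimal real-coded GA: tournament selection on negative residual,
    arithmetic crossover, Gaussian mutation, clipping to parameter bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fit = -np.array([residual(p) for p in pop])
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] > fit[j], i, j)]   # tournament selection
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += rng.normal(0, mut_sigma, children.shape) * (hi - lo)
        pop = np.clip(children, lo, hi)
    return min(pop, key=residual)

# Toy usage: fit p0, p1 of an assumed efficiency model eff(E) = p0 * exp(-p1 E).
E = np.linspace(1.0, 10.0, 30)
meas = 0.8 * np.exp(-0.3 * E)
best = fit_parameters(lambda p: float(np.sum((p[0] * np.exp(-p[1] * E) - meas) ** 2)),
                      bounds=[(0.0, 2.0), (0.0, 1.0)])
```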

  13. Performance analysis of manufacturing systems : queueing approximations and algorithms

    NARCIS (Netherlands)

    Vuuren, van M.

    2007-01-01

Performance Analysis of Manufacturing Systems: Queueing Approximations and Algorithms. This thesis is concerned with the performance analysis of manufacturing systems. Manufacturing is the application of tools and a processing medium to the transformation of raw materials into finished goods for sale.

  14. Brain-inspired algorithms for retinal image analysis

    NARCIS (Netherlands)

    ter Haar Romeny, B.M.; Bekkers, E.J.; Zhang, J.; Abbasi-Sureshjani, S.; Huang, F.; Duits, R.; Dasht Bozorg, Behdad; Berendschot, T.T.J.M.; Smit-Ockeloen, I.; Eppenhof, K.A.J.; Feng, J.; Hannink, J.; Schouten, J.; Tong, M.; Wu, H.; van Triest, J.W.; Zhu, S.; Chen, D.; He, W.; Xu, L.; Han, P.; Kang, Y.

    2016-01-01

    Retinal image analysis is a challenging problem due to the precise quantification required and the huge numbers of images produced in screening programs. This paper describes a series of innovative brain-inspired algorithms for automated retinal image analysis, recently developed for the RetinaCheck

  15. A Flocking Based algorithm for Document Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL

    2006-01-01

Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel flocking-based approach for document clustering analysis. Our flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks and fish schools. Unlike other partition clustering algorithms such as K-means, the flocking-based algorithm does not require initial partition seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy retrieval and visualization of the clustering result. Inspired by the self-organized behavior of bird flocks, we represent each document object as a flock boid. The simple local rules followed by each boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection of 100 news articles collected from the Internet. Our results show that the flocking clustering algorithm achieves better performance than K-means and the ant clustering algorithm for real document clustering.
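
A minimal sketch of the three classic boid rules (alignment, cohesion, separation) that such flocking algorithms build on; the radii and weights are assumptions, and the document-similarity term that steers similar documents together is omitted:

```python
import numpy as np

def step(pos, vel, r=5.0, w_align=0.05, w_cohere=0.01, w_separate=0.1):
    """One update of boid positions/velocities on a 2D grid."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < r) & (d > 0)                    # neighbors within radius r
        if nbr.any():
            new_vel[i] += w_align * (vel[nbr].mean(0) - vel[i])   # alignment
            new_vel[i] += w_cohere * (pos[nbr].mean(0) - pos[i])  # cohesion
            new_vel[i] -= w_separate * (pos[nbr] - pos[i]).mean(0)  # separation
    return pos + new_vel, new_vel

rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 50, (100, 2)), rng.normal(0, 0.5, (100, 2))
for _ in range(200):
    pos, vel = step(pos, vel)
```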

  16. Algorithm of reducing the false positives in IDS based on correlation Analysis

    Science.gov (United States)

    Liu, Jianyi; Li, Sida; Zhang, Ru

    2018-03-01

This paper proposes an algorithm for reducing false positives in IDS based on correlation analysis. First, the algorithm analyzes the distinguishing characteristics of false positives and real alarms and preliminarily screens out false positives; it then clusters the alarms by attribute similarity, further reducing the number of alarms; finally, in accordance with the characteristics of multi-step attacks, it associates alarms by causal relationship. The paper also proposes a reverse-causation algorithm, based on the attack association methods of previous work, that turns alarm information into a complete attack path. Experiments show that the algorithm reduces the number of alarms, improves the efficiency of alarm processing, and contributes to attack purpose identification and alarm accuracy improvement.

  17. Use of the MULTINEST algorithm for gravitational wave data analysis

    International Nuclear Information System (INIS)

    Feroz, Farhan; Hobson, Michael P; Gair, Jonathan R; Porter, Edward K

    2009-01-01

    We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.
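
For context, nested sampling (which MULTINEST implements) estimates the Bayesian evidence by the standard quadrature (notation assumed):

```latex
% Nested sampling turns the multidimensional evidence integral over the prior
% \pi into a one-dimensional sum over shrinking prior-volume elements w_i
% with associated likelihoods L_i:
Z = \int L(\theta)\,\pi(\theta)\, d\theta \;\approx\; \sum_i L_i\, w_i
```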


  19. A comparative analysis of biclustering algorithms for gene expression data

    Science.gov (United States)

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837

  20. Data analysis algorithms for gravitational-wave experiments

    International Nuclear Information System (INIS)

    Bonifazi, P.; Ferrari, V.; Frasca, S.; Pallottino, G.V.; Pizzella, G.

    1978-01-01

The analysis of the sensitivity of a gravitational-wave antenna system shows that the role of the algorithms used for the analysis of the experimental data is comparable to that of the experimental apparatus. After a discussion of the processing performed on the input signals by the antenna and the electronic instrumentation, we derive a mathematical model of the system. This model is then used as a basis for the discussion of a number of data analysis algorithms, including the Wiener-Kolmogoroff optimum filter; the performances of the algorithms are presented in terms of signal-to-noise ratio and sensitivity to short bursts of resonant gravitational waves. The theoretical results are in good agreement with the experimental results obtained with a small cryogenic antenna (24 kg).
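
For reference, the frequency-domain form of the Wiener-Kolmogoroff optimum filter mentioned above (standard textbook form; notation assumed rather than taken from the paper):

```latex
% Optimum linear filter for recovering a signal with power spectral density
% S_s(f) from data contaminated by noise with power spectral density S_n(f):
W(f) = \frac{S_s(f)}{S_s(f) + S_n(f)}
```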

  1. Design and analysis of cryptographic algorithms

    DEFF Research Database (Denmark)

    Kölbl, Stefan

    In today’s world computers are ubiquitous. They can be found in virtually any industry and most households own at least one personal computer or have a mobile phone. Apart from these fairly large and complex devices, we also see computers on a much smaller scale appear in everyday objects... to this development. However, most of this communication happens over inherently insecure channels, requiring methods to protect our communication. A further issue is the vast amount of data generated, which raises serious privacy concerns. Cryptography provides the key components for protecting our communication... from securing our passwords and personal data to protecting mobile communication from eavesdroppers and our electronic bank transactions from manipulation. These applications would be impossible without cryptography. The main topic of this thesis is the design and security analysis of the most...

  2. Experimental methods for the analysis of optimization algorithms

    CERN Document Server

    Bartz-Beielstein, Thomas; Paquete, Luis; Preuss, Mike

    2010-01-01

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different...

  3. Depth data research of GIS based on clustering analysis algorithm

    Science.gov (United States)

    Xiong, Yan; Xu, Wenli

    2018-03-01

    GIS data are spatially distributed: geographic data have both spatial and attribute characteristics, and they also change over time, so the volume of data is very large. Nowadays, many industries and departments in society use GIS; however, without a proper data analysis and mining scheme, GIS cannot achieve its full effectiveness and much of the data goes to waste. In this paper, we take the geographic information needs of a national security department as the experimental object and, combining the characteristics of GIS data and taking into account time, space and attributes, apply a cluster analysis algorithm. We further study a mining scheme for depth data and obtain the algorithm model. This algorithm can automatically classify sample data and then carry out exploratory analysis. The research shows that the algorithm model and the information mining scheme can quickly find hidden depth information beneath the surface data of GIS, thus improving the efficiency of the security department. The algorithm can also be extended to other fields.

  4. Examination of Speed Contribution of Parallelization for Several Fingerprint Pre-Processing Algorithms

    Directory of Open Access Journals (Sweden)

    GORGUNOGLU, S.

    2014-05-01

    In minutiae-based fingerprint systems, fingerprints need to be pre-processed. The pre-processing is carried out to enhance the quality of the fingerprint and to obtain more accurate minutiae points. Reducing the pre-processing time is important for identification and verification in real-time systems, and especially for databases holding large amounts of fingerprint information. Parallel processing and parallel CPU computing can be considered the distribution of processes over a multi-core processor, achieved using parallel programming techniques; reducing the execution time is the main objective of parallel processing. In this study, the pre-processing of a minutiae-based fingerprint system is implemented with parallel processing on multi-core computers using OpenMP, and on a graphics processor using CUDA, to improve execution time. The execution times and speedup ratios are compared with those of a single-core processor. The results show that execution time is substantially improved by parallel processing. The improvement ratios obtained for the different pre-processing algorithms allowed us to make suggestions on the more suitable approaches for parallelization.

  5. Generation and Analysis of Algorithms for the Study of QBDs

    Directory of Open Access Journals (Sweden)

    Ricardo C. Goméz-Vargas

    2013-11-01

    The analysis and characterization of telecommunication systems is generally performed using stochastic models. This paper presents two algorithms for the analysis of quasi-birth-and-death processes (QBDs), implemented for two types of queues, PH/PH/1 and M/M/1; we show their results and compare their execution times and errors. The development uses matrix-analytic methods based on Markov chains, starting from the generator matrix.

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  7. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n - 2 is the solution to the above ...

  8. Predicting Smoking Status Using Machine Learning Algorithms and Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Charles Frank

    2018-03-01

    Smoking has been proven to negatively affect health in a multitude of ways. As of 2009, smoking was considered the leading cause of preventable morbidity and mortality in the United States, continuing to plague the country’s overall health. This study investigates the viability and effectiveness of some machine learning algorithms for predicting the smoking status of patients based on their blood tests and vital readings. The analysis is divided into two parts. In part 1, we use one-way ANOVA with SAS to show the statistically significant difference in blood test readings between smokers and non-smokers. The results show that the difference in INR, which measures the effectiveness of anticoagulants, was significant in favor of non-smokers, which further confirms the health risks associated with smoking. In part 2, we use five machine learning algorithms: Naïve Bayes, MLP, logistic regression, J48 and Decision Table to predict the smoking status of patients. To compare the effectiveness of these algorithms we use precision, recall, F-measure and accuracy. The results show that the logistic regression algorithm outperformed the four other algorithms with precision, recall, F-measure, and accuracy of 83%, 83.4%, 83.2%, and 83.44%, respectively.

  9. Inference algorithms and learning theory for Bayesian sparse factor analysis

    International Nuclear Information System (INIS)

    Rattray, Magnus; Sharp, Kevin; Stegle, Oliver; Winn, John

    2009-01-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  10. Inference algorithms and learning theory for Bayesian sparse factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rattray, Magnus; Sharp, Kevin [School of Computer Science, University of Manchester, Manchester M13 9PL (United Kingdom); Stegle, Oliver [Max-Planck-Institute for Biological Cybernetics, Tuebingen (Germany); Winn, John, E-mail: magnus.rattray@manchester.ac.u [Microsoft Research Cambridge, Roger Needham Building, Cambridge, CB3 0FB (United Kingdom)

    2009-12-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  11. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) from the mathematical calculation and that from the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  12. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) from the mathematical calculation and that from the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  13. Tools and Algorithms for the Construction and Analysis of Systems

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2004, held in Barcelona, Spain in March/April 2004. The 37 revised full papers and 6 revised tool demonstration papers presented were car...

  14. Identifying Students’ Misconceptions on Basic Algorithmic Concepts Through Flowchart Analysis

    NARCIS (Netherlands)

    Rahimi, E.; Barendsen, E.; Henze, I.; Dagienė, V.; Hellas, A.

    2017-01-01

    In this paper, a flowchart-based approach to identifying secondary school students’ misconceptions (in a broad sense) on basic algorithm concepts is introduced. This approach uses student-generated flowcharts as the units of analysis and examines them against plan composition and construct-based

  15. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  16. Parallel algorithms for nuclear reactor analysis via domain decomposition method

    International Nuclear Information System (INIS)

    Kim, Yong Hee

    1995-02-01

    In this thesis, the neutron diffusion equation in reactor physics is discretized by the finite difference method and is solved on a parallel computer network which is composed of T-800 transputers. T-800 transputer is a message-passing type MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides convergence analysis and improvement of the convergence of the algorithm. The convergence of the parallel Schwarz algorithms with DN(or ND), DD, NN, and mixed pseudo-boundary conditions(a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in two-subdomain case and various underlying features are explored. The analysis shows that the convergence rate of the algorithm highly depends on the pseudo-boundary conditions and the theoretically best one is the mixed boundary conditions(MM conditions). Also it is shown that there may exist a significant discrepancy between continuous model analysis and discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation in pseudo-boundary conditions is introduced and the convergence analysis of the algorithm for two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and any relaxation(under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
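
    The following toy sketch illustrates the alternating Schwarz procedure with Dirichlet (DD) pseudo-boundary conditions on two overlapping subdomains, applied to a 1-D Poisson problem. It is a sequential stand-in under assumed grid and overlap choices, not the transputer implementation or the neutron diffusion model of the thesis.

```python
import numpy as np

n = 101                        # global grid points on [0, 1], h = 1/(n-1)
h = 1.0 / (n - 1)
f = np.ones(n)                 # right-hand side of -u'' = f
u = np.zeros(n)                # initial guess; u[0] = u[n-1] = 0 are the true BCs

def subdomain_solve(nloc, rhs, left, right):
    """Direct finite-difference solve of -u'' = f with Dirichlet end values."""
    A = (2.0 * np.eye(nloc) - np.eye(nloc, k=1) - np.eye(nloc, k=-1)) / h**2
    b = rhs.copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

d1 = slice(1, 60)              # subdomain 1 interior: x in (0, 0.59]
d2 = slice(40, n - 1)          # subdomain 2 interior: x in [0.40, 1)

# Alternating Schwarz sweeps: each solve takes its pseudo-boundary values
# from the latest iterate (Dirichlet-Dirichlet coupling).
for sweep in range(30):
    u[d1] = subdomain_solve(59, f[d1], u[0], u[60])
    u[d2] = subdomain_solve(60, f[d2], u[39], u[n - 1])

x = np.linspace(0.0, 1.0, n)
print("max error vs exact x(1-x)/2:", np.max(np.abs(u - 0.5 * x * (1.0 - x))))
```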

  17. Development and Application of Mathematical Programs for Contribution Analysis in Life Cycle Assessment

    OpenAIRE

    Vlad, Monica

    2009-01-01

    The environmental impact of a final product can be regarded as the sum of the impacts of all processes needed to obtain it. The impacts of these processes in all individual layers of production can be quantified using contribution analysis methods. SPA (structural path analysis) is an advanced method used to identify the chains of production processes linking the most highly emitting processes with the final product. This analysis was performed in Matlab, using a specialized algorithm developed by Peters and Hertwich in 1...

  18. A recurrence-weighted prediction algorithm for musical analysis

    Science.gov (United States)

    Colucci, Renato; Leguizamon Cucunuba, Juan Sebastián; Lloyd, Simon

    2018-03-01

    Forecasting the future behaviour of a system using past data is an important topic. In this article we apply nonlinear time series analysis in the context of music, and present new algorithms for extending a sample of music, while maintaining characteristics similar to the original piece. By using ideas from ergodic theory, we adapt the classical prediction method of Lorenz analogues so as to take into account recurrence times, and demonstrate with examples, how the new algorithm can produce predictions with a high degree of similarity to the original sample.
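
    A minimal sketch of prediction by Lorenz analogues follows: the current delay-embedded state votes among its nearest past analogues, with a distance-based weighting standing in (as an assumption) for the authors' recurrence-time construction.

```python
import numpy as np

def predict_next(series, dim=4, k=5):
    """Predict the next sample by a weighted vote of past analogues."""
    # Delay embedding: row t is the state (x_t, ..., x_{t+dim-1}).
    states = np.lib.stride_tricks.sliding_window_view(series, dim)
    current, history = states[-1], states[:-1]   # past states only
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]
    # Illustrative weighting; the paper's recurrence-time weights differ.
    weights = np.exp(-dists[nearest] / (dists[nearest].mean() + 1e-12))
    successors = series[nearest + dim]           # sample following each analogue
    return np.average(successors, weights=weights)

rng = np.random.default_rng(2)
x = np.sin(0.3 * np.arange(500)) + 0.05 * rng.normal(size=500)
print("predicted next sample:", predict_next(x))
print("noise-free continuation:", np.sin(0.3 * 500))
```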

  19. Implementation and statistical analysis of Metropolis algorithm for SU(3)

    International Nuclear Information System (INIS)

    Katznelson, E.; Nobile, A.

    1984-12-01

    In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results have a normal distribution. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)
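
    For reference, a minimal Metropolis sketch on a 1-D Gaussian target is given below, together with a simple batch-means error estimate of the kind the abstract alludes to; the SU(3) lattice action is not reproduced, so the target here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):                 # toy target: standard Gaussian
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(100_000):
    proposal = x + rng.uniform(-1.0, 1.0)
    # Metropolis accept/reject with the usual log-uniform comparison.
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
    chain.append(x)

samples = np.array(chain[10_000:])              # discard burn-in
batches = samples.reshape(30, -1).mean(axis=1)  # batch means for error analysis
print("mean:", samples.mean(), "+/-", batches.std(ddof=1) / np.sqrt(30))
```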

  20. A Comparative Analysis of Classification Algorithms on Diverse Datasets

    Directory of Open Access Journals (Sweden)

    M. Alghobiri

    2018-04-01

    Data mining involves the computational process of finding patterns in large data sets. Classification, one of the main domains of data mining, involves generalizing a known structure to apply it to a new dataset and predict its class. Various classification algorithms are used to classify various data sets; they are based on different methods such as probability, decision trees, neural networks, nearest neighbors, Boolean and fuzzy logic, kernel-based methods, etc. In this paper, we apply three diverse classification algorithms to ten datasets. The datasets have been selected based on their size and/or the number and nature of their attributes. Results are discussed using performance evaluation measures such as precision, accuracy, F-measure, Kappa statistics, mean absolute error, relative absolute error and ROC area. Comparative analysis has been carried out using the performance evaluation measures of accuracy, precision, and F-measure. We specify the features and limitations of the classification algorithms for datasets of diverse nature.

  1. Parameter identification for structural dynamics based on interval analysis algorithm

    Science.gov (United States)

    Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke

    2018-04-01

    A parameter identification method for structural dynamics using an interval analysis algorithm is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed-memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters within a tight and accurate region. To overcome the lack of a sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, the algorithm can obtain not only the central estimates of the parameters but also the bounds on their errors. To improve the efficiency of the proposed method, a time-saving algorithm based on a recursive formula is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are evaluated with three identification criteria.

  2. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    Directory of Open Access Journals (Sweden)

    Zhiming Song

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is, under some mild conditions, a piecewise continuous (m-1)-dimensional manifold in the decision space. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution whose centroid is a piecewise continuous (m-1)-dimensional manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in this paper.

  3. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different... in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment...

  4. Algorithm for automatic analysis of electro-oculographic data.

    Science.gov (United States)

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation: amplitude threshold values for saccades and blinks are determined from features of the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and with manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was greater than 89%, and for vertical saccades greater than 82%. The durations and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement.
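
    A hedged sketch of threshold-based event detection follows: the threshold is estimated from the recorded signal itself, echoing the auto-calibration idea, but the paper's exact feature-based estimator is not reproduced; the synthetic trace and the MAD-based rule are assumptions.

```python
import numpy as np

def detect_saccades(eog, fs, k=6.0):
    """Flag samples whose velocity exceeds a data-driven threshold:
    k times a robust (MAD-based) noise estimate of the velocity."""
    velocity = np.gradient(eog) * fs
    mad = np.median(np.abs(velocity - np.median(velocity)))
    threshold = k * 1.4826 * mad      # MAD -> std under Gaussian noise
    return np.flatnonzero(np.abs(velocity) > threshold)

# Synthetic trace: baseline, one saccade-like step, new fixation, plus noise.
rng = np.random.default_rng(4)
fs = 977.0                            # sampling rate used in the study
trace = np.concatenate([np.zeros(500), np.linspace(0.0, 100.0, 30),
                        np.full(500, 100.0)])
trace += rng.normal(scale=1.0, size=trace.size)
print("saccade sample indices:", detect_saccades(trace, fs))
```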

  5. The entity-to-algorithm allocation problem: Extending the analysis

    CSIR Research Space (South Africa)

    Grobler, J

    2014-12-01

    Full Text Available . HYPOTHESES ANALYSIS OF ALTERNATIVE MULTI-METHOD ALGORITHMS. HMHH EIHH EEA-SLPS HMHH NA 4− 19− 5 11− 8− 9 EIHH 5− 19− 4 NA 6− 16− 6 EEA-SLPS 9− 8− 11 6− 16− 6 NA Multi-EA 3− 4− 21 3− 1− 24 2− 3− 23 Multi-EA TOTAL HMHH 21− 4− 3 36− 3− 17 EIHH 24− 1− 3 35− 36... ANALYSIS OF THE VARIOUS ALGORITHMS VERSUS THEIR CONSTITUENT ALGORITHMS. Algorithm HMHH EIHH EEA-SLPS Multi-EA CMAES 0-3-25 4-2-22 4-2-22 2-2-24 SaNSDE 17-2-9 16-8-4 12-12-4 5-0-23 GA 22-3-3 23-2-3 23-4-1 4-5-19 GCPSO 20-1-7 20-3-5 19-3-6 8-3-17 TOTAL 55...

  6. Implicit Discourse: Contributions to a Sociological Analysis

    Directory of Open Access Journals (Sweden)

    Josep Espluga Trenc

    2014-01-01

    This article discusses the variety of types or dimensions of implicit discourse. Specifically, a typological characterisation is proposed, based on the intentions of the producer of the discourse, including a distinction between four basic dimensions: insinuated discourse, hidden discourse, 'failed' discourse and underlying discourse. Some examples are provided of each dimension, and then it is held that the proposed typology is useful for the sociological analysis of implicit discourse, that is, for its detection and interpretation.

  7. Contribution to Control of an Elastic Two-Mass System by Means of Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zelmira Ferkova

    2007-01-01

    Oscillations of an elastic two-mass system with all parameters known may be suppressed by a suitable feedback signal. An observer makes it possible to estimate this feedback without measuring the speed of the load mechanism. This article describes the application of genetic algorithms to the identification of the elastic system parameters and the determination of the corresponding observer feedback coefficients. The correctness of the design is verified by simulation.
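
    The sketch below shows a real-coded genetic algorithm identifying two parameters of a toy damped-oscillation model from noisy measurements; the elastic two-mass model and the observer design are not reproduced, so the model, bounds and GA operators here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 5.0, 200)
true = np.array([1.7, 0.4])            # "unknown" frequency and damping

def response(p):                       # toy damped oscillation model
    return np.exp(-p[1] * t) * np.sin(p[0] * t)

measured = response(true) + 0.01 * rng.normal(size=t.size)

def fitness(p):                        # negative mean squared error
    return -np.mean((response(p) - measured) ** 2)

pop = rng.uniform([0.1, 0.0], [5.0, 2.0], size=(40, 2))
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]            # truncation selection
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    alpha = rng.uniform(size=(40, 1))
    pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    pop += rng.normal(scale=0.05, size=pop.shape)              # mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print("identified parameters:", best, "true:", true)
```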

  8. A new LMS algorithm for analysis of atrial fibrillation signals.

    Science.gov (United States)

    Ciaccio, Edward J; Biviano, Angelo B; Whang, William; Garan, Hasan

    2012-03-26

    A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Equations for normalization of x-axis and y-axis shift and scale are first derived. The algorithm is implemented for real-time analysis of CFAE acquired during atrial fibrillation (AF). Data was acquired at a 977 Hz sampling rate from 10 paroxysmal and 10 persistent AF patients undergoing clinical electrophysiologic study and catheter ablation therapy. Over 24 trials, normalization characteristics using the new algorithm with four weights were compared to the Widrow-Hoff LMS algorithm with four tapped delays. The time for convergence, and the mean squared error (MSE) after convergence, were compared. The new LMS algorithm was also applied to lead aVF of the electrocardiogram in one patient with longstanding persistent AF, to enhance the F wave and to monitor extrinsic changes in signal shape. The average waveform over a 25 s interval was used as a prototypical reference signal for matching with the aVF lead. Based on the derivation equations, the y-shift and y-scale adjustments of the new LMS algorithm were shown to be equivalent to the scalar form of the Widrow-Hoff LMS algorithm. For x-shift and x-scale adjustments, rather than implementing a long tapped delay as in Widrow-Hoff LMS, the new method uses only two weights. After convergence, the MSE for matching paroxysmal CFAE averaged 0.46 ± 0.49 μV²/sample for the new LMS algorithm versus 0.72 ± 0.35 μV²/sample for Widrow-Hoff LMS. The MSE for matching persistent CFAE averaged 0.55 ± 0.95 μV²/sample for the new LMS algorithm versus 0.62 ± 0.55 μV²/sample for Widrow-Hoff LMS. There were no significant differences in estimation
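
    For the y-axis part of the normalization, a minimal LMS sketch follows: a gain and an offset are adapted by stochastic gradient descent so that the observed signal matches a reference. The x-shift and x-scale updates via differential steepest descent, which are the paper's novelty, are omitted, and the sinusoidal signals are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 977)            # 977 samples, as in the study
reference = np.sin(2 * np.pi * 5 * t)
observed = 2.5 * reference + 0.7 + 0.05 * rng.normal(size=t.size)

# Adapt scale a and shift b so that a * (y - b) tracks the reference.
a, b, mu = 1.0, 0.0, 0.01                 # weights and LMS step size
for _ in range(50):                       # repeated passes over the record
    for r, y in zip(reference, observed):
        e = r - a * (y - b)               # instantaneous error
        a += mu * e * (y - b)             # gradient step on the scale weight
        b -= mu * e * a                   # gradient step on the shift weight

print("estimated y-scale:", 1.0 / a, "estimated y-shift:", b)  # ~2.5 and ~0.7
```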

  9. A new LMS algorithm for analysis of atrial fibrillation signals

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2012-03-01

    Abstract. Background: A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Method: Equations for normalization of x-axis and y-axis shift and scale are first derived. The algorithm is implemented for real-time analysis of CFAE acquired during atrial fibrillation (AF). Data was acquired at a 977 Hz sampling rate from 10 paroxysmal and 10 persistent AF patients undergoing clinical electrophysiologic study and catheter ablation therapy. Over 24 trials, normalization characteristics using the new algorithm with four weights were compared to the Widrow-Hoff LMS algorithm with four tapped delays. The time for convergence, and the mean squared error (MSE) after convergence, were compared. The new LMS algorithm was also applied to lead aVF of the electrocardiogram in one patient with longstanding persistent AF, to enhance the F wave and to monitor extrinsic changes in signal shape. The average waveform over a 25 s interval was used as a prototypical reference signal for matching with the aVF lead. Results: Based on the derivation equations, the y-shift and y-scale adjustments of the new LMS algorithm were shown to be equivalent to the scalar form of the Widrow-Hoff LMS algorithm. For x-shift and x-scale adjustments, rather than implementing a long tapped delay as in Widrow-Hoff LMS, the new method uses only two weights. After convergence, the MSE for matching paroxysmal CFAE averaged 0.46 ± 0.49 μV²/sample for the new LMS algorithm versus 0.72 ± 0.35 μV²/sample for Widrow-Hoff LMS. The MSE for matching persistent CFAE averaged 0.55 ± 0.95 μV²/sample for the new LMS algorithm versus 0.62 ± 0.55 μV²/sample for Widrow

  10. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    Computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different... of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists...

  11. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. An important issue is therefore how to provide suitable assistance to the human operator. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions that perform subtasks of the analysis; recent progress in automated image exploitation algorithms allows, e.g., real-time moving-target tracking. Another supporting measure is to provide a user interface that strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention; a gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design that aims to combine the qualities of the human observer's perception with those of the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch it in the case of track loss. The second addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  12. Cryptographic protocol security analysis based on bounded constructing algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An efficient approach to analyzing cryptographic protocols is to develop automatic analysis tools based on formal methods. However, this approach has encountered a high computational complexity problem because the participants of protocols are arbitrary, their message structures are complex and their executions are concurrent. We propose an efficient automatic verification algorithm for analyzing cryptographic protocols based on the recently proposed Cryptographic Protocol Algebra (CPA) model, in which algebraic techniques are used to simplify the description of cryptographic protocols and their executions. Redundant states generated in the analysis process are much reduced by introducing a new algebraic technique called the Universal Polynomial Equation, and the algorithm can be used to verify the correctness of protocols in an infinite state space. We have implemented an efficient automatic analysis tool for cryptographic protocols, called ACT-SPA, based on this algorithm, and used the tool to check more than 20 cryptographic protocols. The analysis results show that this tool is more efficient, and an attack instance not reported previously was found by using this tool.

  13. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon; Von Schwerin, Erik; Szepessy, Anders; Tempone, Raul

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
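
    A minimal two-level sketch of the MLMC idea (uniform timesteps, not the adaptive refinement of the paper) is given below for geometric Brownian motion under Euler-Maruyama; the coupling of coarse and fine paths through shared Brownian increments is what makes the correction term cheap.

```python
import numpy as np

rng = np.random.default_rng(7)
mu_, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0   # GBM parameters

def euler_paths(n_paths, n_steps, dW=None):
    """Euler-Maruyama endpoints of GBM; dW may be supplied for coupling."""
    dt = T / n_steps
    if dW is None:
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        x = x + mu_ * x * dt + sigma * x * dW[:, k]
    return x

# Level 0: coarse estimator with many cheap samples.
level0 = euler_paths(100_000, 4).mean()
# Level 1 correction: fine and coarse paths share the SAME Brownian
# increments, so the difference has small variance and needs few samples.
dW_fine = rng.normal(scale=np.sqrt(T / 8), size=(10_000, 8))
dW_coarse = dW_fine[:, 0::2] + dW_fine[:, 1::2]
correction = (euler_paths(10_000, 8, dW_fine)
              - euler_paths(10_000, 4, dW_coarse)).mean()

print("MLMC estimate:", level0 + correction, "exact:", x0 * np.exp(mu_ * T))
```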

  14. Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm

    Science.gov (United States)

    Suleiman, Wassim; Pesavento, Marius; Zoubir, Abdelhak M.

    2016-05-01

    In this paper, we consider performance analysis of the decentralized power method for the eigendecomposition of the sample covariance matrix based on the averaging consensus protocol. An analytical expression of the second order statistics of the eigenvectors obtained from the decentralized power method which is required for computing the mean square error (MSE) of subspace-based estimators is presented. We show that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations. Moreover, we introduce the decentralized ESPRIT algorithm which yields fully decentralized direction-of-arrival (DOA) estimates. Based on the performance analysis of the decentralized power method, we derive an analytical expression of the MSE of DOA estimators using the decentralized ESPRIT algorithm. The validity of our asymptotic results is demonstrated by simulations.

  15. Development of an automatic identification algorithm for antibiogram analysis

    OpenAIRE

    Costa, LFR; Eduardo Silva; Noronha, VT; Ivone Vaz-Moreira; Olga C Nunes; de Andrade, MM

    2015-01-01

    Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. An Automatic Identification Algorithm (AIA) has been proposed as a solution to overcome some issues associated with the disc diffusion method, which is the main goal of this work. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using suscepti...

  16. Analysis algorithm for digital data used in nuclear spectroscopy

    CERN Document Server

    AUTHOR|(CDS)2085950; Sin, Mihaela

    Data obtained from digital acquisition systems used in nuclear spectroscopy experiments must be converted by a dedicated algorithm in order to extract the physical quantities of interest. I will report here the development of an algorithm capable of reading digital data, discriminating between random and true signals, and converting the results into a format readable by a special data analysis program package used to interpret nuclear spectra and to create coincidence matrices. The algorithm can be used in any nuclear spectroscopy experimental setup provided that digital acquisition modules are involved. In particular, it was used to treat data obtained from the IS441 experiment at ISOLDE, where the beta decay of 80Zn was investigated as part of ultra-fast timing studies of neutron-rich Zn nuclei. The results obtained for the half-lives of 80Zn and 80Ga were in very good agreement with previous measurements. This fact proved unquestionably that the conversion algorithm works. Another remarkable result was the improve...

  17. An algorithm for reliability analysis of phased-mission systems

    International Nuclear Information System (INIS)

    Ma, Y.; Trivedi, K.S.

    1999-01-01

    The purpose of this paper is to describe an efficient Boolean algebraic algorithm that provides an exact solution for the unreliability of a multi-phase mission system whose configurations are described through fault trees. The algorithm extends and improves the Boolean method originally proposed by Somani and Trivedi. By using the Boolean algebraic method, we provide an efficient modeling approach that avoids the state-space explosion and mapping problems encountered by the Markov chain approach. To calculate the exact solution for a phased-mission system with deterministic phase durations, we introduce the sum of disjoint phase products (SDPP) formula, a phased extension of the sum of disjoint products (SDP) formula. Computationally, the algorithm is quite efficient because it calls an SDP generation algorithm in the early stage of the SDPP computation. In this way, the phase products generated in the early stage of the SDPP formula are guaranteed to be disjoint, and consequently the number of intermediate phase products is greatly reduced. In this paper, we also consider transient analysis of the phased-mission system. Special care is needed to account for possible latent failures at the mission phase change times: if there are more stringent success criteria just after a mission phase change time, an unreliability jump occurs at that time. Finally, the algorithm has been implemented in the software package SHARPE. With SHARPE, the complexities of the phased-mission model are made transparent to potential users, who can conveniently specify a phased-mission model at a high level (through fault trees) and analyze the system quantitatively.

  18. Fringe pattern analysis for optical metrology theory, algorithms, and applications

    CERN Document Server

    Servin, Manuel; Padilla, Moises

    2014-01-01

    The main objective of this book is to present the basic theoretical principles and practical applications for the classical interferometric techniques and the most advanced methods in the field of modern fringe pattern analysis applied to optical metrology. A major novelty of this work is the presentation of a unified theoretical framework based on the Fourier description of phase shifting interferometry using the Frequency Transfer Function (FTF) along with the theory of Stochastic Process for the straightforward analysis and synthesis of phase shifting algorithms with desired properties such

  19. RNA secondary structure prediction with pseudoknots: Contribution of algorithm versus energy model.

    Science.gov (United States)

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo

    2018-01-01

    RNA is a biopolymer with various applications inside the cell and in biotechnology. The structure of an RNA molecule mainly determines its function and is essential to guide nanostructure design. Since experimental structure determination is time-consuming and expensive, accurate computational prediction of RNA structure is of great importance. Prediction of RNA secondary structure is relatively simpler than that of its tertiary structure and provides information about the tertiary structure; therefore, RNA secondary structure prediction has received attention in the past decades. Numerous methods with different folding approaches have been developed for RNA secondary structure prediction. While methods for prediction of RNA pseudoknot-free structures (structures with no crossing base pairs) have greatly improved in accuracy, methods for prediction of RNA pseudoknotted secondary structures (structures with crossing base pairs) still have room for improvement. A long-standing question for improving the prediction accuracy of RNA pseudoknotted secondary structure is whether to focus on the prediction algorithm or the underlying energy model, as there is a trade-off between the computational cost of the prediction algorithm and the generality of the method. The aim of this work is to argue that, when comparing different methods for RNA pseudoknotted structure prediction, the combination of algorithm and energy model should be considered, and a method should not be judged superior or inferior to others if they do not use the same scoring model. We demonstrate that while the folding approach is important in structure prediction, it is not the only important factor in the prediction accuracy of a given method, as the underlying energy model is also of great value. We therefore encourage researchers to pay particular attention when comparing methods with different energy models.

  20. Big data analysis new algorithms for a new society

    CERN Document Server

    Stefanowski, Jerzy

    2016-01-01

    This edited volume is devoted to Big Data Analysis from a Machine Learning standpoint as presented by some of the most eminent researchers in this area. It demonstrates that Big Data Analysis opens up new research problems which were either never considered before, or were only considered within a limited range. In addition to providing methodological discussions on the principles of mining Big Data and the difference between traditional statistical data analysis and newer computing frameworks, this book presents recently developed algorithms affecting such areas as business, financial forecasting, human mobility, the Internet of Things, information networks, bioinformatics, medical systems and life science. It explores, through a number of specific examples, how the study of Big Data Analysis has evolved and how it has started and will most likely continue to affect society. While the benefits brought about by Big Data Analysis are underlined, the book also discusses some of the warnings that have been issued...

  1. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
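
    A hedged sketch of the soft, temperature-controlled clustering this describes: memberships are Boltzmann weights of squared distances, and as the inverse temperature grows the assignments harden towards C-means. The data, beta and iteration count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
data = np.vstack([rng.normal(-2.0, 0.5, (100, 2)),
                  rng.normal(2.0, 0.5, (100, 2))])

beta = 2.0                                # inverse temperature (hardness)
centers = data[rng.choice(len(data), 2, replace=False)]
for _ in range(50):
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # Soft memberships: Boltzmann weights of squared distances (stable softmax).
    w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)
    # Centers move to the membership-weighted means.
    centers = ((w[:, :, None] * data[:, None, :]).sum(axis=0)
               / w.sum(axis=0)[:, None])

print("centers:\n", centers)
```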

  2. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft, but any aircraft produces significant levels of magnetic interference; therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity of the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as or slightly better than that of ridge regression. The validity of the proposed method was experimentally demonstrated.
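
    The following sketch shows only the PCA step, under synthetic assumptions: an exactly collinear feature matrix is centered and rotated onto its principal components, removing the multicollinearity before a least-squares fit. The usual aeromagnetic (Tolles-Lawson) feature construction is not reproduced; X and y below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000
base = rng.normal(size=(n, 3))
X = np.hstack([base, base[:, :1]])        # fourth column exactly collinear
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))          # drop the degenerate direction
scores = Xc @ Vt[:k].T                    # decorrelated regressors
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)

residual = scores @ coef - (y - y.mean())
print("kept components:", k, "fit residual RMS:",
      np.sqrt(np.mean(residual ** 2)))
```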

  3. Genetic algorithms and the analysis of SnIa data

    International Nuclear Information System (INIS)

    Nesseris, Savvas

    2011-01-01

    The Genetic Algorithm is a heuristic that can be used to produce model independent solutions to an optimization problem, thus making it ideal for use in cosmology and more specifically in the analysis of type Ia supernovae data. In this work we use the Genetic Algorithms (GA) in order to derive a null test on the spatially flat cosmological constant model ΛCDM. This is done in two steps: first, we apply the GA to the Constitution SNIa data in order to acquire a model independent reconstruction of the expansion history of the Universe H(z) and second, we use the reconstructed H(z) in conjunction with the Om statistic, which is constant only for the ΛCDM model, to derive our constraints. We find that while ΛCDM is consistent with the data at the 2σ level, some deviations from ΛCDM model at low redshifts can be accommodated.
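
    As a small worked illustration of the Om statistic, the snippet below evaluates Om(z) = ((H(z)/H0)^2 - 1)/((1+z)^3 - 1) on a fiducial flat ΛCDM expansion history; for ΛCDM it is constant and equal to the matter density, so deviations of Om(z) from a constant signal departures from the model. The fiducial H(z), not the GA reconstruction of the Constitution data, is an assumption here.

```python
import numpy as np

H0, Om0 = 70.0, 0.3
z = np.linspace(0.1, 1.5, 15)
Hz = H0 * np.sqrt(Om0 * (1 + z) ** 3 + 1 - Om0)   # fiducial flat LCDM H(z)

# Om diagnostic: constant (= Om0) when the data follow flat LCDM.
E2 = (Hz / H0) ** 2
Om = (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)
print("Om(z):", np.round(Om, 3))                  # all entries 0.3 here
```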

  4. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  5. Joint optimization of algorithmic suites for EEG analysis.

    Science.gov (United States)

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable.

  6. A Contribution to Nyquist-Rate ADC Modeling - Detailed Algorithm Description

    Directory of Open Access Journals (Sweden)

    J. Zidek

    2012-04-01

    In this article, an innovative ADC modeling algorithm is described. It is well suited for Nyquist-rate ADC error back-annotation, and is the next step in building a support tool for IC design engineers. The inspiration for us was the work [2], where the ADC behavior is divided into separate, independent HCF (High Code Frequency) and LCF (Low Code Frequency) parts. This paper is based on the same concept, but the model coefficients are estimated in a different way, only from INL data. The HCF order recognition part was newly added as well; thanks to that, the number of HCF coefficients is lower in comparison with the original Grimaldi’s work (especially for converters with a low ratio between the HCF and “random” parts of INL). Modeling results are demonstrated on a real data set measured by ASICentrum on a charge-redistribution SAR ADC chip. Results are shown not only as coefficient values but also by the Model Coverage metric. Model limitations are also discussed.

  7. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    Science.gov (United States)

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. A kernel generalization of it, namely kernel CCA, has been proposed to describe nonlinear relationships between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional data feature selection problems, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results show the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
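
    A minimal randomized Kaczmarz sketch follows, on a plain consistent linear system rather than the kernel CCA formulation of the paper: each step projects the iterate onto the solution hyperplane of one row, sampled with probability proportional to its squared norm, which is the setting in which the scaled condition number governs convergence.

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.normal(size=(200, 20))
x_true = rng.normal(size=20)
b = A @ x_true                            # consistent system

probs = (A ** 2).sum(axis=1) / (A ** 2).sum()   # row-norm sampling
x = np.zeros(20)
for _ in range(5000):
    i = rng.choice(len(A), p=probs)
    # Project the iterate onto the hyperplane {z : A[i] @ z = b[i]}.
    x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```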

  8. Postprocessing algorithm for automated analysis of pelvic intraoperative neuromonitoring signals

    Directory of Open Access Journals (Sweden)

    Wegner Celine

    2016-09-01

    Two-dimensional pelvic intraoperative neuromonitoring (pIONM®) is based on electric stimulation of autonomic nerves under observation of the electromyography of the internal anal sphincter (IAS) and manometry of the urinary bladder. The method provides nerve identification and verification of their functional integrity. pIONM® is currently gaining increased attention at a time when preservation of function is becoming more and more important. Ongoing technical and methodological developments in experimental and clinical settings require further analysis of the obtained signals. This work describes a postprocessing algorithm for pIONM® signals, developed for automated analysis of large amounts of recorded data. The analysis routine includes a graphical representation of the recorded signals in the time and frequency domains, as well as a quantitative evaluation by means of features calculated from the time and frequency domains. The produced plots are summarized automatically in a PowerPoint presentation. The calculated features are filled into a standardized Excel sheet, ready for statistical analysis.

  9. Image Denoising Algorithm Combined with SGK Dictionary Learning and Principal Component Analysis Noise Estimation

    Directory of Open Access Journals (Sweden)

    Wenjing Zhao

    2018-01-01

    The SGK (sequential generalization of K-means) dictionary learning denoising algorithm has the characteristics of fast denoising speed and excellent denoising performance. However, the noise standard deviation must be known in advance when using the SGK algorithm to process an image. This paper presents a denoising algorithm combining SGK dictionary learning with principal component analysis (PCA) noise estimation. First, the noise standard deviation of the image is estimated using the PCA noise estimation algorithm, and then it is used in the SGK dictionary learning algorithm. Experimental results show the following: (1) the SGK algorithm has the best denoising performance compared with the other three dictionary learning algorithms; (2) the SGK algorithm combined with PCA is superior to the SGK algorithm combined with other noise estimation algorithms; (3) compared with the original SGK algorithm, the proposed algorithm has higher PSNR and better denoising performance.

  10. k-Nearest Neighbors Algorithm in Profiling Power Analysis Attacks

    Directory of Open Access Journals (Sweden)

    Z. Martinasek

    2016-06-01

    Power analysis is a typical example of successful attacks against trusted cryptographic devices such as RFID (Radio-Frequency IDentification) tags and contact smart cards. In recent years, the cryptographic community has explored new approaches to power analysis based on machine learning models such as the Support Vector Machine (SVM), Random Forest (RF) and Multi-Layer Perceptron (MLP). In this paper, we make an extensive comparison of machine learning algorithms in power analysis. For this purpose, we implemented a verification program that always chooses the optimal settings of individual machine learning models in order to obtain the best classification accuracy. In our research, we used three datasets, the first containing the power traces of an unprotected AES (Advanced Encryption Standard) implementation. The second and third datasets were created independently from publicly available power traces corresponding to a masked AES implementation (DPA Contest v4). The obtained results reveal an interesting fact: an elementary k-NN (k-Nearest Neighbors) algorithm, which has not been commonly used in power analysis so far, shows great application potential in practice.
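
    In profiled power analysis, traces labeled with a key-dependent intermediate value train a classifier that is then applied to traces from the device under attack. The sketch below shows the bare k-NN step with scikit-learn on synthetic "traces"; the leakage model and class labels are invented for illustration:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n_traces, n_samples, n_classes = 3000, 200, 9   # e.g. Hamming-weight classes

        # synthetic traces: class-dependent leakage at a few samples plus noise
        labels = rng.integers(0, n_classes, n_traces)
        traces = rng.normal(0, 1, (n_traces, n_samples))
        traces[:, 50] += 0.8 * labels        # leaking sample
        traces[:, 120] += 0.5 * labels       # second leaking sample

        X_train, X_test, y_train, y_test = train_test_split(
            traces, labels, test_size=0.25, random_state=0)

        knn = KNeighborsClassifier(n_neighbors=5)
        knn.fit(X_train, y_train)
        print("classification accuracy:", knn.score(X_test, y_test))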

  11. B. F. Skinner's Contributions to Applied Behavior Analysis

    Science.gov (United States)

    Morris, Edward K.; Smith, Nathaniel G.; Altus, Deborah E.

    2005-01-01

    Our paper reviews and analyzes B. F. Skinner's contributions to applied behavior analysis in order to assess his role as the field's originator and founder. We found, first, that his contributions fall into five categories: the style and content of his science, his interpretations of typical and atypical human behavior, the implications he drew…

  12. Imaging for dismantlement verification: Information management and analysis algorithms

    International Nuclear Information System (INIS)

    Robinson, S.M.; Jarman, K.D.; Pitts, W.K.; Seifert, A.; Misner, A.C.; Woodring, M.L.; Myjak, M.J.

    2012-01-01

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.

  13. The speech signal segmentation algorithm using pitch synchronous analysis

    Directory of Open Access Journals (Sweden)

    Amirgaliyev Yedilkhan

    2017-03-01

    Parameterization of the speech signal using analysis algorithms synchronized with the pitch frequency is discussed. Speech parameterization is performed with the average zero-crossing rate function and the signal energy function. The parameterization results are used to segment the speech signal and to isolate segments with stable spectral characteristics. The segmentation results can be used to generate a digital voice pattern of a person or be applied in automatic speech recognition. The stages needed for continuous speech segmentation are described.
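
    A minimal sketch of the two functions the abstract names, short-time energy and the average zero-crossing rate, computed here on fixed-length frames (true pitch-synchronous framing, as in the paper, would additionally require a pitch tracker):

        import numpy as np

        def short_time_features(x, frame_len=256, hop=128):
            """Per-frame energy and average zero-crossing rate of a speech signal."""
            energies, zcrs = [], []
            for start in range(0, len(x) - frame_len, hop):
                frame = x[start:start + frame_len]
                energies.append(np.sum(frame ** 2))
                # fraction of adjacent sample pairs whose sign changes
                zcrs.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
            return np.array(energies), np.array(zcrs)

        fs = 8000
        t = np.arange(0, 0.5, 1 / fs)
        # voiced-like tone followed by unvoiced-like noise
        x = np.concatenate([np.sin(2 * np.pi * 150 * t),
                            0.3 * np.random.randn(t.size)])
        energy, zcr = short_time_features(x)
        voiced = (energy > energy.mean()) & (zcr < zcr.mean())  # crude segmentation
        print(voiced.astype(int))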

  14. Tools and Algorithms for the Construction and Analysis of Systems

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2004, held in Barcelona, Spain in March/April 2004. The 37 revised full papers and 6 revised tool demonstration papers presented were carefully reviewed and selected from a total of 162 submissions. The papers are organized in topical sections on theorem proving, probabilistic model checking, testing, tools, explicit state and Petri nets, scheduling, constraint solving, timed systems, case studies, software, temporal logic, abstraction...

  15. Relatively Inexact Proximal Point Algorithm and Linear Convergence Analysis

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2009-01-01

    Based on a notion of relatively maximal (m)-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, while generalizing Rockafellar's theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The convergence analysis based on this new model is simpler and more compact than the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.

  16. Tools and Algorithms for Construction and Analysis of Systems

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 6th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2000, held as part of ETAPS 2000 in Berlin, Germany, in March/April 2000. The 33 revised full papers presented together with one invited paper and two short tool descriptions were carefully reviewed and selected from a total of 107 submissions. The papers are organized in topical sections on software and formal methods, formal methods, timed and hybrid systems, infinite and parameterized systems, diagnostic and test generation, efficient...

  17. A formal analysis of a dynamic distributed spanning tree algorithm

    NARCIS (Netherlands)

    Mooij, A.J.; Wesselink, J.W.

    2003-01-01

    We analyze the spanning tree algorithm in the IEEE 1394.1 draft standard, whose correctness has not previously been proved. This algorithm is a fully-dynamic distributed graph algorithm, which, in general, is hard to develop. The approach we use is to formally develop an algorithm that is

  18. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel-based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
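
    The Toeplitz trick applies when observations lie on a regular time grid and the covariance kernel is stationary: the kernel matrix is then constant along its diagonals and fully described by its first column, so the linear solve in GP regression can use Levinson-type recursions in O(t^2) time and O(t) memory. A minimal sketch using SciPy's solve_toeplitz; the RBF kernel and data are illustrative, not the paper's NDVI setup:

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def gp_predict_mean(y, lengthscale=5.0, noise_var=0.1):
            """GP posterior mean at the training grid for a stationary RBF kernel.

            On a regular grid the kernel matrix K is Toeplitz, so only its first
            column is needed and the solve costs O(t^2) instead of O(t^3).
            """
            t = len(y)
            lags = np.arange(t)
            first_col = np.exp(-0.5 * (lags / lengthscale) ** 2)   # k(|i-j|)
            first_col[0] += noise_var                              # K + sigma^2 I
            alpha = solve_toeplitz(first_col, y)                   # (K+s^2 I)^-1 y
            # posterior mean = K alpha; build it row by row to stay O(t) in memory
            K_row = np.exp(-0.5 * (lags / lengthscale) ** 2)
            return np.array([K_row[np.abs(lags - i)] @ alpha for i in range(t)])

        y = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.3 * np.random.randn(200)
        smooth = gp_predict_mean(y)
        print(np.round(smooth[:5], 3))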

  19. Artificial bee colony algorithm for single-trial electroencephalogram analysis.

    Science.gov (United States)

    Hsu, Wei-Yen; Hu, Ya-Ping

    2015-04-01

    In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model, and coherence and phase-locking value, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared with results obtained without artifact removal and feature selection, and with a genetic algorithm, on single-trial EEG data from 6 subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.

  20. An Improved Biclustering Algorithm and Its Application to Gene Expression Spectrum Analysis

    OpenAIRE

    Qu, Hua; Wang, Liu-Pu; Liang, Yan-Chun; Wu, Chun-Guo

    2016-01-01

    The Cheng and Church algorithm is an important biclustering approach. In this paper, the space extension process in the second stage of the Cheng and Church algorithm is improved, and the selection of two important parameters is discussed. The results of applying the improved algorithm to gene expression spectrum analysis show that, compared with the Cheng and Church algorithm, the quality of the clustering results is clearly enhanced, the mined expression models are better, and the d...

  1. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  2. Queue and stack sorting algorithm optimization and performance analysis

    Science.gov (United States)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting is one of the basic operations in software development, and data structures courses cover many kinds of sorting algorithms; the performance of a sorting algorithm is directly related to the efficiency of the software. Much research continues to optimize sorting algorithms for better efficiency. The authors further study a sorting algorithm that combines queues with stacks, mainly exploiting the alternating use of queue and stack storage properties to avoid the large number of exchange or move operations required by traditional sorts. Building on existing work, improvements and optimizations are proposed with a focus on time complexity, and the time complexity, space complexity and stability of the algorithm are studied accordingly. The experimental results show that the improvement is effective and the optimized algorithm is more practical.

  3. Transient analysis and leakage detection algorithm using GA and HS algorithm for a pipeline system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Hyun; Yoo, Wan Suk; Oh, Kwang Jung; Hwang, In Sung; Oh, Jeong Eun [Pusan National University, Pusan (Korea, Republic of)

    2006-03-15

    The impact of leakage was incorporated into the transfer functions of the complex head and discharge. The impedance transfer functions for the various leaking pipeline systems were also derived. Hydraulic transients could be efficiently analyzed by the developed method. The simulation of normalized pressure variation using the method of characteristics and the impulse response method shows good agreement under turbulent flow conditions. The leak calibration could be performed by combining the impulse response method with the Genetic Algorithm (GA) and Harmony Search (HS). The objective functions for the leakage detection can be made using the pressure-head response at the valve, or the pressure-head or the flow response at a certain point of the pipeline located upstream from the valve. The proposed method is not constrained by the Courant number to control the numerical dissipation of the method of characteristics. The limitations associated with the discreteness of the pipeline system in the inverse transient analysis can be neglected in the proposed method.

  4. Transient analysis and leakage detection algorithm using GA and HS algorithm for a pipeline system

    International Nuclear Information System (INIS)

    Kim, Sang Hyun; Yoo, Wan Suk; Oh, Kwang Jung; Hwang, In Sung; Oh, Jeong Eun

    2006-01-01

    The impact of leakage was incorporated into the transfer functions of the complex head and discharge. The impedance transfer functions for the various leaking pipeline systems were also derived. Hydraulic transients could be efficiently analyzed by the developed method. The simulation of normalized pressure variation using the method of characteristics and the impulse response method shows good agreement under turbulent flow conditions. The leak calibration could be performed by combining the impulse response method with the Genetic Algorithm (GA) and Harmony Search (HS). The objective functions for the leakage detection can be made using the pressure-head response at the valve, or the pressure-head or the flow response at a certain point of the pipeline located upstream from the valve. The proposed method is not constrained by the Courant number to control the numerical dissipation of the method of characteristics. The limitations associated with the discreteness of the pipeline system in the inverse transient analysis can be neglected in the proposed method

  5. Smoothed analysis of belief propagation and minimum-cost flow algorithms

    NARCIS (Netherlands)

    Cornelissen, Kamiel

    2016-01-01

    Algorithms that have good worst-case performance are not always the ones that perform best in practice. The smoothed analysis framework is a way of analyzing algorithms that usually matches practical performance of these algorithms much better than worst-case analysis. In this thesis we apply

  6. A Novel Divisive Hierarchical Clustering Algorithm for Geospatial Analysis

    Directory of Open Access Journals (Sweden)

    Shaoning Li

    2017-01-01

    In the fields of geographic information systems (GIS) and remote sensing (RS), clustering algorithms have been widely used for image segmentation, pattern recognition, and cartographic generalization. Although clustering analysis plays a key role in geospatial modelling, traditional clustering methods are limited in computational complexity, noise resistance and robustness. Furthermore, traditional methods focus on the adjacent spatial context, which makes it hard to apply them to multi-density discrete objects. In this paper, a new method, cell-dividing hierarchical clustering (CDHC), is proposed based on convex hull retraction. The main steps are as follows. First, a convex hull structure is constructed to describe the global spatial context of the geospatial objects. Then, the retracting structure of each borderline is established in sequence by setting the initial parameter. The objects are split into two clusters (i.e., “sub-clusters”) if the retracting structure intersects with the borderlines. Finally, clusters are repeatedly split and the initial parameter is updated until the termination condition is satisfied. The experimental results show that CDHC separates multi-density objects from noise sufficiently and also reduces complexity compared to the traditional agglomerative hierarchical clustering algorithm.

  7. Research on data auto-analysis algorithms in the explosive detection system

    International Nuclear Information System (INIS)

    Wang Haidong; Li Yuanjing; Yang Yigang; Li Tiezhu; Chen Boxian; Cheng Jianping

    2006-01-01

    This paper mainly describes several automatic analysis algorithms in an explosive detection system based on the TNA method. These include the auto-calibration algorithm used when the system is disturbed by other factors, the MCA auto-calibration algorithm with a calibrated spectrum, and the automatic fitting and integration of the hydrogen and nitrogen element data. With these numerical algorithms, the authors can automatically and precisely analyse the gamma spectra and ultimately achieve automatic explosive detection. (authors)

  8. Utilizing Minkowski functionals for image analysis: a marching square algorithm

    International Nuclear Information System (INIS)

    Mantz, Hubert; Jacobs, Karin; Mecke, Klaus

    2008-01-01

    Comparing noisy experimental image data with statistical models requires a quantitative analysis of grey-scale images beyond mean values and two-point correlations. A real-space image analysis technique is introduced for digitized grey-scale images, based on Minkowski functionals of thresholded patterns. A novel feature of this marching square algorithm is the use of weighted side lengths for pixels, so that boundary lengths are captured accurately. As examples to illustrate the technique we study surface topologies emerging during the dewetting process of thin films and analyse spinodal decomposition as well as turbulent patterns in chemical reaction–diffusion systems. The grey-scale value corresponds to the height of the film or to the concentration of chemicals, respectively. Comparison with analytic calculations in stochastic geometry models reveals a remarkable agreement of the examples with a Gaussian random field. Thus, a statistical test for non-Gaussian features in experimental data becomes possible with this image analysis technique—even for small image sizes. Implementations of the software used for the analysis are offered for download
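
    For a thresholded binary pattern, the three 2D Minkowski functionals are, up to normalization, the covered area, the boundary length and the Euler characteristic. The sketch below computes them by counting faces, edges and vertices of the pixel complex; note it uses a plain (unweighted) edge count for the boundary, whereas the weighted side lengths described above are precisely the paper's refinement:

        import numpy as np

        def minkowski_functionals(binary):
            """Area, boundary length and Euler characteristic of a binary pattern.

            Pixels are treated as closed unit squares; the Euler characteristic
            is computed as vertices - edges + faces of the pixel complex.
            """
            pad = np.pad(binary.astype(bool), 1)
            area = pad.sum()                                   # faces F
            # unit edges belonging to at least one foreground pixel
            e_h = np.logical_or(pad[:-1, :], pad[1:, :]).sum()
            e_v = np.logical_or(pad[:, :-1], pad[:, 1:]).sum()
            # grid vertices touched by at least one foreground pixel
            v = np.logical_or(
                np.logical_or(pad[:-1, :-1], pad[:-1, 1:]),
                np.logical_or(pad[1:, :-1], pad[1:, 1:])).sum()
            # exposed edges: exactly one of the two adjacent cells is foreground
            boundary = (np.logical_xor(pad[:-1, :], pad[1:, :]).sum()
                        + np.logical_xor(pad[:, :-1], pad[:, 1:]).sum())
            euler = v - (e_h + e_v) + area
            return area, boundary, euler

        # thresholded smoothed white noise as a Gaussian-random-field surrogate
        rng = np.random.default_rng(0)
        field = rng.normal(size=(256, 256))
        field = sum(np.roll(np.roll(field, i, 0), j, 1)
                    for i in range(-2, 3) for j in range(-2, 3))  # crude smoothing
        for q in (25, 50, 75):
            print(q, minkowski_functionals(field >= np.percentile(field, q)))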

  9. Emissivity compensated spectral pyrometry—algorithm and sensitivity analysis

    International Nuclear Information System (INIS)

    Hagqvist, Petter; Sikström, Fredrik; Christiansson, Anna-Karin; Lennartson, Bengt

    2014-01-01

    In order to solve the problem of non-contact temperature measurements on an object with varying emissivity, a new method is herein described and evaluated. The method uses spectral radiance measurements and converts them to temperature readings. It proves to be resilient towards changes in spectral emissivity and tolerates noisy spectral measurements. It is based on an assumption of smooth changes in emissivity and uses historical values of spectral emissivity and temperature for estimating current spectral emissivity. The algorithm, its constituent steps and accompanying parameters are described and discussed. A thorough sensitivity analysis of the method is carried out through simulations. No rigorous instrument calibration is needed for the presented method and it is therefore industrially tractable. (paper)

  10. Road Network Vulnerability Analysis Based on Improved Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yunpeng Wang

    2014-01-01

    We present an improved ant colony algorithm-based approach to assess the vulnerability of a road network and identify the critical infrastructures. This approach improves computational efficiency and allows for its applications in large-scale road networks. This research involves defining the vulnerability conception, modeling the traffic utility index and the vulnerability of the road network, and identifying the critical infrastructures of the road network. We apply the approach to a simple test road network and a real road network to verify the methodology. The results show that vulnerability is directly related to traffic demand and increases significantly when the demand approaches capacity. The proposed approach reduces the computational burden and may be applied in large-scale road network analysis. It can be used as a decision-supporting tool for identifying critical infrastructures in transportation planning and management.

  11. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.

  12. An analysis of 3D particle path integration algorithms

    International Nuclear Information System (INIS)

    Darmofal, D.L.; Haimes, R.

    1996-01-01

    Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields, however, some results apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms are determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow

  13. Contributions to sensitivity analysis and generalized discriminant analysis

    International Nuclear Information System (INIS)

    Jacques, J.

    2005-12-01

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the model react to variations in its inputs. Methods based on the study of variance quantify the share of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no interpretable meaning in the presence of correlated inputs, we propose a multidimensional approach consisting of expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists of classifying the individuals of a test sample into groups by using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)

  14. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to the advancement in sensor technology, growing large medical image data have the ability to visualize the anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  15. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this study is to compare radioxenon beta–gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  16. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this paper is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used. (author)

  17. Comparison analysis for classification algorithm in data mining and the study of model use

    Science.gov (United States)

    Chen, Junde; Zhang, Defu

    2018-04-01

    As a key technique in data mining, classification algorithms have received extensive attention. Through experiments with classification algorithms on UCI data sets, we give a comparative analysis method for the different algorithms, in which a statistical test is used. Beyond that, an adaptive diagnosis model for the prevention of electricity stealing and leakage is given as a specific case in the paper.

  18. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  19. Analysis for Performance of Symbiosis Co-evolutionary Algorithm

    OpenAIRE

    根路銘, もえ子; 遠藤, 聡志; 山田, 孝治; 宮城, 隼夫; Nerome, Moeko; Endo, Satoshi; Yamada, Koji; Miyagi, Hayao

    2000-01-01

    In this paper, we analyze the behavior of a symbiotic evolution algorithm for the N-Queens problem, a benchmark problem for search methods in the field of artificial intelligence. It is shown that this algorithm improves the ability of the evolutionary search method. When the problem is solved by Genetic Algorithms (GAs), an ordinal representation is often used as one of the gene conversion methods which convert from phenotype to genotype and back. The representation can hinder the occurrence of leth...

  20. Feature selection using genetic algorithms for fetal heart rate analysis

    International Nuclear Information System (INIS)

    Xu, Liang; Redman, Christopher W G; Georgieva, Antoniya; Payne, Stephen J

    2014-01-01

    The fetal heart rate (FHR) is monitored on a paper strip (cardiotocogram) during labour to assess fetal health. If necessary, clinicians can intervene and assist with a prompt delivery of the baby. Data-driven computerized FHR analysis could help clinicians in the decision-making process. However, selecting the best computerized FHR features that relate to labour outcome is a pressing research problem. The objective of this study is to apply genetic algorithms (GA) as a feature selection method to select the best feature subset from 64 FHR features and to integrate these best features to recognize unfavourable FHR patterns. The GA was trained on 404 cases and tested on 106 cases (both balanced datasets) using three classifiers, respectively. Regularization methods and backward selection were used to optimize the GA. Reasonable classification performance is shown on the testing set for the best feature subset (Cohen's kappa values of 0.45 to 0.49 using different classifiers). This is, to our knowledge, the first time that a feature selection method for FHR analysis has been developed on a database of this size. This study indicates that different FHR features, when integrated, can show good performance in predicting labour outcome. It also gives the importance of each feature, which will be a valuable reference point for further studies. (paper)
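
    A minimal sketch of GA-based feature selection of the kind described: individuals are binary masks over the feature set, fitness is the cross-validated accuracy of a classifier restricted to the selected columns, and standard selection, crossover and mutation operators evolve the population. The data, classifier and GA settings are illustrative, not those of the study:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=400, n_features=64,
                                   n_informative=8, random_state=0)

        def fitness(mask):
            if not mask.any():
                return 0.0
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        pop = rng.random((30, 64)) < 0.2          # population of feature masks
        for gen in range(15):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(10, size=2)]
                cut = rng.integers(1, 64)
                child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
                child ^= rng.random(64) < 0.02              # bit-flip mutation
                children.append(child)
            pop = np.array(children)

        best = max(pop, key=fitness)
        print("selected features:", np.flatnonzero(best))
        print("cross-validated accuracy:", fitness(best))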

  1. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    Science.gov (United States)

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in a training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy compared with Free-Wilson analysis in general. Moreover, the predictions of the R-group signature models are also comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient for R-group signatures. For most of the studied data sets, a significant correlation with the corresponding Free-Wilson analysis is shown. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.

  2. An algorithm for nonlinear thermal analysis of fuel bearing pads

    International Nuclear Information System (INIS)

    Attia, M.H.; D'Silva, N.

    1983-01-01

    An algorithm has been developed for accurate prediction of the temperature field in a CANDU fuel bearing pad and the extent of the nucleate boiling in the crevice region. The methodology recognizes the nonlinear nature of the problem due to the fact that local boiling is both controlling and being controlled by the conditions of heat transfer at the boundaries. The finite difference model accounts for the volumetric effect of the thermal contact resistance at the bearing pad/pressure tube interface. It also allows the evaluation of the thermal barrier effect caused by applying an oxide film on the radiused surface of the bearing pad. Information pertaining to the distribution of the coefficient of heat transfer over water-cooled surfaces has been generated. Analysis of the results indicated the significance of considering the nonlinear behaviour of the system in determining its state of equilibrium. It also indicated that, depending on the thickness of the oxide layer and the position of the bearing pad along the core of the reactor, the nucleate boiling process can be prevented

  3. Performance Analysis of the Consensus-Based Distributed LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Gonzalo Mateos

    2009-01-01

    Low-cost estimation of stationary signals and reduced-complexity tracking of nonstationary processes are well-motivated tasks that can be accomplished using ad hoc wireless sensor networks (WSNs). To this end, a fully distributed least mean-square (D-LMS) algorithm is developed in this paper, in which sensors exchange messages with single-hop neighbors to consent on the network-wide estimates adaptively. The novel approach does not require a Hamiltonian cycle or a special bridge subset of sensors, while communications among sensors are allowed to be noisy. A mean-square error (MSE) performance analysis of D-LMS is conducted in the presence of a time-varying parameter vector, which adheres to a first-order autoregressive model. For sensor observations that are related to the parameter vector of interest via a linear Gaussian model and after adopting simplifying independence assumptions, exact closed-form expressions are derived for the global and sensor-level MSE evolution as well as its steady-state (s.s.) values. Mean and MSE-sense stability of D-LMS are also established. Interestingly, extensive numerical tests demonstrate that for small step-sizes the results accurately extend to the pragmatic setting whereby sensors acquire temporally correlated, not necessarily Gaussian data.
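
    To illustrate the flavor of such in-network adaptation, the sketch below runs a generic combine-then-adapt LMS on a ring of sensors: each node averages its neighbors' estimates and then takes a local LMS step on its own noisy observation. This is a simplified diffusion-style recursion, not the paper's exact D-LMS algorithm:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sensors, p, mu = 10, 4, 0.05
        w_true = rng.normal(size=p)                 # parameter to estimate

        # ring topology: each sensor averages itself and its two neighbors
        A = np.eye(n_sensors) / 3
        for k in range(n_sensors):
            A[k, (k - 1) % n_sensors] = A[k, (k + 1) % n_sensors] = 1 / 3

        W = np.zeros((n_sensors, p))                # per-sensor estimates
        for t in range(2000):
            W = A @ W                               # consensus (combine) step
            for k in range(n_sensors):              # local LMS (adapt) step
                h = rng.normal(size=p)              # sensor regressor
                d = h @ w_true + 0.1 * rng.normal() # noisy scalar observation
                W[k] += mu * (d - h @ W[k]) * h

        print("true:", np.round(w_true, 3))
        print("mean estimate:", np.round(W.mean(axis=0), 3))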

  4. CLASSIFICATION ALGORITHMS FOR BIG DATA ANALYSIS, A MAP REDUCE APPROACH

    Directory of Open Access Journals (Sweden)

    V. A. Ayma

    2015-03-01

    For many years, the scientific community has been concerned with increasing the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data generated every day by remote sensors raises further challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA’s machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.

  5. Energy Demand Forecasting: Combining Cointegration Analysis and Artificial Intelligence Algorithm

    Directory of Open Access Journals (Sweden)

    Junbing Huang

    2018-01-01

    Energy is vital for the sustainable development of China. Accurate forecasts of annual energy demand are essential for scheduling energy supply and provide valuable guidance for developing related industries. In the existing literature on energy use prediction, artificial intelligence-based (AI-based) models have received considerable attention. However, little econometric and statistical evidence exists to prove the reliability of the current AI-based models, an area that still needs to be addressed. In this study, a new energy demand forecasting framework is presented. On the basis of historical annual electricity usage data over the period 1985–2015, the coefficients of the linear and quadratic forms of the AI-based model are optimized by combining an adaptive genetic algorithm and a cointegration analysis, shown as an example. Prediction results of the proposed model indicate that the annual growth rate of electricity demand in China will slow down. However, China will still demand about 13 trillion kilowatt hours in 2030 because of population growth, economic growth, and urbanization. In addition, the model has greater accuracy and reliability compared with other single optimization methods.

  6. A new algorithm for benchmarking in integer data envelopment analysis

    Directory of Open Access Journals (Sweden)

    M. M. Omran

    2012-08-01

    The aim of this study is to investigate the effect of integer data in data envelopment analysis (DEA). The inputs and outputs in different types of DEA are assumed to be continuous. In most application-oriented problems, however, some or all of the data are integers, so the continuity assumption is violated; for example, the inputs/outputs may represent numbers of cars, people, etc. In fact, the benchmark unit is artificial and does not have integer inputs/outputs after projection onto the efficiency frontier. By rounding off the projection point, we may lose feasibility or end up with an inefficient DMU. In such cases, a benchmark unit is required such that the considered unit becomes efficient. In the present short communication, a novel algorithm is proposed in which an inefficient DMU is projected in such a way that the produced benchmark takes fully integer input/output values.

  7. Sentiment analysis enhancement with target variable in Kumar’s Algorithm

    Science.gov (United States)

    Arman, A. A.; Kawi, A. B.; Hurriyati, R.

    2016-04-01

    Sentiment analysis (also known as opinion mining) refers to the use of text analysis and computational linguistics to identify and extract subjective information in source materials. Sentiment analysis is widely applied to reviews discussion that is being talked in social media for many purposes, ranging from marketing, customer service, or public opinion of public policy. One of the popular algorithm for Sentiment Analysis implementation is Kumar algorithm that developed by Kumar and Sebastian. Kumar algorithm can identify the sentiment score of the statement, sentence or tweet, but cannot determine the relationship of the object or target related to the sentiment being analysed. This research proposed solution for that challenge by adding additional component that represent object or target to the existing algorithm (Kumar algorithm). The result of this research is a modified algorithm that can give sentiment score based on a given object or target.

  8. The Efficiency Analysis of the Augmented Reality Algorithm

    Directory of Open Access Journals (Sweden)

    Dovilė Kurpytė

    2013-05-01

    The article investigates the efficiency of an augmented reality algorithm as it depends on rotation angles and lighting conditions. The varied parameters of the test subjects were the three rotational degrees of freedom of the perspective and side lighting that casts a shadow. The static parameters, which could be changed between experiments, were the distance between the marker and the camera, the camera, the processor, and the distance from the light source. The study is based on an open-source Java implementation of the algorithm, which is tested with 10 markers. It was found that the rotation error did not exceed 2%. (Article in Lithuanian.)

  9. A digital elevation analysis: Spatially distributed flow apportioning algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang-Hyun; Kim, Kyung-Hyun [Pusan National University, Pusan(Korea); Jung, Sun-Hee [Korea Environment Institute, (Korea)

    2001-06-30

    A flow determination algorithm is proposed for distributed hydrologic models. The advantages of single and multiple flow direction schemes are selectively combined to address the drawbacks of existing algorithms. A spatially varied flow apportioning factor is introduced in order to accommodate the accumulated area from upslope cells. The channel initiation threshold area (CIT) concept is expanded and integrated into the spatially distributed flow apportioning algorithm in order to delineate a realistic channel network. An application to a field example suggests that the linearly distributed flow apportioning scheme provides some advantages over existing approaches, such as the relaxation of over-dissipation problems near channel cells, the connectivity of river cells, the continuity of saturated areas, and freedom from the optimization of the few parameters required by existing algorithms. The effects of grid sizes are explored spatially as well as statistically. (author). 28 refs., 7 figs.

  10. Probabilistic analysis algorithm for UA slope software program.

    Science.gov (United States)

    2013-12-01

    A reliability-based computational algorithm for using a single row and equally spaced drilled shafts to : stabilize an unstable slope has been developed in this research. The Monte-Carlo simulation (MCS) : technique was used in the previously develop...

  11. Estimation error algorithm at analysis of beta-spectra

    International Nuclear Information System (INIS)

    Bakovets, N.V.; Zhukovskij, A.I.; Zubarev, V.N.; Khadzhinov, E.M.

    2005-01-01

    This work describes an error estimation algorithm for operations with beta spectra, and compares theoretical and experimental errors in the processing of beta-channel data. (authors)

  12. Analysis of Pathfinder SST algorithm for global and regional conditions

    Indian Academy of Sciences (India)

    SST algorithms to improve the present accuracy of surface temperature measurements ... regions, except in the North Atlantic and adjacent seas, where the residuals are always positive ... the stratosphere causing significant contamination of ...

  13. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance...

  14. The multi-niche crowding genetic algorithm: Analysis and applications

    Energy Technology Data Exchange (ETDEWEB)

    Cedeno, Walter [Univ. of California, Davis, CA (United States)

    1995-09-01

    The ability of organisms to evolve and adapt to the environment has provided mother nature with a rich and diverse set of species. Only organisms well adapted to their environment can survive from one generation to the next, passing on the traits that made them successful to their offspring. Competition for resources and the ever-changing environment drives some species to extinction, while at the same time others evolve to maintain the delicate balance in nature. In this dissertation we present the multi-niche crowding genetic algorithm, a computational metaphor for the survival of species in ecological niches in the face of competition. The multi-niche crowding genetic algorithm maintains stable subpopulations of solutions in multiple niches in multimodal landscapes. The algorithm introduces the concept of crowding selection to promote mating among members with similar traits while allowing many members of the population to participate in mating. The algorithm uses a worst-among-most-similar replacement policy to promote competition among members with similar traits while also allowing competition among members of different niches. We present empirical and theoretical results for the success of the multi-niche crowding genetic algorithm for multimodal function optimization. The properties of the algorithm under different parameters are examined. We test the performance of the algorithm on problems of DNA mapping, aquifer management, and the file design problem, applications that combine the use of heuristics and special operators to solve problems in the areas of combinatorial optimization, grouping, and multi-objective optimization. We conclude by presenting the advantages and disadvantages of the algorithm and describing avenues for future investigation to answer other questions raised by this study.

  15. Embedded gamma spectrometry: new algorithms for spectral analysis

    International Nuclear Information System (INIS)

    Martin-Burtart, Nicolas

    2012-01-01

    Airborne gamma spectrometry was first used for mining prospecting, looking mainly for three families: K-40, U-238 and Th-232. The Chernobyl accident acted as a trigger, and over the last fifteen years many new systems have been developed for intervention in case of nuclear accident or for environmental purposes. Depending on their use, new algorithms were developed, mainly for signal extraction at medium or high energies. These spectral regions are characteristic of natural emissions (the K-40, U-238 and Th-232 decay chains) and fission products (mainly Cs-137 and Co-60). Below 400 keV, where special nuclear materials emit, these methods can still be used but are very imprecise. A new algorithm, called 2-windows (extended to 3-windows), was developed. It allows an accurate extraction, taking the flight altitude into account to minimize false detections. Monitoring of radioactive materials trafficking appeared with homeland security policies a few years ago. This particular use of dedicated sensors requires a new type of algorithm. Previously, one algorithm would be very efficient for a particular nuclide or spectral region; now, algorithms are needed that can detect an anomaly wherever and whatever it is: industrial, medical or SNM. This work identified two families of methods working under these circumstances. Finally, anomalies have to be identified. The IAEA recommends watching around 30 radionuclides. A brand new identification algorithm was developed, using several gamma rays per element and avoiding identification conflicts. (author) [fr]

  16. Performance Analysis of Binary Search Algorithm in RFID

    Directory of Open Access Journals (Sweden)

    Xiangmei SONG

    2014-12-01

    The binary search algorithm (BS) is an important anti-collision algorithm in Radio Frequency Identification (RFID) and one of the key technologies that determine whether the information in a tag can be identified by the reader quickly and reliably. The performance of BS directly affects the quality of service in the Internet of Things. This paper adopts an automated formal technique, probabilistic model checking, to analyze the performance of the BS algorithm formally. First, according to the working principle of the BS algorithm, its dynamic behavior is abstracted into a Discrete Time Markov Chain, which can describe determinism, discrete time and probabilistic choice. On this model we calculate the probability that the data are sent successfully and the expected time for the tags to complete the data transmission. Compared to S-ALOHA, another typical anti-collision protocol in RFID, experimental results show that, as the number of tags increases, the BS algorithm has lower space and time consumption, its average number of collisions grows more slowly than under the standard S-ALOHA protocol, it needs less expected time to complete the data transmission, and its average data transmission speed is 1.6 times that of the S-ALOHA protocol.
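
    A compact simulation of the tree-splitting idea underlying such anti-collision schemes (a generic binary tree walk, not the exact BS variant analysed in the paper): the reader queries ID prefixes, and whenever more than one tag matches, the prefix is split and both halves are queried, resolving collisions bit by bit:

        def identify(tags, prefix=""):
            """Recursively identify all tag IDs (bit strings) matching a prefix.

            Returns (identified_ids, number_of_reader_queries).
            """
            matching = [t for t in tags if t.startswith(prefix)]
            if len(matching) <= 1:          # no response, or one tag identified
                return matching, 1
            # collision: split the ID space and query both halves
            left, q_left = identify(tags, prefix + "0")
            right, q_right = identify(tags, prefix + "1")
            return left + right, 1 + q_left + q_right

        tags = ["0011", "0101", "0110", "1100", "1111"]
        ids, queries = identify(tags)
        print("identified:", ids)
        print("reader queries:", queries)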

  17. Comparative analysis of time efficiency and spatial resolution between different EIT reconstruction algorithms

    International Nuclear Information System (INIS)

    Kacarska, Marija; Loskovska, Suzana

    2002-01-01

    In this paper, a comparative analysis of different EIT reconstruction algorithms is presented. The analysis considers the spatial and temporal resolution of the images obtained by several different algorithms, and discusses how the spatial resolution depends on the data acquisition method. The obtained results show that conventional applied-current EIT is more powerful than induced-current EIT. (Author)

  18. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    Science.gov (United States)

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, while maintaining the advantages of the three algorithms and overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm. The ease of erroneous judgment in static contact angle measurements is avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop
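
    A sketch of the simplest of the three approaches, circle fitting: an algebraic (Kasa) least-squares circle fit to the drop profile, from which the contact angle against the baseline y = 0 follows from the circle geometry as theta = arccos(-yc/R). The synthetic profile stands in for edge points extracted from a real drop image:

        import numpy as np

        def circle_fit_contact_angle(x, y):
            """Kasa algebraic circle fit; contact angle against the baseline y = 0."""
            # solve x^2 + y^2 + D x + E y + F = 0 in least squares
            A = np.column_stack([x, y, np.ones_like(x)])
            b = -(x ** 2 + y ** 2)
            (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
            xc, yc = -D / 2, -E / 2
            R = np.sqrt(xc ** 2 + yc ** 2 - F)
            # for a circular cap resting on y = 0: cos(theta) = -yc / R
            return np.degrees(np.arccos(np.clip(-yc / R, -1, 1)))

        # synthetic 60-degree spherical-cap profile with slight pixel noise
        theta_true = np.radians(60)
        R, yc = 1.0, -np.cos(theta_true)   # center below baseline for theta < 90
        phi = np.linspace(np.pi / 2 - theta_true, np.pi / 2 + theta_true, 80)
        x = R * np.cos(phi) + 0.002 * np.random.randn(phi.size)
        y = yc + R * np.sin(phi) + 0.002 * np.random.randn(phi.size)
        print("fitted contact angle (deg):",
              round(circle_fit_contact_angle(x, y), 2))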

  19. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    Directory of Open Access Journals (Sweden)

    Gafurov Andrey

    2018-01-01

    The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the “Project analysis scenario” flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis provided the broad range of risks in high-rise construction; analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.

  20. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    Science.gov (United States)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis, given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.

  1. Analysis of ANSI N13.11: the performance algorithm

    International Nuclear Information System (INIS)

    Roberson, P.L.; Hadley, R.T.; Thorson, M.R.

    1982-06-01

    The method of performance testing for personnel dosimeters specified in the draft ANSI N13.11, "Criteria for Testing Personnel Dosimetry Performance," is evaluated. Points addressed are: (1) operational behavior of the performance algorithm; (2) dependence on the number of test dosimeters; (3) basis for choosing an algorithm; and (4) other possible algorithms. The performance algorithm evaluated for each test category is formed by adding the calibration bias and its standard deviation. This algorithm is not optimal due to a high dependence on the standard deviation. The dependence of the calibration bias on the standard deviation is significant because of the low number of dosimeters (15) evaluated per category. For categories with large standard deviations, the uncertainty in determining the performance criterion is large. To have a reasonable chance of passing all categories in one test, we required a 95% probability of passing each category. Under this requirement, the maximum permissible standard deviation is 30%, even with a zero bias. For test categories with standard deviations <10%, the bias can be as high as 35%. For intermediate standard deviations, the chance of passing a category is improved by using a 5 to 10% negative bias. Most multipurpose personnel dosimetry systems will probably require detailed calibration adjustments to pass all categories within two rounds of testing.

  2. A fast BDD algorithm for large coherent fault trees analysis

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Han, Sang Hoon; Ha, Jaejoo

    2004-01-01

    Although binary decision diagram (BDD) algorithms have been applied to large fault trees until quite recently, such trees are not solved efficiently, since the size of a BDD structure grows exponentially with the number of variables. Furthermore, the truncation of If-Then-Else (ITE) connectives by a probability or size limit, and the subsuming needed to delete subsets, could not be applied directly to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (the coherent BDD algorithm) in which truncation and subsuming can be performed during the construction of the BDD structure. A set of new formulae developed in this study for the AND and OR operations between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of truncation and subsuming at every step of the calculation, large fault trees for coherent systems (coherent fault trees) are solved efficiently in a short time using less memory. Furthermore, the coherent BDD algorithm is much less sensitive to variable ordering than the conventional BDD algorithm with respect to the size of the BDD structure.
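
    The ITE manipulation at the heart of such algorithms can be illustrated with a toy coherent fault tree. Below is a minimal sketch, assuming terminals 0/1 and a fixed variable order; it shows the recursive AND/OR apply step and an exact top-event probability via Shannon decomposition, but it does not reproduce the paper's truncation and subsuming formulae.

```python
# Toy BDD sketch for a coherent fault tree; illustrative only.
def mk(var, low, high):
    # An ITE node "if var then high else low"; collapse redundant tests.
    return low if low == high else (var, low, high)

def apply_op(op, f, g, memo):
    key = (op, f, g)
    if key not in memo:
        if isinstance(f, int) and isinstance(g, int):      # both terminals
            memo[key] = (f & g) if op == 'and' else (f | g)
        else:
            vf = f[0] if isinstance(f, tuple) else float('inf')
            vg = g[0] if isinstance(g, tuple) else float('inf')
            v = min(vf, vg)                                # expand on the top variable
            f0, f1 = (f[1], f[2]) if vf == v else (f, f)
            g0, g1 = (g[1], g[2]) if vg == v else (g, g)
            memo[key] = mk(v, apply_op(op, f0, g0, memo),
                              apply_op(op, f1, g1, memo))
    return memo[key]

def prob(node, p):
    # Exact top-event probability by Shannon decomposition.
    if isinstance(node, int):
        return float(node)
    v, lo, hi = node
    return (1 - p[v]) * prob(lo, p) + p[v] * prob(hi, p)

var = lambda i: (i, 0, 1)
memo = {}
# Top event of a toy coherent fault tree: TOP = (B1 AND B2) OR B3.
top = apply_op('or', apply_op('and', var(1), var(2), memo), var(3), memo)
print(prob(top, {1: 0.1, 2: 0.2, 3: 0.05}))   # 0.069 = 1 - (1-0.02)*(1-0.05)
```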

  3. Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

    Directory of Open Access Journals (Sweden)

    Agarwal RaviP

    2009-01-01

    Full Text Available We glance at recent advances in the general theory of maximal (set-valued) monotone mappings and their role in examining convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. Investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well in generalizing the proximal point algorithm considered by Rockafellar (1976) to the case of the relaxed proximal point algorithm by Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the overrelaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we attempt to explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, and then apply it to first-order evolution equations/inclusions.

  4. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM]

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
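
    As a rough illustration of the spatial compression idea, the sketch below uses the PyWavelets package (an assumption; the record does not prescribe a library) to decompose an image, keep only the largest 5% of wavelet coefficients, and reconstruct. Downstream multivariate analysis would operate on the sparse coefficient matrix instead of the full pixel grid.

```python
import numpy as np
import pywt

image = np.random.default_rng(0).random((256, 256))   # stand-in for one image channel

# Decompose, then zero all but the largest 5% of wavelet coefficients.
coeffs = pywt.wavedec2(image, 'db2', level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
threshold = np.quantile(np.abs(arr), 0.95)
arr[np.abs(arr) < threshold] = 0.0

# Reconstruct to check how much information the retained coefficients carry.
recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'db2')
print('relative error:', np.linalg.norm(recon - image) / np.linalg.norm(image))
```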

  5. A Parcellation Based Nonparametric Algorithm for Independent Component Analysis with Application to fMRI Data

    Directory of Open Access Journals (Sweden)

    Shanshan eLi

    2016-01-01

    Full Text Available Independent component analysis (ICA) is a widely used technique for separating signals that have been mixed together. In this manuscript, we propose a novel ICA algorithm using density estimation and maximum likelihood, where the densities of the signals are estimated via p-spline based histogram smoothing and the mixing matrix is simultaneously estimated using an optimization algorithm. The algorithm is exceedingly simple, easy to implement, and blind to the underlying distributions of the source signals. To relax the identically distributed assumption in the density function, a modified algorithm is proposed to allow for different density functions in different regions. The performance of the proposed algorithm is evaluated in different simulation settings. For illustration, the algorithm is applied to a research investigation with a large collection of resting-state fMRI datasets. The results show that the algorithm successfully recovers the established brain networks.

  6. Accuracy Analysis of Lunar Lander Terminal Guidance Algorithm

    Directory of Open Access Journals (Sweden)

    E. K. Li

    2017-01-01

    Full Text Available This article studies a proposed analytical algorithm for the terminal guidance of a lunar lander. The analytical solution, which forms the basis of the algorithm, was obtained for a constant-acceleration trajectory and thrust vector orientation programs that are essentially linear with time. The main feature of the proposed algorithm is a completely analytical solution, with no numerical procedures, that provides the lander terminal guidance to the desired spot in 3D space when landing on an atmosphereless body. To reach the 6 terminal conditions (the components of the position and velocity vectors at the final time), 6 guidance law parameters are used, namely the time-to-go, the desired value of braking deceleration, the initial values of the pitch and yaw angles, and their rates of change. In accordance with the principle of flexible trajectories, this algorithm assumes the implementation of a regularly updated control program that ensures reaching the terminal conditions from the current state corresponding to the control program update time. The guidance law parameters, which ensure that the terminal conditions are reached, are generated as a function of the current phase coordinates of the lander. The article examines the accuracy and reliability of the proposed analytical algorithm through mathematical modeling of the lander guidance from the circumlunar pre-landing orbit to the desired spot near the lunar surface. The desired terminal position of the lunar lander is specified by the selenographic latitude, longitude, and altitude above the lunar surface. The impact of variations in orbital parameters on the terminal guidance accuracy has been studied. By varying the five initial orbit parameters (obliquity, ascending node longitude, argument of periapsis, periapsis height, apoapsis height) when the terminal spot is fixed, the statistical characteristics of the terminal guidance algorithm error according to the terminal

  7. Prediction of customer behaviour analysis using classification algorithms

    Science.gov (United States)

    Raju, Siva Subramanian; Dhandayudam, Prabha

    2018-04-01

    Customer relationship management plays a crucial role in analyzing customer behavior patterns and their value to an enterprise. Customer data analysis can be performed efficiently using various data mining techniques, with the goal of developing business strategies and enhancing the business. In this paper, three classification models (NB, J48, and MLPNN) are studied and evaluated for our experimental purpose. The performance measures of the three classifiers are compared using three different parameters (accuracy, sensitivity, specificity), and the experimental results show that the J48 algorithm has better accuracy compared to the NB and MLPNN algorithms.
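
    A minimal sketch of the reported comparison, assuming scikit-learn; DecisionTreeClassifier stands in for J48 (a C4.5 implementation), and synthetic data replaces the customer data set used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {'NB': GaussianNB(),
          'J48 (tree stand-in)': DecisionTreeClassifier(random_state=0),
          'MLPNN': MLPClassifier(max_iter=1000, random_state=0)}

for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    print(f'{name}: acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}')
```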

  8. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the parametric NURBS curve representation. Then, the velocity planning algorithm is coupled with the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.

  9. Using general-purpose compression algorithms for music analysis

    DEFF Research Database (Denmark)

    Louboutin, Corentin; Meredith, David

    2016-01-01

    General-purpose compression algorithms encode files as dictionaries of substrings with the positions of these strings’ occurrences. We hypothesized that such algorithms could be used for pattern discovery in music. We compared LZ77, LZ78, Burrows–Wheeler and COSIATEC on classifying folk song...... in the input data, COSIATEC outperformed LZ77 with a mean F1 score of 0.123, compared with 0.053 for LZ77. However, when the music was processed a voice at a time, the F1 score for LZ77 more than doubled to 0.124. We also discovered a significant correlation between compression factor and F1 score for all...

  10. Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis.

    Science.gov (United States)

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2009-01-06

    In mass spectrometry (MS) based proteomic data analysis, peak detection is an essential step for subsequent analysis. Recently, there has been significant progress in the development of various peak detection algorithms. However, neither a comprehensive survey nor an experimental comparison of these algorithms is yet available. The main objective of this paper is to provide such a survey and to compare the performance of single-spectrum based peak detection methods. In general, we can decompose a peak detection procedure into three consecutive parts: smoothing, baseline correction, and peak finding. We first categorize existing peak detection algorithms according to the techniques used in the different phases. Such a categorization reveals the differences and similarities among existing peak detection algorithms. Then, we choose five typical peak detection algorithms to conduct a comprehensive experimental study using both simulation data and real MALDI MS data. The comparison results show that the continuous wavelet-based algorithm provides the best average performance.
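
    The continuous wavelet-based approach singled out above is available in SciPy; the sketch below is a minimal illustration on a synthetic two-peak spectrum, not the exact pipelines compared in the paper.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

x = np.linspace(0.0, 100.0, 2000)
spectrum = (np.exp(-(x - 30.0) ** 2 / 2.0)
            + 0.6 * np.exp(-(x - 62.0) ** 2 / 4.0)
            + 0.05 * np.random.default_rng(0).standard_normal(x.size))

# widths spans the expected peak widths in samples; the CWT ridgeline approach
# folds smoothing into the transform, so no separate baseline/smoothing step.
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 40))
print('detected peaks near x =', x[peak_idx])
```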

  11. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: how do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  12. An enhanced search algorithm for the charged fuel enrichment in equilibrium cycle analysis of REBUS-3

    International Nuclear Information System (INIS)

    Park, Tongkyu; Yang, Won Sik; Kim, Sang-Ji

    2017-01-01

    Highlights: • An enhanced search algorithm for charged fuel enrichment was developed for equilibrium cycle analysis with REBUS-3. • The new search algorithm is not sensitive to the user-specified initial guesses. • The new algorithm reduces the computational time by a factor of 2–3. - Abstract: This paper presents an enhanced search algorithm for the charged fuel enrichment in equilibrium cycle analysis of REBUS-3. The current enrichment search algorithm of REBUS-3 takes a large number of iterations to yield a converged solution or even terminates without a converged solution when the user-specified initial guesses are far from the solution. To resolve the convergence problem and to reduce the computational time, an enhanced search algorithm was developed. The enhanced algorithm is based on the idea of minimizing the number of enrichment estimates by allowing drastic enrichment changes and by optimizing the current search algorithm of REBUS-3. Three equilibrium cycle problems with recycling, without recycling and of high discharge burnup were defined and a series of sensitivity analyses were performed with a wide range of user-specified initial guesses. Test results showed that the enhanced search algorithm is able to produce a converged solution regardless of the initial guesses. In addition, it was able to reduce the number of flux calculations by a factor of 2.9, 1.8, and 1.7 for equilibrium cycle problems with recycling, without recycling, and of high discharge burnup, respectively, compared to the current search algorithm.

  13. Security Analysis of A Chaos-based Image Encryption Algorithm

    OpenAIRE

    Lian, Shiguo; Sun, Jinsheng; Wang, Zhiquan

    2006-01-01

    The security of the Fridrich image encryption algorithm against brute-force attack, statistical attack, known-plaintext attack and select-plaintext attack is analyzed by investigating the properties of the involved chaotic maps and diffusion functions. Based on the given analyses, some means are proposed to strengthen the overall performance of the focused cryptosystem.

  14. Security analysis of a chaos-based image encryption algorithm

    Science.gov (United States)

    Lian, Shiguo; Sun, Jinsheng; Wang, Zhiquan

    2005-06-01

    The security of Fridrich's algorithm against brute-force attack, statistical attack, known-plaintext attack and select-plaintext attack is analyzed by investigating the properties of the involved chaotic maps and diffusion functions. Based on the given analyses, some means are proposed to strengthen the overall performance of the focused cryptosystem.

  15. Adaptive firefly algorithm: parameter analysis and its application.

    Directory of Open Access Journals (Sweden)

    Ngaam J Cheung

    Full Text Available As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm, the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints with noise-free and 10% Gaussian white noise, respectively.
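
    For reference, the canonical firefly update step that AdaFa adapts looks roughly as follows; beta0, gamma, and alpha are fixed here, whereas AdaFa makes them distance-based, gray-coefficient-based, and dynamic, respectively.

```python
import numpy as np

def firefly_minimize(f, dim=2, n=20, iters=200, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n, dim))
    for _ in range(iters):
        fit = np.array([f(xi) for xi in x])
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                     # move firefly i toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    fit[i] = f(x[i])
        alpha *= 0.98                                   # one simple randomization-decay strategy
    return x[np.argmin([f(xi) for xi in x])]

print(firefly_minimize(lambda v: np.sum(v ** 2)))       # should approach the origin
```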

  16. An algorithm for unified analysis on the thermoluminescence glow curve

    International Nuclear Information System (INIS)

    Chung, K.S.; Park, C.Y.; Lee, J.I.; Kim, J.L.

    2014-01-01

    An algorithm was developed to integrally handle excitation by radiation, relaxation, and luminescence by thermal or optical stimulation in thermoluminescence (TL) and optically stimulated luminescence (OSL) processes. This algorithm reflects the mutual interaction between traps through a conduction band. Electrons and holes are created by radiation in the beginning, and the electrons move to the traps through the conduction band, while the holes move to the recombination center through a valence band. The ratio of the electrons allocated to each trap differs with the recombination probability, and these values are also relevant to the luminescence process. Accordingly, the glow curve can be interpreted by taking the rate of electron–hole pairs created by ionizing radiation as a unique initial condition. This method differs from the conventional method of interpreting the measured glow curve with the initial electron concentration allocated to each trap at the end of irradiation. A program was written in C# under Visual Studio to implement the developed algorithm. To verify the algorithm, it was applied to LiF:Mg,Cu,Si. The TL glow curve was deconvoluted with a model of five traps, one deep trap, and one recombination center (RC). - Highlights: • TL glow curve deconvolution employing an interacting model. • Simulation of both irradiation and TL readout stages for various dose levels. • Application to the identification of the TL kinetics of LiF:Mg,Cu,Si TLD

  17. Hardware Genetic Algorithm Optimization by Critical Path Analysis using a Custom VLSI Architecture

    Directory of Open Access Journals (Sweden)

    Farouk Smith

    2015-07-01

    Full Text Available This paper proposes a Virtual Field-Programmable Gate Array (V-FPGA) architecture that allows direct access to its configuration bits to facilitate hardware evolution, thereby allowing any combinational or sequential digital circuit to be realized. Using the V-FPGA, this paper investigates two possible ways of making evolutionary hardware systems more scalable: by optimizing the system's genetic algorithm (GA), and by decomposing the solution circuit into smaller, evolvable sub-circuits. GA optimization is done by omitting a canonical GA's crossover operator (i.e., by using a (1+λ) algorithm, as sketched below), applying evolution constraints, and optimizing the fitness function. A noteworthy contribution of this research is the in-depth analysis of the phenotypes' critical paths (CPs). Through analyzing the CPs, it has been shown that a great amount of insight can be gained into a phenotype's fitness. We found that as the number of columns in the Cartesian Genetic Programming array increases, the likelihood of an external output being placed in a column decreases. Furthermore, the number of used LEs per column also decreases substantially with each added column. Finally, we demonstrated the evolution of a state-decomposed control circuit. It was shown that the evolution of each state's sub-circuit was possible, and the results suggest that modular evolution can be a successful tool when dealing with scalability.
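
    The crossover-free (1+λ) EA mentioned above can be sketched in a few lines; the bit-string genome and OneMax fitness are illustrative stand-ins for the V-FPGA configuration bits and circuit fitness.

```python
import random

def one_plus_lambda(fitness, n_bits=64, lam=4, generations=500, seed=0):
    random.seed(seed)
    p_flip = 1.0 / n_bits                         # standard per-bit mutation rate
    parent = [random.randint(0, 1) for _ in range(n_bits)]
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(lam):                      # lambda offspring, mutation only
            child = [b ^ (random.random() < p_flip) for b in parent]
            fc = fitness(child)
            if fc >= best:                        # accept ties to drift across plateaus
                parent, best = child, fc
    return parent, best

genome, score = one_plus_lambda(sum)              # OneMax as a toy fitness
print(score)
```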

  18. The Reach-and-Evolve Algorithm for Reachability Analysis of Nonlinear Dynamical Systems

    NARCIS (Netherlands)

    P.J. Collins (Pieter); A. Goldsztejn

    2008-01-01

    This paper introduces a new algorithm dedicated to the rigorous reachability analysis of nonlinear dynamical systems. The algorithm is initially presented in the context of discrete-time dynamical systems, and then extended to continuous-time dynamical systems driven by ODEs.

  19. Analysis of Various Multi-Objective Optimization Evolutionary Algorithms for Monte Carlo Treatment Planning System

    CERN Document Server

    Tydrichova, Magdalena

    2017-01-01

    In this project, various available multi-objective optimization evolutionary algorithms were compared considering their performance and the distribution of solutions. The main goal was to select the most suitable algorithms for applications in cancer hadron therapy planning. For our purposes, comprehensive testing and analysis software was developed. Many conclusions and hypotheses have also been drawn for further research.

  20. Ecodriver. D23.2: Simulation and analysis document for on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Orfila, O.; Saintpierre, G.

    2014-01-01

    This deliverable reports on the simulations and analysis of the on-line vehicle algorithms as well as the ecoDriver Android application. The simulation and field test results give an impression of how the algorithms will perform in the real world trials in SP3. Thus, it is possible to apply

  1. An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China.

    Science.gov (United States)

    Zou, Hui; Zou, Zhihong; Wang, Xiaojing

    2015-11-12

    The increasing volume and complexity of data caused by uncertain environments are today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means clustering approach to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids cases in which the iteration margin cannot be calculated. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied in water quality analysis of the Haihe River (China) data obtained by the monitoring network over a period of eight years (2006-2013) with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality.
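
    A minimal sketch of K-means with per-indicator weights; the weights would come from an indicator-weighting step such as MIWAS, and the synthetic matrix stands in for the Haihe monitoring data.

```python
import numpy as np

def weighted_kmeans(X, w, k=3, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    Xw = X * np.sqrt(w)                       # weighted Euclidean distance via scaling
    centers = Xw[rng.choice(len(Xw), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(Xw[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([Xw[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

X = np.random.default_rng(1).random((2078, 4))    # 2078 samples, 4 indicators
labels = weighted_kmeans(X, w=np.array([0.4, 0.3, 0.2, 0.1]))
print(np.bincount(labels))                        # cluster sizes
```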

  2. Analysis of Known Linear Distributed Average Consensus Algorithms on Cycles and Paths

    Directory of Open Access Journals (Sweden)

    Jesús Gutiérrez-Gutiérrez

    2018-03-01

    Full Text Available In this paper, we compare six known linear distributed average consensus algorithms on a sensor network in terms of convergence time (and therefore, in terms of the number of transmissions required). The selected network topologies for the analysis (comparison) are the cycle and the path. Specifically, in the present paper, we compute closed-form expressions for the convergence time of four known deterministic algorithms and closed-form bounds for the convergence time of two known randomized algorithms on cycles and paths. Moreover, we also compute a closed-form expression for the convergence time of the fastest deterministic algorithm considered on grids.
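
    A minimal sketch of one linear consensus iteration on a cycle, using uniform neighbor weights; this is a generic example, not one of the six algorithms analyzed in the paper.

```python
import numpy as np

n = 8                                    # nodes on a cycle
W = np.zeros((n, n))
for i in range(n):                       # each node averages itself and its 2 neighbors
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = np.random.default_rng(0).random(n)   # initial sensor readings
target = x.mean()                        # W is doubly stochastic, so x converges to this
for k in range(200):
    x = W @ x                            # one synchronous consensus iteration
    if np.max(np.abs(x - target)) < 1e-6:
        print(f'converged to the average in {k + 1} iterations')
        break
```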

  3. ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mohammed Adam Taheir Mohammed

    2016-02-01

    Full Text Available In this paper, the parameterization value reduction of soft sets and its algorithm in decision making are studied and described. It is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of the parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to remove the least significant parameter values from the soft set. Through the analysis, two techniques have been described. Through this study, it is found that the parameterization reduction of soft sets and its algorithm yields differing and inconsistent suboptimal results.

  4. Analysis and improvement of a chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wei Pengcheng

    2009-01-01

    The security of digital images has attracted much attention recently. In Guan et al. [Guan Z, Huang F, Guan W. Chaos-based image encryption algorithm. Phys Lett A 2005; 346: 153-7.], a chaos-based image encryption algorithm was proposed. In this paper, the cause of potential flaws in the original algorithm is analyzed in detail, and the corresponding enhancement measures are proposed. Both theoretical analysis and computer simulation indicate that the improved algorithm can overcome these flaws and maintain all the merits of the original one.

  5. Distributed Scheduling in Time Dependent Environments: Algorithms and Analysis

    OpenAIRE

    Shmuel, Ori; Cohen, Asaf; Gurewitz, Omer

    2017-01-01

    Consider the problem of a multiple access channel in a time-dependent environment with a large number of users. In such a system, mostly due to practical constraints (e.g., decoding complexity), not all users can be scheduled together, and usually only one user may transmit at any given time. Assuming a distributed, opportunistic scheduling algorithm, we analyse the system's properties, such as delay, QoS and capacity scaling laws. Specifically, we start with analyzing the performance while ...

  6. Prediction and Analysis of students Behavior using BARC Algorithm

    OpenAIRE

    M.Sindhuja; Dr.S.Rajalakshmi; S.M.Nandagopal

    2013-01-01

    Educational data mining is a recent trend in which data mining methods are applied to improve student academic performance. This work describes the mining of attributes related to higher-education students, such as behavior, attitude, and relationships. The data were collected from a higher education institution in terms of the mentioned attributes. The proposed work explored the Behavior Attitude Relationship Clustering (BARC) algorithm, which showed the improvement in students' per...

  7. A Probabilistic Analysis of the Nxt Forging Algorithm

    Directory of Open Access Journals (Sweden)

    Serguei Popov

    2016-12-01

    Full Text Available We discuss the forging algorithm of Nxt from a probabilistic point of view, and obtain explicit formulas and estimates for several important quantities, such as the probability that an account generates a block, the length of the longest sequence of consecutive blocks generated by one account, and the probability that one concurrent blockchain wins over another one. Also, we discuss some attack vectors related to splitting an account into many smaller ones.

  8. A new LMS algorithm for analysis of atrial fibrillation signals

    OpenAIRE

    Ciaccio Edward J; Biviano Angelo B; Whang William; Garan Hasan

    2012-01-01

    Background: A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Method: Equations for normalization of x-axis and y-axis shift and scale are first derived. The algori...

  9. Concept analysis of cancer survivorship and contributions to oncological nursing.

    Science.gov (United States)

    de Oliveira, Rafaela Azevedo Abrantes; da Conceição, Vander Monteiro; Araujo, Jeferson Santos; Zago, Márcia Maria Fontão

    2018-02-01

    This study aims to analyse the concept of cancer survivorship using Rodgers' evolutionary concept analysis model. The lack of a consensus definition as well as the confusion and debate concerning the definitions of "survivor" and "cancer survivorship" hinder an understanding of the intrinsic needs associated with the latter. Concept analysis. A systematic literature search was performed using the following databases: PubMed, CINAHL, Web of Science, LILACS, and PsycINFO with studies published between 2000 and 2014. The final sample contained 39 studies that were analysed on the basis of Rodgers' model and inductive thematic analysis, discussed through the lens of the medical anthropology concept of culture. Cancer survivorship is a broad concept that can be understood using 8 themes: changes in life plans, positive and negative aspect dualities, life reflections, identity change, individual experiences, symptom control, the need for support, and quality of care. These themes are summarized using 2 attributes: liminality process and culturally congruent care. This article contributes to understanding of cancer survivorship and the processes that are intrinsic to this concept. It calls for future investigations to enhance cancer survivorship across its 2 domains at the personal (patient's life) and clinical (nursing practice) levels. © 2017 John Wiley & Sons Australia, Ltd.

  10. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

    Full Text Available Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms have to be designed specifically for parallel computation. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example, as sketched below. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).
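
    The tri-diagonal example can be made concrete with classic cyclic reduction, in which each level's eliminations are mutually independent and hence parallelizable; this is a generic sketch of cyclic reduction, not the ACER algorithm itself.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    # Solve a tridiagonal system with n = 2**m - 1 unknowns:
    #   a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],  with a[0] = c[n-1] = 0.
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    m = int(np.log2(n + 1))
    # Forward reduction: each level eliminates the odd-indexed unknowns; the
    # inner loop iterations are independent, which is what makes it parallel.
    for l in range(m - 1):
        h = 2 ** l
        for i in range(2 * h - 1, n, 2 * h):
            al = a[i] / b[i - h]
            be = c[i] / b[i + h]
            b[i] -= al * c[i - h] + be * a[i + h]
            d[i] -= al * d[i - h] + be * d[i + h]
            a[i] = -al * a[i - h]
            c[i] = -be * c[i + h]
    # Back substitution, coarsest level first; out-of-range neighbors count as 0.
    x = np.zeros(n)
    for l in range(m - 1, -1, -1):
        h = 2 ** l
        for i in range(h - 1, n, 2 * h):
            xl = x[i - h] if i - h >= 0 else 0.0
            xr = x[i + h] if i + h < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
    return x

# Check against a dense solve on a small diagonally dominant system (n = 7).
rng = np.random.default_rng(0)
n = 7
b_ = 4 + rng.random(n); a_ = rng.random(n); c_ = rng.random(n); d_ = rng.random(n)
a_[0] = c_[-1] = 0.0
A = np.diag(b_) + np.diag(a_[1:], -1) + np.diag(c_[:-1], 1)
print(np.allclose(cyclic_reduction(a_, b_, c_, d_), np.linalg.solve(A, d_)))  # True
```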

  11. The MUSIC algorithm for sparse objects: a compressed sensing analysis

    International Nuclear Information System (INIS)

    Fannjiang, Albert C

    2011-01-01

    The multiple signal classification (MUSIC) algorithm, and its extension for imaging sparse extended objects, with noisy data is analyzed by compressed sensing (CS) techniques. A thresholding rule is developed to augment the standard MUSIC algorithm. The notion of the restricted isometry property (RIP) and an upper bound on the restricted isometry constant (RIC) are employed to establish sufficient conditions for exact localization by MUSIC with or without noise. In the noiseless case, the sufficient condition gives an upper bound on the numbers of random sampling and incident directions necessary for exact localization. In the noisy case, the sufficient condition additionally assumes an upper bound for the noise-to-object ratio in terms of the RIC and the dynamic range of objects. This bound points to the super-resolution capability of the MUSIC algorithm. A rigorous comparison of performance between MUSIC and the CS minimization principle, basis pursuit denoising (BPDN), is given. In general, the MUSIC algorithm is guaranteed to recover, with high probability, s scatterers with n = O(s^2) random sampling and incident directions and a sufficiently high frequency. For the favorable imaging geometry where the scatterers are distributed on a transverse plane, MUSIC is guaranteed to recover, with high probability, s scatterers with a median frequency and n = O(s) random sampling/incident directions. Moreover, for the problems of spectral estimation and source localization, both BPDN and MUSIC are guaranteed, with high probability, to identify exactly the frequencies of random signals with n = O(s) sampling times. However, in the absence of abundant realizations of signals, BPDN is the preferred method for spectral estimation. Indeed, BPDN can identify the frequencies approximately with just one realization of signals, with the recovery error at worst linearly proportional to the noise level. Numerical results confirm that BPDN outperforms MUSIC in the well-resolved case while
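
    A minimal sketch of MUSIC for frequency estimation on synthetic data; the thresholding rule and imaging setting analyzed in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
f_true, N, M = [0.12, 0.31], 256, 16          # true frequencies, samples, window length

t = np.arange(N)
x = sum(np.exp(2j * np.pi * f * t) for f in f_true)
x += 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Sample covariance from overlapping length-M windows of the signal.
windows = np.array([x[i:i + M] for i in range(N - M)])
R = windows.T @ windows.conj() / len(windows)

# Noise subspace: eigenvectors beyond the s = 2 largest eigenvalues (eigh sorts ascending).
w, V = np.linalg.eigh(R)
En = V[:, :-2]

freqs = np.linspace(0.0, 0.5, 1000)
A = np.exp(2j * np.pi * np.outer(np.arange(M), freqs))     # steering vectors a(f)
p = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2     # MUSIC pseudospectrum

peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
print(sorted(freqs[i] for i in sorted(peaks, key=lambda i: p[i])[-2:]))  # ~ [0.12, 0.31]
```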

  12. Hungarian contribution to the Global Soil Organic Carbon Map (GSOC17) using advanced machine learning algorithms and geostatistics

    Science.gov (United States)

    Szatmári, Gábor; Laborczi, Annamária; Takács, Katalin; Pásztor, László

    2017-04-01

    The knowledge about soil organic carbon (SOC) baselines and changes, and the detection of vulnerable hot spots for SOC losses and gains under climate change and changed land management, is still fairly limited. Thus, the Global Soil Partnership (GSP) has been requested to develop a global SOC mapping campaign by 2017. The GSP's concept builds on official national data sets; therefore, a bottom-up (country-driven) approach is pursued. The elaborated Hungarian methodology suits the general specifications of GSOC17 provided by the GSP. The input data for the GSOC17@HU mapping approach have involved legacy soil databases, as well as proper environmental covariates related to the main soil forming factors, such as climate, organisms, relief, and parent material. Nowadays, digital soil mapping (DSM) highly relies on the assumption that soil properties of interest can be modelled as the sum of a deterministic and a stochastic component, which can be treated and modelled separately. We also adopted this assumption in our methodology. In practice, multiple regression techniques are commonly used to model the deterministic part. However, these global (and usually linear) models commonly oversimplify the often complex and non-linear relationships, which has a crucial effect on the resulting soil maps. Thus, we integrated machine learning algorithms (namely random forest and quantile regression forest) in the elaborated methodology, supposing them to be more suitable for the problem at hand. This approach has enabled us to model the GSOC17 soil properties in forms as complex and non-linear as the soil itself. Furthermore, it has enabled us to model and assess the uncertainty of the results, which is highly relevant in decision making. The applied methodology used a geostatistical approach to model the stochastic part of the spatial variability of the soil properties of interest. We created the GSOC17@HU map with 1 km grid resolution according to the GSP's specifications. The map contributes to the GSPs
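
    A minimal sketch of the regression-forest step, assuming scikit-learn; the covariates and SOC values are synthetic stand-ins for the Hungarian legacy soil data, and the per-tree spread is only a crude proxy for quantile regression forest uncertainty.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))                      # e.g. climate, relief, vegetation, parent material
soc = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.3 * rng.standard_normal(500)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, soc)
grid = rng.random((10, 4))                    # stand-in for 1 km prediction grid cells
pred = rf.predict(grid)

# Per-tree predictions give a rough predictive distribution per grid cell.
per_tree = np.stack([t.predict(grid) for t in rf.estimators_])
lo, hi = np.percentile(per_tree, [5, 95], axis=0)
print(np.c_[pred, lo, hi])                    # prediction with an uncertainty band
```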

  13. Error Analysis of Inertial Navigation Systems Using Test Algorithms

    OpenAIRE

    Vaispacher, Tomáš; Bréda, Róbert; Adamčík, František

    2015-01-01

    This contribution addresses inertial sensor errors, the specification of inertial measurement units, and the generation of test signals for an Inertial Navigation System (INS). Given the different levels of navigation tasks, part of this contribution is a comparison of current types of inertial measurement units. Based on this comparison, a way of handling inertial sensor errors and modelling them for low-cost inertial navigation applications is proposed. The last part is ...

  14. The improved Apriori algorithm based on matrix pruning and weight analysis

    Science.gov (United States)

    Lang, Zhenhong

    2018-04-01

    Drawing on matrix compression and weight analysis algorithms, this paper proposes an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs a boolean transaction matrix. By counting the ones in the rows and columns of the matrix, infrequent item sets are pruned, and a new candidate item set is formed. Then the item weights, the transaction weights, and the weighted supports of items are calculated, and the frequent item sets are obtained. The experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data correlation mining.
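
    A minimal sketch of the boolean-matrix flavor of Apriori: after one scan builds the matrix, ones are counted per column to prune infrequent items, and candidate pairs are counted by column-wise AND; the weighting step of the improved algorithm is omitted.

```python
import numpy as np

T = np.array([[1, 1, 0, 1],     # one row per transaction, one column per item
              [1, 0, 1, 1],
              [0, 1, 1, 1],
              [1, 1, 1, 0]], dtype=bool)
min_support = 2

support1 = T.sum(axis=0)                        # column sums = 1-item supports
frequent = np.flatnonzero(support1 >= min_support)

for a in range(len(frequent)):                  # candidate 2-itemsets via column AND
    for b in range(a + 1, len(frequent)):
        i, j = frequent[a], frequent[b]
        s = np.sum(T[:, i] & T[:, j])
        if s >= min_support:
            print(f'items ({i}, {j}) support={s}')
```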

  15. Orthogonal Algorithm of Logic Probability and Syndrome-Testable Analysis

    Institute of Scientific and Technical Information of China (English)

    1990-01-01

    A new method, the orthogonal algorithm, is presented to compute logic probabilities (i.e., signal probabilities) accurately. The transfer properties of logic probabilities are studied first; these are useful for calculating the logic probability of a circuit with random independent inputs. Then the orthogonal algorithm is described for computing the logic probability of a Boolean function realized by a combinational circuit. The algorithm makes the Boolean function "orthogonal" so that the logic probabilities can be easily calculated by summing the logic probabilities of all orthogonal terms of the Boolean function.

  16. Grooming analysis algorithm: use in the relationship between sleep deprivation and anxiety-like behavior.

    Science.gov (United States)

    Pires, Gabriel N; Tufik, Sergio; Andersen, Monica L

    2013-03-05

    Increased anxiety is a classic effect of sleep deprivation. However, results regarding sleep deprivation-induced anxiety-like behavior are contradictory in rodent models. The grooming analysis algorithm is a method developed to examine anxiety-like behavior and stress in rodents, based on grooming characteristics and microstructure. This study evaluated the applicability of the grooming analysis algorithm to distinguish sleep-deprived and control rats in comparison to traditional grooming analysis. Forty-six animals were distributed into three groups: control (n=22), paradoxical sleep-deprived (96 h, n=10) and total sleep deprived (6 h, n=14). Immediately after the sleep deprivation protocol, grooming was evaluated using both the grooming analysis algorithm and traditional measures (grooming latency, frequency and duration). Results showed that both paradoxical sleep-deprived and total sleep-deprived groups displayed grooming in a fragmented framework when compared to control animals. Variables from the grooming analysis algorithm were successful in distinguishing sleep-deprived and normal sleep animals regarding anxiety-like behavior. The grooming analysis algorithm and traditional measures were strongly correlated. In conclusion, the grooming analysis algorithm is a reliable method to assess the relationship between anxiety-like behavior and sleep deprivation. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Convergence analysis of variational and non-variational multigrid algorithms for the Laplace-Beltrami operator

    KAUST Repository

    Bonito, Andrea; Pasciak, Joseph E.

    2012-01-01

    is captured well enough by the coarsest grid. The main argument hinges on a perturbation analysis from an auxiliary variational algorithm defined directly on the smooth surface. In addition, the vanishing mean value constraint is imposed on each level, thereby

  18. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  19. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Full Text Available Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  20. Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.

    Science.gov (United States)

    Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu

    2017-01-01

    The Steiner tree problem (STP) aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has the minimum weight, which is NP-hard. STP includes several important cases. The Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have proved to be performance-guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there is no theoretical work on the performance of EAs for STP. In this article, we reveal that the (1+1) EA achieves a 3/2-approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime [Formula: see text], where [Formula: see text], [Formula: see text], and [Formula: see text] are, respectively, the number of Steiner nodes, the number of special nodes, and the largest weight among all edges in the input graph. We also show that the (1+1) EA is better than two other heuristics on two GSTP instances, and that the (1+1) EA may be inefficient on a constructed GSTP instance.

  1. Pomegranate MR images analysis using ACM and FCM algorithms

    Science.gov (United States)

    Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation of an image plays an important role in image processing applications. In this paper, segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has healthy nutritional and medicinal properties; the maturity indices and the quality of the internal tissues play an important role in the sorting process, and an acceptable determination of these features cannot easily be achieved by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes, such as non-destructive investigation, segmentation of the internal structures should be performed as exactly as possible in order to determine the ripening index and the percentage of seeds during the growth period. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensities of the stem and calyx are close to those of the internal tissues, stem and calyx pixels are usually mislabeled as internal tissue by a segmentation algorithm. To solve this problem, the fruit shape is first extracted from its background using an active contour model (ACM). Then the stem and calyx are removed using morphological filters. Finally, the image is segmented by fuzzy c-means (FCM). The experimental results represent an accuracy of 95.91% in the presence of stem and calyx, while the accuracy of segmentation increases to 97.53% when the stem and calyx are first removed by morphological filters.
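
    The final FCM step can be sketched as follows on synthetic one-dimensional intensities; the ACM extraction and morphological filtering stages are not reproduced.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ij ∝ d_ij^(-2/(m-1)).
        U_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(U_new - U)) < tol:
            break
        U = U_new
    return U, centers

# Two intensity populations standing in for seeds vs. soft tissue.
X = np.r_[np.random.default_rng(1).normal(0.3, 0.05, 500),
          np.random.default_rng(2).normal(0.7, 0.05, 500)][:, None]
U, centers = fcm(X)
print(centers.ravel())                            # near 0.3 and 0.7
```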

  2. Study on characteristic points of boiling curve by using wavelet analysis and genetic algorithm

    International Nuclear Information System (INIS)

    Wei Huiming; Su Guanghui; Qiu Suizheng; Yang Xingbo

    2009-01-01

    Based on the wavelet analysis theory of signal singularity detection, the critical heat flux (CHF) and the minimum film boiling starting point (q_min) of boiling curves can be detected and analyzed using wavelet multi-resolution analysis. To predict the CHF in engineering, empirical relations were obtained based on a genetic algorithm. The results of wavelet detection and genetic algorithm prediction agree very well with the experimental data. (authors)

  3. Effectiveness of the random sequential absorption algorithm in the analysis of volume elements with nanoplatelets

    DEFF Research Database (Denmark)

    Pontefisso, Alessandro; Zappalorto, Michele; Quaresimin, Marino

    2016-01-01

    In this work, a study of the Random Sequential Absorption (RSA) algorithm in the generation of nanoplatelet Volume Elements (VEs) is carried out. The effect of the algorithm input parameters on the reinforcement distribution is studied through the implementation of statistical tools, showing...... that the platelet distribution is systematically affected by these parameters. The consequence is that a parametric analysis of the VE input parameters may be biased by hidden differences in the filler distribution. The same statistical tools used in the analysis are implemented in a modified RSA algorithm...
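
    A minimal 2D sketch of the RSA acceptance rule using disks instead of platelets; the input parameters (box size, radius, attempt budget) are the kind of knobs whose statistical influence the paper analyzes.

```python
import numpy as np

def rsa_disks(box=1.0, radius=0.03, max_attempts=20000, seed=0):
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(max_attempts):
        p = rng.random(2) * (box - 2 * radius) + radius     # keep the disk inside the box
        # Accept only if the new disk overlaps no previously placed disk.
        if all(np.hypot(*(p - q)) >= 2 * radius for q in centers):
            centers.append(p)
    return np.array(centers)

centers = rsa_disks()
print(len(centers), 'disks placed; area fraction =',
      len(centers) * np.pi * 0.03 ** 2)
```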

  4. Signal Quality Improvement Algorithms for MEMS Gyroscope-Based Human Motion Analysis Systems: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Jiaying Du

    2018-04-01

    Full Text Available Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.

  5. Signal Quality Improvement Algorithms for MEMS Gyroscope-Based Human Motion Analysis Systems: A Systematic Review.

    Science.gov (United States)

    Du, Jiaying; Gerdtman, Christer; Lindén, Maria

    2018-04-06

    Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
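
    As an illustration of the Kalman-filter group, the sketch below fuses a drifting gyroscope rate with a noisy accelerometer-derived angle using a two-state (angle, bias) filter; the noise values are illustrative, not taken from any reviewed paper.

```python
import numpy as np

dt, Q_angle, Q_bias, R_meas = 0.01, 1e-5, 1e-6, 1e-2
F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle integrates the (rate - bias); bias is a random walk
H = np.array([[1.0, 0.0]])               # we measure the angle only

def kalman_step(x, P, gyro_rate, accel_angle):
    # Predict: integrate the bias-corrected gyro rate.
    x = np.array([x[0] + dt * (gyro_rate - x[1]), x[1]])
    P = F @ P @ F.T + np.diag([Q_angle, Q_bias])
    # Update with the accelerometer-derived angle measurement.
    S = H @ P @ H.T + R_meas
    K = (P @ H.T) / S
    x = x + (K * (accel_angle - x[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for _ in range(2000):                    # standstill: true angle 0, gyro biased by 0.5 rad/s
    x, P = kalman_step(x, P, gyro_rate=0.5, accel_angle=rng.normal(0.0, 0.1))
print('estimated gyro bias:', x[1])      # approaches the injected 0.5 rad/s
```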

  6. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR_C) and (4) GREIT with individual thorax geometry (GR_T). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)

  7. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms.

  8. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    Science.gov (United States)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimating transmitter locations from these nonlinear equations may not be very accurate because of receiver location errors and receiver measurement errors, and solving them exactly can impose a high computational burden. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
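
    As an illustration of the least-squares family of estimators reviewed above, the following sketch solves the circular (TOA) equations with a few Gauss-Newton iterations; the receiver layout, noise term and starting point are hypothetical, and real systems add safeguards against poor geometry and divergence.

        import numpy as np

        def toa_gauss_newton(receivers, ranges, x0, iters=20):
            """Gauss-Newton solution of the circular (TOA) equations:
            minimize sum_i (||x - r_i|| - d_i)^2 over the transmitter position x."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                diff = x - receivers                  # (n, 2)
                dist = np.linalg.norm(diff, axis=1)   # predicted ranges
                J = diff / dist[:, None]              # Jacobian of ||x - r_i||
                res = dist - ranges                   # range residuals
                x -= np.linalg.lstsq(J, res, rcond=None)[0]  # Gauss-Newton step
            return x

        receivers = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
        true_pos = np.array([30.0, 70.0])
        # Small common bias standing in for measurement noise.
        ranges = np.linalg.norm(receivers - true_pos, axis=1) + 0.1
        print(toa_gauss_newton(receivers, ranges, x0=[50.0, 50.0]))  # close to (30, 70)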

  9. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xinya [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Deng, Zhiqun Daniel [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Rauchenstein, Lynn T. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Carlson, Thomas J. [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland, Washington 99352, USA

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations introduce accuracy challenges because of the existence of measurement errors, and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  10. Computational Analysis of 3D Ising Model Using Metropolis Algorithms

    International Nuclear Information System (INIS)

    Sonsin, A F; Cortes, M R; Nunes, D R; Gomes, J V; Costa, R S

    2015-01-01

    We simulate the Ising model with the Monte Carlo method and use the Metropolis algorithm to update the distribution of spins. We found that, in the specific case of the three-dimensional Ising model, the Metropolis method is efficient. Studying the system near the point of phase transition, we observe that the magnetization goes to zero. In our simulations we analyzed the behavior of the magnetization and magnetic susceptibility to verify the phase transition from a paramagnetic to a ferromagnetic material. The behavior of the magnetization and of the magnetic susceptibility as a function of temperature suggests a phase transition around kT/J ≈ 4.5, and the finite-size effects of the lattice became evident when working with large lattices. (paper)
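
    A minimal sketch of the Metropolis update for the 3D Ising model follows; the lattice size, temperature and number of sweeps are illustrative choices, not the values used in the study.

        import numpy as np

        def metropolis_sweep(spins, beta, rng):
            """One Metropolis sweep of a 3D Ising lattice with periodic boundaries (J = 1)."""
            L = spins.shape[0]
            for _ in range(spins.size):
                i, j, k = rng.integers(0, L, size=3)
                # Sum of the six nearest neighbours under periodic boundary conditions.
                nb = (spins[(i+1) % L, j, k] + spins[(i-1) % L, j, k]
                      + spins[i, (j+1) % L, k] + spins[i, (j-1) % L, k]
                      + spins[i, j, (k+1) % L] + spins[i, j, (k-1) % L])
                dE = 2.0 * spins[i, j, k] * nb        # energy change of flipping one spin
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j, k] *= -1
            return spins

        rng = np.random.default_rng(1)
        L, beta = 8, 1.0 / 4.5                        # near the reported transition kT/J ~ 4.5
        spins = rng.choice([-1, 1], size=(L, L, L))
        for _ in range(200):
            metropolis_sweep(spins, beta, rng)
        print("magnetization per spin:", spins.mean())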

  11. Analysis of Shrinkage on Thick Plate Part using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Najihah S.N.

    2016-01-01

    Full Text Available Injection moulding is one of the most widely used processes in manufacturing plastic products. Since the quality of injection-moulded plastic parts is mostly influenced by process conditions, a method to determine the optimum process conditions becomes the key to improving part quality. This paper presents a systematic methodology to analyse the shrinkage of a thick plate part during the injection moulding process. A Genetic Algorithm (GA) method was proposed to optimise the process parameters that would result in optimal solutions of the optimisation goals. Using the GA, the shrinkage of the thick plate part was improved by 39.1% in the parallel direction and 17.21% in the normal direction of melt flow.
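
    For orientation, a generic real-coded GA sketch in the spirit of the paper is shown below; the surrogate shrinkage function, parameter bounds and GA settings are hypothetical stand-ins for the moulding-process simulation the study would actually evaluate.

        import numpy as np

        rng = np.random.default_rng(2)

        def shrinkage(params):
            """Hypothetical surrogate for the shrinkage predicted from process
            parameters (e.g., melt temperature, packing pressure, cooling time);
            the real study would query a moulding simulation here."""
            return np.sum((params - np.array([220.0, 80.0, 20.0])) ** 2 / 1e4, axis=-1)

        def ga_minimize(fitness, bounds, pop=40, gens=100, mut=0.1):
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(pop, len(lo)))
            for _ in range(gens):
                f = fitness(x)
                parents = x[np.argsort(f)[: pop // 2]]          # truncation selection
                mates = parents[rng.permutation(len(parents))]
                children = 0.5 * (parents + mates)              # arithmetic crossover
                children += mut * (hi - lo) * rng.standard_normal(children.shape)
                x = np.clip(np.vstack([parents, children]), lo, hi)
            return x[np.argmin(fitness(x))]

        bounds = (np.array([180.0, 50.0, 5.0]), np.array([260.0, 120.0, 40.0]))
        print(ga_minimize(shrinkage, bounds))   # approaches the surrogate optimum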

  12. Analysis of Population Diversity of Dynamic Probabilistic Particle Swarm Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Qingjian Ni

    2014-01-01

    Full Text Available In evolutionary algorithms, population diversity is an important factor for solving performance. In this paper, combined with population diversity analysis methods from other evolutionary algorithms, three indicators are introduced as measures of population diversity in PSO algorithms: the standard deviation of population fitness values, the population entropy, and the Manhattan norm of the standard deviation of population positions. The three measures are used to analyze the population diversity in a relatively new PSO variant, Dynamic Probabilistic Particle Swarm Optimization (DPPSO). The results show that the three measures can fully reflect the evolution of population diversity in DPPSO algorithms from different angles, and we also discuss the impact of population diversity on the DPPSO variants. The relevant conclusions on population diversity in DPPSO can be used to analyze, design, and improve DPPSO algorithms, thus improving optimization performance, which could also be beneficial for understanding the working mechanism of DPPSO theoretically.
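
    A small sketch of how the three indicators above might be computed for a single swarm iteration; the histogram bin count for the entropy and the sphere-function test data are arbitrary choices.

        import numpy as np

        def diversity_indicators(positions, fitness):
            """The three diversity measures discussed above, for one iteration
            (positions: particles x dimensions array, fitness: per-particle values)."""
            # (1) standard deviation of population fitness values
            fitness_std = np.std(fitness)
            # (2) population entropy from a fitness histogram (bin count is a choice)
            hist, _ = np.histogram(fitness, bins=10)
            p = hist[hist > 0] / hist.sum()
            entropy = -np.sum(p * np.log(p))
            # (3) Manhattan norm of the per-dimension std of particle positions
            position_spread = np.sum(np.abs(np.std(positions, axis=0)))
            return fitness_std, entropy, position_spread

        rng = np.random.default_rng(3)
        pos = rng.normal(size=(30, 5))
        fit = np.sum(pos ** 2, axis=1)          # sphere function as a stand-in
        print(diversity_indicators(pos, fit))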

  13. An analysis of tourism contribution to economic growth in SADC ...

    African Journals Online (AJOL)

    The study examines how the tourism sector can be an engine of economic growth in SADC member countries. The paper found that the contribution of tourism to GDP, employment, export receipts and investment is significant. Although this sector's contribution to the economy varies among SADC countries, the study found that ...

  14. The use of the multi-cumulant tensor analysis for the algorithmic optimisation of investment portfolios

    Science.gov (United States)

    Domino, Krzysztof

    2017-02-01

    The cumulant analysis plays an important role in the analysis of non-Gaussian distributed data, of which share price returns are a good example. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. The algorithm is based on the Alternating Least Squares method and involves the simultaneous minimisation of the 2nd-6th cumulants of the multidimensional random variable (percentage share returns of many companies). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect the incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within the examination window determined by low values of the Hurst exponent. Note that the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is the novel idea. It can be expected that the algorithm would be useful in financial data analysis on a worldwide scale, as well as in the analysis of other types of non-Gaussian distributed data.

  15. Scalable Kernel Methods and Algorithms for General Sequence Analysis

    Science.gov (United States)

    Kuksa, Pavel

    2011-01-01

    Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…

  16. A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.

    Science.gov (United States)

    Liang, Faming; Kim, Jinsu; Song, Qifan

    2016-01-01

    Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for how to tame powerful MCMC methods to be used for big data analysis; that is, to replace the full data log-likelihood by a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset in iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively.
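
    A schematic sketch of the core BMH idea, replacing the full-data log-likelihood with an n/m-scaled Monte Carlo average over bootstrap subsamples, applied to a normal-mean toy model with a flat prior; the subsample size, proposal step and weighting details are simplifications of the published algorithm.

        import numpy as np

        rng = np.random.default_rng(4)
        data = rng.normal(loc=2.0, scale=1.0, size=100_000)
        n, m, k = len(data), 1_000, 5     # full data size, subsample size, # subsamples

        def bmh_loglik(theta, subsamples):
            # n/m-scaled Monte Carlo average of subsample log-likelihoods,
            # standing in for the full-data log-likelihood (normal mean, unit variance).
            return np.mean([-(n / m) * 0.5 * np.sum((s - theta) ** 2) for s in subsamples])

        theta, chain = 0.0, []
        for _ in range(3000):
            subs = [rng.choice(data, size=m, replace=True) for _ in range(k)]
            prop = theta + 0.05 * rng.standard_normal()
            # Metropolis-Hastings accept/reject with a flat prior and symmetric proposal.
            if np.log(rng.random()) < bmh_loglik(prop, subs) - bmh_loglik(theta, subs):
                theta = prop
            chain.append(theta)
        print("posterior mean estimate:", np.mean(chain[1000:]))   # near 2.0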

  17. A Bootstrap Metropolis–Hastings Algorithm for Bayesian Analysis of Big Data

    Science.gov (United States)

    Kim, Jinsu; Song, Qifan

    2016-01-01

    Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for how to tame powerful MCMC methods to be used for big data analysis; that is, to replace the full data log-likelihood by a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset in iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively. PMID:29033469

  18. Systematic Analysis of Painful Total Knee Prosthesis, a Diagnostic Algorithm

    Directory of Open Access Journals (Sweden)

    Oliver Djahani

    2013-12-01

    Full Text Available Remaining pain after total knee arthroplasty (TKA) is a common observation in about 20% of postoperative patients; about 60% of these painful knees require early revision surgery within five years. Obvious causes of this pain can be identified simply with clinical examinations and standard radiographs. However, unexplained painful TKA still remains a challenge for the surgeon. The management should include a multidisciplinary approach to the patient's pain as well as addressing the underlying etiology. There are a number of extrinsic (tendinopathy, hip, ankle, spine, CRPS and so on) and intrinsic (infection, instability, malalignment, wear and so on) causes of painful knee replacement. On average, diagnosis takes more than 12 months, and patients become very dissatisfied and some of them even acquire psychological problems. Hence, a systematic diagnostic algorithm might be helpful. This review article aims to act as a guide to the evaluation of patients with painful TKA, described in 10 different steps. Furthermore, the preliminary results of a series of 100 consecutive cases will be discussed. Revision surgery was performed only in those cases with a clear failure mechanism.

  19. An Efficiency Analysis of Augmented Reality Marker Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Kurpytė Dovilė

    2014-05-01

    Full Text Available The article reports on the investigation of an augmented reality system which is designed for identification and augmentation of 100 different square markers. Marker recognition efficiency was investigated by rotating markers along the x and y axis directions in the range from −90° to 90°. Virtual simulations of four environments were developed: (a) an intense source of light, (b) an intense source of light falling from the left side, (c) a non-intensive light source falling from the left side, (d) equally falling shadows. The graphics were created using the OpenGL graphics computer hardware interface; image processing was programmed in the C++ language using OpenCV, while the augmented reality was developed in the Java programming language using NyARToolKit. The obtained results demonstrate that the augmented reality marker recognition algorithm is accurate and reliable in the case of changing lighting conditions and rotational angles - only 4% of markers were unidentified. Assessment of marker recognition efficiency allowed us to propose a marker classification strategy for grouping various markers into distinct groups possessing similar recognition properties.

  20. A Runtime Analysis of Parallel Evolutionary Algorithms in Dynamic Optimization

    DEFF Research Database (Denmark)

    Lissovoi, Andrei; Witt, Carsten

    2017-01-01

    A simple island model with (Formula presented.) islands and migration occurring after every (Formula presented.) iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (Formula presented.) EA if (Formula presented.), i.e., migration occurs during every iteration. It is proved that even for an increased offspring population size up to (Formula presented.), the (Formula presented.) EA is still not able to track the optimum of Maze. If the migration interval is chosen carefully, the algorithm is able to track the optimum even for logarithmic (Formula presented.). The relationship of (Formula presented.) and the ability of the island model to track the optimum is then investigated more closely. Finally, experiments are performed to supplement the asymptotic results, and investigate the impact of the migration topology.

  1. Implicit Kalman filter algorithm for nuclear reactor analysis

    International Nuclear Information System (INIS)

    Hassberger, J.A.; Lee, J.C.

    1986-01-01

    Artificial intelligence (AI) is currently the hot topic in nuclear power plant diagnostics and control. Recently, researchers have considered the use of simulation as knowledge, in which faster-than-real-time best-estimate simulations based on first principles are tightly coupled with AI systems for analyzing power plant transients on-line. On-line simulations can be improved through a Kalman filter, a mathematical technique for obtaining the optimal estimate of a system state given the information contained in the equations of system dynamics and measurements made on the system. Filtering can be used to systematically adjust parameters of a low-order simulation model to obtain reasonable agreement between the model and actual plant dynamics. The authors present here a general Kalman filtering algorithm that derives its information about system dynamics implicitly and naturally from the discrete time step-series of state estimates available from a simulation program. Previous research has demonstrated that models adjusted on past data can be coupled with an intelligent controller to predict the future time-course of plant transients

  2. Convergence analysis of variational and non-variational multigrid algorithms for the Laplace-Beltrami operator

    KAUST Repository

    Bonito, Andrea

    2012-09-01

    We design and analyze variational and non-variational multigrid algorithms for the Laplace-Beltrami operator on a smooth and closed surface. In both cases, a uniform convergence for the V-cycle algorithm is obtained provided the surface geometry is captured well enough by the coarsest grid. The main argument hinges on a perturbation analysis from an auxiliary variational algorithm defined directly on the smooth surface. In addition, the vanishing mean value constraint is imposed on each level, thereby avoiding singular quadratic forms without adding additional computational cost. Numerical results supporting our analysis are reported. In particular, the algorithms perform well even when applied to surfaces with a large aspect ratio. © 2011 American Mathematical Society.

  3. Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.

    Science.gov (United States)

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
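
    To make the setting concrete, a minimal Hogwild!-style sketch for a least-squares problem follows: four threads update a shared parameter vector without locks. The data, learning rate and thread count are arbitrary, and Python threads only approximate the shared-memory races the analysis targets.

        import numpy as np
        from threading import Thread

        rng = np.random.default_rng(5)
        X = rng.normal(size=(10_000, 20))
        w_true = rng.normal(size=20)
        y = X @ w_true + 0.01 * rng.standard_normal(10_000)

        w = np.zeros(20)  # shared parameter vector, updated without any locking

        def worker(indices, lr=1e-3, epochs=5):
            # Hogwild!-style worker: reads and writes the shared vector w with no
            # synchronization, so interleaved updates may clobber each other.
            for _ in range(epochs):
                for i in indices:
                    grad = (X[i] @ w - y[i]) * X[i]   # one-sample least-squares gradient
                    w[:] = w - lr * grad              # unsynchronized in-place write

        parts = np.array_split(rng.permutation(len(y)), 4)
        threads = [Thread(target=worker, args=(p,)) for p in parts]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("parameter error:", np.linalg.norm(w - w_true))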

  4. Analyzing Chaos Systems and Fine Spectrum Sensing Using Detrended Fluctuation Analysis Algorithm

    Directory of Open Access Journals (Sweden)

    Javier S. González-Salas

    2016-01-01

    Full Text Available A numerical study that uses the detrended fluctuation analysis (DFA) algorithm on time series obtained from linear and nonlinear dynamical systems is presented. The DFA algorithm behavior toward periodic and chaotic signals is investigated and the effect of the time scale under analysis is discussed. The displayed results prove that the DFA algorithm response is invariant (stable performance) to initial conditions and chaotic system parameters. An initial idea of DFA algorithm implementation for fine spectrum sensing (SS) is proposed under a two-stage spectrum sensor approach with test statistics based on the scaling exponent value. The outcomes demonstrate a promising new SS technique that can alleviate several imperfections such as noise power uncertainty and spatial correlation between adjacent antenna array elements.
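
    A compact sketch of first-order DFA, computing the scaling exponent as the slope of log F(s) versus log s; the scale set and the test signals are illustrative.

        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            """Detrended fluctuation analysis: slope of log F(s) vs log s.
            Roughly 0.5 for white noise, near 1.5 for Brownian-like signals."""
            y = np.cumsum(x - np.mean(x))               # integrated profile
            F = []
            for s in scales:
                n = len(y) // s
                segs = y[: n * s].reshape(n, s)
                t = np.arange(s)
                rms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, 1)        # local linear trend
                    rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return slope

        rng = np.random.default_rng(6)
        print(dfa_exponent(rng.standard_normal(4096)))             # ~0.5, white noise
        print(dfa_exponent(np.cumsum(rng.standard_normal(4096))))  # ~1.5, random walk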

  5. Ireland's contribution to orthopaedic literature: a bibliometric analysis.

    Science.gov (United States)

    Kennedy, C; O Sullivan, P; Bilal, M; Walsh, A

    2013-10-01

    Bibliometric analysis of scientific performance within a country or speciality facilitates the recognition of factors that may further enhance research activity and performance. Our aim was to elicit the current state of Ireland's orthopaedic research output in terms of quantity and quality. We performed a retrospective bibliometric analysis of all Irish orthopaedic publications over the past 5 years, in the top 20 peer-reviewed orthopaedic journals. Utilising the MEDLINE database, each journal was evaluated for articles that were published over the study period. Reviews, editorials, reports and letters were excluded. Each article abstract was analysed for research content and country of origin. A nation's mean IF was defined by multiplying each journal's IF by the number of articles. Publications per million (PmP) was calculated by dividing the total number of publications by the population of each country. We analysed a total of 25,595 article abstracts. Ireland contributed 109 articles in total (0.42% of all articles); however, its ranking according to publications per million population was 10th worldwide. Ireland ranked 18th worldwide in relation to mean impact factor, which was 2.91 over the study period. Ireland published in 16 of the top 20 journals; 9 of these were of European origin, and 1 of the top 5 was of American origin. In total, 61 Irish articles were assignable to clinical orthopaedic units. Clinical studies (randomised controlled trials, observational, and epidemiology/bibliometric articles) and research studies (in vivo, in vitro, and biomechanical) numbered 76 (69.7%) and 33 (30.2%) articles, respectively. This study provides a novel overview of current Irish orthopaedic-related research, and how our standards translate to the worldwide orthopaedic community. In order to maintain our publication productivity, academic research should continue to be encouraged at postgraduate level. Copyright © 2013 Royal College of Surgeons of Edinburgh.

  6. Analysis and Enhancements of Leader Elections algorithms in Mobile Ad Hoc Networks

    OpenAIRE

    Shayeji, Mohammad H. Al; Al-Azmi, AbdulRahman R.; Al-Azmi, AbdulAziz R.; Samrajesh, M. D.

    2012-01-01

    Mobile Ad Hoc networks (MANET), distinct from traditional distributed systems, are dynamic and self-organizing networks. MANET requires a leader to coordinate and organize tasks. The challenge is to have the right election algorithm that chooses the right leader based on various factors in MANET. In this paper, we analyze four leader election algorithms used in mobile Ad Hoc Networks. Factors considered in our analysis are time complexity, message complexity, assumptions considered, fault tol...

  7. Task 2 Report: Algorithm Development and Performance Analysis

    Science.gov (United States)

    1993-07-01

    [No abstract available; the indexed text consists of figure-list fragments from the report, including captions such as "Example ILGC data for schedule 3 phosphites showing an analysis method which integrates separated peaks", "... an analysis method which more closely follows the baseline", and "Example R.GC data for schedule 3 phosphites showing an analysis method resulting in unwanted ...", plus a fragment noting that correlated chromatography removes much of the ambiguity that can arise in GC/MS with trace environmental samples.]

  8. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. To minimize constraint violations during the time integration process, penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm which takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  9. Elements that contribute to boost female entrepreneurship: A prospective analysis

    Directory of Open Access Journals (Sweden)

    Pilar Ortiz García

    2016-06-01

    Full Text Available This paper aims to identify which environmental modulating elements favor and contribute to enhancing female entrepreneurship from a gender perspective. The qualitative study draws on 10 in-depth interviews whose contents, analysed from a relational perspective, allow a set of conclusions and recommendations that contribute to the empowerment of the woman entrepreneur. The results highlight the importance of social support and comprehensive training in creating a culture that promotes and makes visible the role of women entrepreneurs.

  10. Analysis and design of algorithms for combinatorial problems

    CERN Document Server

    Ausiello, G

    1985-01-01

    Combinatorial problems have been from the very beginning part of the history of mathematics. By the Sixties, the main classes of combinatorial problems had been defined. During that decade, a great number of research contributions in graph theory had been produced, which laid the foundations for most of the research in graph optimization in the following years. During the Seventies, a large number of special purpose models were developed. The impressive growth of this field since has been strongly determined by the demand of applications and influenced by the technological increases in computing power and the availability of data and software. The availability of such basic tools has led to the feasibility of the exact or well approximate solution of large scale realistic combinatorial optimization problems and has created a number of new combinatorial problems.

  11. Algorithms for the Network Analysis of Bilateral Tax Treaties

    NARCIS (Netherlands)

    S.C. Polak (Sven)

    2014-01-01

    In this thesis we conduct a network analysis of bilateral tax treaties. We are given tax data of 108 countries. Companies often send money from country to country via indirect routes, because then the tax that must be paid might be lower. In the thesis we will study the most important

  12. Algorithms for Electromagnetic Scattering Analysis of Electrically Large Structures

    DEFF Research Database (Denmark)

    Borries, Oscar Peter

    Accurate analysis of electrically large antennas is often done using either Physical Optics (PO) or Method of Moments (MoM), where the former typically requires fewer computational resources but has a limited application regime. This study has focused on fast variants of these two methods, with t...

  13. Algorithm for Non-proportional Loading in Sequentially Linear Analysis

    NARCIS (Netherlands)

    Yu, C.; Hoogenboom, P.C.J.; Rots, J.G.; Saouma, V.; Bolander, J.; Landis, E.

    2016-01-01

    Sequentially linear analysis (SLA) is an alternative to the Newton-Raphson method for analyzing the nonlinear behavior of reinforced concrete and masonry structures. In this paper SLA is extended to load cases that are applied one after the other, for example first dead load and then wind load. It

  14. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  15. Nuclear power reactor analysis, methods, algorithms and computer programs

    International Nuclear Information System (INIS)

    Matausek, M.V

    1981-01-01

    Full text: For a developing country buying its first nuclear power plants from a foreign supplier, regardless of the type and scope of the contract, there is a certain number of activities which have to be performed by local staff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and the choice of the type and size of the reactors, to bid parameters specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK) continual work is going on related to the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution search, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies, etc. The present paper gives the details of the methods developed and the results achieved, with particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were related to a small working team, lack of large and powerful computers, absence of reliable basic nuclear data and shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through the IAEA. It is the author's opinion, however, that mutual cooperation of developing countries, having similar problems and similar goals, could lead to significant results. Some activities of this kind are suggested and discussed. (author)

  16. Fully automatic algorithm for the analysis of vessels in the angiographic image of the eye fundus

    Directory of Open Access Journals (Sweden)

    Koprowski Robert

    2012-06-01

    Full Text Available Background: The available scientific literature contains descriptions of manual, semi-automated and automated methods for analysing angiographic images. The presented algorithms segment vessels, calculating their tortuosity or number in a given area. We describe a statistical analysis of the inclination of the vessels in the fundus as related to their distance from the center of the optic disc. Methods: The paper presents an automated method for analysing vessels found in angiographic images of the eye, using a Matlab-implemented algorithm. It performs filtration and convolution operations with suggested masks. The result is an image containing information on the location of vessels and their inclination angle in relation to the center of the optic disc. This is a new approach to the analysis of vessels whose usefulness has been confirmed in the diagnosis of hypertension. Results: The proposed algorithm analyzed and processed the images of the eye fundus using a classifier in the form of decision trees. It enabled the proper classification of healthy patients and those with hypertension. The result is a very good separation of healthy subjects from the hypertensive ones: sensitivity 83%, specificity 100%, accuracy 96%. This confirms the practical usefulness of the proposed method. Conclusions: This paper presents an algorithm for the automatic analysis of morphological parameters of the fundus vessels. Such an analysis is performed during fluorescein angiography of the eye. The presented algorithm automatically calculates the global statistical features connected with both the tortuosity of vessels and their total area or their number.

  17. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Secco, Chiara; Loi, Gianfranco; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-01

    The aim of this work is to evaluate the role of different amounts of attenuation and scatter on FDG-PET image volume segmentation using a contrast-oriented method based on the target-to-background (TB) ratio and target dimensions. A phantom study was designed employing 3 phantom sets, which provided a clinical range of attenuation and scatter conditions, equipped with 6 spheres of different volumes (0.5-26.5 ml). The phantoms were: (1) the Hoffman 3-dimensional brain phantom, (2) a modified International Electrotechnical Commission (IEC) phantom with an annular ring of water bags of 3 cm thickness fit over the IEC phantom, and (3) a modified IEC phantom with an annular ring of water bags of 9 cm. The phantom cavities were filled with a solution of FDG at 5.4 kBq/ml activity concentration, and the spheres with activity concentration ratios of about 16, 8, and 4 times the background activity concentration. Images were acquired with a Biograph 16 HI-REZ PET/CT scanner. Thresholds (TS) were determined as a percentage of the maximum intensity in the cross-section area of the spheres. To reduce statistical fluctuations, a nominal maximum value was calculated as the mean of all voxels > 95% of the maximum. To find the TS value that yielded an area A best matching the true value, the cross sections were auto-contoured in the attenuation-corrected slices, varying TS in steps of 1%, until the area so determined differed by less than 10 mm² from its known physical value. Multiple regression methods were used to derive an adaptive thresholding algorithm and to test its dependence on different conditions of attenuation and scatter. The errors of scatter and attenuation correction increased with increasing amounts of attenuation and scatter in the phantoms. Despite these increasing inaccuracies, the PET threshold segmentation algorithms were not influenced by the different conditions of attenuation and scatter. The test of the hypothesis of coincident regression lines for the three phantoms used
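
    The threshold search described above can be sketched as follows; the synthetic Gaussian blob, pixel size and 10 mm² tolerance are illustrative, and the published adaptive algorithm additionally regresses the found thresholds on the TB ratio and target dimensions.

        import numpy as np

        def contrast_threshold(slice_img, true_area_mm2, pixel_area_mm2):
            """Scan the threshold (as % of a noise-robust maximum) in 1% steps
            until the auto-contoured cross-section area matches the known area."""
            # Nominal maximum: mean of all pixels above 95% of the raw maximum.
            vmax = slice_img[slice_img > 0.95 * slice_img.max()].mean()
            for ts in range(100, 0, -1):
                area = np.count_nonzero(slice_img >= ts / 100.0 * vmax) * pixel_area_mm2
                if abs(area - true_area_mm2) < 10.0:   # 10 mm^2 tolerance
                    return ts
            return None

        # Synthetic Gaussian blob standing in for one sphere cross-section.
        yy, xx = np.mgrid[:64, :64]
        img = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 8.0 ** 2))
        print(contrast_threshold(img, true_area_mm2=200.0, pixel_area_mm2=1.0))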

  18. Evaluation of single and multi-threshold entropy-based algorithms for folded substrate analysis

    Directory of Open Access Journals (Sweden)

    Magdolna Apro

    2011-10-01

    Full Text Available This paper presents a detailed evaluation of two variants of the Maximum Entropy image segmentation algorithm (single and multi-thresholding) with respect to their performance on segmenting test images showing folded substrates. The segmentation quality was determined by evaluating the values of four different measures: misclassification error, modified Hausdorff distance, relative foreground area error and positive-negative false detection ratio. New normalization methods were proposed in order to combine all parameters into a unique algorithm evaluation rating. The segmentation algorithms were tested on images obtained by three different digitalisation methods covering four different surface textures. In addition, the methods were also tested on three images presenting a perfect fold. The obtained results showed that the Multi-Maximum Entropy algorithm is better suited for the analysis of images showing folded substrates.
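
    For orientation, a sketch of a single-threshold maximum-entropy (Kapur-style) criterion on a bimodal toy histogram; the bin count and test data are arbitrary, and the multi-threshold variant evaluated in the paper extends the same criterion to several thresholds.

        import numpy as np

        def kapur_threshold(img, bins=256):
            """Single-threshold maximum-entropy segmentation: pick the threshold
            maximizing the summed entropies of foreground and background."""
            hist, edges = np.histogram(img, bins=bins)
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, bins):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                p0, p1 = p[:t] / w0, p[t:] / w1      # class-conditional distributions
                h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
                    - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
                if h > best_h:
                    best_t, best_h = t, h
            return edges[best_t]

        rng = np.random.default_rng(7)
        img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
        print(kapur_threshold(img))   # lands between the two intensity modes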

  19. Optimal Selection of Clustering Algorithm via Multi-Criteria Decision Analysis (MCDA for Load Profiling Applications

    Directory of Open Access Journals (Sweden)

    Ioannis P. Panapakidis

    2018-02-01

    Full Text Available Due to the high implementation rates of smart meter systems, a considerable amount of research is devoted to machine learning tools for data handling and information retrieval. A key tool in load data processing is clustering. In recent years, a number of studies have proposed different clustering algorithms in the load profiling field. The present paper provides a methodology for addressing the aforementioned problem through Multi-Criteria Decision Analysis (MCDA), namely using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A comparison of the algorithms is employed. Next, a single test case on the selection of an algorithm is examined. User-specific weights are applied and, based on these weight values, the optimal algorithm is selected.
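
    A minimal TOPSIS sketch in the spirit of the methodology; the decision matrix, weights and benefit/cost labels are hypothetical placeholders for clustering-validity scores, not values from the paper.

        import numpy as np

        def topsis(scores, weights, benefit):
            """TOPSIS ranking: alternatives x criteria matrix, criterion weights,
            and a boolean mask marking benefit (True) vs cost (False) criteria."""
            norm = scores / np.linalg.norm(scores, axis=0)      # vector normalization
            v = norm * weights
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - anti, axis=1)
            return d_neg / (d_pos + d_neg)                      # closeness to ideal

        # Hypothetical comparison: rows = clustering algorithms, columns = validity
        # indices (first two are benefit criteria, the last is a cost, e.g. runtime).
        scores = np.array([[0.72, 0.65, 12.0],
                           [0.69, 0.71,  8.0],
                           [0.75, 0.60, 30.0]])
        weights = np.array([0.4, 0.4, 0.2])
        print(topsis(scores, weights, benefit=np.array([True, True, False])))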

  20. Randomized Algorithms for Analysis and Control of Uncertain Systems With Applications

    CERN Document Server

    Tempo, Roberto; Dabbene, Fabrizio

    2013-01-01

    The presence of uncertainty in a system description has always been a critical issue in control. The main objective of Randomized Algorithms for Analysis and Control of Uncertain Systems, with Applications (Second Edition) is to introduce the reader to the fundamentals of probabilistic methods in the analysis and design of systems subject to deterministic and stochastic uncertainty. The approach propounded by this text guarantees a reduction in the computational complexity of classical control algorithms and in the conservativeness of standard robust control techniques. The second edition has been thoroughly updated to reflect recent research and new applications, with chapters on statistical learning theory, sequential methods for control and the scenario approach being completely rewritten. Features: a self-contained treatment explaining Monte Carlo and Las Vegas randomized algorithms from their genesis in the principles of probability theory to their use for system analysis; ...

  1. A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms

    Directory of Open Access Journals (Sweden)

    Gys Albertus Marthinus Meiring

    2015-12-01

    Full Text Available In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driver style analysis systems, the application of these systems and the underlying artificial intelligence algorithms applied to these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models and Support Vector Machines offer promising capabilities for addressing unique driver identification if model complexity can be reduced.

  2. A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms.

    Science.gov (United States)

    Meiring, Gys Albertus Marthinus; Myburgh, Hermanus Carel

    2015-12-04

    In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driver style analysis systems, the application of these systems and the underlying artificial intelligence algorithms applied to these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models and Support Vector Machines offer promising capabilities for addressing unique driver identification if model complexity can be reduced.

  3. A dataflow analysis tool for parallel processing of algorithms

    Science.gov (United States)

    Jones, Robert L., III

    1993-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on a set of identical parallel processors. Typical applications include signal processing and control law problems. Graph analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool is shown to facilitate the application of the design process to a given problem.

  4. Two Expectation-Maximization Algorithms for Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2014-01-01

    Roč. 130, 23 April (2014), s. 83-97 ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Grant - others:GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Program:ED Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014

  5. Comparative analysis of women's contribution to food crop ...

    African Journals Online (AJOL)

    Women in Nigeria form a significant part of the population and are practically involved in agricultural activities. Based on the involvement of women in agriculture and in the rural economies of Nigeria, this study evaluates the contribution of women to food crop production. This is to establish knowledge about what farm ...

  6. Content Analysis of Research Contributions towards Environmental Issues in Sindh

    Science.gov (United States)

    Lashari, Jagul Huma; Bhutto, Arabella; Rashdi, Roshan S. Shah; Qureshi, S. M.

    2015-01-01

    This research article identifies the contributions of PhD faculty members of Higher Education Institutions (HEIs) in Sindh Pakistan offering degrees in the environment discipline through published articles in journals, conference proceedings, research project reports and focused areas of research. The content analyses of curriculum vitae data of…

  7. Expediting Combinatorial Data Set Analysis by Combining Human and Algorithmic Analysis.

    Science.gov (United States)

    Stein, Helge Sören; Jiao, Sally; Ludwig, Alfred

    2017-01-09

    A challenge in combinatorial materials science remains the efficient analysis of X-ray diffraction (XRD) data and its correlation to functional properties. Rapid identification of phase-regions and proper assignment of corresponding crystal structures is necessary to keep pace with the improved methods for synthesizing and characterizing materials libraries. Therefore, a new modular software called htAx (high-throughput analysis of X-ray and functional properties data) is presented that couples human intelligence tasks used for "ground-truth" phase-region identification with subsequent unbiased verification by an algorithm to efficiently analyze which phases are present in a materials library. Identified phases and phase-regions may then be correlated to functional properties in an expedited manner. To prove the functionality of htAx, two previously published XRD benchmark data sets of the materials systems Al-Cr-Fe-O and Ni-Ti-Cu are analyzed by htAx. The analysis of ∼1000 XRD patterns takes less than 1 day with htAx. The proposed method reliably identifies phase-region boundaries and robustly identifies multiphase structures. The method also addresses the problem of identifying regions with previously unpublished crystal structures using a special daisy ternary plot.

  8. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2013-11-01

    Full Text Available In this paper we give a comparative analysis of decoding algorithms of Low Density Parity Check (LDPC codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG method and Improved PEG method for the parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high level of error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions and decoded using different decoding algorithms is also presented.

  9. A Survey of Models and Algorithms for Social Influence Analysis

    Science.gov (United States)

    Sun, Jimeng; Tang, Jie

    Social influence is the behavioral change of a person because of the perceived relationship with other people, organizations and society in general. Social influence has been a widely accepted phenomenon in social networks for decades. Many applications have been built around the implicit notion of social influence between people, such as marketing, advertisement and recommendations. With the exponential growth of online social network services such as Facebook and Twitter, social influence can for the first time be measured over a large population. In this chapter, we survey the research on social influence analysis with a focus on its computational aspects. First, we present statistical measurements related to social influence. Second, we describe the literature on social similarity and influence. Third, we present the research on social influence maximization, which has many practical applications including marketing and advertisement.

  10. How has severe accident analysis contributed to sizewell B and how can it continue to contribute in the future

    International Nuclear Information System (INIS)

    Harrison, J.R.; Western, D.J.

    1987-01-01

    Sizewell B is a proposed 1100 MWe PWR which is a UK development of the US SNUPPS design. The UK reference design document for the plant was first issued in 1981 and the Pre-Construction Safety Report (PCSR) was submitted to the Nuclear Installations Inspectorate (NII), the UK licensing authority, in 1982. A major public inquiry into the proposal took place between January 1983 and March 1985. This paper is concerned with the analysis of severe accidents. This means all the analysis that is concerned with those fault sequences that are outside the design basis of the plant and which may lead to severe consequences - either in terms of plant damage or release of radioactivity. This analysis comprises probabilistic assessments of the frequency of such sequences, transient analysis of the way such sequences develop, and radiological release analysis. Part one of this paper examines how the severe accident analysis carried out for Sizewell B has contributed to the judgement that the design is sound and that the construction phase should proceed. The second part of the paper looks to the future and asks: ''Can severe accident analysis make any further contribution during the period from licensing up until operation commences?''

  11. Construction and analysis of compressible flow calculation algorithms

    International Nuclear Information System (INIS)

    Desideri, Jean-Antoine

    1993-01-01

    The aim of this study is to give a theoretical rationale for a 'paradox' related to the behavior at the stagnation point of some numerical solutions obtained by conventional methods for Eulerian non-equilibrium flows. This 'paradox' concerns the relationship between the solutions given by equilibrium and non-equilibrium models and was raised by several experts during the 'Workshop on Hypersonic Flows for Reentry Problems, Part 1, Antibes 1990'. In the first part, we show that equilibrium conditions are reached at the stagnation point and we analyse the sensitivity of these equilibrium conditions to the flow variables. In the second part, we develop an analysis of the behavior of the mathematical solution to an Eulerian non-equilibrium flow in the vicinity of the stagnation point, which gives an explanation of the described 'paradox'. Then, a numerical procedure, integrating the species convection equations projected on the stagnation-point streamline in a Lagrangian time approach, gives numerical support to the theoretical predictions. We also propose two numerical integration procedures that allow us to recompute, starting from the equilibrium conditions at the stagnation point, the flow characteristics at the body. The validity limits of these procedures are discussed and the results obtained for a Workshop test-case are compared with the results given by several contributors. Finally, we survey briefly the influence of the local behavior of the solution on the coupling technique to a boundary layer calculation. (author) [fr

  12. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    Science.gov (United States)

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Square) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison was carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    Science.gov (United States)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, the success rates depend strongly on human observer performances. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was altered to 100, 150 and 200 subjects. The number of targets appearing was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rates to decrease with increasing numbers of avatars. The same is true for the Secondary Task Reaction Time while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend for a u-shaped correlation between the numbers of markings and RTST indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.

  14. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process

  15. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)

    2012-10-15

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topographic spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.
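
    A minimal sketch of what one of the "typical surface fitting algorithms" above could look like: a least-squares quadric surface fitted to a measured point cloud, with the RMS misfit kept as one simple topographic feature. The function, data, and feature choice below are illustrative assumptions, not the authors' implementation.

        # Hypothetical sketch: least-squares quadric surface fit to a point cloud.
        import numpy as np

        def fit_quadric_surface(x, y, z):
            """Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 by least squares."""
            A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
            coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
            rms = np.sqrt(np.mean((z - A @ coeffs)**2))  # misfit as a simple feature
            return coeffs, rms

        # Synthetic stand-in for coordinates measured at physical control points
        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
        z = 0.5 + 0.1*x - 0.2*y + 0.05*x*y + rng.normal(0, 0.01, 200)
        coeffs, rms = fit_quadric_surface(x, y, z)
        print("coefficients:", coeffs, "RMS misfit:", rms)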

  16. Experiments in Discourse Analysis Impact on Information Classification and Retrieval Algorithms.

    Science.gov (United States)

    Morato, Jorge; Llorens, J.; Genova, G.; Moreiro, J. A.

    2003-01-01

    Discusses the inclusion of contextual information in indexing and retrieval systems to improve results and the ability to carry out text analysis by means of linguistic knowledge. Presents research that investigated whether discourse variables have an impact on information retrieval and classification algorithms. (Author/LRW)

  17. Contribution of the ELFG test in algorithms of non-invasive markers towards the diagnosis of significant fibrosis in chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Zarski

    Full Text Available BACKGROUND AND AIMS: We aimed to determine the best algorithms for the diagnosis of significant fibrosis in chronic hepatitis C (CHC) patients using all available parameters and tests. PATIENTS AND METHODS: We used the database from our study of 507 patients with histologically proven CHC in which fibrosis was evaluated by liver biopsy (Metavir) and tests: Fibrometer®, Fibrotest®, Hepascore®, Apri, ELFG, MP3, Forns', hyaluronic acid, tissue inhibitor of metalloproteinase-1 (TIMP1), MMP1, collagen IV and, when possible, Fibroscan™. For the first test we used a 90% negative predictive value to exclude patients with F≤1; next, an induction algorithm was applied giving the best tests with at least 80% positive predictive value for the diagnosis of F≥2. The algorithms were computed using the R Software C4.5 program to select the best tests and cut-offs. The algorithm was automatically induced without premises on the part of the investigators. We also examined the inter-observer variations after independent review of liver biopsies by two pathologists. A medico-economic analysis compared the screening strategies with liver biopsy. RESULTS: In "intention to diagnose" the best algorithms for F≥2 were Fibrometer®, Fibrotest®, or Hepascore® in first intention with the ELFG score in second intention for indeterminate cases. The percentage of avoided biopsies varied between 50% (Fibrotest® or Fibrometer®+ELFG) and 51% (Hepascore®+ELFG). In "per-analysis" Fibroscan™+ELFG avoided liver biopsy in 55% of cases. The diagnostic performance of these screening strategies was statistically superior to the usual combinations (Fibrometer® or Fibrotest®+Fibroscan™) and was cost effective. We note that the consensual review of liver biopsies between the two pathologists was mainly in favor of F1 (64-69%). CONCLUSION: The ELFG test could replace Fibroscan in most currently used algorithms for the diagnosis of significant fibrosis including for those patients
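
    The staged decision logic described above can be sketched as follows. The function name, scores, and cut-off values are placeholders (the study's actual cut-offs are not reproduced here); this only illustrates an NPV/PPV-staged triage, not the published algorithm.

        # Hypothetical sketch of a two-stage triage: rule out F<=1 at a high-NPV
        # cut-off, rule in F>=2 at a high-PPV cut-off, send the rest to biopsy.
        def staged_fibrosis_triage(first_line_score, elfg_score,
                                   low_cutoff=0.2, high_cutoff=0.8):
            if first_line_score < low_cutoff:   # high-NPV zone: exclude F>=2
                return "F<=1, biopsy avoided"
            if elfg_score > high_cutoff:        # high-PPV zone: confirm F>=2
                return "F>=2, biopsy avoided"
            return "indeterminate, refer to liver biopsy"

        print(staged_fibrosis_triage(0.1, 0.5))
        print(staged_fibrosis_triage(0.5, 0.9))
        print(staged_fibrosis_triage(0.5, 0.5))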

  18. Extracting quantum dynamics from genetic learning algorithms through principal control analysis

    International Nuclear Information System (INIS)

    White, J L; Pearson, B J; Bucksbaum, P H

    2004-01-01

    Genetic learning algorithms are widely used to control ultrafast optical pulse shapes for photo-induced quantum control of atoms and molecules. An unresolved issue is how to use the solutions found by these algorithms to learn about the system's quantum dynamics. We propose a simple method based on covariance analysis of the control space, which can reveal the degrees of freedom in the effective control Hamiltonian. We have applied this technique to stimulated Raman scattering in liquid methanol. A simple model of two-mode stimulated Raman scattering is consistent with the results. (letter to the editor)
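
    The proposed covariance analysis can be illustrated in a few lines of linear algebra: collect the high-fitness pulse-shape parameter vectors returned by the learning algorithm, diagonalize their covariance matrix, and read the effective control degrees of freedom off the leading eigenvalues. The data below are synthetic stand-ins, not measurements.

        # Sketch: PCA of a (synthetic) ensemble of GA control solutions.
        import numpy as np

        rng = np.random.default_rng(1)
        # 200 high-fitness solutions over 16 mask parameters, but only
        # 2 underlying control directions actually vary
        latent = rng.normal(size=(200, 2))
        mixing = rng.normal(size=(2, 16))
        solutions = latent @ mixing + 0.05 * rng.normal(size=(200, 16))

        cov = np.cov(solutions, rowvar=False)       # 16 x 16 covariance matrix
        eigvals = np.linalg.eigvalsh(cov)[::-1]     # descending eigenvalues
        print("variance fraction of leading modes:", (eigvals / eigvals.sum())[:4])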

  19. Research on Multiple Particle Swarm Algorithm Based on Analysis of Scientific Materials

    Directory of Open Access Journals (Sweden)

    Zhao Hongwei

    2017-01-01

    Full Text Available This paper proposes an improved particle swarm optimization algorithm based on the analysis of scientific materials. The core idea of MPSO (Multiple Particle Swarm Algorithm) is to extend single-population PSO to interacting multi-swarms, which addresses the problem of being trapped in local minima during later iterations due to a lack of diversity. The simulation results show that the convergence rate is fast, the search performance is good, and very good results have been achieved.
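
    A compact sketch of the multi-swarm idea, assuming a simple migration rule (each swarm periodically receives a neighbouring swarm's best particle in place of its own worst): several PSO populations evolve independently and interact only occasionally, which preserves diversity. The paper's exact MPSO scheme and parameters may differ.

        # Sketch: multiple interacting PSO swarms on a toy objective.
        import numpy as np

        def sphere(x):                      # toy objective to minimise
            return np.sum(x**2, axis=-1)

        def mpso(f, dim=10, n_swarms=3, n_particles=20, iters=200, migrate_every=25):
            rng = np.random.default_rng(2)
            pos = rng.uniform(-5, 5, (n_swarms, n_particles, dim))
            vel = np.zeros_like(pos)
            pbest, pbest_val = pos.copy(), f(pos)
            w, c1, c2 = 0.72, 1.49, 1.49    # common PSO coefficients
            for t in range(iters):
                best_idx = pbest_val.argmin(axis=1)
                for s in range(n_swarms):
                    g = pbest[s, best_idx[s]]               # swarm-local best
                    r1, r2 = rng.random((2, n_particles, dim))
                    vel[s] = w*vel[s] + c1*r1*(pbest[s]-pos[s]) + c2*r2*(g-pos[s])
                    pos[s] += vel[s]
                val = f(pos)
                improved = val < pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], val[improved]
                if (t + 1) % migrate_every == 0:            # swarm interaction
                    for s in range(n_swarms):
                        donor = (s + 1) % n_swarms
                        pos[s, pbest_val[s].argmax()] = pbest[donor, pbest_val[donor].argmin()]
            return pbest_val.min()

        print("best value found:", mpso(sphere))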

  20. A Fourier analysis for a fast simulation algorithm. [for switching converters

    Science.gov (United States)

    King, Roger J.

    1988-01-01

    This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.

  1. Analysis of Correlation between an Accelerometer-Based Algorithm for Detecting Parkinsonian Gait and UPDRS Subscales

    Directory of Open Access Journals (Sweden)

    Alejandro Rodríguez-Molinero

    2017-09-01

    Full Text Available Background: Our group earlier developed a small monitoring device, which uses accelerometer measurements to accurately detect motor fluctuations in patients with Parkinson's (On and Off state) based on an algorithm that characterizes gait through the frequency content of strides. To further validate the algorithm, we studied the correlation of its outputs with the motor section of the Unified Parkinson's Disease Rating Scale part-III (UPDRS-III). Method: Seventy-five patients suffering from Parkinson's disease were asked to walk both in the Off and the On state while wearing the inertial sensor on the waist. Additionally, all patients were administered the motor section of the UPDRS in both motor phases. Tests were conducted at the patient's home. Convergence between the algorithm and the scale was evaluated by using the Spearman's correlation coefficient. Results: Correlation with the UPDRS-III was moderate (rho −0.56; p < 0.001). Correlation between the algorithm outputs and the gait item in the UPDRS-III was good (rho −0.73; p < 0.001). The factorial analysis of the UPDRS-III has repeatedly shown that several of its items can be clustered under the so-called Factor 1: "axial function, balance, and gait." The correlation between the algorithm outputs and this factor of the UPDRS-III was −0.67 (p < 0.01). Conclusion: The correlation achieved by the algorithm with the UPDRS-III scale suggests that this algorithm might be a useful tool for monitoring patients with Parkinson's disease and motor fluctuations.
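
    The convergence evaluation reduces to a rank correlation between paired samples; a minimal sketch with invented data (the study's measurements are not reproduced here):

        # Sketch: Spearman correlation between algorithm output and UPDRS-III.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(3)
        updrs_iii = rng.integers(10, 60, size=75)                 # clinical scores
        algo_output = -0.05*updrs_iii + rng.normal(0, 0.5, 75)    # gait index

        rho, p = spearmanr(algo_output, updrs_iii)
        print(f"rho = {rho:.2f}, p = {p:.3g}")    # negative rho, as in the study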

  2. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    Science.gov (United States)

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  3. GLCM algorithm image analysis of nonalcoholic fatty liver and focal fat sparing zone in the ultrasonography

    International Nuclear Information System (INIS)

    Cho, Jin Young; Ye, Soo Young

    2017-01-01

    There is a need for aggressive diagnosis and treatment in middle-aged and high-risk individuals who are more likely to progress from nonalcoholic fatty liver to hepatitis. In this study, nonalcoholic fatty liver was divided into mild, moderate, and severe grades and classified by a quantitative method using computer analysis with the GLCM algorithm. The purpose of this study was to evaluate the characteristics of ultrasound images in the focal fat sparing region. Normal, mild, moderate, and severe fatty liver, and focal fat sparing areas were examined, with 80 cases each. Among the parameters of the GLCM algorithm, the values of the Autocorrelation, Square of the deviation, Sum of averages and Sum of variances, which have a high recognition rate for liver ultrasound images, were calculated. The average recognition rate of the GLCM algorithm was 97.5%. The focal fat sparing image analysis showed values closest to those of normal parenchyma. Ultrasonography is easily accessible for primary screening, but there may be differences in the accuracy of the test or the consistency of results depending on operator proficiency. The GLCM algorithm was applied to quantitatively classify the degree of fatty liver. The focal fat sparing region was similar to normal parenchyma, so it could be predicted to be homogeneous liver parenchyma without fat deposition. We believe that GLCM computer image analysis will provide important information for differentiating not only fatty liver but also other lesions.
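
    As an illustration, the grey-level co-occurrence matrix can be computed with scikit-image, and Haralick-style parameters such as those named above reduce to sums over the normalised matrix. The region of interest below is random data, and the exact feature definitions used in the study may differ.

        # Sketch: GLCM texture features on a stand-in ultrasound ROI.
        import numpy as np
        from skimage.feature import graycomatrix

        roi = np.random.default_rng(4).integers(0, 64, (64, 64), dtype=np.uint8)
        glcm = graycomatrix(roi, distances=[1], angles=[0], levels=64,
                            symmetric=True, normed=True)[:, :, 0, 0]

        i, j = np.indices(glcm.shape)
        autocorrelation = np.sum(i * j * glcm)
        sum_average = np.sum((i + j) * glcm)                 # "sum of averages"
        sum_variance = np.sum(((i + j) - sum_average)**2 * glcm)
        print(autocorrelation, sum_average, sum_variance)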

  4. Power Analysis of Energy Efficient DES Algorithm and Implementation on 28nm FPGA

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Bishwajeet; Hussain, Dil muhammed Akbar

    2016-01-01

    In this work, we have done power analysis of the Data Encryption Standard (DES) algorithm using the Xilinx ISE software development kit. We have analyzed the amount of power utilized by selective components on board, i.e., FPGA Artix-7, where the DES algorithm is implemented. The components taken into consideration are clock power, logic power, signals power, IOs power, leakage power and supply power (dynamic and quiescent). We have used four different WLAN frequencies (2.4 GHz, 3.6 GHz, 4.9 GHz, and 5.9 GHz) and four different IO standards like HSTL-I, HSTL-II, HSTL-II-18, HSTL-I-18 for power analysis. We have achieved 13-47% saving in power at different frequencies and with different energy efficient HSTL IO standards. We calculated the percentage change in the IO power with respect to the mean values of IO power at four different frequencies. We noted that there is a minimum of -37.5% and a maximum of +35

  5. GLCM algorithm image analysis of nonalcoholic fatty liver and focal fat sparing zone in the ultrasonography

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jin Young [Dept. of Radiological Science, Graduate School of Catholic University of Pusan, Busan (Korea, Republic of); Ye, Soo Young [Dept. of Radiological Science, Catholic University of Pusan, Busan (Korea, Republic of)

    2017-06-15

    There is a need for aggressive diagnosis and treatment in middle-aged and high-risk individuals who are more likely to progress from nonalcoholic fatty liver to hepatitis. In this study, nonalcoholic fatty liver was divided into mild, moderate, and severe grades and classified by a quantitative method using computer analysis with the GLCM algorithm. The purpose of this study was to evaluate the characteristics of ultrasound images in the focal fat sparing region. Normal, mild, moderate, and severe fatty liver, and focal fat sparing areas were examined, with 80 cases each. Among the parameters of the GLCM algorithm, the values of the Autocorrelation, Square of the deviation, Sum of averages and Sum of variances, which have a high recognition rate for liver ultrasound images, were calculated. The average recognition rate of the GLCM algorithm was 97.5%. The focal fat sparing image analysis showed values closest to those of normal parenchyma. Ultrasonography is easily accessible for primary screening, but there may be differences in the accuracy of the test or the consistency of results depending on operator proficiency. The GLCM algorithm was applied to quantitatively classify the degree of fatty liver. The focal fat sparing region was similar to normal parenchyma, so it could be predicted to be homogeneous liver parenchyma without fat deposition. We believe that GLCM computer image analysis will provide important information for differentiating not only fatty liver but also other lesions.

  6. Blind Extraction of Chaotic Signals by Using the Fast Independent Component Analysis Algorithm

    International Nuclear Information System (INIS)

    Hong-Bin, Chen; Jiu-Chao, Feng; Yong, Fang

    2008-01-01

    We report the results of using the fast independent component analysis (FastICA) algorithm to realize blind extraction of chaotic signals. Two cases are taken into consideration: namely, the mixture is noiseless or contaminated by noise. Pre-whitening is employed to reduce the effect of noise before using the FastICA algorithm. The correlation coefficient criterion is adopted to evaluate the performance, and the success rate is defined as a new criterion to indicate the performance with respect to noise or different mixing matrices. Simulation results show that the FastICA algorithm can extract the chaotic signals effectively. The impact of noise, the length of a signal frame, the number of sources and the number of observed mixtures on the performance is investigated in detail. It is also shown that regarding noise as an independent source is not always correct.
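
    A minimal sketch of the experiment's structure with scikit-learn's FastICA: two chaotic (logistic-map) sources are linearly mixed with a little noise and blindly recovered, and the correlation coefficient with the true sources serves as the performance criterion. The mixing matrix and noise level below are invented.

        # Sketch: blind extraction of chaotic signals with FastICA.
        import numpy as np
        from sklearn.decomposition import FastICA

        def logistic_map(x0, n, r=3.99):        # chaotic source
            x = np.empty(n); x[0] = x0
            for k in range(1, n):
                x[k] = r * x[k-1] * (1 - x[k-1])
            return x

        n = 2000
        S = np.column_stack([logistic_map(0.1, n), logistic_map(0.3, n)])
        A = np.array([[1.0, 0.6], [0.4, 1.0]])  # mixing matrix
        X = S @ A.T + 0.01 * np.random.default_rng(5).normal(size=(n, 2))

        S_est = FastICA(n_components=2, random_state=0).fit_transform(X)
        for k in range(2):
            c = max(abs(np.corrcoef(S_est[:, k], S[:, m])[0, 1]) for m in range(2))
            print(f"extracted component {k}: best |corr| = {c:.3f}")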

  7. Identifying Time Measurement Tampering in the Traversal Time and Hop Count Analysis (TTHCA) Wormhole Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Jonny Karlsson

    2013-05-01

    Full Text Available Traversal time and hop count analysis (TTHCA) is a recent wormhole detection algorithm for mobile ad hoc networks (MANET) which provides enhanced detection performance against all wormhole attack variants and network types. TTHCA involves each node measuring the processing time of routing packets during the route discovery process and then delivering the measurements to the source node. In a participation mode (PM) wormhole, where malicious nodes appear in the routing tables as legitimate nodes, the time measurements can potentially be altered, so preventing TTHCA from successfully detecting the wormhole. This paper analyses the prevailing conditions for time tampering attacks to succeed for PM wormholes, before introducing an extension to the TTHCA detection algorithm called ∆T Vector which is designed to identify time tampering, while preserving low false positive rates. Simulation results confirm that the ∆T Vector extension is able to effectively detect time tampering attacks, thereby providing an important security enhancement to the TTHCA algorithm.

  8. Adaptive Kernel Canonical Correlation Analysis Algorithms for Nonparametric Identification of Wiener and Hammerstein Systems

    Directory of Open Access Journals (Sweden)

    Ignacio Santamaría

    2008-04-01

    Full Text Available This paper treats the identification of nonlinear systems that consist of a cascade of a linear channel and a nonlinearity, such as the well-known Wiener and Hammerstein systems. In particular, we follow a supervised identification approach that simultaneously identifies both parts of the nonlinear system. Given the correct restrictions on the identification problem, we show how kernel canonical correlation analysis (KCCA) emerges as the logical solution to this problem. We then extend the proposed identification algorithm to an adaptive version allowing it to deal with time-varying systems. In order to avoid overfitting problems, we discuss and compare three possible regularization techniques for both the batch and the adaptive versions of the proposed algorithm. Simulations are included to demonstrate the effectiveness of the presented algorithm.

  9. A Biometric Face Recognition System Using an Algorithm Based on the Principal Component Analysis Technique

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-06-01

    Full Text Available This article deals with a face recognition system using an algorithm based on the Principal Component Analysis (PCA) technique. The recognition system consists only of a PC and an integrated video camera. The algorithm is developed in the MATLAB language and calculates the eigenfaces considered as features of the face. The PCA technique is based on matching the facial test image against the training prototype vectors: the matching score is calculated between their coefficient vectors, and a high score yields the best recognition. The results of the algorithm based on the PCA technique are very good, even if the person looks at the video camera from one side.
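
    A bare-bones eigenfaces sketch of the scheme described above: PCA (via SVD) of vectorised training images, projection onto the leading eigenfaces, and nearest-neighbour matching of the coefficient vectors. Random arrays stand in for face images here.

        # Sketch: eigenfaces by PCA and coefficient-vector matching.
        import numpy as np

        rng = np.random.default_rng(6)
        train = rng.random((20, 32*32))                # 20 flattened face images
        mean_face = train.mean(axis=0)
        U, s, Vt = np.linalg.svd(train - mean_face, full_matrices=False)
        eigenfaces = Vt[:10]                           # keep 10 components

        def project(img):
            return eigenfaces @ (img - mean_face)

        train_coeffs = np.array([project(im) for im in train])

        def recognise(test_img):
            d = np.linalg.norm(train_coeffs - project(test_img), axis=1)
            return int(d.argmin()), float(d.min())     # best match and distance

        idx, dist = recognise(train[3] + 0.01*rng.random(32*32))
        print("matched training image:", idx)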

  10. Depth analysis of mechanically machined flaws on steam generator tubings using multi-parameter algorithm

    International Nuclear Information System (INIS)

    Nam Gung, Chan; Lee, Yoon Sang; Hwang, Seong Sik; Kim, Hong Pyo

    2004-01-01

    Eddy current testing (ECT) is a nondestructive technique. It is used for evaluating material integrity, especially of steam generator (SG) tubing in nuclear plants, owing to its rapid inspection and safe, easy operation. For depth measurement of defects, we prepared Electro Discharge Machined (EDM) notches with several kinds of defects and applied the multi-parameter (MP) algorithm, a crack shape estimation program developed at Argonne National Laboratory (ANL). To evaluate the MP algorithm, we compared the defect profiles with fractography of the defects. In the following sections, we describe the basic structure of a computer-aided data analysis algorithm used as a means of more accurate and efficient processing of ECT data, and explain the specification of a standard calibration. Finally, we discuss the accuracy of the estimated depth profiles compared with the conventional ECT method.

  11. Cause and effect analysis by fuzzy relational equations and a genetic algorithm

    International Nuclear Information System (INIS)

    Rotshtein, Alexander P.; Posner, Morton; Rakytyanska, Hanna B.

    2006-01-01

    This paper proposes using a genetic algorithm as a tool to solve the fault diagnosis problem. The fault diagnosis problem is based on a cause and effect analysis which is formally described by fuzzy relations. Fuzzy relations are formed on the basis of expert assessments. Application of expert fuzzy relations to restore and identify the causes through the observed effects requires the solution to a system of fuzzy relational equations. In this study this search for a solution amounts to solving a corresponding optimization problem. An optimization algorithm is based on the application of genetic operations of crossover, mutation and selection. The genetic algorithm suggested here represents an application in expert systems of fault diagnosis and quality control

  12. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    The adrenal tumor disturbs the secreting function of adrenocortical cells, leading to many diseases. Different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type by reading hundreds of CT images relies heavily on the doctor's experience. This paper proposes an automatic computer-aided analysis method for adrenal tumor detection and classification. It consists of automatic segmentation algorithms, feature extraction, and classification algorithms. These algorithms were integrated into a system and operated through a graphical interface built with the MATLAB graphical user interface (GUI) tools. The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. The experiments proved the stability and reliability of this automatic computer-aided analytic system.

  13. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was obtained directly with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
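
    A sketch of the two-step estimator, assuming the trained rectification is replaced by an identity placeholder (the paper's trained mapping is not reproduced here): the smallest eigenvalue of the patch covariance matrix yields the raw noise-level estimate.

        # Sketch: PCA-based raw noise level estimate plus placeholder rectification.
        import numpy as np

        def raw_noise_level(image, patch=7, n_patches=3000, rng=None):
            if rng is None:
                rng = np.random.default_rng(7)
            H, W = image.shape
            ys = rng.integers(0, H - patch, n_patches)
            xs = rng.integers(0, W - patch, n_patches)
            P = np.stack([image[y:y+patch, x:x+patch].ravel()
                          for y, x in zip(ys, xs)])
            cov = np.cov(P, rowvar=False)
            return np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))  # smallest eigval

        def rectify(sigma_raw):
            return sigma_raw      # stands in for the trained nonlinear mapping

        clean = np.tile(np.linspace(0, 255, 256), (256, 1))       # smooth image
        noisy = clean + np.random.default_rng(8).normal(0, 10, clean.shape)
        print("estimated sigma:", rectify(raw_noise_level(noisy)))  # ~10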

  14. Lost-in-Space Star Identification Using Planar Triangle Principal Component Analysis Algorithm

    Directory of Open Access Journals (Sweden)

    Fuqiang Zhou

    2015-01-01

    Full Text Available It is a challenging task for a star sensor to implement star identification and determine the attitude of a spacecraft in the lost-in-space mode. Several algorithms based on the triangle method have been proposed for star identification in this mode. However, these methods suffer from long run times and large guide star catalog memory requirements, and their identification performance needs improvement. To address these problems, a star identification algorithm using planar triangle principal component analysis is presented here. A star pattern is generated based on the planar triangle created by stars within the field of view of a star sensor and the projection of the triangle. Since a projection can determine an index for a unique triangle in the catalog, the adoption of the k-vector range search technique makes this algorithm very fast. In addition, a sharing star validation method is constructed to verify the identification results. Simulation results show that the proposed algorithm is more robust than the planar triangle and P-vector algorithms under the same conditions.

  15. Analysis of the Chirplet Transform-Based Algorithm for Radar Detection of Accelerated Targets

    Science.gov (United States)

    Galushko, V. G.; Vavriv, D. M.

    2017-06-01

    Purpose: Efficiency analysis of an optimal algorithm of chirp signal processing based on the chirplet transform as applied to detection of radar targets in uniformly accelerated motion. Design/methodology/approach: Standard methods of the optimal filtration theory are used to investigate the ambiguity function of chirp signals. Findings: An analytical expression has been derived for the ambiguity function of chirp signals that is analyzed with respect to detection of radar targets moving at a constant acceleration. Sidelobe level and characteristic width of the ambiguity function with respect to the coordinates frequency and rate of its change have been estimated. The gain in the signal-to-noise ratio has been assessed that is provided by the algorithm under consideration as compared with application of the standard Fourier transform to detection of chirp signals against a "white" noise background. It is shown that already with a comparatively small number of processing channels (elementary filters with respect to the frequency change rate) the gain in the signal-to-noise ratio exceeds 10 dB. A block diagram of implementation of the algorithm under consideration is suggested on the basis of a multichannel weighted Fourier transform. Recommendations as for selection of the detection algorithm parameters have been developed. Conclusions: The obtained results testify to the efficiency of application of the algorithm under consideration to detection of radar targets moving at a constant acceleration. Nevertheless, it seems expedient to perform computer simulations of its operability with account for the noise impact along with trial measurements in real conditions.
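
    The multichannel structure can be sketched as a bank of dechirping filters: each channel removes a candidate quadratic phase (frequency change rate), so a chirp whose rate matches the channel collapses to a single tone that an ordinary FFT concentrates into one bin. Signal parameters and channel spacing below are illustrative.

        # Sketch: chirp-rate filter bank (dechirp + FFT per channel).
        import numpy as np

        fs, T = 1000.0, 1.0
        t = np.arange(0, T, 1/fs)
        f0, alpha = 120.0, 80.0                 # start frequency (Hz), rate (Hz/s)
        x = np.cos(2*np.pi*(f0*t + 0.5*alpha*t**2))
        x = x + np.random.default_rng(9).normal(0, 2.0, t.size)   # heavy noise

        candidate_rates = np.linspace(0, 150, 31)   # the "elementary filters"
        scores = [np.abs(np.fft.fft(x * np.exp(-1j*np.pi*a*t**2))).max()
                  for a in candidate_rates]
        best = candidate_rates[int(np.argmax(scores))]
        print(f"detected rate ~ {best:.0f} Hz/s (true {alpha:.0f} Hz/s)")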

  16. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    Full Text Available This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and the behavior of the Mobile Node (MN) are used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms too. There are many discussions and calculations to support our mathematical model and to prove that it is adequate in many cases. The model is valid on various network levels, scalable vertically in the ISO-OSI layers and also scales well with the number of network elements.

  17. The contribution to the modal analysis using an infrared camera

    Directory of Open Access Journals (Sweden)

    Dekys Vladimír

    2018-01-01

    Full Text Available The paper deals with modal analysis using an infrared camera. The test objects were excited by the modal exciter with narrowband noise and the response was registered as a frame sequence by the high speed infrared camera FLIR SC7500. The resonant frequencies and the modal shapes were determined from the infrared spectrum recordings. Lock-in technology has also been used. The experimental results were compared with calculated natural frequencies and modal shapes.

  18. A contribution to shielding effectiveness analysis of shielded tents

    Directory of Open Access Journals (Sweden)

    Vranić Zoran M.

    2004-01-01

    Full Text Available An analysis of the shielding effectiveness (SE) of shielded tents made of metallised fabrics is given. First, two electromagnetic characteristics fundamental to coupling through an electrically thin shield, the skin-depth break frequency and the surface resistance or transfer impedance, are defined and analyzed. Then, the transfer function and the SE are analyzed with regard to the frequency range of interest to the Electromagnetic Compatibility (EMC) community.

  19. Contributions to fuzzy polynomial techniques for stability analysis and control

    OpenAIRE

    Pitarch Pérez, José Luis

    2014-01-01

    The present thesis employs fuzzy-polynomial control techniques in order to improve the stability analysis and control of nonlinear systems. Initially, it reviews the most widespread techniques in the field of Takagi-Sugeno fuzzy systems, as well as the most relevant results on polynomial and fuzzy polynomial systems. The basic framework uses fuzzy polynomial models by Taylor series and sum-of-squares techniques (semidefinite programming) in order to obtain stability guarantees.

  20. Contribution of surface analysis spectroscopic methods to the lubrication field

    International Nuclear Information System (INIS)

    Blanc, C.

    1979-01-01

    Surface analysis techniques such as ESCA, AES and SIMS are tested for application to a particular lubrication problem: a 100C6 steel surface immersed in tricresylphosphate at 110 °C for 15 days. The nature of the first layers is studied after appropriate solvent cleaning. An iron oxide layer, namely α-Fe2O3, is produced on the bearing surface. ESCA, AES and SIMS studies show an overlayer of iron phosphate. The exact nature of the iron phosphate is not clearly established, but the formation of a ferrous phosphate coating can be assumed from the ESCA analysis [fr]

  1. Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

    Science.gov (United States)

    Fischer, Bernd; Hajian, Arsen; Knuth, Kevin; Schumann, Johann

    2004-04-01

    AUTOBAYES is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable building blocks for the algorithm construction; they can encapsulate advanced algorithms and data structures. A symbolic-algebraic system is used to find closed-form solutions for problems and emerging subproblems. In this paper, we describe the application of AUTOBAYES to the analysis of planetary nebulae images taken by the Hubble Space Telescope. We explain the system architecture, and present in detail the automatic derivation of the scientists' original analysis as well as a refined analysis using clustering models. This study demonstrates that AUTOBAYES is now mature enough so that it can be applied to realistic scientific data analysis tasks.

  2. Final Report: Sublinear Algorithms for In-situ and In-transit Data Analysis at Exascale.

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Pinar, Ali [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Seshadhri, C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Thompson, David [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Salloum, Maher [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bhagatwala, Ankit [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chen, Jacqueline H. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-09-01

    Post-Moore's law scaling is creating a disruptive shift in simulation workflows, as saving the entirety of raw data to persistent storage becomes expensive. We are moving away from a post-process centric data analysis paradigm towards a concurrent analysis framework, in which raw simulation data is processed as it is computed. Algorithms must adapt to machines with extreme concurrency, low communication bandwidth, and high memory latency, while operating within the time constraints prescribed by the simulation. Furthermore, input parameters are often data dependent and cannot always be prescribed. The study of sublinear algorithms is a recent development in theoretical computer science and discrete mathematics that has significant potential to provide solutions for these challenges. The approaches of sublinear algorithms address the fundamental mathematical problem of understanding global features of a data set using limited resources. These theoretical ideas align with practical challenges of in-situ and in-transit computation where vast amounts of data must be processed under severe communication and memory constraints. This report details key advancements made in applying sublinear algorithms in-situ to identify features of interest and to enable adaptive workflows over the course of a three year LDRD. Prior to this LDRD, there was no precedent in applying sublinear techniques to large-scale, physics based simulations. This project has definitively demonstrated their efficacy at mitigating high performance computing challenges and highlighted the rich potential for follow-on research opportunities in this space.

  3. A novel algorithm for a precise analysis of subchondral bone alterations

    Science.gov (United States)

    Gao, Liang; Orth, Patrick; Goebel, Lars K. H.; Cucchiarini, Magali; Madry, Henning

    2016-01-01

    Subchondral bone alterations are emerging as considerable clinical problems associated with articular cartilage repair. Their analysis exposes a pattern of variable changes, including intra-lesional osteophytes, residual microfracture holes, peri-hole bone resorption, and subchondral bone cysts. A precise distinction between them is becoming increasingly important. Here, we present a tailored algorithm based on continuous data to analyse subchondral bone changes using micro-CT images, allowing for a clear definition of each entity. We evaluated this algorithm using data sets originating from two large animal models of osteochondral repair. Intra-lesional osteophytes were detected in 3 of 10 defects in the minipig and in 4 of 5 defects in the sheep model. Peri-hole bone resorption was found in 22 of 30 microfracture holes in the minipig and in 17 of 30 microfracture holes in the sheep model. Subchondral bone cysts appeared in 1 microfracture hole in the minipig and in 5 microfracture holes in the sheep model (n = 30 holes each). Calculation of inter-rater agreement (90% agreement) and Cohen’s kappa (kappa = 0.874) revealed that the novel algorithm is highly reliable, reproducible, and valid. Comparison analysis with the best existing semi-quantitative evaluation method was also performed, supporting the enhanced precision of this algorithm. PMID:27596562

  4. Comparative analysis of chemical similarity methods for modular natural products with a hypothetical structure enumeration algorithm.

    Science.gov (United States)

    Skinnider, Michael A; Dejong, Chris A; Franczak, Brian C; McNicholas, Paul D; Magarvey, Nathan A

    2017-08-16

    Natural products represent a prominent source of pharmaceutically and industrially important agents. Calculating the chemical similarity of two molecules is a central task in cheminformatics, with applications at multiple stages of the drug discovery pipeline. Quantifying the similarity of natural products is a particularly important problem, as the biological activities of these molecules have been extensively optimized by natural selection. The large and structurally complex scaffolds of natural products distinguish their physical and chemical properties from those of synthetic compounds. However, no analysis of the performance of existing methods for molecular similarity calculation specific to natural products has been reported to date. Here, we present LEMONS, an algorithm for the enumeration of hypothetical modular natural product structures. We leverage this algorithm to conduct a comparative analysis of molecular similarity methods within the unique chemical space occupied by modular natural products using controlled synthetic data, and comprehensively investigate the impact of diverse biosynthetic parameters on similarity search. We additionally investigate a recently described algorithm for natural product retrobiosynthesis and alignment, and find that when rule-based retrobiosynthesis can be applied, this approach outperforms conventional two-dimensional fingerprints, suggesting it may represent a valuable approach for the targeted exploration of natural product chemical space and microbial genome mining. Our open-source algorithm is an extensible method of enumerating hypothetical natural product structures with diverse potential applications in bioinformatics.

  5. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    Science.gov (United States)

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    The conventional methods to assess human gait are either expensive or complex to apply regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying if they can support the clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning the data acquisition, the mean score is 6.1±1.62, which implies that 13 of 22 papers failed to report relevant outcomes. The quality assessment of AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem to be able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Contribution of a fuzzy expert system to regulatory impact analysis

    Directory of Open Access Journals (Sweden)

    Marco Antônio da Cunha

    2015-09-01

    Full Text Available Regulatory Impact Analysis (RIA) has been consolidating in Brazilian regulatory agencies throughout the last decades. The RIA methodology aims to examine the regulatory process and measure the costs and benefits generated, as well as other effects of a social, political or economic nature caused by a new or an existing regulation. In analysing each regulatory option, the expert or regulator faces a myriad of variables, usually of a qualitative nature, that are difficult to measure and carry a high degree of uncertainty. This research complements the existing literature, given the scarcity of decision support models in RIA that – regardless of the problem treated – incorporate the tacit knowledge of the regulation expert. This paper proposes an exploratory approach using a Fuzzy Expert System, which helps to enrich the decision process in the final stage of comparison of the regulatory options.

  7. Analysis and development of the automated emergency algorithm to control primary to secondary LOCA for SUNPP safety upgrading

    International Nuclear Information System (INIS)

    Kim, V.; Kuznetsov, V.; Balakan, G.; Gromov, G.; Krushynsky, A.; Sholomitsky, S.; Lola, I.

    2007-01-01

    The paper presents the results of a study conducted to support the planned modernization of the South Ukraine nuclear power plant. The objective of the analysis has been to develop an automated emergency control algorithm for the primary-to-secondary LOCA accident for SUNPP WWER-1000 safety upgrading. According to the analyses performed in the framework of the safety assessment report, this accident is the most complex to control and makes the largest contribution to the core damage frequency. This is because diagnosis of the initiating event is difficult, emergency control is complicated for personnel, the time available for decision making and performing actions is limited by the coolant inventory available for make-up, the probability that the steam dump valves on the affected steam generator fail to close after opening is high, and as a consequence containment bypass, irretrievable loss of coolant and release of radioactive materials into the environment are possible. Unit design modifications are directed at expanding the capabilities of the safety systems to overcome this accident and to facilitate personnel actions in emergency control. Safety systems modification according to the developed algorithm will simplify accident control by personnel, make it possible to keep the ECCS discharge limiting pressure below the opening pressure of the affected steam generator's steam dump valve, and decrease the probability of containment bypass sequences. The analysis of the primary-to-secondary LOCA thermal-hydraulics has been conducted with RELAP5/Mod 3.2, and involved development of a dedicated analytical model, calculations of various plant response accident scenarios, plant personnel intervention analyses using a full-scale simulator, and development and justification of the emergency control algorithm aimed at minimizing the negative consequences of the primary-to-secondary LOCA (Authors)

  8. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm

    Science.gov (United States)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersecting, and prunes candidate frequent item sets like Apriori. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
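
    A sketch of the vertical-layout support counting: each item maps to the set of record IDs (a TID set) that contain it, and the support of an itemset is the size of the intersection of its TID sets. The toy attributes below are invented stand-ins for crash-test data.

        # Sketch: TID-set intersection for support counting (vertical layout).
        records = [
            {"airbag", "belt_pretensioner", "good_head_score"},
            {"airbag", "good_head_score"},
            {"belt_pretensioner"},
            {"airbag", "belt_pretensioner", "good_head_score"},
        ]

        tidsets = {}                      # vertical layout: item -> TID set
        for tid, items in enumerate(records):
            for item in items:
                tidsets.setdefault(item, set()).add(tid)

        def support(itemset):
            common = set.intersection(*(tidsets[i] for i in itemset))
            return len(common) / len(records)

        print(support({"airbag", "good_head_score"}))                       # 0.75
        print(support({"airbag", "belt_pretensioner", "good_head_score"}))  # 0.5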

  9. An Efficient Topology-Based Algorithm for Transient Analysis of Power Grid

    KAUST Repository

    Yang, Lan

    2015-08-10

    In the design flow of integrated circuits, chip-level verification is an important step that sanity checks that the performance is as expected. Power grid verification is one of the most expensive and time-consuming steps of chip-level verification, due to its extremely large size. Efficient power grid analysis technology is in high demand as it saves computing resources and enables faster iteration. In this paper, a topology-based power grid transient analysis algorithm is proposed. Nodal analysis is adopted to analyze the topology, which is mathematically equivalent to iteratively solving a positive semi-definite linear system. The convergence of the method is proved.
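
    One transient step of such an analysis can be sketched as follows, assuming backward Euler time discretization: nodal analysis reduces the grid to the symmetric positive (semi-)definite system (G + C/h) v_new = (C/h) v_old + i, which an iterative solver such as conjugate gradients handles well. The 3-node grid below is a toy example.

        # Sketch: one backward-Euler step of RC power-grid transient analysis.
        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import cg

        G = csr_matrix(np.array([[ 2.0, -1.0,  0.0],   # conductances (S)
                                 [-1.0,  2.0, -1.0],
                                 [ 0.0, -1.0,  2.0]]))
        C = csr_matrix(np.diag([1e-9, 1e-9, 1e-9]))    # node capacitances (F)
        h = 1e-10                                      # time step (s)
        v_old = np.array([1.0, 1.0, 1.0])              # node voltages (V)
        i_src = np.array([0.0, -0.5, 0.0])             # injected currents (A)

        A = G + C / h                                  # SPD system matrix
        b = (C / h) @ v_old + i_src
        v_new, info = cg(A, b, atol=1e-12)
        print("converged" if info == 0 else "not converged", v_new)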

  10. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  11. Shape analysis of corpus callosum in phenylketonuria using a new 3D correspondence algorithm

    Science.gov (United States)

    He, Qing; Christ, Shawn E.; Karsch, Kevin; Peck, Dawn; Duan, Ye

    2010-03-01

    Statistical shape analysis of brain structures has gained increasing interest from neuroimaging community because it can precisely locate shape differences between healthy and pathological structures. The most difficult and crucial problem is establishing shape correspondence among individual 3D shapes. This paper proposes a new algorithm for 3D shape correspondence. A set of landmarks are sampled on a template shape, and initial correspondence is established between the template and the target shape based on the similarity of locations and normal directions. The landmarks on the target are then refined by iterative thin plate spline. The algorithm is simple and fast, and no spherical mapping is needed. We apply our method to the statistical shape analysis of the corpus callosum (CC) in phenylketonuria (PKU), and significant local shape differences between the patients and the controls are found in the most anterior and posterior aspects of the corpus callosum.

  12. Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

    OpenAIRE

    Fischer, Bernd; Knuth, Kevin; Hajian, Arsen; Schumann, Johann

    2004-01-01

    AUTOBAYES is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable buildin...

  13. Performance analysis of multidimensional wavefront algorithms with application to deterministic particle transport

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor

  14. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    Science.gov (United States)

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method which estimates height through fringe pattern frequency and the algorithm which estimates height through the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally they both extract the phase information by filtering the signal in frequency domain after Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), trying to optimize the parameters to acquire the optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically. Therefore, the position of the filter pass-band is determined. The width of the filter window is optimized with the simulation to balance the elimination of the noise and the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiment shows that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal noise ratio (SNR), is low. The proposed method also shows the potential of improving the immunity to the environmental noise by adapting the signal to acquire the optimal results through designing an adaptive filter once the signal SNR can be estimated accurately.

  15. A Hybrid Feature Subset Selection Algorithm for Analysis of High Correlation Proteomic Data

    Science.gov (United States)

    Kordy, Hussain Montazery; Baygi, Mohammad Hossein Miran; Moradi, Mohammad Hassan

    2012-01-01

    Pathological changes within an organ can be reflected as proteomic patterns in biological fluids such as plasma, serum, and urine. Surface-enhanced laser desorption and ionization time-of-flight mass spectrometry (SELDI-TOF MS) has been used to generate proteomic profiles from biological fluids. Mass spectrometry yields redundant, noisy data in which most data points are irrelevant features for differentiating between cancer and normal cases. In this paper, we have proposed a hybrid feature subset selection algorithm based on maximum discrimination and minimum correlation coupled with peak scoring criteria. Our algorithm has been applied to two independent SELDI-TOF MS datasets of ovarian cancer obtained from the NCI-FDA clinical proteomics databank. The proposed algorithm was used to extract a set of proteins as potential biomarkers in each dataset. We applied linear discriminant analysis to identify the important biomarkers. The selected biomarkers were able to successfully diagnose the ovarian cancer patients from the noncancer control group with an accuracy of 100%, a sensitivity of 100%, and a specificity of 100% in the two datasets. The hybrid algorithm has the advantage that it increases the reproducibility of selected biomarkers and is able to find a small set of proteins with high discrimination power. PMID:23717808
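
    A greedy maximum-discrimination / minimum-correlation ranking of the kind described above can be sketched as follows, scoring each feature by a t-like class separability and penalising its maximum absolute correlation with the features already selected. The peak scoring step is omitted and the data are synthetic.

        # Sketch: greedy max-discrimination, min-correlation feature selection.
        import numpy as np

        rng = np.random.default_rng(10)
        X = rng.normal(size=(100, 50))            # 100 spectra, 50 m/z features
        y = rng.integers(0, 2, 100)               # cancer / control labels
        X[y == 1, :5] += 1.0                      # make 5 features informative

        def t_score(X, y):
            a, b = X[y == 0], X[y == 1]
            return np.abs(a.mean(0) - b.mean(0)) / \
                   np.sqrt(a.var(0)/len(a) + b.var(0)/len(b))

        def select(X, y, k=5, lam=0.5):
            scores = t_score(X, y)
            corr = np.abs(np.corrcoef(X, rowvar=False))
            chosen = []
            while len(chosen) < k:
                penalty = corr[chosen].max(axis=0) if chosen else np.zeros(X.shape[1])
                crit = scores - lam * penalty     # discriminative minus redundant
                crit[chosen] = -np.inf
                chosen.append(int(np.argmax(crit)))
            return chosen

        print("selected features:", select(X, y))   # expect mostly indices 0-4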

  16. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
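
    The rejection-based selection step can be sketched as below, assuming a simple ±10% fluctuation interval and a toy reversible isomerisation A <-> B: propensities are bracketed by bounds, candidate reactions are drawn from the upper bounds, and exact propensities are evaluated only when the cheap lower-bound test fails. The published RSSA additionally reuses the bounds over many steps; this sketch recomputes them every step for brevity.

        # Sketch: rejection-based reaction selection (RSSA-style), toy model.
        import numpy as np

        rng = np.random.default_rng(11)
        x = np.array([100, 0])                    # copies of A, B
        stoich = np.array([[-1, 1], [1, -1]])     # A -> B, B -> A
        rates = np.array([1.0, 0.5])

        def propensities(state):
            return rates * state                  # a1 = c1*A, a2 = c2*B

        t, t_end, delta = 0.0, 1.0, 0.1           # +/-10% fluctuation interval
        while t < t_end:
            lo, hi = np.floor(x*(1 - delta)), np.ceil(x*(1 + delta))
            a_lo, a_hi = propensities(lo), propensities(hi)
            while True:                           # rejection-based selection
                t += rng.exponential(1.0 / a_hi.sum())
                j = rng.choice(2, p=a_hi / a_hi.sum())
                u = rng.random()
                if u*a_hi[j] <= a_lo[j] or u*a_hi[j] <= propensities(x)[j]:
                    break                         # candidate accepted
            x = x + stoich[j]
        print("state at t_end:", x)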

  17. Positioning performance analysis of the time sum of arrival algorithm with error features

    Science.gov (United States)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the location performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed. Using it, the distribution of the location ambiguity region is presented for 4 base stations. The location performance analysis then starts from the 4-base-station case by calculating the RMSE and GDOP variation. Subsequently, when location parameters such as the number of base stations and the base station layout are changed, the performance patterns of the TSOA location algorithm are shown. Thus, the TSOA location characteristics and performance are revealed. From the RMSE and GDOP trends, the anti-noise performance and robustness of the TSOA localization algorithm are proved. The TSOA anti-noise performance can be used for reducing the blind zone and the false location rate of MLAT systems.

  18. Bladed wheels damage detection through Non-Harmonic Fourier Analysis improved algorithm

    Science.gov (United States)

    Neri, P.

    2017-05-01

    Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential in estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that some hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at their early stage. The present paper proposes an improved algorithm which allows detection of a blade vibration frequency shift due to a crack whose size is really small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing the use of non-contact methods for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving precious information about the wheel health. This configuration determines an acquisition time for each blade which becomes shorter as the machine rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis results in poor frequency resolution, being unsuitable for small frequency shift detection. Non-Harmonic Fourier Analysis instead showed high reliability in vibration frequency estimation even with data samples collected in a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
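
    The core of the technique can be sketched as evaluating the Fourier sum on a fine, non-harmonic frequency grid (much finer than the DFT bin spacing), so that a small resonance shift remains visible even for a short per-blade acquisition. The numbers below are illustrative, not from the paper.

        # Sketch: non-harmonic frequency estimation from a short record.
        import numpy as np

        fs, n = 10000.0, 256                 # short acquisition: 25.6 ms
        t = np.arange(n) / fs
        f_true = 1234.7                      # Hz; the DFT bin spacing is ~39 Hz
        x = np.sin(2*np.pi*f_true*t) \
            + 0.1*np.random.default_rng(12).normal(size=n)

        f_grid = np.arange(1100.0, 1400.0, 0.1)   # 0.1 Hz, non-harmonic grid
        spectrum = np.abs(np.exp(-2j*np.pi*np.outer(f_grid, t)) @ x)
        f_est = f_grid[int(np.argmax(spectrum))]
        print(f"estimated {f_est:.1f} Hz vs true {f_true} Hz")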

  19. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    Energy Technology Data Exchange (ETDEWEB)

    Santi, Peter Angelo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cutler, Theresa Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favalli, Andrea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Koehler, Katrina Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzlova, Daniela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parker, Robert Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Croft, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed to address the assumptions present in the point model. Extracting and utilizing higher-order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher-order moments from a neutron pulse train required the development of advanced dead-time correction algorithms that can correct for dead-time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons produced within an item are created with the same energy distribution. This report discusses the current status of implementation and initial testing of the advanced dead-time correction and analysis algorithms that have been developed in an attempt to utilize higher-order moments to improve the capabilities of correlated neutron measurement techniques.

  20. Cy-preds: An algorithm and a web service for the analysis and prediction of cysteine reactivity.

    Science.gov (United States)

    Soylu, İnanç; Marino, Stefano M

    2016-02-01

    Cysteine (Cys) is a critically important amino acid, serving a variety of functions within proteins, including structural roles, catalysis, and regulation of function through post-translational modifications. Predicting which Cys residues are likely to be reactive is a highly sought-after capability. Few methods are currently available for the task, based either on the evaluation of physicochemical features (e.g., pKa and exposure) or on similarity with known instances. In this study, we developed an algorithm (named HAL-Cy) which blends previous work with novel implementations to distinguish reactive from nonreactive Cys. HAL-Cy presents two major components: (i) an energy-based part, rooted in the evaluation of H-bond network contributions, and (ii) a knowledge-based part, composed of different profiling approaches (including a newly developed weighting matrix for sequence profiling). In our evaluations, HAL-Cy provided significantly improved performance, as tested in comparisons with existing approaches. We implemented our algorithm in a web service (Cy-preds), the ultimate product of our work, and provided it with a variety of additional features, tools, and options: Cy-preds is capable of performing fully automated calculations for a thorough analysis of Cys reactivity in proteins, ranging from reactivity predictions (e.g., with HAL-Cy) to functional characterization. We believe it represents an original, effective, and very useful addition to the current array of tools available to scientists involved in redox biology, Cys biochemistry, and structural bioinformatics. © 2015 Wiley Periodicals, Inc.

  1. DynPeak: An Algorithm for Pulse Detection and Frequency Analysis in Hormonal Time Series

    Science.gov (United States)

    Vidal, Alexandre; Zhang, Qinghua; Médigue, Claire; Fabre, Stéphane; Clément, Frédérique

    2012-01-01

    The endocrine control of the reproductive function is often studied from the analysis of luteinizing hormone (LH) pulsatile secretion by the pituitary gland. Whereas measurements in the cavernous sinus combine anatomical and technical difficulties, LH levels can be easily assessed from jugular blood. However, plasma levels result from a convolution process due to clearance effects when LH enters the general circulation. Simultaneous measurements comparing LH levels in the cavernous sinus and jugular blood have revealed clear differences in pulse shape, amplitude, and baseline. Besides, experimental sampling occurs at a relatively low frequency (typically every 10 min) with respect to the highest-frequency LH release (one pulse per hour), and the resulting LH measurements are corrupted by both experimental and assay errors. As a result, the pattern of plasma LH may not be clearly pulsatile. Yet, reliable information on the InterPulse Intervals (IPI) is a prerequisite for studying precisely the steroid feedback exerted at the pituitary level. Hence, there is a real need for robust IPI detection algorithms. In this article, we present an algorithm for monitoring LH pulse frequency, based both on the available endocrinological knowledge of the LH pulse (shape and duration with respect to the frequency regime) and on synthetic LH data generated by a simple model. We make use of synthetic data to clarify some basic notions underlying our algorithmic choices. We focus on explaining how the process of sampling drastically affects the original pattern of secretion, and especially the amplitude of the detectable pulses. We then describe the algorithm in detail and apply it to different sets of both synthetic and experimental LH time series. We further comment on how to diagnose possible outliers in the series of IPIs, which is the main output of the algorithm. PMID:22802933

  2. Transcriptome-guided amyloid imaging genetic analysis via a novel structured sparse learning algorithm.

    Science.gov (United States)

    Yan, Jingwen; Du, Lei; Kim, Sungeun; Risacher, Shannon L; Huang, Heng; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2014-09-01

    Imaging genetics is an emerging field that studies the influence of genetic variation on brain structure and function. The major task is to examine the association between genetic markers such as single-nucleotide polymorphisms (SNPs) and quantitative traits (QTs) extracted from neuroimaging data. The complexity of these datasets has presented critical bioinformatics challenges that require new enabling tools. Sparse canonical correlation analysis (SCCA) is a bi-multivariate technique used in imaging genetics to identify complex multi-SNP-multi-QT associations. However, most of the existing SCCA algorithms are designed using the soft thresholding method, which assumes that the input features are independent from one another. This assumption clearly does not hold for the imaging genetic data. In this article, we propose a new knowledge-guided SCCA algorithm (KG-SCCA) to overcome this limitation as well as improve learning results by incorporating valuable prior knowledge. The proposed KG-SCCA method is able to model two types of prior knowledge: one as a group structure (e.g. linkage disequilibrium blocks among SNPs) and the other as a network structure (e.g. gene co-expression network among brain regions). The new model incorporates these prior structures by introducing new regularization terms to encourage weight similarity between grouped or connected features. A new algorithm is designed to solve the KG-SCCA model without imposing the independence constraint on the input features. We demonstrate the effectiveness of our algorithm with both synthetic and real data. For real data, using an Alzheimer's disease (AD) cohort, we examine the imaging genetic associations between all SNPs in the APOE gene (i.e. top AD gene) and amyloid deposition measures among cortical regions (i.e. a major AD hallmark). In comparison with a widely used SCCA implementation, our KG-SCCA algorithm produces not only improved cross-validation performances but also biologically meaningful

  3. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

    Science.gov (United States)

    Impelluso, Thomas J

    2003-06-01

    An algorithm for bone remodeling is presented which allows both a redistribution of density and a continuous change of the principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.

  4. METHODOLOGICAL GROUNDS ABOUT ALGORITHM OF DEVELOPMENT ORGANIZATIONAL ANALYSIS OF RAILWAYS OPERATION

    Directory of Open Access Journals (Sweden)

    H. D. Eitutis

    2010-12-01

    Full Text Available It was established that the initial stage of reorganization is to carry out diagnostics of the enterprise, on the basis of which a decision on developing an algorithm for structural transformation is made. Organizational and management analysis is an important component of this diagnostics and is aimed at defining the methods and principles of the enterprise management system. The results of the organizational analysis carried out make it possible to identify the problems and "bottlenecks" in the system of strategic management of Ukrainian railways, both as a whole and in the different directions of their economic activity.

  5. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has advanced significantly in recent years. As with other high-throughput technologies, it faces a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues, and of the algorithms that are being used, or emerging as useful, for obtaining data out of images in an automatic fashion. © The Author 2017. Published by Oxford University Press.

  6. Analysis of stock investment selection based on CAPM using covariance and genetic algorithm approach

    Science.gov (United States)

    Sukono; Susanti, D.; Najmia, M.; Lesmana, E.; Napitupulu, H.; Supian, S.; Putra, A. S.

    2018-03-01

    Investment is one of the factors of economic growth in many countries, especially Indonesia. Stocks are a liquid form of investment. The key decision for investors is to choose stocks that can generate the maximum return at a minimum level of risk; this requires knowing how to allocate capital so as to obtain the optimal benefit. This study discusses stock investment based on the CAPM, estimated using a covariance approach and a genetic algorithm approach. It is assumed that the stocks analyzed follow the CAPM. The beta parameter of the CAPM equation is estimated in two ways: first with the covariance approach, and second with genetic algorithm optimization. As a numerical illustration, this paper analyzes ten stocks traded on the Indonesian capital market. The results show that estimating the beta parameter with the covariance approach and with the genetic algorithm approach leads to the same decision: six underpriced stocks with a buy decision, and four overpriced stocks with a sell decision. Based on the analysis, it can be concluded that the results can serve as a consideration for investors to buy the six underpriced stocks and sell the four overpriced stocks.
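
    The covariance estimate of beta mentioned in the abstract is beta_i = Cov(R_i, R_m) / Var(R_m); a stock is then flagged as underpriced when its mean return exceeds the CAPM expected return. The sketch below, with made-up return series and a made-up risk-free rate, illustrates this decision rule.

```python
import numpy as np

rng = np.random.default_rng(0)
r_m = rng.normal(0.001, 0.02, 250)                    # market returns (made up)
r_i = 0.0005 + 1.2 * r_m + rng.normal(0, 0.01, 250)   # one stock's returns

beta = np.cov(r_i, r_m)[0, 1] / np.var(r_m, ddof=1)   # covariance approach
rf = 0.0002                                           # assumed risk-free rate
expected = rf + beta * (r_m.mean() - rf)              # CAPM expected return
print(f"beta = {beta:.2f}:",
      "underpriced (buy)" if r_i.mean() > expected else "overpriced (sell)")
```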

  7. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    Science.gov (United States)

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…

  8. Sample Data Synchronization and Harmonic Analysis Algorithm Based on Radial Basis Function Interpolation

    Directory of Open Access Journals (Sweden)

    Huaiqing Zhang

    2014-01-01

    Full Text Available Spectral leakage has a harmful effect on the accuracy of harmonic analysis under asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. Firstly, the fundamental period is estimated by a zero-crossing technique with fourth-order Newton interpolation; then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters are calculated by applying the FFT to the synchronized sampling data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared with local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, RBF is a global approximation method that achieves more accurate results while its computation time is about the same as Newton's.
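
    The pipeline in the abstract, estimate the fundamental period, resample synchronously with an RBF interpolant, then apply the FFT, can be sketched as follows. Here the period is assumed known rather than estimated by zero crossings, and SciPy's RBFInterpolator stands in for the paper's RBF stage.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

fs = 1000.0
t = np.arange(0, 0.0995, 1 / fs)            # asynchronous: non-integer periods
x = np.sin(2 * np.pi * 50.3 * t) + 0.2 * np.sin(2 * np.pi * 150.9 * t)

T0 = 1 / 50.3          # fundamental period; in the paper this is estimated
n_per, n_cyc = 64, 4   # samples per period, whole cycles to resample
ts = np.arange(n_per * n_cyc) * (T0 / n_per)        # synchronized grid
xs = RBFInterpolator(t[:, None], x)(ts[:, None])    # RBF resampling
amp = 2 * np.abs(np.fft.rfft(xs)) / len(xs)
print(amp[n_cyc], amp[3 * n_cyc])   # fundamental and 3rd-harmonic amplitudes
```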

  9. A Wavelet Analysis-Based Dynamic Prediction Algorithm to Network Traffic

    Directory of Open Access Journals (Sweden)

    Meng Fan-Bo

    2016-01-01

    Full Text Available Network traffic is a significantly important parameter for network traffic engineering, yet it is highly dynamic in nature, which makes the traffic volume of end-to-end flows difficult to predict directly. This paper proposes a new prediction algorithm for network traffic using wavelet analysis. Firstly, network traffic is converted into the time-frequency domain to capture its time-frequency features. Secondly, network traffic is modeled in each frequency component of the time-frequency domain. Finally, we build the prediction model for network traffic and present the corresponding prediction algorithm. Simulation results indicate that our approach is promising.
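
    A minimal version of this decompose-model-predict pipeline might look like the sketch below: a wavelet multiresolution analysis of a synthetic traffic trace, an AR(1) model per frequency band (our simplification of "model each component"), and a one-step forecast formed by summing the band forecasts. It assumes the PyWavelets package.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
traffic = 100 + 20 * np.sin(np.arange(256) / 10) + rng.normal(0, 5, 256)

coeffs = pywt.wavedec(traffic, "db4", level=3)
forecast = 0.0
for i in range(len(coeffs)):
    parts = [np.zeros_like(c) for c in coeffs]
    parts[i] = coeffs[i]
    band = pywt.waverec(parts, "db4")[:len(traffic)]      # band signal (MRA)
    phi = band[1:] @ band[:-1] / (band[:-1] @ band[:-1])  # AR(1) coefficient
    forecast += phi * band[-1]                            # one-step band forecast
print(forecast)
```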

  10. Sensitivity analysis and optimization algorithms for 3D forging process design

    International Nuclear Information System (INIS)

    Do, T.T.; Fourment, L.; Laroussi, M.

    2004-01-01

    This paper presents several approaches for preform shape optimization in 3D forging. The process simulation is carried out using the FORGE3® finite element software, and the optimization problem concerns the shape of initial axisymmetric preforms. Several objective functions are considered, such as the forging energy, the forging force, and a surface defect criterion. Both deterministic and stochastic optimization algorithms are tested for 3D applications. The deterministic approach uses sensitivity analysis, which provides the gradient of the objective function, obtained by the adjoint-state method and semi-analytical differentiation. The study of stochastic approaches compares genetic algorithms and evolution strategies. Numerical results show the feasibility of such approaches, i.e., achieving satisfactory solutions within a limited number of 3D simulations, fewer than fifty. For a more industrial problem, the forging of a gear, encouraging optimization results are obtained

  11. The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms

    Science.gov (United States)

    Raghavan, Prabhakar

    By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.

  12. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    Science.gov (United States)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification which consists of static frequency testing of blades, the dichotomy method, and finite element analysis. A lumped-parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, thereby providing both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.

  13. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. In recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object-tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object-tracking function.

  14. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    Science.gov (United States)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., errors in input data, model structure, and model parameters), making the quantification of uncertainty in hydrological modeling imperative and meant to improve the reliability of modeling results. The uncertainty analysis must overcome difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of the parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted with four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.91, NSE > 0.89, and PBIAS around 0.18 in the uncertainty analysis. Indeed, uncertainty must be accounted for when the outcomes of the model are used for policy or management decisions.
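
    Of the four algorithms, GLUE is the simplest to sketch: sample parameters, keep the "behavioral" sets whose likelihood measure (here NSE) passes a threshold, and read the P-factor and R-factor off the resulting prediction band. The linear toy model, the threshold, and the priors below are our own assumptions, not the SWAT setup.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
obs = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)   # synthetic observations

def nse(sim):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform([0, -5], [5, 5], size=(5000, 2))   # (a, b) priors
sims = samples[:, :1] * x + samples[:, 1:]               # all model runs at once
scores = np.array([nse(s) for s in sims])
behavioral = sims[scores > 0.7]                          # GLUE behavioral threshold
lo, hi = np.percentile(behavioral, [2.5, 97.5], axis=0)  # 95% prediction band
p_factor = np.mean((obs >= lo) & (obs <= hi))   # fraction of obs inside the band
r_factor = np.mean(hi - lo) / obs.std()         # relative width of the band
print(p_factor, r_factor)
```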

  15. A simple algorithm for subregional striatal uptake analysis with partial volume correction in dopaminergic PET imaging

    International Nuclear Information System (INIS)

    Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin

    2014-01-01

    In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake makes the diagnostic performance more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes the PVE into account based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to the PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume-of-interest (VOI) analysis and the large-VOI technique. Clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher with PVC than without. Among the PD patients, the SOR values in each structure and the quantitative disease severity ratings were significantly related only when PVC was used. For the simulation studies, the average absolute percentage errors of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2

  16. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    Science.gov (United States)

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

    We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.
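
    For reference, computing the total completion time of a given sequence in a no-wait flow shop reduces to the classical pairwise-delay formula d(i,j) = max_k ( sum_{l<=k} p_il - sum_{l<k} p_jl ). The sketch below applies it with an SPT-style sequence; the data and the exact tie-breaking of the paper's SPTA rules are our assumptions.

```python
import numpy as np

def delay(pi, pj):
    # minimal start offset of job j after job i under the no-wait constraint
    return max(pi[:k + 1].sum() - pj[:k].sum() for k in range(len(pi)))

def total_completion_time(p, order, release):
    start, total, prev = 0.0, 0.0, None
    for j in order:
        start = release[j] if prev is None else max(start + delay(p[prev], p[j]),
                                                    release[j])
        total += start + p[j].sum()   # completion = start + total processing time
        prev = j
    return total

p = np.array([[3, 2, 4], [1, 5, 2], [2, 2, 2]], float)  # jobs x machines
r = np.array([0.0, 1.0, 0.0])                           # release dates
order = np.argsort(p.sum(axis=1))                       # SPT-style sequence
print(total_completion_time(p, order, r))
```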

  17. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2014-01-01

    Full Text Available We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.

  18. [Algorithm for securing an unexpected difficult airway : User analysis on a simulator].

    Science.gov (United States)

    Ott, T; Truschinski, K; Kriege, M; Naß, M; Herrmann, S; Ott, V; Sellin, S

    2018-01-01

    Critical incidents in difficult airway management are still a major contributory factor to perioperative morbidity and mortality. Many national associations have developed algorithms for the management of these time-critical events. For the implementation of these algorithms, the provision of technical requirements and procedure-related training are essential. Severe airway incidents are rare events and the clinical experience of individual operators is limited; therefore, simulation is an adequate instrument for training and evaluating difficult airway algorithms. The aim of this observational study was to evaluate the application of the institutional difficult airway algorithm among anesthetists. After ethics committee approval, anesthetists were observed while treating a "cannot intubate" (CI) and a "cannot intubate, cannot ventilate" (CICV) situation in the institutional simulation center. As the leader of a supporting team, each participant had to deal with an unexpected difficult airway after induction of anesthesia in a patient simulator. The following data were recorded: the sequence of the applied airway instruments, the time to ventilation after establishing a secured airway using any instrument in the CI situation, and the time to ventilation via cricothyrotomy in the CICV situation. Conformity to the algorithm was defined by the sequence of the applied instruments. The analysis comprised conformity to the algorithm, non-parametric tests for time to ventilation, and differences between junior and senior anesthetists. Out of 50 participants, 45 were analyzed in the CI situation. In this situation 93% of the participants acted in conformity with the algorithm. In 62% the airway was secured by flexible intubation endoscopy, in 38% with another device. Data from 46 participants were analyzed in the CICV situation. In this situation 91% acted in conformity with the algorithm. The last device used prior to the decision for cricothyrotomy was flexible intubation endoscopy in 39%, a

  19. Informational and linguistic analysis of large genomic sequence collections via efficient Hadoop cluster algorithms.

    Science.gov (United States)

    Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele

    2018-06-01

    Information-theoretic and compositional/linguistic analysis of genomes have a central role in bioinformatics, all the more so since the associated methodologies are becoming very valuable also for epigenomic and meta-genomic studies. The kernel of those methods is the collection of k-mer statistics, i.e. how many times each k-mer in {A,C,G,T}^k occurs in a DNA sequence. Although this problem is computationally very simple and efficiently solvable on a conventional computer, the sheer amount of data now available in applications demands parallel and distributed computing. Indeed, algorithms of this type have been developed to collect k-mer statistics in the realm of genome assembly. However, they are so specialized to that domain that they do not extend easily to the computation of informational and linguistic indices, carried out concurrently on sets of genomes. Following the well-established approach, in many disciplines and with growing success in bioinformatics, of resorting to MapReduce and Hadoop to deal with 'Big Data' problems, we present KCH, the first set of MapReduce algorithms able to perform concurrent informational and linguistic analysis of large collections of genomic sequences on a Hadoop cluster. The benchmarking of KCH that we provide indicates that it is quite effective and versatile. It is also competitive with respect to the parallel and distributed algorithms highly specialized to k-mer statistics collection for genome assembly problems. In conclusion, KCH is a much-needed addition to the growing number of algorithms and tools that use MapReduce for bioinformatics core applications. The software, including instructions for running it over Amazon AWS, as well as the datasets, are available at http://www.di-srv.unisa.it/KCH. umberto.ferraro@uniroma1.it. Supplementary data are available at Bioinformatics online.
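
    The kernel computation named in the abstract, k-mer counting plus an informational index, is a few lines of sequential code; the Hadoop version distributes exactly this, with mappers emitting (k-mer, 1) pairs and reducers summing them. The sequence and the choice of entropy as the index are illustrative.

```python
import math
from collections import Counter

def kmer_stats(seq, k):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    n = sum(counts.values())
    # empirical k-mer entropy, one possible informational index
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    return counts, h

counts, h = kmer_stats("ACGTACGTGACCTACGT", 3)
print(counts.most_common(3), round(h, 3))
```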

  20. A new algorithm for least-cost path analysis by correcting digital elevation models of natural landscapes

    Science.gov (United States)

    Baek, Jieun; Choi, Yosoon

    2017-04-01

    Most algorithms for least-cost path analysis calculate the slope gradient between the source cell and the adjacent cells to reflect terrain-slope weights in the calculation of travel costs. However, these algorithms are limited in that they cannot analyze the least-cost path between two cells when obstacle cells with very high or low terrain elevation exist between the source cell and the target cell. This study presents a new algorithm for least-cost path analysis that corrects digital elevation models of natural landscapes to find feasible paths satisfying a constraint of maximum or minimum slope gradient. The new algorithm calculates the slope gradient between the center cell and non-adjacent cells using the concept of extended move-sets. If the algorithm finds feasible paths between the center cell and non-adjacent cells that satisfy the slope constraint, the terrain elevation of obstacle cells lying between the two cells is corrected in the digital elevation model. After calculating the cumulative travel costs to the destination, with a weight reflecting the difference between the original and corrected elevations, the algorithm derives the least-cost path. Applying the proposed algorithm to synthetic and real-world data sets shows that the new algorithm can provide more accurate least-cost paths than conventional algorithms implemented in commercial GIS software such as ArcGIS.
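
    A hedged sketch of the underlying search: Dijkstra's algorithm over a DEM grid in which a move whose slope exceeds the constraint is not forbidden but penalized in proportion to the elevation correction it would require, loosely mirroring the idea of correcting obstacle cells rather than routing around them. Cell size, penalty weight, and slope limit are invented parameters.

```python
import heapq
import numpy as np

def least_cost_path(dem, start, goal, cell=30.0, max_slope=0.3, penalty=10.0):
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < dem.shape[0] and 0 <= nc < dem.shape[1]:
                run = cell * np.hypot(dr, dc)
                slope = abs(dem[nr, nc] - dem[r, c]) / run
                # distance cost plus a correction penalty for too-steep moves
                w = run * (1.0 + penalty * max(0.0, slope - max_slope))
                if d + w < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = d + w
                    heapq.heappush(pq, (d + w, (nr, nc)))
    return np.inf

dem = np.array([[0, 1, 9], [1, 2, 3], [2, 3, 4]], float)  # 9 = obstacle cell
print(least_cost_path(dem, (0, 0), (2, 2)))
```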

  1. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    &MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis. Finally, section VI

  2. Comparative performance analysis of the artificial-intelligence-based thermal control algorithms for the double-skin building

    International Nuclear Information System (INIS)

    Moon, Jin Woo

    2015-01-01

    This study aimed at developing artificial intelligence (AI) theory-based optimal control algorithms for improving the indoor temperature conditions and heating energy efficiency of double-skin buildings. For this, one conventional rule-based algorithm and four AI-based algorithms, employing artificial neural networks (ANN), fuzzy logic (FL), and adaptive neuro-fuzzy inference systems (ANFIS), were developed for operating the surface openings of the double skin and the heating system. A numerical computer simulation method incorporating the matrix laboratory (MATLAB) and the transient systems simulation (TRNSYS) software was used for the comparative performance tests. The analysis results revealed that advanced thermal-environment comfort and stability can be provided by the AI-based algorithms. In particular, the FL and ANFIS algorithms were superior to the ANN algorithm in terms of providing better thermal conditions. The ANN-based algorithm, however, proved its potential to be the most energy-efficient and stable strategy among the four AI-based algorithms. It can be concluded that the optimal algorithm can be determined differently according to the major focus of the strategy. If comfortable thermal conditions are the principal interest, then the FL or ANFIS algorithm could be the proper solution, and if energy saving for space heating and system operation stability are the main concerns, then the ANN-based algorithm may be applicable. - Highlights: • Integrated control algorithms were developed for the heating system and surface openings. • AI theories were applied to the control algorithms. • ANN, FL, and ANFIS were the applied AI theories. • Comparative performance tests were conducted using computer simulation. • AI algorithms presented a superior temperature environment.

  3. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    International Nuclear Information System (INIS)

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in the non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as neutron coincidence and multiplicity counters currently used by the International Atomic Energy Agency, have proven to be effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations exist within the algorithms that are used to process and analyze the detected signals from these counters that were initially developed approximately 20 years ago based on the technology and computing power that was available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms that are based on fundamental physics principles that should lead to the development of more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, connecting the two main types of analysis techniques used to quantify the data (Shift register analysis and Feynman variance to mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed

  4. Aneurysmal subarachnoid hemorrhage prognostic decision-making algorithm using classification and regression tree analysis.

    Science.gov (United States)

    Lo, Benjamin W Y; Fukuda, Hitoshi; Angle, Mark; Teitelbaum, Jeanne; Macdonald, R Loch; Farrokhyar, Forough; Thabane, Lehana; Levine, Mitchell A H

    2016-01-01

    Classification and regression tree analysis involves the creation of a decision tree by recursive partitioning of a dataset into more homogeneous subgroups. Thus far, there is scarce literature on using this technique to create clinical prediction tools for aneurysmal subarachnoid hemorrhage (SAH). The classification and regression tree analysis technique was applied to the multicenter Tirilazad database (3551 patients) in order to create the decision-making algorithm. In order to elucidate prognostic subgroups in aneurysmal SAH, neurologic, systemic, and demographic factors were taken into account. The dependent variable used for the analysis was the dichotomized Glasgow Outcome Score at 3 months. Classification and regression tree analysis revealed seven prognostic subgroups. Neurological grade, occurrence of post-admission stroke, occurrence of post-admission fever, and age represented the explanatory nodes of this decision tree. Split-sample validation revealed a classification accuracy of 79% for the training dataset and 77% for the testing dataset. In addition, the occurrence of fever at 1 week post-aneurysmal SAH is associated with increased odds of post-admission stroke (odds ratio: 1.83, 95% confidence interval: 1.56-2.45). A decision tree was generated, which serves as a prediction tool to guide bedside prognostication and clinical treatment decision making. This prognostic decision-making algorithm also sheds light on the complex interactions between a number of risk factors in determining outcome after aneurysmal SAH.
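
    The recursive partitioning described here corresponds to an off-the-shelf CART fit; a sketch with synthetic stand-ins for the four explanatory variables (not the Tirilazad data) shows how a seven-leaf tree like the paper's can be produced and printed.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.integers(1, 6, n),    # neurological grade
    rng.integers(0, 2, n),    # post-admission stroke
    rng.integers(0, 2, n),    # post-admission fever
    rng.integers(20, 90, n),  # age
])
risk = 0.15 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2] + 0.01 * X[:, 3]
y = (risk + rng.normal(0, 0.3, n) > 1.6).astype(int)   # dichotomized outcome

tree = DecisionTreeClassifier(max_leaf_nodes=7, min_samples_leaf=50)
tree.fit(X, y)
print(export_text(tree, feature_names=["grade", "stroke", "fever", "age"]))
```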

  5. Development of Multi-perspective Diagnostics and Analysis Algorithms with Applications to Subsonic and Supersonic Combustors

    Science.gov (United States)

    Wickersham, Andrew Joseph

    There are two critical research needs for the study of hydrocarbon combustion in high-speed flows: 1) combustion diagnostics with adequate temporal and spatial resolution, and 2) mathematical techniques that can extract key information from large datasets. The goal of this work is to address these needs, respectively, through high-speed, multi-perspective chemiluminescence measurements and advanced mathematical algorithms. To obtain the measurements, this work explored the application of high-speed chemiluminescence diagnostics and the use of fiber-based endoscopes (FBEs) for non-intrusive and multi-perspective chemiluminescence imaging at up to 20 kHz. Non-intrusive and full-field imaging measurements provide a wealth of information for model validation and design optimization of propulsion systems. However, it is challenging to obtain such measurements due to various implementation difficulties such as optical access, thermal management, and equipment cost. This work therefore explores the application of FBEs for non-intrusive imaging in supersonic propulsion systems. The FBEs used in this work are demonstrated to overcome many of the aforementioned difficulties and provided datasets from multiple angular positions at up to 20 kHz in a supersonic combustor. The combustor operated on ethylene fuel at Mach 2 with an inlet stagnation temperature and pressure of approximately 640 degrees Fahrenheit and 70 psia, respectively. The imaging measurements were obtained from eight perspectives simultaneously, providing full-field datasets under such flow conditions for the first time and allowing the possibility of inferring multi-dimensional measurements. Due to their high-speed and multi-perspective nature, such new diagnostic capabilities generate a large volume of data and call for analysis algorithms that can process the data and extract key physics effectively. To extract the key combustion dynamics from the measurements, three mathematical methods were investigated in this work.

  6. Configuration space analysis of common cost functions in radiotherapy beam-weight optimization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rowbottom, Carl Graham [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom); Webb, Steve [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)

    2002-01-07

    The successful implementation of downhill search engines in radiotherapy optimization algorithms depends on the absence of local minima in the search space. Such techniques are much faster than stochastic optimization methods but may become trapped in local minima if they exist. A technique known as 'configuration space analysis' was applied to examine the search space of cost functions used in radiotherapy beam-weight optimization algorithms. A downhill-simplex beam-weight optimization algorithm was run repeatedly to produce a frequency distribution of final cost values. By plotting the frequency distribution as a function of final cost, the existence of local minima can be determined. Common cost functions, such as the quadratic deviation of dose to the planning target volume (PTV), integral dose to organs-at-risk (OARs), and dose-threshold and dose-volume constraints for OARs, were studied. Combinations of the cost functions were also considered. Simple cost function terms, such as the quadratic PTV dose and the integral OAR dose, are not susceptible to local minima. In contrast, dose-threshold and dose-volume OAR constraint terms are able to produce local minima in the example case studied. (author)
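
    The procedure is easy to reproduce in miniature: run a downhill simplex repeatedly from random starting weights and histogram the final costs; a multimodal histogram signals local minima. The two-parameter cost below is a made-up multimodal stand-in, not a clinical objective.

```python
import numpy as np
from scipy.optimize import minimize

def cost(w):  # deliberately multimodal toy "beam-weight" objective
    return w[0]**2 + w[1]**2 + 2*np.sin(3*w[0])**2 + 2*np.sin(3*w[1])**2

rng = np.random.default_rng(4)
finals = [minimize(cost, rng.uniform(-3, 3, 2), method="Nelder-Mead").fun
          for _ in range(200)]
counts, _ = np.histogram(finals, bins=20)
print(counts)   # a single spike near zero would mean no trapped runs
```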

  7. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced when components become detached from the structures of the reactor coolant system (RCS) or enter the RCS from outside during test operation, refueling, and overhaul. These loose parts are carried along by the reactor coolant fluid and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an analysis algorithm for estimating the impact point and mass of loose parts is developed. The developed algorithm was tested with impact test data from Yonggwang-3. The impact point estimated using the proposed algorithm showed a 5 percent error relative to the real test data. The estimated mass was analyzed to within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of the impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. By this comparison, the integrity of the sensors and the monitoring system could also be checked. (author)

  8. NETWORKS OF NANOPARTICLES IN ORGANIC – INORGANIC COMPOSITES: ALGORITHMIC EXTRACTION AND STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Ralf Thiedmann

    2012-03-01

    Full Text Available The rising global demand for energy and the limited resources of fossil fuels require new technologies in renewable energy such as solar cells. Silicon solar cells offer good efficiency but suffer from high production costs. A promising alternative is polymer solar cells, due to their potentially low production costs and the high flexibility of the panels. In this paper, the nanostructure of organic-inorganic composites is investigated, which can be used as photoactive layers in hybrid-polymer solar cells. These materials consist of a polymeric (OC1C10-PPV) phase with CdSe nanoparticles embedded therein. On the basis of 3D image data with high spatial resolution, gained by electron tomography, an algorithm is developed to automatically extract the CdSe nanoparticles, which we model as spheres, from grayscale images. The algorithm is based on a modified version of the Hough transform, where a watershed algorithm is used to separate the image data into basins such that each basin contains exactly one nanoparticle. After their extraction, neighboring nanoparticles are connected to form a 3D network that is related to the transport of electrons in polymer solar cells. A detailed statistical analysis of the CdSe network morphology is carried out, which allows deeper insight into the hopping percolation pathways of electrons.
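
    The watershed separation step has a standard scikit-image recipe, sketched below in 2D on touching synthetic disks (the paper works on 3D tomograms): a distance transform, one marker per local maximum, then watershed basins with one particle each. Sizes and parameters are invented.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

img = np.zeros((80, 80), bool)
yy, xx = np.ogrid[:80, :80]
for cy, cx in [(30, 30), (30, 48), (55, 40)]:   # two touching + one free disk
    img |= (yy - cy) ** 2 + (xx - cx) ** 2 <= 12 ** 2

dist = ndimage.distance_transform_edt(img)
coords = peak_local_max(dist, min_distance=10, labels=img)
markers = np.zeros(img.shape, int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
basins = watershed(-dist, markers, mask=img)    # one basin per particle
print(len(np.unique(basins)) - 1, "particles separated")
```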

  9. A pooling-LiNGAM algorithm for effective connectivity analysis of fMRI data

    Directory of Open Access Journals (Sweden)

    Lele eXu

    2014-10-01

    Full Text Available The independent component analysis based linear non-Gaussian acyclic model (LiNGAM), an algorithm that can be used to estimate the causal relationships among non-Gaussian distributed data, has potential value for detecting the effective connectivity of human brain areas. Under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) the data have non-Gaussian distributions, LiNGAM can be used to discover the complete causal structure of the data. Previous studies reveal that the algorithm performs well when the number of analyzed data points is relatively large. However, there are too few data points in most neuroimaging recordings, especially functional magnetic resonance imaging (fMRI), to allow the algorithm to converge. Smith's study speculates that pooling data points across subjects may be useful to address this issue (Smith et al., 2011). This study therefore focuses on validating Smith's proposal of pooling data points across subjects for the use of LiNGAM, a method we name pooling-LiNGAM (pLiNGAM). Using both simulated and real fMRI data, our current study demonstrates the feasibility and efficiency of pLiNGAM for effective connectivity estimation.
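
    The pooling idea itself is a one-liner: standardize each subject's ROI time series and stack them along the time axis before a single LiNGAM fit. The sketch below assumes the `lingam` Python package and uses simulated non-Gaussian data with a known causal chain.

```python
import numpy as np
import lingam   # assumes the `lingam` package (pip install lingam)

rng = np.random.default_rng(5)
subjects = []
for _ in range(20):                              # 20 subjects, 150 scans, 3 ROIs
    e = rng.standard_t(df=5, size=(150, 3))      # non-Gaussian innovations
    x0 = e[:, 0]
    x1 = 0.8 * x0 + e[:, 1]                      # true chain: x0 -> x1 -> x2
    x2 = 0.5 * x1 + e[:, 2]
    X = np.column_stack([x0, x1, x2])
    subjects.append((X - X.mean(0)) / X.std(0))  # per-subject standardization
pooled = np.vstack(subjects)                     # pooled data points

model = lingam.DirectLiNGAM()
model.fit(pooled)
print(model.causal_order_)
print(model.adjacency_matrix_.round(2))
```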

  10. Algorithmic processing of intrinsic signals in affixed transmission speckle analysis (ATSA) (Conference Presentation)

    Science.gov (United States)

    Ghijsen, Michael T.; Tromberg, Bruce J.

    2017-03-01

    Affixed Transmission Speckle Analysis (ATSA) is a recently developed method for measuring blood flow, based on laser speckle imaging miniaturized into a clip-on form factor the size of a pulse oximeter. Measuring at a rate of 250 Hz, ATSA is capable of obtaining the cardiac waveform in blood-flow data, referred to as the speckle plethysmogram (SPG). ATSA is also capable of simultaneously measuring the photoplethysmogram (PPG), a more conventional signal related to light intensity. In this work we present several novel algorithms for extracting physiologically relevant information from the combined SPG-PPG waveform data. First, we show that there is a slight time delay between the SPG and the PPG that can be extracted computationally. Second, we present a set of frequency-domain algorithms that measure harmonic content on a pulse-by-pulse basis for both the SPG and the PPG. Finally, we apply these algorithms to data obtained from a set of subjects including healthy controls and individuals with heightened cardiovascular risk. We hypothesize that the time delay and the frequency content are correlated with cardiovascular health, specifically with vascular stiffening.
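
    The SPG-PPG time delay mentioned above is typically recovered from the peak of a cross-correlation; the sketch below does this for simulated 250 Hz waveforms with a known 5-sample (20 ms) offset. The pulse shape is a crude stand-in for real plethysmograms.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 0.6 * t) ** 8             # ~72 bpm pulse train
rng = np.random.default_rng(6)
ppg = pulse + 0.05 * rng.normal(size=t.size)
spg = np.roll(pulse, 5) + 0.05 * rng.normal(size=t.size)  # SPG lags by 5 samples

xc = np.correlate(spg - spg.mean(), ppg - ppg.mean(), mode="full")
lag = np.argmax(xc) - (len(ppg) - 1)                 # delay in samples
print(lag / fs * 1000, "ms")                         # ~20 ms
```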

  11. Guidance and Control Algorithms for the Mars Entry, Descent and Landing Systems Analysis

    Science.gov (United States)

    Davis, Jody L.; Dwyer Cianciolo, Alicia M.; Powell, Richard W.; Shidner, Jeremy D.; Garcia-Llama, Eduardo

    2010-01-01

    The purpose of the Mars Entry, Descent and Landing Systems Analysis (EDL-SA) study was to identify feasible technologies that will enable human exploration of Mars, specifically to deliver large payloads to the Martian surface. This paper focuses on the methods used to guide and control two of the contending technologies, a mid-lift-to-drag (L/D) rigid aeroshell and a hypersonic inflatable aerodynamic decelerator (HIAD), through the entry portion of the trajectory. The Program to Optimize Simulated Trajectories II (POST2) is used to simulate and analyze the trajectories of the contending technologies and guidance and control algorithms. Three guidance algorithms are discussed in this paper: EDL theoretical guidance, Numerical Predictor-Corrector (NPC) guidance and Analytical Predictor-Corrector (APC) guidance. EDL-SA also considered two forms of control: bank angle control, similar to that used by Apollo and the Space Shuttle, and a center-of-gravity (CG) offset control. This paper presents the performance comparison of these guidance algorithms and summarizes the results as they impact the technology recommendations for future study.

  12. Analysis of Electromagnetic Propagation from MHz to THz with a Memory-Optimised CPML-FDTD Algorithm

    Directory of Open Access Journals (Sweden)

    A. Rodríguez-Sánchez

    2018-01-01

    Full Text Available The FDTD method opened a fertile research area in the numerical analysis of electromagnetic phenomena under a wide range of media and propagation conditions, providing extensive analysis of electromagnetic behaviour such as propagation, reflection, refraction, and multi-trajectory phenomena. In this paper, we present an optimised FDTD-CPML algorithm, focused on saving memory while increasing the performance of the algorithm. We implement the FDTD-CPML method particularly at the high-frequency bands used in several telecommunications applications as well as in nanoelectromagnetism. We present an analysis of the performance of the algorithm in single and double precision, as well as a stability analysis of the algorithm, from which we conclude that the implemented CPML ABC constitutes a robust choice in terms of precision and accuracy for the high frequencies considered herein. It is important to recall that the CPML ABC parameters provided in this paper are fixed for the tested range of frequencies, that is, from MHz to THz.
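
    For orientation, the core Yee update that any such solver iterates is only a few lines in 1D; the sketch below uses normalized units, a Courant number of 0.5, and a soft Gaussian source, and omits the CPML absorbing boundaries that the paper optimizes (the grid ends here are simply reflecting).

```python
import numpy as np

nz, nt = 400, 800
ez, hy = np.zeros(nz), np.zeros(nz)
for n in range(nt):
    ez[1:] += 0.5 * (hy[:-1] - hy[1:])            # E-field update (Courant = 0.5)
    ez[100] += np.exp(-((n - 60) / 15.0) ** 2)    # soft Gaussian source
    hy[:-1] += 0.5 * (ez[:-1] - ez[1:])           # H-field update, half step later
print(float(np.abs(ez).max()))
```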

  13. Scalability of Comparative Analysis, Novel Algorithms and Tools (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Mavrommatis, Kostas

    2011-10-12

    DOE JGI's Kostas Mavrommatis, chair of the Scalability of Comparative Analysis, Novel Algorithms and Tools panel, at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  14. A utility/cost analysis of breast cancer risk prediction algorithms

    Science.gov (United States)

    Abbey, Craig K.; Wu, Yirong; Burnside, Elizabeth S.; Wunderlich, Adam; Samuelson, Frank W.; Boone, John M.

    2016-03-01

    Breast cancer risk prediction algorithms are used to identify subpopulations that are at increased risk of developing breast cancer. They can be based on many different sources of data, such as demographics, relatives with cancer, gene expression, and various phenotypic features such as breast density. Women who are identified as high risk may undergo a more extensive (and expensive) screening process that includes MRI or ultrasound imaging in addition to the standard full-field digital mammography (FFDM) exam. Given that there are many ways risk prediction may be accomplished, it is of interest to evaluate the approaches in terms of expected cost, which includes the costs of diagnostic outcomes. In this work we perform an expected-cost analysis of risk prediction algorithms based on a published model that includes the costs associated with diagnostic outcomes (true-positive, false-positive, etc.). We assume the existence of a standard screening method and an enhanced screening method with higher scan cost, higher sensitivity, and lower specificity. We then assess the expected cost of using a risk prediction algorithm to determine who gets the enhanced screening method, under the strong assumption that risk and diagnostic performance are independent. We find that if risk prediction leads to a high enough positive predictive value, it will be cost-effective regardless of the size of the subpopulation. Furthermore, in terms of the hit rate and false-alarm rate of the risk prediction algorithm, iso-cost contours are lines with slope determined by properties of the available diagnostic systems for screening.
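
    A toy version of such an expected-cost calculation: a risk algorithm with hit rate HR and false-alarm rate FAR routes women to the enhanced screen, and the cost sums scan costs plus the costs of missed cancers and false positives. All prices, prevalences, and operating points below are invented inputs, not the published model's values.

```python
def expected_cost(prev, hr, far,
                  c_scan=(100.0, 400.0),        # standard, enhanced scan cost
                  sens=(0.85, 0.95), spec=(0.92, 0.88),
                  c_fn=10000.0, c_fp=500.0):
    cost = 0.0
    for truth, p_truth in ((1, prev), (0, 1.0 - prev)):
        p_route = hr if truth else far          # probability of enhanced screen
        for enh, p_r in ((1, p_route), (0, 1.0 - p_route)):
            miss = (1.0 - sens[enh]) if truth else 0.0
            fp = (1.0 - spec[enh]) if not truth else 0.0
            cost += p_truth * p_r * (c_scan[enh] + miss * c_fn + fp * c_fp)
    return cost

print(expected_cost(prev=0.005, hr=0.6, far=0.1))   # cost per woman screened
```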

  15. Quantitative analysis of emphysema in 3D using MDCT: Influence of different reconstruction algorithms

    International Nuclear Information System (INIS)

    Ley-Zaporozhan, Julia; Ley, Sebastian; Weinheimer, Oliver; Iliyushenko, Svitlana; Erdugan, Serap; Eberhardt, Ralf; Fuxa, Adelheid; Mews, Juergen; Kauczor, Hans-Ulrich

    2008-01-01

    Purpose: The aim of the study was to compare the influence of different reconstruction algorithms on quantitative emphysema analysis in patients with severe emphysema. Material and methods: Twenty-five patients suffering from severe emphysema were included in the study. All patients underwent inspiratory MDCT (Aquilion-16, slice thickness 1/0.8 mm). The raw data were reconstructed using six different algorithms: bone kernel with beam-hardening correction (BHC), soft-tissue kernel with BHC, standard soft-tissue kernel, smooth soft-tissue kernel (internal reference standard), standard lung kernel, and high-convolution kernel. The only difference between the image data sets was the algorithm employed to reconstruct the raw data; no additional radiation was required. CT data were analysed using self-written emphysema detection and quantification software providing lung volume, emphysema volume (EV), emphysema index (EI), and mean lung density (MLD). Results: The use of kernels with BHC led to a significant decrease in MLD (5%) and EI (61-79%) in comparison with kernels without BHC. The absolute differences (from the smooth soft-tissue kernel) in MLD ranged from -0.6 to -6.1 HU and were significantly different for all kernels. The EV showed absolute differences between -0.05 and -0.4 L and was significantly different for all kernels. The EI showed absolute differences between -0.8 and -5.1 and was significantly different for all kernels. Conclusion: The use of kernels with BHC led to a significant decrease in MLD and EI. The absolute differences between the different kernels without BHC were small, but they were larger than the known interscan variation in patients. Thus, for follow-up examinations the same reconstruction algorithm has to be used, and the use of BHC has to be avoided

  16. Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.

    Science.gov (United States)

    Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy

    2016-01-01

    Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics has been lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
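
    A minimal sketch of the kind of comparison performed in the study, pairing a stand-alone quality metric (modularity) with information recovery metrics on a synthetic graph with planted communities. It assumes NetworkX 2.8 or later (for louvain_communities) and scikit-learn; Louvain stands in for the four algorithms named above.

    import networkx as nx
    from networkx.algorithms import community
    from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

    # Four planted groups of 50 nodes; nodes are numbered consecutively
    # by group, so the true label of node v is v // 50.
    G = nx.planted_partition_graph(l=4, k=50, p_in=0.2, p_out=0.01, seed=0)
    true_labels = [v // 50 for v in G]

    parts = community.louvain_communities(G, seed=0)
    pred_labels = [0] * G.number_of_nodes()
    for cid, part in enumerate(parts):
        for v in part:
            pred_labels[v] = cid

    print("modularity:", community.modularity(G, parts))
    print("ARI:", adjusted_rand_score(true_labels, pred_labels))
    print("NMI:", normalized_mutual_info_score(true_labels, pred_labels))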

  17. SBUV version 8.6 Retrieval Algorithm: Error Analysis and Validation Technique

    Science.gov (United States)

    Kramarova, N. A.; Bhartia, P. K.; Frith, P. K.; McPeters, S. M.; Labow, R. D.; Taylor, G.; Fisher, S.; DeLand, M.

    2012-01-01

    The SBUV version 8.6 algorithm was used to reprocess data from the Backscattered Ultraviolet (BUV), the Solar Backscattered Ultraviolet (SBUV) and a number of SBUV/2 instruments, which span a 41-year period from 1970 to 2011 (except a 5-year gap in the 1970s) [see Bhartia et al., 2012]. In the new version the Daumont et al. [1992] ozone cross sections were used, and new ozone [McPeters et al., 2007] and cloud [Joiner and Bhartia, 1995] climatologies were implemented. The algorithm uses the Optimum Estimation technique [Rodgers, 2000] to retrieve ozone profiles as layer amounts (partial columns, DU) on 21 pressure layers. The corresponding total ozone values are calculated by summing the ozone columns of the individual layers. The algorithm is optimized to accurately retrieve monthly zonal mean (mzm) profiles rather than individual profiles, since it uses a monthly zonal mean ozone climatology as the a priori. Thus, the SBUV version 8.6 ozone dataset is better suited for long-term trend analysis and monitoring of ozone changes than for studying short-term ozone variability. Here we discuss some characteristics of the SBUV algorithm and sources of error in the SBUV profile and total ozone retrievals. For the first time the averaging kernels, smoothing errors and weighting functions (or Jacobians) are included in the SBUV metadata. The averaging kernels (AK) represent the sensitivity of the retrieved profile to the true state and contain valuable information about the retrieval algorithm, such as the vertical resolution, the degrees of freedom for signal (DFS) and the retrieval efficiency [Rodgers, 2000]. Analysis of the AK for mzm ozone profiles shows that the total DFS for ozone profiles varies from 4.4 to 5.5 out of the 6-9 wavelengths used for retrieval. The number of wavelengths in turn depends on the solar zenith angle. Between 25 and 0.5 hPa, where the SBUV vertical resolution is highest, the DFS for individual layers are about 0.5.
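
    A numpy sketch of the averaging-kernel diagnostics named above: the degrees of freedom for signal as the trace of the averaging kernel and the smoothing error mapped through the a priori covariance (Rodgers, 2000). The matrices here are synthetic placeholders, not actual SBUV metadata.

    import numpy as np

    rng = np.random.default_rng(0)
    n_layers = 21
    # Placeholder averaging kernel; in practice this is read from metadata.
    A = 0.25 * np.eye(n_layers) + 0.01 * rng.random((n_layers, n_layers))

    dfs = np.trace(A)                 # degrees of freedom for signal
    efficiency = A.sum(axis=1)        # retrieval efficiency per layer

    Sa = 4.0 * np.eye(n_layers)       # placeholder a priori covariance (DU^2)
    I = np.eye(n_layers)
    S_smooth = (A - I) @ Sa @ (A - I).T   # smoothing error covariance

    print(f"total DFS: {dfs:.2f}")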

  18. A Cooperative Framework for Fireworks Algorithm

    OpenAIRE

    Zheng, Shaoqiu; Li, Junzhi; Janecek, Andreas; Tan, Ying

    2015-01-01

    This paper presents a cooperative framework for the fireworks algorithm (CoFFWA). A detailed analysis of the existing fireworks algorithm (FWA) and its recently developed variants has revealed that (i) the selection strategy leads to a situation in which the contribution of the firework with the best fitness (core firework) to the optimization overwhelms the contributions of the remaining fireworks (non-core fireworks) in the explosion operator, and (ii) the Gaussian mutation operator is not as effective as it is designed to b...

  19. Applications of genetic algorithms on the structure-activity relationship analysis of some cinnamamides.

    Science.gov (United States)

    Hou, T J; Wang, J M; Liao, N; Xu, X J

    1999-01-01

    Quantitative structure-activity relationships (QSARs) for 35 cinnamamides were studied. By using a genetic algorithm (GA), a group of multiple regression models with high fitness scores was generated. From the statistical analyses of the descriptors used in the evolution procedure, the principal features affecting the anticonvulsant activity were found. The significant descriptors include the partition coefficient, the molar refraction, the Hammett sigma constant of the substituents on the benzene ring, and the formation energy of the molecules. It was found that the steric complementarity and the hydrophobic interaction between the inhibitors and the receptor were very important to the biological activity, while the contribution of the electronic effect was not so obvious. Moreover, by constructing spline models for these four principal descriptors, the effective range of each descriptor was identified.

  20. Genetic Algorithms for Agent-Based Infrastructure Interdependency Modeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    May Permann

    2007-03-01

    Today’s society relies greatly upon an array of complex national and international infrastructure networks such as transportation, electric power, telecommunication, and financial networks. This paper describes initial research combining agent-based infrastructure modeling software and genetic algorithms (GAs) to help optimize infrastructure protection and restoration decisions. This research proposes to apply GAs to the problem of infrastructure modeling and analysis in order to determine the optimum assets to restore or protect from attack or other disaster. This research is just commencing and therefore the focus of this paper is the integration of a GA optimization method with a simulation through the simulation’s agents.

  1. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    Science.gov (United States)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool to evaluate the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used among the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that the modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
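
    For context, the first-order Sobol' index that such metamodels are used to estimate can also be approximated directly by Monte Carlo. The sketch below applies the standard Saltelli pick-and-freeze estimator to the Ishigami function, a common GSA benchmark; it does not reproduce the paper's GMDH-HDMR metamodel.

    import numpy as np

    def ishigami(x, a=7.0, b=0.1):
        return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
                + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

    rng = np.random.default_rng(0)
    n, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = ishigami(A), ishigami(B)
    var_y = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                                # freeze column i from B
        S_i = np.mean(fB * (ishigami(ABi) - fA)) / var_y   # Saltelli (2010)
        print(f"S_{i + 1} = {S_i:.3f}")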

  2. Analysis of multiplicities in e+e- interactions using 2-jet rates from different jet algorithms

    International Nuclear Information System (INIS)

    Dahiya, S.; Kaur, M.; Dhamija, S.

    2002-01-01

    The shoulder structure of the charged particle multiplicity distribution measured in full phase space in e+e- interactions at various c.m. energies from 91 to 189 GeV has been analysed in terms of a weighted superposition of two negative binomial distributions associated with 2-jet and multi-jet production. The 2-jet rates have been obtained from various jet algorithms. This phenomenological parametrization reproduces the shoulder structure quantitatively and improves the agreement with the experimental distributions over the conventional negative binomial distribution. The analysis at the higher energies, where the shoulder structure appears more prominently, is important for understanding the underlying structure. (author)
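
    A sketch of the weighted superposition of two negative binomial distributions (NBDs); alpha plays the role of the 2-jet fraction obtained from the jet algorithms, and all parameter values are illustrative rather than fitted.

    import numpy as np
    from scipy.special import gammaln

    def nbd_pmf(n, nbar, k):
        """Negative binomial P(n) with mean nbar and shape parameter k."""
        n = np.asarray(n, dtype=float)
        log_p = (gammaln(n + k) - gammaln(k) - gammaln(n + 1)
                 + n * np.log(nbar / (nbar + k))
                 + k * np.log(k / (nbar + k)))
        return np.exp(log_p)

    def two_nbd_pmf(n, alpha, nbar2, k2, nbarm, km):
        """Weighted superposition of a 2-jet and a multi-jet NBD."""
        return alpha * nbd_pmf(n, nbar2, k2) + (1 - alpha) * nbd_pmf(n, nbarm, km)

    n = np.arange(0, 120)
    p = two_nbd_pmf(n, alpha=0.7, nbar2=18.0, k2=50.0, nbarm=28.0, km=20.0)
    assert abs(p.sum() - 1.0) < 1e-3   # normalization check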

  3. Design and Demonstration of Automated Data Analysis Algorithms for Ultrasonic Inspection of Complex Composite Panels with Bonds

    Science.gov (United States)

    2016-02-01

    To reduce the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms were designed and demonstrated. The ADA-called indications are sorted into three groups: true positives (TP), missed calls (MC) and false calls (FC); an indication position error is also noted. The analysis draws on time-of-flight indications and thickness and backwall C-scan images. Subject terms: automated data analysis (ADA) algorithms; time-of-flight indications; backwall amplitude dropout.

  4. Algorithms for large scale singular value analysis of spatially variant tomography systems

    International Nuclear Information System (INIS)

    Cao-Huu, Tuan; Brownell, G.; Lachiver, G.

    1996-01-01

    The problem of determining the eigenvalues of large matrices occurs often in the design and analysis of modern tomography systems. As there is an interest in solving systems containing an ever-increasing number of variables, current research effort is being made to create more robust solvers which do not depend on some special feature of the matrix for convergence (e.g. block circulant), and to improve the speed of already known and understood solvers so that solving even larger systems in a reasonable time becomes viable. Our standard techniques for singular value analysis are based on sparse matrix factorization and are not applicable when the input matrices are large because the algorithms cause too much fill. Fill refers to the increase of non-zero elements in the LU decomposition of the original matrix A (the system matrix). So we have developed iterative solutions that are based on sparse direct methods. Data motion and preconditioning techniques are critical for performance. This conference paper describes our algorithmic approaches for large scale singular value analysis of spatially variant imaging systems, and in particular of PCR2, a cylindrical three-dimensional PET imager built at the Massachusetts General Hospital (MGH) in Boston. We recommend the desirable features and challenges for the next generation of parallel machines for optimal performance of our solver
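
    A sketch of the iterative route for such problems using SciPy's partial SVD, which works from matrix-vector products and therefore avoids the fill of a direct factorization; the random sparse matrix is a stand-in for a real system matrix.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    # A large, very sparse stand-in for the system matrix.
    A = sparse_random(20000, 5000, density=1e-3, format="csr", random_state=0)

    # Ten largest singular triplets, computed iteratively; the matrix is
    # never densified and no LU-style fill occurs.
    u, s, vt = svds(A, k=10)
    print(np.sort(s)[::-1])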

  5. Properties of the numerical algorithms for problems of quantum information technologies: Benefits of deep analysis

    Science.gov (United States)

    Chernyavskiy, Andrey; Khamitov, Kamil; Teplov, Alexey; Voevodin, Vadim; Voevodin, Vladimir

    2016-10-01

    In recent years, quantum information technologies (QIT) have shown great development; however, their implementation faces serious difficulties, some of which are challenging computational tasks. This work is devoted to a deep and broad analysis of the parallel algorithmic properties of such tasks. As an example we take one- and two-qubit transformations of a many-qubit quantum state, which are the most critical kernels of many important QIT applications. The analysis of the algorithms uses the methodology of the AlgoWiki project (algowiki-project.org) and consists of two parts: theoretical and experimental. The theoretical part includes features such as sequential and parallel complexity, macro structure, and a visual information graph. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia) and includes the analysis of locality and memory access, scalability and a set of more specific dynamic characteristics of the implementation. This approach allowed us to identify bottlenecks and generate ideas for efficiency improvement.
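
    A generic sketch of the computational kernel the paper analyzes: applying a one-qubit gate to a many-qubit state vector. Reshaping the 2**n amplitudes exposes the target qubit as one tensor axis, turning the gate into a batched 2x2 multiplication; this is a textbook formulation, not the authors' supercomputer implementation.

    import numpy as np

    def apply_one_qubit_gate(state, gate, target, n_qubits):
        """state: complex vector of length 2**n_qubits; gate: 2x2 unitary."""
        psi = state.reshape((2,) * n_qubits)
        psi = np.tensordot(gate, psi, axes=([1], [target]))
        # tensordot moves the contracted axis to the front; restore order.
        psi = np.moveaxis(psi, 0, target)
        return psi.reshape(-1)

    n = 20
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                   # |00...0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = apply_one_qubit_gate(state, H, target=0, n_qubits=n)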

  6. Special Section on "Tools and Algorithms for the Construction and Analysis of Systems"

    DEFF Research Database (Denmark)

    2006-01-01

    This special section contains the revised and expanded versions of eight of the papers presented at the 10th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), held in March/April 2004 in Barcelona, Spain. The conference proceedings appeared as volume 2988 in the Lecture Notes in Computer Science series published by Springer. TACAS is a forum for researchers, developers and users interested in rigorously based tools for the construction and analysis of systems. The conference serves to bridge the gaps between different communities – including but not limited to those devoted to formal methods, software and hardware verification, static analysis, programming languages, software engineering, real-time systems, and communications protocols – that share common interests in, and techniques for, tool development. Other more theoretical papers from the conference...

  7. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms.

    Science.gov (United States)

    Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N

    2009-06-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).

  8. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    Science.gov (United States)

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied in consensus finding in several studies on medical decision-making among clinical experts or guidelines. The main advantages of this method are an automated analysis and comparison of treatment algorithms of the participating centers which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for successful collection of decision trees and summarizes important aspects at each point of the analysis.

  9. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    Science.gov (United States)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat" wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the
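
    A sketch of the core correlation step: convolving a binned counts image with a Marr ("Mexican Hat") wavelet. The detection threshold below is a crude placeholder; WAVDETECT instead derives thresholds from the Poisson sampling distribution given the local background.

    import numpy as np
    from scipy.signal import fftconvolve

    def marr_wavelet(sigma, half_size):
        y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
        r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
        return (1.0 / (np.pi * sigma ** 4)) * (1.0 - r2) * np.exp(-r2)

    rng = np.random.default_rng(0)
    image = rng.poisson(2.0, size=(256, 256)).astype(float)  # flat background
    image[100:103, 120:123] += 30.0                          # a small "source"

    coeffs = fftconvolve(image, marr_wavelet(2.0, 8), mode="same")
    candidates = coeffs > 5.0 * coeffs.std()   # placeholder threshold
    print(candidates.sum(), "candidate source pixels")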

  10. GenTree: A Tool for Syntactic Analysis Based on the Cocke-Younger-Kasami Algorithm

    Directory of Open Access Journals (Sweden)

    - Wijanarto

    2017-04-01

    Full Text Available Syntactic analysis is a series of processes for validating whether a string is accepted by a language. Understanding how the grammar rules are reduced into a parse tree is the part that is difficult to explain. This paper describes the design of a tool that automates the derivation of an input string through the rules into a tree, visualized as an image either in a file or on screen, together with a performance evaluation of the tool and an analysis of students' understanding of it. The Cocke-Younger-Kasami (CYK) algorithm was selected as the case study for parsing with a Context Free Grammar (CFG) in Chomsky Normal Form (CNF). The results show that the model was successfully implemented as an application named genTree (Generator Tree). Measured performance varied significantly with the complexity of the grammar and the input string (29.13% at complexity 7 and 8.50% at complexity 20), and with the length of the input string against processing time (3.3 and 66.98% as well as 29 and 6.19%). A t-test comparing a control group of students with the experimental group gave t = 5.336 with df = 74 and p = 0.001 at the 0.05 (5%) significance level. The percentage of correct answers increased by 58% for the difficult variations and 83% for the easy variations; conversely, wrong answers decreased by 60% for the difficult variations, 100% for the medium variations and 57% for the easy variations, and the percentage of students not answering decreased by 60% for the difficult, 44% for the medium and 13% for the easy variations. It can be concluded that the application runs efficiently and optimally, and that it can effectively improve students' understanding in learning automata with the CYK algorithm as a case. Keywords: Tool, Analysis, Syntax, Algorithms, Trees
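
    For reference, a minimal CYK recognizer for a grammar in Chomsky Normal Form, the algorithm genTree visualizes; the grammar encoding is an illustrative choice, not genTree's internal format.

    def cyk(word, start, unary, binary):
        """unary: set of (A, terminal); binary: set of (A, B, C) for A -> BC."""
        n = len(word)
        # table[i][j]: nonterminals deriving the substring word[i:i+j+1]
        table = [[set() for _ in range(n)] for _ in range(n)]
        for i, ch in enumerate(word):
            table[i][0] = {a for (a, t) in unary if t == ch}
        for span in range(2, n + 1):              # substring length
            for i in range(n - span + 1):         # start position
                for split in range(1, span):      # split point
                    left = table[i][split - 1]
                    right = table[i + split][span - split - 1]
                    for (a, b, c) in binary:
                        if b in left and c in right:
                            table[i][span - 1].add(a)
        return start in table[0][n - 1]

    # S -> AB | BA, A -> 'a', B -> 'b'
    unary = {("A", "a"), ("B", "b")}
    binary = {("S", "A", "B"), ("S", "B", "A")}
    print(cyk("ab", "S", unary, binary))   # True
    print(cyk("aa", "S", unary, binary))   # False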

  11. Fourier analysis algorithm for the posterior corneal keratometric data: clinical usefulness in keratoconus.

    Science.gov (United States)

    Sideroudi, Haris; Labiris, Georgios; Georgantzoglou, Kimon; Ntonti, Panagiota; Siganos, Charalambos; Kozobolis, Vassilios

    2017-07-01

    To develop an algorithm for the Fourier analysis of posterior corneal videokeratographic data and to evaluate the derived parameters in the diagnosis of Subclinical Keratoconus (SKC) and Keratoconus (KC). This was a cross-sectional, observational study that took place in the Eye Institute of Thrace, Democritus University, Greece. Eighty eyes formed the KC group, 55 eyes formed the SKC group, while 50 normal eyes populated the control group. A self-developed algorithm in Visual Basic for Microsoft Excel performed a Fourier series harmonic analysis of the posterior corneal sagittal curvature data. The algorithm decomposed the obtained curvatures into a spherical component, regular astigmatism, asymmetry and higher order irregularities for the averaged central 4 mm and for each individual ring separately (1, 2, 3 and 4 mm). The obtained values were evaluated for their diagnostic capacity using receiver operating characteristic (ROC) curves. Logistic regression was attempted for the identification of a combined diagnostic model. Significant differences were detected in regular astigmatism, asymmetry and higher order irregularities among groups. For the SKC group, the parameters with high diagnostic ability (AUC > 90%) were the higher order irregularities, the asymmetry and the regular astigmatism, mainly in the corneal periphery. Higher predictive accuracy was identified using diagnostic models that combined the asymmetry, regular astigmatism and higher order irregularities in the averaged 3 and 4 mm areas (AUC: 98.4%, Sensitivity: 91.7%, Specificity: 100%). Fourier decomposition of posterior keratometric data provides parameters with high accuracy in differentiating SKC from normal corneas and should be included in the prompt diagnosis of KC. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
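
    A sketch of a per-ring Fourier harmonic decomposition of sampled keratometric data, following the correspondence described above (zeroth harmonic: spherical component; first: asymmetry; second: regular astigmatism; remaining harmonics: higher-order irregularities). The ring data are synthetic, not patient measurements.

    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
    # Synthetic sagittal curvature of one ring (arbitrary units).
    k = (6.3 + 0.4 * np.cos(theta - 0.3)
         + 0.8 * np.cos(2.0 * (theta - 1.1))
         + 0.05 * np.cos(5.0 * theta))

    c = np.fft.rfft(k) / k.size
    spherical = c[0].real                     # zeroth harmonic
    asymmetry = 2.0 * np.abs(c[1])            # first harmonic
    regular_astigmatism = 2.0 * np.abs(c[2])  # second harmonic
    higher_order = 2.0 * np.sqrt(np.sum(np.abs(c[3:]) ** 2))

    print(spherical, asymmetry, regular_astigmatism, higher_order)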

  12. A rigorous methodology for development and uncertainty analysis of group contribution based property models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    ...2) weighted least-squares regression. 3) Initialization of the estimation by use of linear algebra providing a first guess. 4) Sequential and simultaneous GC parameter estimation using 4 different minimization algorithms. 5) Thorough uncertainty analysis: a) based on the asymptotic approximation of the parameter... covariance matrix, b) based on the bootstrap method, providing 95%-confidence intervals of the parameters and the predicted property. 6) Performance statistics analysis and model application. The application of the methodology is shown for a new GC model built to predict the lower flammability limit (LFL) of refrigerants... their credibility and robustness in wider industrial and scientific applications....

  13. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    Science.gov (United States)

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on the assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability in interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (the Aperio nuclear algorithm for CD3+ cells/mm2 and the Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm2). Based on visual inspections of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to ...). CD3 quantitation with open source algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.

  14. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    Science.gov (United States)

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses multilayer perceptron to construct the robust nonlinear interval regression model using the genetic algorithm. Outliers beyond or beneath the data interval will impose slight effect on the determination of data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.

  15. An algorithm for analysis of the structure of finitely presented Lie algebras

    Directory of Open Access Journals (Sweden)

    Vladimir P. Gerdt

    1997-12-01

    Full Text Available We consider the following problem: what is the most general Lie algebra satisfying a given set of Lie polynomial equations? The presentation of Lie algebras by a finite set of generators and defining relations is one of the most general mathematical and algorithmic schemes of their analysis. That problem is of great practical importance, covering applications ranging from mathematical physics to combinatorial algebra. Some particular applications are construction of prolongation algebras in the Wahlquist-Estabrook method for integrability analysis of nonlinear partial differential equations and investigation of Lie algebras arising in different physical models. The finite presentations also indicate a way to q-quantize Lie algebras. To solve this problem, one should perform a large volume of algebraic transformations which grows sharply with the number of generators and relations. For this reason, in practice one needs to use a computer algebra tool. We describe here an algorithm for constructing the basis of a finitely presented Lie algebra and its commutator table, and its implementation in the C language. Some computer results illustrating our algorithm and its actual implementation are also presented.

  16. Evaluation of HIV testing algorithms in Ethiopia: the role of the tie-breaker algorithm and weakly reacting test lines in contributing to a high rate of false positive HIV diagnoses.

    Science.gov (United States)

    Shanks, Leslie; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Pirou, Erwan; Ritmeijer, Koert; Masiga, Johnson; Abebe, Almaz

    2015-02-03

    In Ethiopia a tiebreaker algorithm using 3 rapid diagnostic tests (RDTs) in series is used to diagnose HIV. Discordant results between the first 2 RDTs are resolved by a third 'tiebreaker' RDT. Médecins Sans Frontières uses an alternate serial algorithm of 2 RDTs followed by a confirmation test for all double positive RDT results. The primary objective was to compare the performance of the tiebreaker algorithm with a serial algorithm, and to evaluate the addition of a confirmation test to both algorithms. A secondary objective looked at the positive predictive value (PPV) of weakly reactive test lines. The study was conducted in two HIV testing sites in Ethiopia. Study participants were recruited sequentially until 200 positive samples were reached. Each sample was re-tested in the laboratory on the 3 RDTs and on a simple to use confirmation test, the Orgenics Immunocomb Combfirm® (OIC). The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. 2620 subjects were included with an HIV prevalence of 7.7%. Each of the 3 RDTs had an individual specificity of at least 99%. The serial algorithm with 2 RDTs had a single false positive result (1 out of 204) to give a PPV of 99.5% (95% CI 97.3%-100%). The tiebreaker algorithm resulted in 16 false positive results (PPV 92.7%, 95% CI: 88.4%-95.8%). Adding the OIC confirmation test to either algorithm eliminated the false positives. All the false positives had at least one weakly reactive test line in the algorithm. The PPV of weakly reacting RDTs was significantly lower than those with strongly positive test lines. The risk of false positive HIV diagnosis in a tiebreaker algorithm is significant. We recommend abandoning the tie-breaker algorithm in favour of WHO recommended serial or parallel algorithms, interpreting weakly reactive test lines as indeterminate results requiring further testing except in the setting of blood transfusion, and most importantly, adding a confirmation test

  17. A retrospective appreciation of Willard Day's contributions to radical behaviorism and the analysis of verbal behavior

    Science.gov (United States)

    Moore, Jay

    1991-01-01

    Willard Day's contributions to radical behaviorism are grouped under three headings: (a) an emphasis on the distinction between radical and methodological behaviorism; (b) an emphasis on the interpretation, rather than the prediction and control, of behavior; and (c) an emphasis on the analysis of verbal behavior as a natural, ongoing phenomenon. The paper suggests that the contributions above are listed in ascending order of significance. PMID:22477632

  18. Analysis of greenhouse gas emissions in the European Union member states with the use of an agglomeration algorithm

    Directory of Open Access Journals (Sweden)

    Anna Kijewska

    2016-01-01

    Full Text Available The use of fossil fuels as sources of energy is related to the emission of pollutants into the atmosphere. The implementation of international commitments on reducing emissions requires their continuous monitoring. The main energy resources for electricity production in the world include fossil fuels, i.e. oil, coal and natural gas, and according to projections their dominant role in the market of energy resources will persist for at least the next two decades. The aim of this article is to analyse the level of differentiation of European Union member states in terms of emissions of four greenhouse gases and to identify groups of similar countries based on these criteria. Such studies will provide information that enriches our knowledge about the contribution of each European Union country to the emissions of greenhouse gases. This article uses a taxonomic method, cluster analysis, namely the agglomerative algorithm, which enables objects that are similar to each other to be extracted from the data and merged into groups. In this way, a number of homogeneous subsets can be obtained from one heterogeneous set of objects. European Union countries make up the objects of segmentation. Each of them is described by its level of greenhouse gas emissions, such as carbon dioxide, methane, nitrogen oxides and nitrous oxide. Groups of homogeneous countries are distinguished based on total emissions and on the level of emissions per capita. The analysis is based on annual Eurostat reports concerning greenhouse gas emissions.
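
    A sketch of the agglomerative approach described above, applying SciPy's hierarchical clustering to a standardized countries-by-gases matrix and cutting the merge tree into groups; the data are random placeholders, not Eurostat figures.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import zscore

    rng = np.random.default_rng(0)
    countries = [f"country_{i}" for i in range(28)]
    X = rng.gamma(shape=2.0, scale=50.0, size=(28, 4))  # 4 gases per country

    Z = linkage(zscore(X, axis=0), method="ward")    # agglomerative merge tree
    labels = fcluster(Z, t=4, criterion="maxclust")  # cut into 4 groups

    for g in np.unique(labels):
        members = [c for c, lab in zip(countries, labels) if lab == g]
        print(f"group {g}:", members)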

  19. Moving scanning emitter tracking by a single observer using time of interception: Observability analysis and algorithm

    Directory of Open Access Journals (Sweden)

    Yifei ZHANG

    2017-06-01

    Full Text Available The target motion analysis (TMA) for a moving scanning emitter with known fixed scan rate by a single observer using time of interception (TOI) measurements only is investigated in this paper. By transforming the TOI of multiple scan cycles into the direction difference of arrival (DDOA) model, the observability analysis for the TMA problem is performed. Some necessary conditions for uniquely identifying the scanning emitter trajectory are obtained. This paper also proposes a weighted instrumental variable (WIV) estimator for the scanning emitter TMA, which does not require any initial solution guess and is closed-form and computationally attractive. More importantly, simulations show that the proposed algorithm can provide estimation mean square error close to the Cramer-Rao lower bound (CRLB) at moderate noise levels with significantly lower estimation bias than the conventional pseudo-linear least square (PLS) estimator.

  20. Quantitative analysis of terahertz spectra for illicit drugs using adaptive-range micro-genetic algorithm

    Science.gov (United States)

    Chen, Yi; Ma, Yong; Lu, Zheng; Peng, Bei; Chen, Qin

    2011-08-01

    In the field of anti-illicit drug applications, many suspicious mixture samples might consist of various drug components—for example, a mixture of methamphetamine, heroin, and amoxicillin—which makes spectral identification very difficult. A terahertz spectroscopic quantitative analysis method using an adaptive range micro-genetic algorithm with a variable internal population (ARVIPɛμGA) has been proposed. Five mixture cases are discussed using ARVIPɛμGA driven quantitative terahertz spectroscopic analysis in this paper. The devised simulation results show agreement with the previous experimental results, which suggested that the proposed technique has potential applications for terahertz spectral identifications of drug mixture components. The results show agreement with the results obtained using other experimental and numerical techniques.

  1. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches to cell identification. Several applications can be mentioned in this field: cellular phenotype identification, disease detection and treatment, identification of virus entry into cells, and virus classification; these applications could help to complement the opinion of medical experts. Although many surveys have been presented in medical image analysis, they focus mainly on tissues and organs, and none of the surveys on cell images considers an analysis that follows the stages of a typical image processing pipeline: segmentation, feature extraction and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey on cell identification using different image processing techniques.

  2. A learning algorithm for adaptive canonical correlation analysis of several data sets.

    Science.gov (United States)

    Vía, Javier; Santamaría, Ignacio; Pérez, Jesús

    2007-01-01

    Canonical correlation analysis (CCA) is a classical tool in statistical analysis to find the projections that maximize the correlation between two data sets. In this work we propose a generalization of CCA to several data sets, which is shown to be equivalent to the classical maximum variance (MAXVAR) generalization proposed by Kettenring. The reformulation of this generalization as a set of coupled least squares regression problems is exploited to develop a neural structure for CCA. In particular, the proposed CCA model is a two layer feedforward neural network with lateral connections in the output layer to achieve the simultaneous extraction of all the CCA eigenvectors through deflation. The CCA neural model is trained using a recursive least squares (RLS) algorithm. Finally, the convergence of the proposed learning rule is proved by means of stochastic approximation techniques and its performance is analyzed through simulations.
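
    For comparison with the adaptive neural formulation, classical two-set CCA can be solved in closed form via an SVD of the whitened cross-covariance matrix; a minimal sketch on synthetic data with a shared latent signal:

    import numpy as np

    def cca(X, Y, reg=1e-8):
        """X: (n, p), Y: (n, q); returns canonical correlations and weights."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        n = X.shape[0]
        Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
        Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
        Cxy = X.T @ Y / n

        def inv_sqrt(C):                     # inverse matrix square root
            w, V = np.linalg.eigh(C)
            return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

        K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
        U, s, Vt = np.linalg.svd(K)
        return s, inv_sqrt(Cxx) @ U, inv_sqrt(Cyy) @ Vt.T

    rng = np.random.default_rng(0)
    Z = rng.normal(size=(500, 2))            # shared latent signal
    X = Z @ rng.normal(size=(2, 5)) + 0.5 * rng.normal(size=(500, 5))
    Y = Z @ rng.normal(size=(2, 4)) + 0.5 * rng.normal(size=(500, 4))
    corrs, Wx, Wy = cca(X, Y)
    print(corrs[:2])                         # two strong canonical correlations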

  3. A numerical formulation and algorithm for limit and shakedown analysis of large-scale elastoplastic structures

    Science.gov (United States)

    Peng, Heng; Liu, Yinghua; Chen, Haofeng

    2018-05-01

    In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve the specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions where the global stiffness matrix is decomposed only once. In the inner loop, the static admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers are updated to approach to the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.

  4. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
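
    A sketch of a plain non-negativity-constrained alternating least squares factorization D = C S^T, the kind of constrained ALS the patent builds on; the bias-offsetting refinements of the patented method itself are not reproduced here.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    C_true = rng.random((100, 3))
    S_true = rng.random((50, 3))
    D = C_true @ S_true.T + 0.01 * rng.random((100, 50))

    C = rng.random((100, 3))
    S = rng.random((50, 3))
    for _ in range(50):
        # Solve for S with C fixed: one non-negative LS problem per column of D.
        S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])])
        # Solve for C with S fixed: one problem per row of D.
        C = np.array([nnls(S, D[i, :])[0] for i in range(D.shape[0])])

    print("relative residual:", np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))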

  5. Automatic analysis algorithm for radionuclide pulse-height data from beta-gamma coincidence systems

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.

    2001-01-01

    There are two acceptable noble gas monitoring measurement modes for Comprehensive Nuclear-Test-Ban Treaty (CTBT) verification purposes defined in CTBT/PC/II/WG.B/1. These include beta-gamma coincidence and high-resolution gamma-spectrometry. There are at present no commercial, off-the-shelf (COTS) applications for the analysis of β-γ coincidence data. Development of such software is in progress at the Prototype International Data Centre (PIDC) for eventual deployment at the International Data Centre (IDC). Flowcharts detailing the automatic analysis algorithm for β-γ coincidence data to be coded at the PIDC are included. The program is being written in C with Oracle databasing capabilities. (author)

  6. A CRITICAL ANALYSIS OF THE CONTRIBUTIONS MADE BY BRAZILIAN THESES ON THE TOPIC OF NETWORKS

    Directory of Open Access Journals (Sweden)

    Eduardo Jorge Branco Vieira Barcelos

    2015-03-01

    Full Text Available Research into management networks is not a dominant paradigm, which leaves it open to discussion of its statements, models and theories. This paper analyzes the contributions made by Brazilian theses on networks in the area of management from 2004 to 2013. For this purpose, criteria that define contribution are used, drawn from both national and international literature, resulting in a chart of 12 originality indicators and 8 significance indicators. 23 theses dealing with the subject of networks were identified. Content analysis was employed on the texts where contributions are made explicit, together with a keyword search for each of the 20 indicators. As the subject of networks presents multifaceted concepts, research methodologies and management models, it is beneficial to discuss and develop new approaches, and contributions were expected at all three levels. The results indicate, however, that theoretical contributions were practically non-existent, methodological contributions were limited, focusing on improvements, and managerial contributions were most common, especially when focused on local networks. The results and analysis show that more research and a deeper discussion of this subject are needed. These could include coordinators and supervisors, given the pre-paradigmatic nature of the topic.

  7. Nonlinear fitness-space-structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    International Nuclear Information System (INIS)

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present

  8. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  9. An advanced computational algorithm for systems analysis of tokamak power plants

    International Nuclear Information System (INIS)

    Dragojlovic, Zoran; Rene Raffray, A.; Najmabadi, Farrokh; Kessel, Charles; Waganer, Lester; El-Guebaly, Laila; Bromberg, Leslie

    2010-01-01

    A new computational algorithm for tokamak power plant system analysis is being developed for the ARIES project. The objective of this algorithm is to explore the most influential parameters in the physical, technological and economic trade space related to the developmental transition from experimental facilities to viable commercial power plants. This endeavor is being pursued as a new approach to tokamak systems studies, which examines an expansive, multi-dimensional trade space as opposed to traditional sensitivity analyses about a baseline design point. The new ARIES systems code consists of adaptable modules which are built from a custom-made software toolbox using object-oriented programming. The physics module captures the current tokamak physics knowledge database including modeling of the most-current proposed burning plasma experiment design (FIRE). The engineering model accurately reflects the intent and design detail of the power core elements including accurate and adjustable 3D tokamak geometry and complete modeling of all the power core and ancillary systems. Existing physics and engineering models reflect both near-term as well as advanced technology solutions that have higher performance potential. To fully assess the impact of the range of physics and engineering implementations, the plant cost accounts have been revised to reflect a more functional cost structure, supported by an updated set of costing algorithms for the direct, indirect, and financial cost accounts. All of these features have been validated against the existing ARIES-AT baseline case. The present results demonstrate visualization techniques that provide an insight into trade space assessment of attractive steady-state tokamaks for commercial use.

  10. Detecting free-living steps and walking bouts: validating an algorithm for macro gait analysis.

    Science.gov (United States)

    Hickey, Aodhán; Del Din, Silvia; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems: a wearable accelerometer on the lower back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho ⩾ 0.941) and absolute (ICC(2,1) ⩾ 0.975) agreement with no presence of bias were identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho ⩾ 0.909) and absolute agreement (ICC(2,1) ⩾ 0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.

  11. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Event based neutron activation spectroscopy and analysis algorithm using MLE and meta-heuristics

    International Nuclear Information System (INIS)

    Wallace, B.

    2014-01-01

    Techniques used in neutron activation analysis are often dependent on the experimental setup. In the context of developing a portable and high efficiency detection array, good energy resolution and half-life discrimination are difficult to obtain with traditional methods given the logistic and financial constraints. An approach different from that of spectrum addition and standard spectroscopy analysis was needed. The use of multiple detectors prompts the need for a flexible storage of acquisition data to enable sophisticated post processing of information. Analogously to what is done in heavy ion physics, gamma detection counts are stored as two-dimensional events. This enables post-selection of energies and time frames without the need to modify the experimental setup. This method of storage also permits the use of more complex analysis tools. Given the nature of the problem at hand, a light and efficient analysis code had to be devised. A thorough understanding of the physical and statistical processes involved was used to create a statistical model. Maximum likelihood estimation was combined with meta-heuristics to produce a sophisticated curve-fitting algorithm. Simulated and experimental data were fed into the analysis code prompting positive results in terms of half-life discrimination, peak identification and noise reduction. The code was also adapted to other fields of research such as heavy ion identification of the quasi-target (QT) and quasi-particle (QP). The approach used seems to be able to translate well into other fields of research. (author)

  13. Application of Classification Algorithm of Machine Learning and Buffer Analysis in Tourism Regional Planning

    Science.gov (United States)

    Zhang, T. H.; Ji, H. W.; Hu, Y.; Ye, Q.; Lin, Y.

    2018-04-01

    Remote Sensing (RS) and Geographic Information System (GIS) technologies are widely used in ecological analysis and regional planning. With the advantages of large-scale monitoring, combination of point and area data, multiple time phases and repeated observation, they are suitable for monitoring and analysing environmental information over a large range. In this study, the support vector machine (SVM) classification algorithm is used to monitor land use and land cover change (LUCC) and then to perform a quantitative ecological evaluation of the Chaohu Lake tourism area. Automatic classification and quantitative spatial-temporal analysis of the Chaohu Lake basin are realized through the analysis of multi-temporal and multispectral satellite images, DEM data and slope information. Furthermore, an ecological buffer zone analysis is carried out to set the buffer width for each catchment area surrounding Chaohu Lake. The LUCC monitoring results from 1992 to 2015 show obvious effects of human activities. Since the Chaohu Lake basin is at a crucial stage of rapid urbanization, the application of RS and GIS techniques can effectively provide a scientific basis for land use planning, ecological management, environmental protection and tourism resource development in the Chaohu Lake basin.

  14. Event based neutron activation spectroscopy and analysis algorithm using MLE and metaheuristics

    Science.gov (United States)

    Wallace, Barton

    2014-03-01

    Techniques used in neutron activation analysis are often dependent on the experimental setup. In the context of developing a portable and high efficiency detection array, good energy resolution and half-life discrimination are difficult to obtain with traditional methods [1] given the logistic and financial constraints. An approach different from that of spectrum addition and standard spectroscopy analysis [2] was needed. The use of multiple detectors prompts the need for a flexible storage of acquisition data to enable sophisticated post processing of information. Analogously to what is done in heavy ion physics, gamma detection counts are stored as two-dimensional events. This enables post-selection of energies and time frames without the need to modify the experimental setup. This method of storage also permits the use of more complex analysis tools. Given the nature of the problem at hand, a light and efficient analysis code had to be devised. A thorough understanding of the physical and statistical processes [3] involved was used to create a statistical model. Maximum likelihood estimation was combined with metaheuristics to produce a sophisticated curve-fitting algorithm. Simulated and experimental data were fed into the analysis code prompting positive results in terms of half-life discrimination, peak identification and noise reduction. The code was also adapted to other fields of research such as heavy ion identification of the quasi-target (QT) and quasi-particle (QP). The approach used seems to be able to translate well into other fields of research.

  15. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems

  16. The Top Ten Algorithms in Data Mining

    CERN Document Server

    Wu, Xindong

    2009-01-01

    From classification and clustering to statistical learning, association analysis, and link mining, this book covers the most important topics in data mining research. It presents the ten most influential algorithms used in the data mining community today. Each chapter provides a detailed description of the algorithm, a discussion of available software implementations, advanced topics, and exercises. With a simple data set, examples illustrate how each algorithm works and highlight the overall performance of each algorithm in a real-world application. Featuring contributions from leading researchers.

  17. Performance evaluation of the ORNL multi-elemental XRF analysis algorithms

    Energy Technology Data Exchange (ETDEWEB)

    McElroy, Robert Dennis [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    Hybrid K-Edge Densitometer (HKED) systems integrate both K-Edge Densitometry (KED) and X-Ray Fluorescence (XRF) analyses to provide accurate, rapid assay results of the uranium and plutonium content of dissolver solution samples from nuclear fuel reprocessing facilities. Introduced for international safeguards applications in the late 1980s, the XRF component of the hybrid analyses is limited to quantification of U and Pu over a narrow range of U:Pu concentration ratios in the vicinity of ≈100. The analysis is further limited regarding minor actinide components: only a single minor actinide (typically Am) is included in the analysis, and then only treated as an interference. The evolving nuclear fuel cycle has created the need to assay more complex dissolver solutions where uranium may no longer be the dominant actinide in the solution and the concentrations of the so-called minor actinides (e.g., Th, Np, Am, and Cm) are sufficiently high that they can no longer be treated as impurities and ignored. Extension of the traditional HKED Region of Interest (ROI) based analysis to include these additional actinides is not possible due to the increased complexity of the XRF spectra. Oak Ridge National Laboratory (ORNL) has developed a spectral fitting approach to the HKED XRF measurement with an enhanced algorithm set to accommodate these complex XRF spectra. This report provides a summary of the spectral fitting methodology and examines the performance of these algorithms using data obtained from the ORNL HKED system, as well as data provided by the International Atomic Energy Agency (IAEA) on actual dissolver solutions.

  18. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    Science.gov (United States)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro's random error is established, and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-compensation and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video-monitoring module and remote-monitoring software (Visual Basic 6.0) display the servo motor state in real time: the module gathers video signals and transmits them wirelessly to the host computer, which shows the motor's running state in the Visual Basic 6.0 window. The main error sources are analyzed in detail; quantifying the errors contributed by bandwidth and by the gyro sensor makes the proportion of each error in the total more transparent and, consequently, helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
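
    A minimal sketch of the filtering idea described above, with all parameters assumed: a scalar Kalman filter whose state model is an AR(1) description of the gyro's random drift.

```python
# Minimal sketch (assumed parameters): scalar Kalman filter with an AR(1)
# state model standing in for the paper's AR model of gyro random error.
import numpy as np

rng = np.random.default_rng(1)
phi, q, r, n = 0.95, 0.01, 0.25, 500    # AR coefficient, process/measurement noise, samples

drift = np.zeros(n)                      # simulate AR(1) gyro drift
for k in range(1, n):
    drift[k] = phi * drift[k - 1] + rng.normal(0.0, np.sqrt(q))
z = drift + rng.normal(0.0, np.sqrt(r), n)   # noisy gyro readings

x_hat, p_var = 0.0, 1.0
filtered = np.empty(n)
for k in range(n):
    x_hat, p_var = phi * x_hat, phi * phi * p_var + q   # predict
    gain = p_var / (p_var + r)                          # Kalman gain
    x_hat += gain * (z[k] - x_hat)                      # update with measurement
    p_var *= 1.0 - gain
    filtered[k] = x_hat
```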

  19. Economic Analysis on Value Chain of Taxi Fleet with Battery-Swapping Mode Using Multiobjective Genetic Algorithm

    OpenAIRE

    Ning, Guobao; Zhen, Zijian; Wang, Peng; Li, Yang; Yin, Huaixian

    2012-01-01

    This paper presents an economic analysis model on value chain of taxi fleet with battery-swapping mode in a pilot city. In the model, economic benefits of charging-swapping station group, taxi company, and taxi driver in the region have been taken into consideration. Thus, the model is a multiobjective function and multiobjective genetic algorithm is used to solve this problem. According to the real data collected from the pilot city, the multiobjective genetic algorithm is tested as an effec...

  20. Fast fourier algorithms in spectral computation and analysis of vibrating machines

    International Nuclear Information System (INIS)

    Farooq, U.; Hafeez, T.; Khan, M.Z.; Amir, M.

    2001-01-01

    In this work we discuss Fourier series and their history, the relationships among various Fourier mappings, Fourier coefficients, transforms, inverse transforms, integrals, and analyses, together with discrete and fast algorithms for data processing and the analysis of vibrating systems. We evaluate the magnitude of the source signal at transmission time, the related coefficient matrix, and the intensity and magnitude at the receiving stations. Matrix computation of the Fourier transform is explained, and applications are presented. The fast Fourier transform, a newer computational scheme, has been tested with an example. The work also includes digital programs for obtaining the frequency content of a time function. We explain how the fast Fourier transform (FFT) algorithms reduce the computational work by several orders of magnitude by splitting the spectrum of a signal into two (even and odd) parts at every successive step, as shown in the sketch below, and how this fast processing of discrete Fourier transform computations, together with signal splitting and combination, provides an efficient and reliable tool for spectral analyses. Fourier series decompose a given variable into a sum of oscillatory functions, each having a specific frequency; these frequencies, with their corresponding amplitudes and phase angles, constitute the frequency content of the original time function. Signal decomposition and combination may be carried out by the principles of superposition and convolution, even for signals of different frequencies. Considerable information about a machine or a structure can be derived from variable-speed and frequency tests. (author)
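
    The even/odd splitting can be made concrete with a short recursive radix-2 FFT sketch (illustrative only; NumPy's FFT is used to verify the result):

```python
# Recursive radix-2 FFT sketch: each step splits the signal into its even- and
# odd-indexed halves, which is what reduces the work to O(n log n).
import numpy as np

def fft_radix2(x):
    n = len(x)                       # n must be a power of two
    if n == 1:
        return x.astype(complex)
    even = fft_radix2(x[0::2])       # spectrum of even-indexed samples
    odd = fft_radix2(x[1::2])        # spectrum of odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)   # twiddle factors
    return np.concatenate([even + w * odd, even - w * odd])

x = np.random.rand(8)
assert np.allclose(fft_radix2(x), np.fft.fft(x))      # check against NumPy
```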

  1. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) requires no assumption about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of a hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed norm. To choose the optimal threshold for each norm, ten thresholds of ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1˂ε˂1, the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance-norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.

  2. Multifractal analysis of multiparticle emission data in the framework of visibility graph and sandbox algorithm

    Science.gov (United States)

    Mali, P.; Manna, S. K.; Mukhopadhyay, A.; Haldar, P. K.; Singh, G.

    2018-03-01

    Multiparticle emission data in nucleus-nucleus collisions are studied in a graph-theoretical approach. The sandbox algorithm used to analyze complex networks is employed to characterize the multifractal properties of the visibility graphs associated with the pseudorapidity distribution of charged particles produced in high-energy heavy-ion collisions. Experimental data on the 28Si+Ag/Br interaction at laboratory energy Elab = 14.5A GeV, and the 16O+Ag/Br and 32S+Ag/Br interactions, both at Elab = 200A GeV, are used in this analysis. We observe a scale-free nature of the degree distributions of the visibility and horizontal visibility graphs associated with the event-wise pseudorapidity distributions. Equivalent event samples simulated by ultra-relativistic quantum molecular dynamics produce degree distributions that are almost identical to those of the respective experiments. However, the multifractal variables obtained with the sandbox algorithm for the experiments differ to some extent from the respective simulated results.
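
    For concreteness, a small hypothetical sketch of the horizontal visibility graph construction underlying such an analysis: nodes i and j are linked when every intermediate value lies strictly below both endpoints, and the resulting degree distribution is the quantity whose scale-free behaviour is examined.

```python
# Hypothetical sketch: horizontal visibility graph of a scalar series (a
# random series stands in for an event-wise pseudorapidity distribution).
import numpy as np

def horizontal_visibility_edges(y):
    edges = []
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            # Link i and j if every intermediate value is strictly lower.
            if all(y[k] < min(y[i], y[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

y = np.random.rand(50)
edges = horizontal_visibility_edges(y)
degrees = np.bincount(np.ravel(edges), minlength=len(y))  # degree distribution input
```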

  3. Mesh Partitioning Algorithm Based on Parallel Finite Element Analysis and Its Actualization

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2013-01-01

    Full Text Available In parallel computing based on finite element analysis, domain decomposition is a key preprocessing technique. Generally, a domain decomposition of a mesh can be realized through partitioning of a graph converted from the finite element mesh. This paper discusses methods for graph partitioning and ways to implement mesh partitioning. Relevant software packages are introduced, and the data structures and key functions of Metis and ParMetis are described. The writing, compiling, and testing of a mesh partitioning interface program based on these key functions are performed. The results indicate objective laws and characteristics that guide users of graph partitioning algorithms and software in writing PFEM programs, and ideal partitioning effects can be achieved by carrying out mesh partitioning through the program. The interface program can also be used directly by engineering researchers as a module of PFEM software, lowering the threshold for applying graph partitioning algorithms, improving calculation efficiency, and promoting the application of graph theory and parallel computing.

  4. An Algorithm Approach for the Analysis of Urban Land-Use/Cover: Logic Filters

    Directory of Open Access Journals (Sweden)

    Şinasi Kaya

    2014-11-01

    Full Text Available Accurate classification of land-use/cover based on remotely sensed data is important for interpreters who analyze time- or event-based change in certain areas. Any method that offers user flexibility in area selection greatly simplifies analysis, since the analyst may need to work on a specific area of interest instead of dealing with the entire remotely sensed data set. The objectives of this paper are to develop an automation algorithm using Matlab & Simulink for user-selected areas, to filter V-I-S (Vegetation, Impervious, Soil) components using the algorithm, to analyze the components according to upper and lower threshold values based on each band's histogram, and finally to obtain a land-use/cover map by combining the V-I-S components. LANDSAT 5 TM satellite data covering the Istanbul and Izmit regions are utilized, and the 4, 3, 2 (RGB) band combination is selected to fulfill the aims of the study. These bands are normalized, and the V-I-S components of each band are determined. This Matlab & Simulink-based methodology is as successful as unsupervised and supervised methods. Applying these methods for qualitative and quantitative assessment of selected urban areas will further provide important spatial information and data, especially to urban planners and decision-makers.
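
    A rough sketch of the thresholding idea follows; the band data, normalization, and threshold values are all assumptions, not the paper's actual settings.

```python
# Illustrative sketch: each V-I-S component is masked by a lower/upper window
# on a normalized band's histogram, then the masks are combined into a map.
import numpy as np

def normalize(band):
    band = band.astype(float)
    return (band - band.min()) / (band.max() - band.min() + 1e-12)

def component_mask(band, lower, upper):
    b = normalize(band)
    return (b >= lower) & (b <= upper)       # histogram window for one component

# Hypothetical stand-ins for LANDSAT 5 TM bands 4, 3, 2 of a selected sub-scene.
b4, b3, b2 = (np.random.rand(128, 128) for _ in range(3))
vegetation = component_mask(b4, 0.6, 1.0)    # strong NIR response (assumed window)
impervious = component_mask(b3, 0.5, 1.0) & ~vegetation
soil = ~(vegetation | impervious)            # remainder of the V-I-S triplet
cover_map = 1 * vegetation + 2 * impervious + 3 * soil
```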

  5. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    Science.gov (United States)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  6. Investigating properties of the cardiovascular system using innovative analysis algorithms based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2012-01-01

    The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing arterial stiffness and systemic resistance of the cardiac system suffer from nonstationarity problems or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the arterial stiffness and systemic resistance of a pig's cardiovascular system via ECG and blood pressure (BP). The experiment simulated a sequence of continuous changes of blood pressure from a steady condition to high blood pressure by clamping the artery, and the inverse by relaxing the artery. As hypothesized, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on cardiac oscillation. The two assessment results demonstrate the merits of the EEMD for signal analysis.

  7. Sensitivity Analysis of a CPAM Inverse Algorithm for Composite Laminates Characterization

    Directory of Open Access Journals (Sweden)

    Farshid Masoumi

    2017-01-01

    Full Text Available Using experimental data and numerical simulations, a new combined technique is presented for the characterization of thin and thick orthotropic composite laminates. Four or five elastic constants, as well as ply orientation angles, are considered as the unknown parameters. The material characterization is first examined for isotropic plates under different boundary conditions to evaluate the method's accuracy. The proposed algorithm, called CPAM (Combined Programs of ABAQUS and MATLAB), utilizes an optimization procedure and makes simultaneous use of vibration test data together with their corresponding numerical solutions. The numerical solutions are based on a commercial finite element package for efficiently identifying the material properties. An inverse method based on the particle swarm optimization algorithm is further provided using MATLAB software. The error function to be minimized is the sum of squared differences between experimental and simulated eigenfrequency data. To evaluate the robustness of the model's results in the presence of uncertainty and unwanted noise, a sensitivity analysis employing a Gaussian disorder model is applied directly to the measured frequencies. The highly accurate results confirm the validity and capability of the present method in the simultaneous determination of mechanical constants and fiber orientation angles of composite laminates, as compared to prior methods.
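
    A generic particle swarm optimization sketch of the inverse step (not the CPAM code; the forward model below is a toy stand-in for the finite element eigen-solver):

```python
# Generic PSO sketch: minimize the squared mismatch between "measured" and
# simulated eigenfrequencies of a toy two-parameter forward model.
import numpy as np

rng = np.random.default_rng(2)

def model_freqs(params):                     # toy stand-in for the FE eigen-solver
    e1, e2 = params
    return np.sqrt(np.array([e1, e1 + e2, 2.0 * e1 + e2]))

target = model_freqs(np.array([4.0, 9.0]))   # synthetic "experimental" data

def cost(p):
    return np.sum((model_freqs(p) - target) ** 2)

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5     # swarm size and PSO coefficients
pos = rng.uniform(1.0, 20.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pcost)]
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 50.0)      # keep parameters physical
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pcost)]
print(gbest)                                 # should approach (4.0, 9.0)
```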

  8. Comparative analysis of decision tree algorithms on quality of water contaminated with soil

    Directory of Open Access Journals (Sweden)

    Mara Andrea Dota

    2015-02-01

    Full Text Available Agriculture, roads, animal farms and other land uses may modify the water quality of rivers, dams and other surface freshwaters. For the control of ecological processes and for environmental management, it is necessary to quickly and accurately identify surface water contamination (in areas such as rivers and dams) with contaminated runoff waters coming, for example, from cultivation and urban areas. This paper presents a comparative analysis of different classification algorithms applied to data collected from a sample of soil-contaminated water, aiming to identify whether the water quality classification proposed in this research agrees with reality. The sample was part of a laboratory experiment, which began with a sample of treated water to which increasing fractions of soil were added. The results show that the proposed classification for water quality in this scenario is coherent, because different algorithms indicated a strong statistical relationship between the classes and their instances, that is, between the classes that qualify the water sample and the values that describe each class. The proposed water classification ranges from excellent to very awful (12 classes).

  9. Analysis of computational complexity for HT-based fingerprint alignment algorithms on java card environment

    CSIR Research Space (South Africa)

    Mlambo, CS

    2015-01-01

    Full Text Available In this paper, implementations of three Hough Transform based fingerprint alignment algorithms are analyzed with respect to time complexity in a Java Card environment. The three algorithms are: Local Match Based Approach (LMBA), Discretized Rotation Based...

  10. Efficient On-the-fly Algorithms for the Analysis of Timed Games

    DEFF Research Database (Denmark)

    Cassez, Franck; David, Alexandre; Fleury, Emmanuel

    2005-01-01

    In this paper, we propose the first efficient on-the-fly algorithm for solving games based on timed game automata with respect to reachability and safety properties. The algorithm we propose is a symbolic extension of the on-the-fly algorithm suggested by Liu & Smolka [15] for linear-time model-checking of finite-state systems. Optimizations of the basic symbolic algorithm are proposed, as well as methods for obtaining time-optimal winning strategies (for reachability games). Extensive evaluation of an experimental implementation of the algorithm yields very encouraging performance results.

  11. Analysis of Time and Frequency Domain PACE Algorithms for OFDM with Virtual Subcarriers

    DEFF Research Database (Denmark)

    Rom, Christian; Manchón, Carles Navarro; Deneire, Luc

    2007-01-01

    This paper studies common linear frequency-direction pilot-symbol aided channel estimation (PACE) algorithms for orthogonal frequency division multiplexing in a UTRA long term evolution context. Three deterministic algorithms are analyzed: the maximum likelihood (ML) approach, the noise reduction algori...

  12. Strength Pareto Evolutionary Algorithm using Self-Organizing Data Analysis Techniques

    Directory of Open Access Journals (Sweden)

    Ionut Balan

    2015-03-01

    Full Text Available Multiobjective optimization is widely used for solving problems in a variety of areas. To solve such problems, a set of algorithms has been developed, most of them based on evolutionary techniques. One algorithm from this class which gives quite good results is SPEA2, the method on which the algorithm proposed in this paper is based. The results in this paper are obtained by running the two algorithms on a flow-shop problem.

  13. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    Science.gov (United States)

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from existing ones in their dictionary update stage, whose steps are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left-regularized rank-one matrix approximation problem in which temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
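
    The flavour of such a dictionary update can be sketched loosely as follows; here smoothness is imposed by projecting the atom onto a truncated DCT basis, which is only a stand-in for the paper's basis-expansion regularizer, and the sparsity term is omitted.

```python
# Loose sketch (not the paper's algorithm): power-method-style rank-one update
# with smoothness enforced by truncating the atom's DCT expansion.
import numpy as np
from scipy.fftpack import dct, idct

def smooth_rank_one(E, n_basis=10, iters=20):
    m, _ = E.shape
    d = np.random.randn(m)
    d /= np.linalg.norm(d)
    for _ in range(iters):                   # power-method-style alternation
        x = E.T @ d                          # code update
        d = E @ x                            # raw atom update
        coeffs = dct(d, norm="ortho")
        coeffs[n_basis:] = 0.0               # keep only low-order (smooth) components
        d = idct(coeffs, norm="ortho")
        d /= np.linalg.norm(d)
    return d, E.T @ d                        # smooth atom and its code

E = np.random.randn(200, 50)                 # toy residual matrix (time x voxels)
atom, code = smooth_rank_one(E)
```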

  14. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  15. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights: • Self-adaptive Jaya algorithm is proposed for optimal design of thermal devices. • Optimization of heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented. • Results of the proposed algorithm are better than those of other optimization techniques. • The proposed algorithm may be conveniently used for the optimization of other devices. - Abstract: The present study explores the use of an improved Jaya algorithm, called the self-adaptive Jaya algorithm, for the optimal design of selected thermal devices, viz., heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four different optimization case studies of the selected thermal devices are presented. Researchers had attempted the same design problems in the past using the niched Pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved using the self-adaptive Jaya algorithm are compared with those achieved using the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm proves superior to the other optimization methods in terms of results, computational effort and function evaluations.
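
    For reference, a sketch of the basic Jaya update rule on a toy cost function (the self-adaptive variant additionally adapts the population size, which is omitted here):

```python
# Sketch of the basic Jaya rule: move each solution toward the best and away
# from the worst member of the population, accepting only improvements.
import numpy as np

rng = np.random.default_rng(3)
cost = lambda x: np.sum(x ** 2, axis=-1)     # toy sphere function

pop = rng.uniform(-5.0, 5.0, (20, 4))
for _ in range(300):
    f = cost(pop)
    best, worst = pop[np.argmin(f)], pop[np.argmax(f)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    # Jaya update: x' = x + r1*(best - |x|) - r2*(worst - |x|)
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    accept = cost(cand) < f                  # greedy acceptance
    pop[accept] = cand[accept]
print(pop[np.argmin(cost(pop))])             # near the origin
```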

  16. Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model

    International Nuclear Information System (INIS)

    Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.

    2012-01-01

    This paper presents the “contribution to the sample variance plot”, a natural extension of the “contribution to the sample mean plot”, which is a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have great potential to display sensitivity information given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e. from the same points used for deriving the contribution to the sample mean and/or scatter-plots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and against a numerical model simulating the behaviour of a water hammer effect in a piping system.
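
    A minimal sketch of how such a contribution-to-the-sample-variance curve can be computed from a generic Monte Carlo sample (toy model, not the water hammer code; a curve hugging the diagonal would indicate a non-influential input):

```python
# Sort the Monte Carlo runs by one input, then accumulate each run's share of
# the output variance; departures from the diagonal reveal influential ranges.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
x = rng.uniform(-np.pi, np.pi, 1000)            # one input factor
y = np.sin(x) + 0.1 * rng.normal(size=1000)     # model output

order = np.argsort(x)
dev2 = (y[order] - y.mean()) ** 2
csv = np.cumsum(dev2) / dev2.sum()              # cumulative variance fraction

plt.plot(np.linspace(0.0, 1.0, len(csv)), csv)
plt.plot([0, 1], [0, 1], "--")                  # reference: no influence
plt.xlabel("quantile of input x")
plt.ylabel("fraction of output variance")
plt.show()
```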

  17. Inverse Analysis of Pavement Structural Properties Based on Dynamic Finite Element Modeling and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaochao Tang

    2013-03-01

    Full Text Available With the movement towards the implementation of the mechanistic-empirical pavement design guide (MEPDG), an accurate determination of pavement layer moduli is vital for predicting critical pavement mechanistic responses. A backcalculation procedure is commonly used to estimate the pavement layer moduli based on non-destructive falling weight deflectometer (FWD) tests. Backcalculation of flexible pavement layer properties is an inverse problem with known input and output signals, based upon which unknown parameters of the pavement system are evaluated. In this study, an inverse analysis procedure that combines finite element analysis and a population-based optimization technique, the Genetic Algorithm (GA), has been developed to determine the pavement layer structural properties. A lightweight deflectometer (LWD) was used to infer the moduli of instrumented three-layer scaled flexible pavement models. While the common practice in backcalculating pavement layer properties still assumes a static FWD load and uses only peak values of the load and deflections, dynamic analysis was conducted here to simulate the impulse LWD load. The recorded time histories of the LWD load were used as the known inputs into the pavement system, while the measured time histories of surface central deflections and subgrade deflections measured with linear variable differential transformers (LVDTs) were considered as the outputs. As a result, consistent pavement layer moduli can be obtained through this inverse analysis procedure.

  18. Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters

    Science.gov (United States)

    Kim, T.; Kim, Y. S.

    2017-12-01

    The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data assumes that the observations are statistically stationary, and a parametric method involving the parameters of a probability distribution is applied. A parametric method requires sufficient reliable data; in Korea, however, snowfall observations need to be supplemented because climate change is reducing both the number of snowfall observation days and the mean maximum daily snowfall depth. In this study, we conducted a frequency analysis for snowfall using the Bootstrap method and the SIR algorithm, resampling methods that can overcome the problem of insufficient data. For the 58 meteorological stations distributed evenly across Korea, the probabilistic snowfall depth was estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth obtained by frequency analysis decreases at most stations, and the rates of change at most stations were found to be consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods enable frequency analysis of snowfall depth when observed samples are insufficient, and that they can be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
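
    An illustrative bootstrap sketch with synthetic data (the gamma-distributed "observations" and the 100-year return period are assumptions):

```python
# Nonparametric bootstrap estimate of a design quantile with a confidence band.
import numpy as np

rng = np.random.default_rng(5)
annual_max = rng.gamma(shape=2.0, scale=12.0, size=40)   # stand-in observations

p = 1.0 - 1.0 / 100.0                    # non-exceedance probability, 100-yr event
boot = np.array([
    np.quantile(rng.choice(annual_max, size=annual_max.size, replace=True), p)
    for _ in range(5000)                 # resample with replacement
])
lo, hi = np.quantile(boot, [0.05, 0.95])
print(f"100-yr depth: {boot.mean():.1f} cm (90% CI {lo:.1f}-{hi:.1f} cm)")
```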

  19. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    Science.gov (United States)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during production were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Though the Taguchi method reasonably minimizes the defects, a genetic algorithm technique is applied to the Taguchi-optimized parameters in order to achieve zero defects during the processes.

  20. Cost benefit analysis of the radiological shielding of medical cyclotrons using a genetic algorithm

    International Nuclear Information System (INIS)

    Mukherjee, Bhaskar

    2001-01-01

    Adequate radiation shielding is vital to the safe operation of modern commercial medical cyclotrons producing large yields of short-lived radioisotopes. The radiological shielding constitutes a significant capital investment for any new cyclotron-based radioisotope production facility; hence, the shielding design requires an accurate cost-benefit analysis, often based on a complex multi-variable optimization technique. This paper demonstrates the application of a Genetic Algorithm (GA) to the optimum design of the high-yield target cave of a medical cyclotron radioisotope production facility based in Sydney, Australia. The GA is a novel optimization technique that mimics the Darwinian evolution paradigm and is ideally suited to searching for global optima in a large multi-dimensional solution space.

  1. A New MANET Wormhole Detection Algorithm Based on Traversal Time and Hop Count Analysis

    Directory of Open Access Journals (Sweden)

    Göran Pulkkis

    2011-11-01

    Full Text Available As demand increases for ubiquitous network facilities, infrastructure-less and self-configuring systems like Mobile Ad hoc Networks (MANETs) are gaining popularity. MANET routing security, however, is one of the most significant challenges to wide-scale adoption, with wormhole attacks being an especially severe MANET routing threat. This is because wormholes are able to disrupt a major component of network traffic while being extremely difficult to detect. This paper introduces a new wormhole detection paradigm based upon Traversal Time and Hop Count Analysis (TTHCA) which, in comparison to existing algorithms, consistently affords superior detection performance allied with low false-positive rates for all wormhole variants. Simulation results confirm that the TTHCA model exhibits robust wormhole route detection in various network scenarios while incurring only a small network overhead. This feature makes TTHCA an attractive choice for MANET environments, which generally comprise devices, such as wireless sensors, with limited processing capability.

  2. Analysis and Development of Walking Algorithm Kinematic Model for 5-Degree of Freedom Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Gerald Wahyudi Setiono

    2012-12-01

    Full Text Available A walking diagram and the kinematic calculations for a bipedal robot have been developed. The bipedal robot was designed and constructed with several kinds of servo brackets for the legs, two feet and a hip. Each leg of the bipedal robot has 5 degrees of freedom: three pitch joints (hip, knee and ankle) and two roll joints (hip and ankle). The walking algorithm of this bipedal robot is based on the triangle formulation of the cosine law to obtain the angle value at each joint. The hip height, the height of the swinging leg and the step distance are derived from linear equations. This paper discusses the kinematic model analysis and the development of the walking diagram of the bipedal robot. Kinematic equations were derived, and the joint angles were simulated and coded onto an Arduino board for execution on the robot.
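
    The cosine-law step the walking algorithm rests on can be sketched directly; the link lengths below are assumed values in metres.

```python
# Law-of-cosines sketch: given thigh and shank lengths and the hip-to-ankle
# distance, recover the knee joint angle of a two-link leg.
import numpy as np

def knee_angle(thigh, shank, hip_to_ankle):
    # c^2 = a^2 + b^2 - 2ab*cos(C)  =>  C is the angle at the knee.
    cos_c = (thigh**2 + shank**2 - hip_to_ankle**2) / (2.0 * thigh * shank)
    return np.degrees(np.arccos(np.clip(cos_c, -1.0, 1.0)))

print(knee_angle(thigh=0.12, shank=0.12, hip_to_ankle=0.20))  # assumed lengths
```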

  3. Criteria and algorithms for constructing reliable databases for statistical analysis of disruptions at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Cannas, B.; Fanni, A.; Pautasso, G.; Sias, G.; Sonato, P.

    2009-01-01

    The present understanding of disruption physics has not gone so far as to provide a mathematical model describing the onset of this instability. A disruption prediction system based on a statistical analysis of the diagnostic signals recorded during the experiments would allow estimating the probability of a disruption taking place. A crucial point for a good design of such a prediction system is the appropriateness of the data set. This paper reports the details of the database built to train a neural-network-based disruption predictor for ASDEX Upgrade. The criteria for pulse selection, the analyses performed on plasma parameters, and the implemented pre-processing algorithms are described. As an example of application, a short description of the disruption predictor is reported.

  4. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    International Nuclear Information System (INIS)

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin; Wang, Shuang

    2015-01-01

    To reduce the large computational cost of calculating failure probabilities with time-consuming numerical models, we propose an improved active learning reliability method called AK-SSIS, based on the AK-IS algorithm. First, an improved iterative stopping criterion in active learning is presented so that iterations decrease dramatically. Second, the proposed method introduces subset simulation importance sampling (SSIS) into the active learning reliability calculation, and a learning function suitable for SSIS is then proposed. Finally, the efficiency of AK-SSIS is demonstrated by two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and the failure probability obtained from AK-SSIS is very robust and accurate. The method is then applied to a spur gear pair for tooth contact fatigue reliability analysis.

  6. Contributions: SAGE

    Indian Academy of Sciences (India)

    The Space Alternating Generalized Expectation (SAGE) maximization algorithm provides an iterative approach to parameter estimation when direct maximization of the likelihood function may be infeasible. Complexity is less in those applications ...

  7. Molecular Infectious Disease Epidemiology: Survival Analysis and Algorithms Linking Phylogenies to Transmission Trees

    Science.gov (United States)

    Kenah, Eben; Britton, Tom; Halloran, M. Elizabeth; Longini, Ira M.

    2016-01-01

    Recent work has attempted to use whole-genome sequence data from pathogens to reconstruct the transmission trees linking infectors and infectees in outbreaks. However, transmission trees from one outbreak do not generalize to future outbreaks. Reconstruction of transmission trees is most useful to public health if it leads to generalizable scientific insights about disease transmission. In a survival analysis framework, estimation of transmission parameters is based on sums or averages over the possible transmission trees. A phylogeny can increase the precision of these estimates by providing partial information about who infected whom. The leaves of the phylogeny represent sampled pathogens, which have known hosts. The interior nodes represent common ancestors of sampled pathogens, which have unknown hosts. Starting from assumptions about disease biology and epidemiologic study design, we prove that there is a one-to-one correspondence between the possible assignments of interior node hosts and the transmission trees simultaneously consistent with the phylogeny and the epidemiologic data on person, place, and time. We develop algorithms to enumerate these transmission trees and show these can be used to calculate likelihoods that incorporate both epidemiologic data and a phylogeny. A simulation study confirms that this leads to more efficient estimates of hazard ratios for infectiousness and baseline hazards of infectious contact, and we use these methods to analyze data from a foot-and-mouth disease virus outbreak in the United Kingdom in 2001. These results demonstrate the importance of data on individuals who escape infection, which is often overlooked. The combination of survival analysis and algorithms linking phylogenies to transmission trees is a rigorous but flexible statistical foundation for molecular infectious disease epidemiology. PMID:27070316

  8. Does risk management contribute to IT project success? A meta-analysis of empirical evidence

    NARCIS (Netherlands)

    de Bakker, K.F.C.; Boonstra, A.; Wortmann, J.C.

    The question of whether risk management contributes to IT project success has long been considered relevant by both the academic and practitioner communities. This paper presents a meta-analysis of the empirical evidence that either supports or opposes the claim that risk management contributes to IT project success.

  9. Decision making based on data analysis and optimization algorithm applied for cogeneration systems integration into a grid

    Science.gov (United States)

    Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan

    2018-05-01

    Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in the residential and tertiary sectors by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) as well as lower pollution compared to conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced that is divided into two parts: the first is a strategy based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to demonstrate the suitability of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique is shown to be compatible with the thermal demand for district heating.

  10. An Analysis of Light Periods of BL Lac Object S5 0716+714 with the MUSIC Algorithm

    Science.gov (United States)

    Tang, Jie

    2012-07-01

    The multiple signal classification (MUSIC) algorithm is introduced for the estimation of the light periods of BL Lac objects. The principle of the MUSIC algorithm is given, together with a test of its spectral resolution using a simulated signal. From the literature, we have collected a large number of effective observations of the BL Lac object S5 0716+714 in the three optical bands V, R, and I from 1994 to 2008. The light periods of S5 0716+714 are obtained by means of the MUSIC algorithm and the average periodogram algorithm, respectively. Two major periodic components are found: one with a period of (3.33±0.08) yr and another with a period of (1.24±0.01) yr. A comparison of the periodicity-analysis performance of the two algorithms indicates that the MUSIC algorithm places smaller requirements on the sample length and offers good spectral resolution and noise immunity, improving the accuracy of periodicity analysis when the sample length is short.
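
    A compact MUSIC sketch for a period search in an evenly sampled series (real light curves are unevenly sampled, so this toy uniform-sampling version only illustrates the pseudospectrum construction):

```python
# MUSIC sketch: correlation matrix from overlapping snapshots, noise subspace
# from its eigendecomposition, pseudospectrum peaks at the signal frequency.
import numpy as np

rng = np.random.default_rng(6)
n, m = 512, 32                              # samples, correlation-matrix order
t = np.arange(n)
x = np.sin(2 * np.pi * 0.12 * t) + 0.5 * rng.normal(size=n)

snaps = np.array([x[i:i + m] for i in range(n - m)])     # overlapping snapshots
R = snaps.T @ snaps / len(snaps)                         # correlation matrix
w, v = np.linalg.eigh(R)                                 # ascending eigenvalues
noise = v[:, :-2]                # noise subspace: one real sinusoid spans 2 dims

freqs = np.linspace(0.01, 0.5, 1000)
steer = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs))
pseudo = 1.0 / np.sum(np.abs(noise.conj().T @ steer) ** 2, axis=0)
print(f"peak at f = {freqs[np.argmax(pseudo)]:.3f}")     # near 0.12
```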

  11. An integrated study of surface roughness in EDM process using regression analysis and GSO algorithm

    Science.gov (United States)

    Zainal, Nurezayana; Zain, Azlan Mohd; Sharif, Safian; Nuzly Abdull Hamed, Haza; Mohamad Yusuf, Suhaila

    2017-09-01

    The aim of this study is to develop an integrated analysis of surface roughness (Ra) in the die-sinking electrical discharge machining (EDM) of Ti-6Al-4V titanium alloy with a positive-polarity copper-tungsten (Cu-W) electrode. Regression analysis and the glowworm swarm optimization (GSO) algorithm were used for the modelling and optimization process. Pulse-on time (A), pulse-off time (B), peak current (C) and servo voltage (D) were selected as the machining parameters, each at various levels. The experiments were conducted based on a two-level full factorial design with an added center point (DOE). Mathematical models with linear and two-factor interaction (2FI) effects of the chosen parameters were developed, and the goodness of fit and adequacy of the developed models were verified using analysis of variance (ANOVA) and the F-test. The statistical analysis showed that the 2FI model outperformed the linear model and the experimental result, yielding the lowest value of Ra.

  12. Integrating R and Java for Enhancing Interactivity of Algorithmic Data Analysis Software Solutions

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNĂ

    2016-06-01

    Full Text Available Conceiving software solutions for statistical processing and algorithmic data analysis involves handling diverse data fetched from various sources and in different formats, and presenting the results in a suggestive, tailorable manner. Our ongoing research aims to design programming techniques for integrating the R development environment with the Java programming language, for interoperability at the source code level. The goal is to combine the intensive data processing capabilities of the R programming language, along with its multitude of statistical function libraries, with the flexibility offered by the Java programming language and platform in terms of graphical user interfaces and mathematical function libraries. Both development environments are multiplatform oriented and can complement each other through interoperability. R is a comprehensive and concise programming language, benefiting from a continuously expanding and evolving set of packages for statistical analysis developed by the open source community. While R is a very efficient environment for statistical data processing, it lacks support for developing user-friendly, interactive graphical user interfaces (GUIs). Java, on the other hand, is a high-level object-oriented programming language which supports designing and developing performant and interactive frameworks for general-purpose software solutions, through the Java Foundation Classes, JavaFX and various graphical libraries. In this paper we treat both aspects of integration and interoperability: integrating Java code into R applications, and bringing R processing sequences into Java-driven software solutions. Our research has been conducted through case studies concerning pattern recognition and cluster analysis.

  13. Nurse middle managers contributions to patient-centred care: A 'managerial work' analysis.

    Science.gov (United States)

    Lalleman, Pcb; Smid, Gac; Dikken, J; Lagerwey, M D; Schuurmans, M J

    2017-10-01

    Nurse middle managers are in an ideal position to facilitate patient-centred care. However, their contribution is underexposed in the literature due to the difficulty of articulating it in practice. This paper explores how nurse middle managers contribute to patient-centred care in hospitals. A combination of time-use analysis and ethnographic work was used to disclose their contribution to patient-centred care at a micro level. Sixteen nurse middle managers were shadowed for over 560 hours in four hospitals. Some nurse middle managers seldom contribute to patient-centred care. Others are involved in direct patient care, but this does not result in patient-centred practices. At one hospital, the nurse middle managers did contribute to patient-centred care. Here, balancing between "organizing work" and "caring work" is seen as a precondition for their patient-centredness. Other important themes are feedback mechanisms; place matters; with whom to talk and how to frame the issues at stake; and behavioural style. Both the "hands-on" and "heads-on" caring work of nurse middle managers enhances their patient-centredness. This study is the first of its kind to obtain insight into the often difficult-to-articulate "doings" of nurse middle managers with regard to patient-centred care, by combining time-use analysis with ethnographic work. © 2017 John Wiley & Sons Ltd.

  14. Genetic algorithm using independent component analysis in x-ray reflectivity curve fitting of periodic layer structures

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Tilli, J-M; Sormunen, J; Mattila, M; Hakkarainen, T; Lipsanen, H

    2007-01-01

    A novel genetic algorithm (GA) utilizing independent component analysis (ICA) was developed for x-ray reflectivity (XRR) curve fitting. EFICA was used to reduce mutual information, or interparameter dependences, during the combinatorial phase. The performance of the new algorithm was studied by fitting trial XRR curves to target curves computed using realistic multilayer models. The median convergence properties of the conventional GA, a GA using principal component analysis, and the novel GA were compared. The GA using ICA was found to outperform the other methods on problems with 41 or more parameters to be fitted, without requiring additional XRR curve calculations. The computational complexity of the conventional methods is linear, but the novel method has quadratic computational complexity due to the applied ICA method, which sets a practical limit on the dimensionality of the problem to be solved. However, the novel algorithm had the best capability to extend fitting analysis based on Parratt's formalism to multiperiodic layer structures.

  15. Reflections of Practical Implementation of the academic course Analysis and Design of Algorithms taught in the Universities of Pakistan

    Directory of Open Access Journals (Sweden)

    Faryal Shamsi

    2017-12-01

    Full Text Available Analysis and Design of Algorithms is considered a compulsory course in the field of Computer Science. It increases students' logical and problem-solving skills and makes their solutions efficient in terms of time and space. These objectives can only be achieved if a student practically implements what he or she has studied throughout the course. But if the contents of this course are merely studied and rarely practiced, the actual goals of the course are not fulfilled. This article explores the extent of practical implementation of the Analysis and Design of Algorithms course. Problems faced by the computer science community and major barriers in the field are also enumerated. Finally, some recommendations are made to overcome the obstacles to the practical implementation of analysis and design of algorithms.

  16. Acoustical contribution calculation and analysis of compressor shell based on acoustic transfer vector method

    Science.gov (United States)

    Chen, Xiaol; Guo, Bei; Tuo, Jinliang; Zhou, Ruixin; Lu, Yang

    2017-08-01

    Nowadays, people are paying more and more attention to the noise reduction of household refrigerator compressors. This paper establishes a sound field bounded by the compressor shell and ISO 3744 standard field points. The Acoustic Transfer Vectors (ATVs) of the sound field radiated by a refrigerator compressor shell were calculated and agree well with the test results. The compressor shell surface was then divided into several parts. Based on the ATV approach, each part's sound-pressure contribution at the field points and sound-power contribution to the sound field were calculated. To characterize the noise radiation in the sound field, sound-pressure cloud charts were analyzed, and the contribution curves of each part at different frequencies were acquired. Meanwhile, the sound-power contribution of each part at different frequencies was analyzed to identify the parts contributing the most sound power. Through this acoustic contribution analysis, the parts of the compressor shell radiating the most noise were determined. This paper provides a credible and effective approach to the structural optimization of refrigerator compressor shells, which is meaningful for noise and vibration reduction.
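
    A toy sketch of the ATV idea: the field-point pressure is the transfer vector dotted with the panel normal velocities, so per-panel contributions (and their signs) fall out directly. All numbers below are invented.

```python
# Per-panel pressure contributions at one field point and one frequency:
# p_total = sum_i ATV_i * v_i, so each term is one panel's contribution.
import numpy as np

rng = np.random.default_rng(7)
n_panels = 6
atv = rng.normal(size=n_panels) + 1j * rng.normal(size=n_panels)  # acoustic transfer vector
v = rng.normal(size=n_panels) + 1j * rng.normal(size=n_panels)    # panel normal velocities

contrib = atv * v                           # complex pressure contribution per panel
p_total = contrib.sum()
# Project each contribution onto the total pressure: positive shares add to
# the radiated pressure, negative shares partially cancel it.
share = (contrib * np.conj(p_total)).real / np.abs(p_total) ** 2
for i in np.argsort(share)[::-1]:
    print(f"panel {i}: {share[i]:+.2f} of |p|")
```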

  17. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Tourassi, Georgia D; Malof, Jordan M

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
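
    A hedged sketch of random mutation hill climbing for instance selection, with synthetic data; scikit-learn's k-NN stands in for the instance-based classifier.

```python
# Flip one instance in or out of the subset; keep the change if validation
# accuracy of a 1-NN classifier trained on the subset does not drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(8)
X, y = make_classification(n_samples=400, random_state=0)
Xp, Xv, yp, yv = train_test_split(X, y, test_size=0.5, random_state=0)

def score(mask):
    if mask.sum() < 2:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=1).fit(Xp[mask], yp[mask])
    return knn.score(Xv, yv)                 # validation accuracy

mask = rng.random(len(Xp)) < 0.1             # start from a small random subset
best = score(mask)
for _ in range(2000):
    i = rng.integers(len(mask))
    mask[i] = ~mask[i]                       # mutate: toggle one instance
    s = score(mask)
    if s >= best:
        best = s                             # keep improving (or equal) mutations
    else:
        mask[i] = ~mask[i]                   # revert
print(f"kept {mask.sum()}/{len(mask)} instances, accuracy {best:.3f}")
```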

  18. Theoretical study on X-Ray Fluorescence Analysis: Contribution of the self-excitation phenomenon

    International Nuclear Information System (INIS)

    RAKOTONDRAJAONA, H.N.J.L.

    1999-01-01

    This work consists, firstly, in setting up the fluorescence intensities due to the contribution of the secondary and tertiary excitation phenomena that arise among the elements of the same sample during X-ray fluorescence analysis, following Sherman's calculations. Secondly, we experimentally checked these expressions by analyzing twelve samples, each containing the elements iron, copper and zinc. The difference between the theoretical and the experimental results was evaluated using the χ² test. We consider this difference noticeable compared to other errors due to the analysis method. [fr]

  19. Contribution to the sample mean plot for graphical and numerical sensitivity analysis

    International Nuclear Information System (INIS)

    Bolado-Lavin, R.; Castaings, W.; Tarantola, S.

    2009-01-01

    The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling.

  20. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    detection, and confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation, without inherent hindrances such as meeting FSW processor scheduling constraints imposed by their target platform - the ARINC 653-partitioned operating system - resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V presenting integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions, followed by the appendices presenting relevant information on terminology and documentation.

  1. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    Directory of Open Access Journals (Sweden)

    Murray Christopher JL

    2011-08-01

    Full Text Available Abstract Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignments and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science.
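
    The additive scoring at the heart of the method can be sketched as follows. The robust z-score (median and IQR across causes) mirrors the published tariff definition; the array layout and the omission of the paper's normalisation details are simplifying assumptions.

```python
import numpy as np

def train_tariffs(endorse_rate):
    """endorse_rate: (n_causes, n_symptoms) symptom endorsement rates in
    validated training data. A tariff is a robust z-score of each rate
    against the other causes."""
    med = np.median(endorse_rate, axis=0)
    q75, q25 = np.percentile(endorse_rate, [75, 25], axis=0)
    iqr = np.where(q75 - q25 > 0, q75 - q25, 1.0)  # guard against zero spread
    return (endorse_rate - med) / iqr

def predict_cause(tariffs, responses):
    """responses: 0/1 vector of signs/symptoms for one verbal autopsy.
    The predicted cause maximises the sum of tariffs of endorsed items."""
    scores = tariffs @ responses
    return int(np.argmax(scores)), scores
```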

  2. Analysis and application of two recursive parametric estimation algorithms for an asynchronous machine

    International Nuclear Information System (INIS)

    Damek, Nawel; Kamoun, Samira

    2011-01-01

    In this communication, two recursive parametric estimation algorithms are analyzed and applied to a squirrel-cage asynchronous machine located at the research Unit of Automatic Control (UCA) at ENIS. The first algorithm, which uses the transfer-matrix mathematical model, is based on the gradient principle. The second algorithm, which uses the state-space mathematical model, is based on the minimization of the estimation error. These algorithms are applied as a key technique to estimate an asynchronous machine with unknown, but constant or time-varying, parameters. Stator voltage and current are used as measured data. The proposed recursive parametric estimation algorithms are validated on the experimental data of an asynchronous machine under normal operating conditions at full load. The results show that these algorithms can effectively estimate the machine parameters with reliability.

  3. Warfarin Dosing Algorithms Underpredict Dose Requirements in Patients Requiring ≥7 mg Daily: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Saffian, S M; Duffull, S B; Wright, Dfb

    2017-08-01

    There is preliminary evidence to suggest that some published warfarin dosing algorithms produce biased maintenance dose predictions in patients who require higher than average doses. We conducted a meta-analysis of warfarin dosing algorithms to determine if there exists a systematic under- or overprediction of dose requirements for patients requiring ≥7 mg/day across published algorithms. Medline and Embase databases were searched up to September 2015. We quantified the proportion of over- and underpredicted doses in patients whose observed maintenance dose was ≥7 mg/day. The meta-analysis included 47 evaluations of 22 different warfarin dosing algorithms from 16 studies. The meta-analysis included data from 1,492 patients who required warfarin doses of ≥7 mg/day. All 22 algorithms were found to underpredict warfarin dosing requirements in patients who required ≥7 mg/day by an average of 2.3 mg/day, with a pooled estimate of underpredicted doses of 92.3% (95% confidence interval 90.3-94.1, I² = 24%). © 2017 American Society for Clinical Pharmacology and Therapeutics.

  4. A new approach to nuclear reactor design optimization using genetic algorithms and regression analysis

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.

    2015-01-01

    Highlights: • This paper presents a new method useful for the optimization of complex dynamic systems. • The method uses the strengths of genetic algorithms (GA) and regression splines. • The method is applied to the design of a gas cooled fast breeder reactor. • Tools like Java, R, and codes like MCNP, Matlab are used in this research. - Abstract: A module-based optimization method using genetic algorithms (GA) and multivariate regression analysis has been developed to optimize a set of parameters in the design of a nuclear reactor. GA simulates natural evolution to perform optimization, and is widely used in recent times by the scientific community. The GA evolves a population of random solutions toward the optimized solution of a specific problem. In this work, we have developed a genetic algorithm to determine the values of a set of nuclear reactor parameters for the design of a gas cooled fast breeder reactor core, including a basic thermal–hydraulics analysis and energy transfer. Multivariate regression is implemented using regression splines (RS). Reactor designs are usually complex, and a simulation needs a significantly large amount of time to execute, hence the implementation of GA or any other global optimization technique is not feasible; therefore, we present a new method of using RS in conjunction with GA. Owing to the use of RS, we do not need to run the neutronics simulation for all the inputs generated from the GA module; rather, we run the simulations for a predefined set of inputs, build a multivariate regression fit to the input and output parameters, and then use this fit to predict the output parameters for the inputs generated by GA. The reactor parameters are the radius of a fuel pin cell, the isotopic enrichment of the fissile material in the fuel, the mass flow rate of the coolant, and the temperature of the coolant at the core inlet. The optimization objectives for the reactor core are high breeding of U-233 and Pu-239 in
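
    A toy sketch of the surrogate-assisted loop described above: fit a regression-spline model on a pre-computed set of simulator runs, then let the GA rank candidates with the cheap surrogate instead of the neutronics code. `SplineTransformer` stands in for the paper's multivariate regression splines; everything here is illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge

def fit_surrogate(X_runs, y_runs):
    """Regression-spline surrogate fitted on expensive simulator runs
    (inputs such as pin radius, enrichment, flow rate, inlet temperature)."""
    return make_pipeline(SplineTransformer(degree=3, n_knots=5),
                         Ridge(alpha=1.0)).fit(X_runs, y_runs)

def ga_generation(pop, surrogate, rng, sigma=0.05):
    """One GA generation ranked by the surrogate-predicted objective:
    keep the better half, refill with Gaussian-mutated copies."""
    fitness = surrogate.predict(pop)
    elite = pop[np.argsort(fitness)[-(pop.shape[0] // 2):]]
    return np.vstack([elite, elite + rng.normal(0.0, sigma, elite.shape)])
```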

  5. Security Analysis of a Block Encryption Algorithm Based on Dynamic Sequences of Multiple Chaotic Systems

    Science.gov (United States)

    Du, Mao-Kang; He, Bo; Wang, Yong

    2011-01-01

    Recently, the cryptosystem based on chaos has attracted much attention. Wang and Yu (Commun. Nonlin. Sci. Numer. Simulat. 14 (2009) 574) proposed a block encryption algorithm based on dynamic sequences of multiple chaotic systems. We analyze the potential flaws in the algorithm. Then, a chosen-plaintext attack is presented. Some remedial measures are suggested to avoid the flaws effectively. Furthermore, an improved encryption algorithm is proposed to resist the attacks and to keep all the merits of the original cryptosystem.

  6. The Analysis of Alpha Beta Pruning and MTD(f) Algorithm to Determine the Best Algorithm to be Implemented at Connect Four Prototype

    Science.gov (United States)

    Tommy, Lukas; Hardjianto, Mardi; Agani, Nazori

    2017-04-01

    Connect Four is a two-player game in which the players take turns dropping discs into a grid, trying to connect four of one's own discs next to each other vertically, horizontally, or diagonally. At Connect Four, the computer requires artificial intelligence (AI) in order to play properly like a human. There are many AI algorithms that can be implemented for Connect Four, but the most suitable ones are unknown. A suitable algorithm means one that is optimal in choosing moves and whose execution time is not slow at a search depth which is deep enough. In this research, an analysis and comparison between standard alpha-beta (AB) pruning and MTD(f) is carried out on a prototype of Connect Four in terms of optimality (win percentage) and speed (execution time and the number of leaf nodes). Experiments are carried out by running computer-versus-computer mode with 12 different conditions, i.e. varied search depth (5 through 10) and who moves first. The percentages achieved by MTD(f) in the experiments are 45.83% wins, 37.5% losses and 16.67% draws. In the experiments with search depth 8, MTD(f) execution time is 35.19% faster and it evaluates 56.27% fewer leaf nodes than AB pruning. The results of this research are that MTD(f) is as optimal as AB pruning at the Connect Four prototype, but MTD(f) on average is faster and evaluates fewer leaf nodes than AB pruning. The execution time of MTD(f) is not slow and is much faster than AB pruning at a search depth which is deep enough.
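
    For reference, the MTD(f) driver itself is only a handful of lines. The sketch below assumes a fail-soft alpha-beta search with a transposition table, exposed as an opaque `alphabeta(state, alpha, beta, depth)` callback; it follows the standard formulation rather than the paper's code.

```python
def mtdf(state, first_guess, depth, alphabeta):
    """MTD(f): converge on the minimax value through a sequence of
    zero-window alpha-beta probes that narrow an upper/lower bound pair.
    Re-searches are cheap only if alphabeta memoises its results."""
    g = first_guess
    lower, upper = float("-inf"), float("inf")
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(state, beta - 1, beta, depth)  # zero-window probe
        if g < beta:
            upper = g  # probe failed low: true value <= g
        else:
            lower = g  # probe failed high: true value >= g
    return g
```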

  7. Analysis of the effects of wing interference on the tail contributions to the rolling derivatives

    Science.gov (United States)

    Michael, William H., Jr.

    1952-01-01

    An analysis of the effects of wing interference on the tail contributions to the rolling stability derivatives of complete airplane configurations is made by calculating the angularity of the air stream at the vertical tail due to rolling and determining the resulting forces and moments. Some of the important factors which affect the resultant angularity on the vertical tail are wing aspect ratio and sweepback, vertical-tail span, and considerations associated with angle of attack and airplane geometry. Some calculated sidewash results for a limited range of plan forms and vertical-tail sizes are presented. Equations taking into account the sidewash results are given for determining the tail contributions to the rolling derivatives. Comparisons of estimated and experimental results indicate that a consideration of wing interference effects improves the estimated values of the tail contributions to the rolling derivatives and that fair agreement with available experimental data is obtained.

  8. Development and Analysis of Group Contribution Plus Models for Property Prediction of Organic Chemical Systems

    DEFF Research Database (Denmark)

    Mustaffa, Azizul Azri

    Prediction of properties is important in chemical process-product design. Reliable property models are needed for an increasingly complex and wider range of chemicals. Group-contribution methods provide a useful tool, but there is a need to validate them and improve their accuracy when complex chemicals are present in the mixtures. In accordance with that, a combined group-contribution and atom connectivity approach that is able to extend the application range of property models has been developed for mixture properties. This so-called Group-ContributionPlus (GCPlus) approach is a hybrid model which combines... for the GIPs, which are then used in the UNIFAC model to calculate activity coefficients. This approach can increase the application range of any “host” UNIFAC model by providing a reliable predictive model towards fast and efficient product development. This PhD project is focused on the analysis and further...

  9. Demand analysis of flood insurance by using logistic regression model and genetic algorithm

    Science.gov (United States)

    Sidi, P.; Mamat, M. B.; Sukono; Supian, S.; Putra, A. S.

    2018-03-01

    Citarum River floods in the area of South Bandung, Indonesia, often result in damage to buildings belonging to people living in the vicinity. One effort to alleviate the risk of building damage is to have flood insurance. The main obstacle is that not all people in the Citarum basin decide to buy flood insurance. In this paper, we analyse the decision to buy flood insurance. It is assumed that eight variables influence the decision to purchase flood insurance: income level, education level, distance of the house from the river, building elevation relative to the road, experienced flood frequency, flood prediction, perception of the insurance company, and perception of government efforts in handling floods. The analysis was done using a logistic regression model, and the model parameters were estimated with a genetic algorithm. The results of the analysis show that the eight variables analysed significantly influence the demand for flood insurance. These results are expected to be of use to insurance companies seeking to influence the community's decision to buy flood insurance.
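
    A minimal sketch of the combination described, assuming a design matrix X with an intercept column plus the eight predictors; the toy GA operators stand in for whatever variants the authors used.

```python
import numpy as np

def log_likelihood(beta, X, y):
    """Bernoulli log-likelihood of a logistic model; y is the 0/1
    decision to buy flood insurance."""
    z = X @ beta
    return float(np.sum(y * z - np.logaddexp(0.0, z)))

def ga_fit_logistic(X, y, pop_size=60, generations=200, sigma=0.1, seed=0):
    """Evolve coefficient vectors, selecting on log-likelihood, instead
    of the usual iteratively reweighted least squares."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, (pop_size, X.shape[1]))
    for _ in range(generations):
        fit = np.array([log_likelihood(b, X, y) for b in pop])
        elite = pop[np.argsort(fit)[-(pop_size // 2):]]
        pop = np.vstack([elite, elite + rng.normal(0.0, sigma, elite.shape)])
    fit = np.array([log_likelihood(b, X, y) for b in pop])
    return pop[int(np.argmax(fit))]
```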

  10. Evaluation and optimization of footwear comfort parameters using finite element analysis and a discrete optimization algorithm

    Science.gov (United States)

    Papagiannis, P.; Azariadis, P.; Papanikos, P.

    2017-10-01

    Footwear is subject to bending and torsion deformations that affect comfort perception. Following a review of finite element analysis studies of sole rigidity and comfort, a three-dimensional, linear multi-material finite element sole model for quasi-static bending and torsion simulation, overcoming boundary and optimisation limitations, is described. Common footwear material properties and boundary conditions from gait biomechanics are used. The use of normalised strain energy for product benchmarking is demonstrated, along with comfort level determination through strain energy density stratification. Sensitivity of strain energy to material thickness is greater for bending than for torsion, with the results of both deformations showing positive correlation. Optimization for a targeted performance level and given layer thickness is demonstrated, with bending simulations sufficing for overall comfort assessment. An algorithm for comfort optimization with respect to bending is presented, based on a discrete approach with thickness values set in line with practical manufacturing accuracy. This work illustrates the potential of the developed finite element analysis applications to offer viable and proven aids to modern footwear sole design assessment and optimization.

  11. Analysis of the MPEG-1 Layer III (MP3) Algorithm using MATLAB

    CERN Document Server

    Thiagarajan, Jayaraman

    2011-01-01

    The MPEG-1 Layer III (MP3) algorithm is one of the most successful audio formats for consumer audio storage and for the transfer and playback of music on digital audio players. The MP3 compression standard, along with the AAC (Advanced Audio Coding) algorithm, is associated with the most successful music players of the last decade. This book describes the fundamentals and the MATLAB implementation details of the MP3 algorithm. Several of the tedious processes in MP3 are supported by demonstrations using MATLAB software. The book presents the theoretical concepts and algorithms used in the MP3 stand

  12. Convergence analysis of Chauvin's PCA learning algorithm with a constant learning rate

    International Nuclear Information System (INIS)

    Lv Jiancheng; Yi Zhang

    2007-01-01

    The convergence of Chauvin's PCA learning algorithm with a constant learning rate is studied in this paper by using a DDT method (deterministic discrete-time system method). Different from the DCT method (deterministic continuous-time system method), the DDT method does not require that the learning rate converge to zero. An invariant set of Chauvin's algorithm with a constant learning rate is obtained so that the non-divergence of this algorithm can be guaranteed. Rigorous mathematical proofs are provided to prove the local convergence of this algorithm.

  13. Convergence analysis of Chauvin's PCA learning algorithm with a constant learning rate

    Energy Technology Data Exchange (ETDEWEB)

    Lv Jiancheng [Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054 (China); Yi Zhang [Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054 (China)]. E-mail: zhangyi@uestc.edu.cn

    2007-05-15

    The convergence of Chauvin's PCA learning algorithm with a constant learning rate is studied in this paper by using a DDT method (deterministic discrete-time system method). Different from the DCT method (deterministic continuous-time system method), the DDT method does not require that the learning rate converge to zero. An invariant set of Chauvin's algorithm with a constant learning rate is obtained so that the non-divergence of this algorithm can be guaranteed. Rigorous mathematical proofs are provided to prove the local convergence of this algorithm.

  14. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  15. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
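
    The PCA-plus-ANOVA construction can be sketched as follows; `first_order_index` stands for any variance-based estimator of a factor's first-order index on a scalar output, and the explained-variance weighting follows the spirit of the generalised indices described above (an illustration, not the authors' estimator).

```python
import numpy as np

def generalized_index(Y, first_order_index, n_components=3):
    """Y: (n_runs, n_times) dynamic outputs from a Monte Carlo design.
    Project onto principal components, estimate a per-component
    sensitivity index, then aggregate with explained-variance weights."""
    Yc = Y - Y.mean(axis=0)
    U, s, _ = np.linalg.svd(Yc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]  # PC scores per run
    weights = s[:n_components] ** 2                  # variance carried by each PC
    per_pc = np.array([first_order_index(scores[:, k])
                       for k in range(n_components)])
    return float(np.sum(weights * per_pc) / np.sum(weights))
```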

  16. Melanie II--a third-generation software package for analysis of two-dimensional electrophoresis images: II. Algorithms.

    Science.gov (United States)

    Appel, R D; Vargas, J R; Palagi, P M; Walther, D; Hochstrasser, D F

    1997-12-01

    After two generations of software systems for the analysis of two-dimensional electrophoresis (2-DE) images, a third generation of such software packages has recently emerged that combines state-of-the-art graphical user interfaces with comprehensive spot data analysis capabilities. A key characteristic common to most of these software packages is that many of their tools are implementations of algorithms that resulted from research areas such as image processing, vision, artificial intelligence or machine learning. This article presents the main algorithms implemented in the Melanie II 2-D PAGE software package. The applications of these algorithms, embodied as the features of the program, are explained in an accompanying article (R. D. Appel et al., Electrophoresis 1997, 18, 2724-2734).

  17. Genetic algorithm applied to a Soil-Vegetation-Atmosphere system: Sensitivity and uncertainty analysis

    Science.gov (United States)

    Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk

    2010-05-01

    Numerical models are of precious help for predicting water fluxes in the vadose zone and more specifically in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations. Therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited for those complex parameter optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimations, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall and other meteorological data and water contents at different soil depths have been recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, has been recorded at 0.5-hour intervals. The leaf area index was measured as well at selected times during the year in order to evaluate the energy which reaches the soil and to deduce the potential evaporation. Based on the profile description, five soil layers have been distinguished in the podzol. Two models have been used for simulating water fluxes: (i) a mechanistic model, the HYDRUS-1D model, which solves the Richards equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris one-at-a-time sensitivity analysis) was run prior to the calibration, in order to check the sensitivity in the chosen parameter search space. For

  18. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells.

    Directory of Open Access Journals (Sweden)

    Alexander V Maltsev

    Full Text Available Local Ca2+ releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for an affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changed sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves, sparks and

  19. Cost-effective analysis of different algorithms for the diagnosis of hepatitis C virus infection

    Directory of Open Access Journals (Sweden)

    A.M.E.C. Barreto

    2008-02-01

    Full Text Available We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of ELISA anti-HCV samples that show s/co ratios with ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.

  20. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis.

    Science.gov (United States)

    Khalil, Mohamed H; Shebl, Mostafa K; Kosba, Mohamed A; El-Sabrout, Karim; Zaki, Nesma

    2016-08-01

    This research was conducted to determine the parameters with the greatest effect on the hatchability of indigenous and improved local chickens' eggs. Five parameters were studied (fertility, early and late embryonic mortality, shape index, egg weight, and egg weight loss) in four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine the one most influencing hatchability. The results showed significant differences in commercial and scientific hatchability among strains. The Alexandria strain had the highest significant commercial hatchability (80.70%). Highly significant differences in hatching chick weight among the studied strains were also observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. A prediction of hatchability using multiple regression analysis could be a good tool to improve the hatchability percentage in chickens.

  1. An analysis on the contribution of civil society to democratic consolidation in Turkey

    OpenAIRE

    Torus, Emre

    2007-01-01

    Cataloged from PDF version of article. This is an analysis on the contribution of civil society to democratic consolidation in Turkey. This thesis will try to understand this problematic by assessing the civil society’s formal structure, legal framework, internal values and its impact during the consolidation process. The key aim here is to understand the civil society’s role as a contributor to democratic consolidation by mapping the civil society and democratic consolidati...

  2. Performance analysis of a decoding algorithm for algebraic-geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1999-01-01

    The fast decoding algorithm for one point algebraic-geometry codes of Sakata, Elbrønd Jensen, and Høholdt corrects all error patterns of weight less than half the Feng-Rao minimum distance. In this correspondence we analyze the performance of the algorithm for heavier error patterns. It turns out

  3. An improved parametrization and analysis of the EXP3++ algorithm for stochastic and adversarial bandits

    DEFF Research Database (Denmark)

    Seldin, Yevgeny; Lugosi, Gábor

    2017-01-01

    We present a new strategy for gap estimation in randomized algorithms for multiarmed bandits and combine it with the EXP3++ algorithm of Seldin and Slivkins (2014). In the stochastic regime the strategy reduces dependence of regret on a time horizon from $(\ln t)^3$ to $(\ln t)^2$ and eliminates

  4. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    Science.gov (United States)

    Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio

    2015-01-01

    The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits. PMID:25594588

  5. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    Directory of Open Access Journals (Sweden)

    Enea Cippitelli

    2015-01-01

    Full Text Available The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits.

  6. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    Directory of Open Access Journals (Sweden)

    Dashan Zhang

    2016-04-01

    Full Text Available The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not bring any additional mass to the measured object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.

  7. Dual Channel Pulse Coupled Neural Network Algorithm for Fusion of Multimodality Brain Images with Quality Analysis

    Directory of Open Access Journals (Sweden)

    Kavitha SRINIVASAN

    2014-09-01

    Full Text Available Background: In the review of medical imaging techniques, an important fact that emerged is that radiologists and physicians are still in need of high-resolution medical images with complementary information from different modalities to ensure efficient analysis. This requirement should be addressed using fusion techniques, with the fused image being used in image-guided surgery, image-guided radiotherapy and non-invasive diagnosis. Aim: This paper focuses on a Dual Channel Pulse Coupled Neural Network (PCNN) algorithm for the fusion of multimodality brain images; the fused image is further analyzed using subjective (human perception) and objective (statistical) measures for quality analysis. Material and Methods: The modalities used in fusion are CT, MRI with subtypes T1/T2/PD/GAD, PET and SPECT, since the information from each modality is complementary to the others. The objective measures selected for evaluation of the fused image were Information Entropy (IE) for image quality, Mutual Information (MI) for deviation of the fused image from the source images, and Signal to Noise Ratio (SNR) for noise level. Eight sets of brain images with different modalities (T2 with T1, T2 with CT, PD with T2, PD with GAD, T2 with GAD, T2 with SPECT-Tc, T2 with SPECT-Ti, T2 with PET) were chosen for experimental purposes, and the proposed technique is compared with existing fusion methods such as the Average method, the Contrast pyramid, the Shift Invariant Discrete Wavelet Transform (SIDWT) with Haar and the Morphological pyramid, using the selected measures to ascertain relative performance. Results: The IE and SNR values of the fused image derived from the dual channel PCNN are higher than those of other fusion methods, showing that the quality is better with less noise. Conclusion: The fused image resulting from the proposed method retains the contrast, shape and texture of the source images without false information or information loss.

  8. A novel procedure on next generation sequencing data analysis using text mining algorithm.

    Science.gov (United States)

    Zhao, Weizhong; Chen, James J; Perkins, Roger; Wang, Yuping; Liu, Zhichao; Hong, Huixiao; Tong, Weida; Zou, Wen

    2016-05-13

    Next-generation sequencing (NGS) technologies have provided researchers with vast possibilities in various biological and biomedical research areas. Efficient data mining strategies are in high demand for large-scale comparative and evolutionary studies to be performed on the large amounts of data derived from NGS projects. Topic modeling is an active research field in machine learning and has mainly been used as an analytical tool to structure large textual corpora for data mining. We report a novel procedure to analyse NGS data using topic modeling. It consists of four major steps: NGS data retrieval, preprocessing, topic modeling, and data mining using Latent Dirichlet Allocation (LDA) topic outputs. An NGS data set of Salmonella enterica strains was used as a case study to show the workflow of this procedure. The perplexity measurement of the topic numbers and the convergence efficiencies of Gibbs sampling were calculated and discussed for achieving the best result from the proposed procedure. The topics output by the LDA algorithm could be treated as features of Salmonella strains to accurately describe the genetic diversity of the fliC gene in various serotypes. The results of a two-way hierarchical clustering and data matrix analysis on LDA-derived matrices successfully classified Salmonella serotypes based on the NGS data. The implementation of topic modeling in the NGS data analysis procedure provides a new way to elucidate genetic information from NGS data and to identify gene-phenotype relationships and biomarkers, especially in the era of biological and medical big data.
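
    A condensed sketch of such a pipeline with scikit-learn; the k-mer tokenisation is an assumption standing in for the paper's preprocessing of the sequence data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def kmers(seq, k=8):
    """Turn a sequence into a 'document' of overlapping k-mer 'words'."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

def lda_topics(sequences, n_topics=10, k=8, seed=0):
    """Summarise strains as topic mixtures over a k-mer vocabulary."""
    X = CountVectorizer().fit_transform(kmers(s, k) for s in sequences)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    doc_topic = lda.fit_transform(X)           # strain-by-topic matrix
    print("perplexity:", lda.perplexity(X))    # criterion for choosing n_topics
    return doc_topic, lda.components_          # topic-by-kmer weights
```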

  9. A meteor head echo analysis algorithm for the lower VHF band

    Directory of Open Access Journals (Sweden)

    J. Kero

    2012-04-01

    Full Text Available We have developed an automated analysis scheme for meteor head echo observations by the 46.5 MHz Middle and Upper atmosphere (MU) radar near Shigaraki, Japan (34.85° N, 136.10° E). The analysis procedure computes meteoroid range, velocity and deceleration as functions of time with unprecedented accuracy and precision. This is crucial for estimations of meteoroid mass and orbital parameters as well as investigations of the meteoroid-atmosphere interaction processes. In this paper we present this analysis procedure in detail. The algorithms use a combination of single-pulse Doppler, time-of-flight and pulse-to-pulse phase correlation measurements to determine the radial velocity to within a few tens of metres per second with 3.12 ms time resolution. Equivalently, the precision improvement is at least a factor of 20 compared to previous single-pulse measurements. Such precision reveals that the deceleration increases significantly during the intense part of a meteoroid's ablation process in the atmosphere. From each received pulse, the target range is determined to within a few tens of metres, or on the order of a few hundredths of the 900 m long range gates. This is achieved by transmitting a 13-bit Barker code oversampled by a factor of two at reception and using a novel range interpolation technique. The meteoroid velocity vector is determined from the estimated radial velocity by carefully taking into account the location of the meteor target and the angle from its trajectory to the radar beam. The latter is determined from target range and bore axis offset. We have identified and solved the signal processing issue giving rise to the peculiar signature in signal-to-noise ratio plots reported by Galindo et al. (2011), and show how to use the range interpolation technique to differentiate the effect of signal processing from physical processes.

  10. Comparing the Effects of Different Smoothing Algorithms on the Assessment of Dimensionality of Ordered Categorical Items with Parallel Analysis.

    Science.gov (United States)

    Debelak, Rudolf; Tran, Ulrich S

    2016-01-01

    The analysis of polychoric correlations via principal component analysis and exploratory factor analysis are well-known approaches to determine the dimensionality of ordered categorical items. However, the application of these approaches has been considered critical due to the possible indefiniteness of the polychoric correlation matrix. A possible solution to this problem is the application of smoothing algorithms. This study compared the effects of three smoothing algorithms, based on the Frobenius norm, the adaption of the eigenvalues and eigenvectors, and on minimum-trace factor analysis, on the accuracy of various variations of parallel analysis by means of a simulation study. We simulated different datasets which varied with respect to the size of the respondent sample, the size of the item set, the underlying factor model, the skewness of the response distributions and the number of response categories in each item. We found that a parallel analysis and principal component analysis of smoothed polychoric and Pearson correlations led to the most accurate results in detecting the number of major factors in simulated datasets when compared to the other methods we investigated. Of the methods used for smoothing polychoric correlation matrices, we recommend the algorithm based on minimum trace factor analysis.
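
    Of the three strategies compared, the eigenvalue-adaption idea is the simplest to sketch: clip the negative eigenvalues of the indefinite polychoric matrix and rescale back to a unit diagonal. This is a generic version for illustration, not necessarily the exact algorithm evaluated in the study.

```python
import numpy as np

def smooth_to_psd(R, eps=1e-6):
    """Make an indefinite (pseudo-)correlation matrix positive
    semi-definite by clipping negative eigenvalues, then rescale so the
    diagonal is exactly one again."""
    w, V = np.linalg.eigh((R + R.T) / 2)       # symmetrise, then decompose
    R_psd = V @ np.diag(np.clip(w, eps, None)) @ V.T
    d = np.sqrt(np.diag(R_psd))
    return R_psd / np.outer(d, d)
```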

  11. Verification test for on-line diagnosis algorithm based on noise analysis

    International Nuclear Information System (INIS)

    Tamaoki, T.; Naito, N.; Tsunoda, T.; Sato, M.; Kameda, A.

    1980-01-01

    An on-line diagnosis algorithm was developed and its verification test was performed using a minicomputer. This algorithm identifies the plant state by analyzing various system noise patterns, such as power spectral densities and coherence functions, in three procedural steps. Each obtained noise pattern is examined using the distances from its reference patterns, prepared for various plant states. Then, the plant state is identified by synthesizing each result with an evaluation weight. This weight is determined automatically from the reference noise patterns prior to on-line diagnosis. The test was performed with 50 MW(th) steam generator noise data recorded under various controller parameter values. The algorithm performance was evaluated based on a newly devised index. The results obtained with one kind of weight showed the algorithm's efficiency under proper selection of noise patterns. Results for another kind of weight showed the robustness of the algorithm to this selection. (orig.)

  12. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    Directory of Open Access Journals (Sweden)

    Jonathan Bruce Shepherd

    2016-08-01

    Full Text Available With the increasing rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost-effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though a variety of algorithms have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research is conducted under Griffith University Ethics (GU Ref No: 2016/294).

  13. Development and performance analysis of a lossless data reduction algorithm for voip

    International Nuclear Information System (INIS)

    Misbahuddin, S.; Boulejfen, N.

    2014-01-01

    VoIP (Voice over IP) is becoming an alternative way of carrying voice communications over the Internet. To better utilize voice call bandwidth, some standard compression algorithms are applied in VoIP systems. However, these algorithms affect voice quality at high compression ratios. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized VFs (voice frames) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm will help retain voice quality along with the improvement in VoIP data transfer rates. (author)
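
    The paper's exact scheme is not reproduced here, but the kind of inter-frame redundancy it exploits can be illustrated with a hypothetical lossless coder: XOR the current voice frame against the previous one and run-length encode the (typically long) zero runs.

```python
def delta_rle_encode(frame, prev):
    """Lossless sketch: XOR-delta against the previous frame, then
    run-length encode zero bytes. The transform is invertible, so a
    matching decoder can reconstruct the frame exactly."""
    diff = bytes(a ^ b for a, b in zip(frame, prev))
    out, i = bytearray(), 0
    while i < len(diff):
        if diff[i] == 0:                      # run of unchanged bytes
            j = i
            while j < len(diff) and diff[j] == 0 and j - i < 255:
                j += 1
            out += bytes([0, j - i])          # marker 0 + run length
            i = j
        else:
            out += bytes([1, diff[i]])        # marker 1 + literal byte
            i += 1
    return bytes(out)
```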

  14. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    Science.gov (United States)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is one of the most widely used artificial intelligence algorithms at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designs a vehicle routing optimization model based on ACO. A vehicle routing optimization simulation system was then developed using the C++ programming language, and sensitivity analyses, estimations and improvements of the three key parameters of ACO were carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of VRP, that the different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm does not easily converge prematurely to local optima and has good robustness.
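
    The three key parameters tuned here are presumably the standard ant-system trio: the pheromone weight $\alpha$, the heuristic weight $\beta$ and the evaporation rate $\rho$. In the textbook formulation (not necessarily the exact variant used in the paper), ant $k$ at customer $i$ moves to customer $j$ with probability

$$p_{ij}^{k}=\frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l\in\mathcal{N}_{i}^{k}}[\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}},\qquad \tau_{ij}\leftarrow(1-\rho)\,\tau_{ij}+\sum_{k}\Delta\tau_{ij}^{k},$$

    where $\tau_{ij}$ is the pheromone on arc $(i,j)$, $\eta_{ij}=1/d_{ij}$ is the heuristic desirability, $\mathcal{N}_{i}^{k}$ is the set of customers still feasible for ant $k$, and $\Delta\tau_{ij}^{k}$ grows with the quality of ant $k$'s route.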

  15. Analysis of Massively Parallel Discrete-Ordinates Transport Sweep Algorithms with Collisions

    International Nuclear Information System (INIS)

    Bailey, T.S.; Falgout, R.D.

    2008-01-01

    We present theoretical scaling models for a variety of discrete-ordinates sweep algorithms. In these models, we pay particular attention to the way each algorithm handles collisions. A collision is defined as a processor having multiple angles ready to be swept during one stage of the sweep. The models also take into account how subdomains are assigned to processors and how angles are grouped during the sweep. We describe a data-driven algorithm that resolves collisions efficiently during the sweep, as well as other algorithms that have been designed to avoid collisions completely. Our models are validated using the ARGES and AMTRAN transport codes. We then use the models to study and predict scaling trends in all of the sweep algorithms.

  16. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    Science.gov (United States)

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  17. Clustering Educational Digital Library Usage Data: A Comparison of Latent Class Analysis and K-Means Algorithms

    Science.gov (United States)

    Xu, Beijie; Recker, Mimi; Qi, Xiaojun; Flann, Nicholas; Ye, Lei

    2013-01-01

    This article examines clustering as an educational data mining method. In particular, two clustering algorithms, the widely used K-means and the model-based Latent Class Analysis, are compared, using usage data from an educational digital library service, the Instructional Architect (IA.usu.edu). Using a multi-faceted approach and multiple data…

  18. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  19. Contribution of neutron activation analysis and radioanalytical techniques to trace element speciation in environmental analysis

    International Nuclear Information System (INIS)

    Gallorini, M.; Orvini, E.; Sabbioni, E.; Pietra, R.

    1984-01-01

    The feasibility of the NAA in contributing to trace element speciation in environmental and biological samples is shown by a few selected applications. They are: (i) the determination of different species of vanadium, arsenic and selenium in freshwater. The pre-irradiation treatments in which the different chemical species are selectively isolated are critically evaluated. Problems arising from possible contaminations or losses and from blanks are discussed; (ii) the use of NAA in metallobiochemical speciation once biochemical methods of cellular fractionation for the isolation of metallobiocomplexes are developed. Examples concern the biochemical speciation of vanadium in human blood and the development of a method for speciation of inorganic and organic arsenic compounds in biological samples. The use of radioanalytical techniques in developing the separation procedures of different trace metal chemical and biochemical species is also briefly discussed. 24 references, 5 figures, 4 tables

  20. Development of Ray Tracing Algorithms for Scanning Plane and Transverse Plane Analysis for Satellite Multibeam Application

    Directory of Open Access Journals (Sweden)

    N. H. Abd Rahman

    2014-01-01

    Full Text Available Reflector antennas have been widely used in many areas. In the implementation of a parabolic reflector antenna for broadcasting satellite applications, it is essential for the spacecraft antenna to provide a precise contoured beam to effectively serve the required region. For this purpose, combinations of more than one beam are required. Therefore, a tool utilizing the ray tracing method is developed to calculate precise off-axis beams for a multibeam antenna system. In the multibeam system, each beam is fed from a different feed position to allow the main beam to be radiated in the exact direction on the coverage area. Thus, a detailed study of the caustics of a parabolic reflector antenna is performed and presented in this paper, to investigate the behaviour of the rays and its relation to various antenna parameters. In order to produce accurate data for the analysis, the caustic behaviours are investigated in two distinct modes: the scanning plane and the transverse plane. This paper presents detailed discussions of the derivation of the ray tracing algorithms, the establishment of the equations of the caustic loci, and the verification of the method through calculation of the radiation pattern.
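
    The geometric core of such a ray tracer is small. A sketch under the common paraboloid model $z=(x^{2}+y^{2})/(4f)$ follows; the surface choice and all names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of an incident ray direction d off a surface
    with normal n: r = d - 2 (d . n) n, with both vectors normalised."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

def paraboloid_normal(x, y, f):
    """Unit normal of the paraboloid z = (x^2 + y^2) / (4 f) with focal
    length f, the usual parabolic-reflector model."""
    n = np.array([-x / (2.0 * f), -y / (2.0 * f), 1.0])
    return n / np.linalg.norm(n)
```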

  1. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    International Nuclear Information System (INIS)

    Song, Shufang; Wang, Lu

    2017-01-01

    Global sensitivity analysis (GSA) is a very useful tool to evaluate the influence of input variables over their whole distribution range. Sobol's method is the most commonly used among the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages. - Highlights: • The GMDH-NN is improved to construct an explicit polynomial model of optimal complexity by self-organization. • The paper combines the improved GMDH-NN with HDMR expansions and uses it to compute Sobol' indices directly. • The method can be applied to uniform, normal and exponential distributions by using suitable orthogonal polynomials. • Engineering examples, e.g., electronic circuit models, can be solved by the presented method.

  2. An Analysis of OpenACC Programming Model: Image Processing Algorithms as a Case Study

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2014-06-01

    Full Text Available Graphics processing units and similar accelerators have been intensively used in general-purpose computations for several years. In the last decade, GPU architecture and organization changed dramatically to support an ever-increasing demand for computing power. Along with changes in hardware, novel programming models have been proposed, such as NVIDIA's Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL) by the Khronos group. Although numerous commercial and scientific applications have been developed using these two models, they still impose a significant challenge for less experienced users. There are users from various scientific and engineering communities who would like to speed up their applications without the need to deeply understand a low-level programming model and the underlying hardware. In 2011, the OpenACC programming model was launched. Much like OpenMP for multicore processors, OpenACC is a high-level, directive-based programming model for manycore processors like GPUs. This paper presents an analysis of the OpenACC programming model and its applicability in typical domains like image processing. Three simple image processing algorithms have been implemented for execution on the GPU with OpenACC. The results were compared with their sequential counterparts and are briefly discussed.

  3. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals only. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification of complex structures. In this study, a simple iterative variant of the conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals with valid ordering by iteratively reordering the extracted mixing matrix and reconstructing the finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems of complex structures, an experiment has been carried out on a scaled submarine mock-up. The results show that the proposed method can resolve the inherent problems of a conventional ICA technique.
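
    A minimal sketch of the two ingredients described above, assuming synthetic signals: ICA separation (here scikit-learn's FastICA) followed by reordering and sign correction of the components by their correlation with reference signals measured near the sources. The iterative reconvergence step of the proposed method is omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 50 * t),             # hypothetical source 1
                np.sign(np.sin(2 * np.pi * 13 * t))]    # hypothetical source 2
mixed = sources @ rng.normal(size=(2, 2)).T             # unknown mixing

estimated = FastICA(n_components=2, random_state=0).fit_transform(mixed)

# Reorder (and sign-correct) the components by their correlation with
# reference signals measured near the sources.
reference = sources + 0.1 * rng.normal(size=sources.shape)
corr = np.corrcoef(estimated.T, reference.T)[:2, 2:]    # components x references
order = np.abs(corr).argmax(axis=0)
signs = np.sign(corr[order, range(2)])
recovered = estimated[:, order] * signs
print("component/reference correlations:\n", np.round(corr, 2))
```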

  4. Stability Analysis of Tunnel-Slope Coupling Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Tao Luo

    2015-07-01

    Full Text Available Tunnels, being constrained by terrain and routes, usually have their entrances and exits located on slopes. Thus, it is necessary to carry out stability analysis by treating the tunnel and slope as a single entity. In this study, based on the Janbu slices method, a model for calculating the stability of the original slope-tunnel-bank slope is established. The genetic algorithm is used to design the calculation variables, the safety coefficient expression and the fitness function. The stability of the original slope-tunnel-bank slope under different conditions is calculated with a program developed in MATLAB. We found that the bearing capacity of the original slopes is reduced as the tunnels are excavated, and that the safety coefficient gradually decreases as the loads of the embankment construction increase. After the embankment was constructed, the safety coefficient was 1.38, which is larger than the value of 1.3 specified by China's standards. Thus, the original slope-tunnel-bank slope would remain in a stable state.
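
    A minimal sketch of the genetic-algorithm search described above: candidate slip-surface parameters evolve by selection, crossover and mutation toward the minimum safety coefficient. The `safety_factor` function is a hypothetical smooth stand-in for the paper's Janbu-slices expression.

```python
import numpy as np

rng = np.random.default_rng(1)

def safety_factor(x):                   # hypothetical smooth surrogate
    cx, cy, r = x
    return 1.3 + 0.5 * ((cx - 2.0)**2 + (cy - 5.0)**2 + (r - 8.0)**2)

lo, hi = np.array([0., 0., 1.]), np.array([10., 10., 20.])
pop = rng.uniform(lo, hi, size=(40, 3))             # initial population

for gen in range(100):
    fit = np.array([safety_factor(p) for p in pop])
    elite = pop[fit.argsort()[:20]]                 # selection (minimize F_s)
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    alpha = rng.random((40, 1))
    pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
    pop += rng.normal(0, 0.1, pop.shape)            # mutation
    pop = np.clip(pop, lo, hi)

best = pop[np.argmin([safety_factor(p) for p in pop])]
print("critical surface parameters:", best.round(2),
      "F_s =", round(safety_factor(best), 3))
```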

  5. Comparison of different image analysis algorithms on MRI to predict physico-chemical and sensory attributes of loin

    DEFF Research Database (Denmark)

    Caballero, Daniel; Caro, Andrés; Dahl, Anders Bjorholm

    2018-01-01

    -chemical and sensory analysis. CFA showed a low relationship with the quality parameters of loins, while the remaining algorithms achieved correlation coefficients higher than 0.5, with OPFTA reaching the highest correlation coefficients in all cases except for the L* color coordinate, for which GLCM obtained the highest...... correlation coefficient. These high correlation coefficients confirm the new algorithm as an alternative to the other computer vision approaches for computing the physico-chemical and sensory parameters of meat products in a non-destructive and efficient way....

  6. Analysis of Ion Currents Contribution to Repolarization in Human Heart Failure Using Computer Models

    Energy Technology Data Exchange (ETDEWEB)

    Marotta, F.; Paci, M.A.; Severi, S.; Trenor, B.

    2016-07-01

    The mechanisms underlying repolarization of the ventricular action potential (AP) are a subject of research for anti-arrhythmic drugs. In fact, prolongation of the AP occurs in several conditions of heart disease, such as heart failure, a major precursor of serious arrhythmias. In this study, we investigated the phenomenon of repolarization reserve, defined as the capacity of the cell to repolarize in case of a functional loss, and that of all-or-none repolarization, which depends on the delicate balance of inward and outward currents in the different phases of the AP, under conditions of human heart failure (HF). To simulate HF conditions, the O'Hara et al. human AP model was modified and specific protocols for all-or-none repolarization were applied. Our results show that in early repolarization the threshold for all-or-none repolarization is not altered in HF, even though a decrease in potassium currents can be observed. To quantify the contribution of the individual ion currents to HF-induced AP prolongation, we used a novel piecewise-linear approximation approach proposed by Paci et al. In particular, INaL and ICaL are mainly responsible for the APD prolongation due to HF (85 and 35 ms, respectively). Our results highlight this novel algorithm as a powerful tool for obtaining a more complete picture of the complex ionic mechanisms underlying this disease and confirm the important role of the late sodium current in HF repolarization. (Author)

  7. Algorithms for the Analysis of 3D Magnetic Resonance Angiography Images

    International Nuclear Information System (INIS)

    Tizon, Xavier

    2004-01-01

    Atherosclerosis is a disease of the arterial wall, progressively impairing blood flow as it spreads throughout the body. The heart attacks and strokes that result from this condition cause more deaths than cancer in industrial countries. Angiography refers to the group of imaging techniques used throughout the diagnosis, treatment planning and follow-up of atherosclerosis. In recent years, Magnetic Resonance Angiography (MRA) has shown promising abilities to supplant conventional, invasive, X-ray-based angiography. In order to fully benefit from this modality, there is a need for more objective and reproducible methods. This thesis shows, in two applications, how computerized image analysis can help define and implement such methods. First, by using segmentation to improve the visualization of blood-pool contrast-enhanced (CE)-MRA, with an additional application in coronary Computerized Tomographic Angiography. We show that, using a limited amount of user interaction and an algorithmic framework borrowed from graph theory and fuzzy logic theory, we can simplify the display of complex 3D structures like vessels. Second, by proposing a methodology to analyze the geometry of arteries in whole-body CE-MRA. The vessel centreline is extracted, and geometrical properties of this 3D curve are measured, to improve interpretation of the angiograms. This represents a more global approach than the conventional evaluation of atherosclerosis, as a first step towards screening for vascular diseases. We have developed the methods presented in this thesis with clinical practice in mind. However, they have the potential to be useful for other applications of computerized image analysis.

  8. Earthquake Prediction Analysis Based on Empirical Seismic Rate: The M8 Algorithm

    International Nuclear Information System (INIS)

    Molchan, G.; Romashkova, L.

    2010-07-01

    The quality of space-time earthquake prediction is usually characterized by a two-dimensional error diagram (n, τ), where n is the rate of failures-to-predict and τ is the normalized measure of space-time alarm. The most reasonable space measure for analysis of a prediction strategy is the rate of target events λ(dg) in a sub-area dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M ≥ 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw ≥ 5.5, 1977-2004, and the magnitude range of target events 8.0 ≤ M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm. (author)
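
    A minimal sketch of the quantities defined above, with hypothetical alarm and event data: n is the fraction of target events missed by the alarm, τ is the space-time alarm measure weighted by the event rate λ(dg), and H = 1 - (n + τ).

```python
import numpy as np

rng = np.random.default_rng(2)
alarm = rng.random((50, 100)) < 0.15         # alarm[g, t]: cell g alarmed at time t
rate = rng.random(50); rate /= rate.sum()    # target-event rate measure lambda(dg)
events = [(rng.integers(50), rng.integers(100)) for _ in range(20)]

n = np.mean([not alarm[g, t] for g, t in events])       # failures-to-predict
tau = (rate[:, None] * alarm).sum() / alarm.shape[1]    # measure-weighted alarm
H = 1.0 - (n + tau)
print(f"n = {n:.2f}, tau = {tau:.2f}, H = {H:.2f}")
```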

  9. Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm

    Science.gov (United States)

    Molchan, G.; Romashkova, L.

    2010-12-01

    The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.

  10. Data Retrieval Algorithm and Uncertainty Analysis for a Miniaturized, Laser Heterodyne Radiometer

    Science.gov (United States)

    Miller, J. H.; Melroy, H.; Wilson, E. L.; Clarke, G. B.

    2013-12-01

    database for local zenith angle calculations to initialize path trajectory calculations. In our newest version of the retrieval algorithm, the Python programming language module PySolar is used for the path geometry calculations. For temperature, pressure, and humidity profiles with altitude we use the Modern-Era Retrospective Analysis for Research and Applications (MERRA) data, compiled every 6 hours. Spectral simulation is accomplished by integrating short path segments along the trajectory using the SpecSyn spectral simulation suite developed at GW. Column concentrations are extracted by minimizing the residuals between the observed and modeled spectra using the Nelder-Mead simplex algorithm as implemented in the SciPy Python module. We will also present an assessment of the uncertainty in the reported concentrations arising from assumptions made in the meteorological data, LHR instrument and tracker noise, and radio frequency bandwidth, and describe additional future goals in instrument development and deployment targets.
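
    A minimal sketch of the final retrieval step, assuming a toy one-line absorption model in place of SpecSyn: the column amount is found by minimizing the observed-minus-modeled residuals with SciPy's Nelder-Mead simplex.

```python
import numpy as np
from scipy.optimize import minimize

nu = np.linspace(-1.0, 1.0, 200)                 # frequency offset grid

def model_spectrum(column, width=0.2):
    """Toy Beer-Lambert transmission through a single Gaussian line."""
    return np.exp(-column * np.exp(-(nu / width)**2))

rng = np.random.default_rng(3)
observed = model_spectrum(0.7) + 0.005 * rng.normal(size=nu.size)

res = minimize(lambda p: np.sum((observed - model_spectrum(p[0]))**2),
               x0=[0.1], method="Nelder-Mead")
print("retrieved column (arb. units):", res.x[0])
```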

  11. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    Science.gov (United States)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids. The strength of the generalized Yee-algorithm is that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high-performance computers in a highly efficient manner.
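
    For intuition about the time-marching at the heart of any Yee scheme, here is the standard structured-grid 1-D Yee update in normalized units; the generalized algorithm replaces these simple array shifts with updates over the faces of unstructured dual polyhedra.

```python
import numpy as np

nz, nt, c = 400, 600, 0.5            # cells, time steps, Courant number
Ex, Hy = np.zeros(nz), np.zeros(nz)

for n in range(nt):
    Hy[:-1] += c * (Ex[1:] - Ex[:-1])      # Faraday's law (half step)
    Ex[1:]  += c * (Hy[1:] - Hy[:-1])      # Ampere's law  (half step)
    Ex[nz // 4] += np.exp(-((n - 60) / 20.0)**2)   # soft Gaussian source

print("peak |Ex| after propagation:", np.abs(Ex).max())
```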

  12. Analysis of Online DBA Algorithm with Adaptive Sleep Cycle in WDM EPON

    Science.gov (United States)

    Pajčin, Bojan; Matavulj, Petar; Radivojević, Mirjana

    2018-05-01

    In order to manage Quality of Service (QoS) and energy efficiency in the optical access network, an online Dynamic Bandwidth Allocation (DBA) algorithm with an adaptive sleep cycle is presented. This DBA algorithm has the ability to allocate additional bandwidth to the end user within a single sleep cycle whose duration changes depending on the current buffer occupancy. The purpose of this DBA algorithm is to tune the duration of the sleep cycle depending on the network load in order to provide service to the end user without violating strict QoS requirements in all network operating conditions.

  13. Analysis of the Command and Control Segment (CCS) attitude estimation algorithm

    Science.gov (United States)

    Stockwell, Catherine

    1993-01-01

    This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin-axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.

  14. Stochastic approach for round-off error analysis in computing application to signal processing algorithms

    International Nuclear Information System (INIS)

    Vignes, J.

    1986-01-01

    Any result of an algorithm executed on a computer always contains an error resulting from floating-point arithmetic round-off error propagation. Furthermore, signal processing algorithms are also generally performed with data containing errors. The permutation-perturbation method, also known under the name CESTAC (controle et estimation stochastique d'arrondi de calcul), is a very efficient practical method for evaluating these errors and consequently for estimating the exact significant decimal figures of any result of algorithms performed on a computer. The stochastic approach of this method, its probabilistic proof, and the perfect agreement between its theoretical and practical aspects are described in this paper.
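
    A toy version of the permutation-perturbation idea, under the simplifying assumption that randomly perturbing the data at machine precision stands in for randomizing the rounding: the computation is repeated several times and the spread of the results estimates the number of stable significant digits.

```python
import numpy as np

def perturbed_runs(x, runs=5):
    rng = np.random.default_rng(4)
    results = []
    for _ in range(runs):
        # Flip the last bit of every datum, randomly up or down.
        xp = x * (1.0 + rng.choice([-1.0, 1.0], x.size) * np.finfo(float).eps)
        # Deliberately ill-conditioned one-pass variance formula.
        results.append(np.sum(xp**2) - np.sum(xp)**2 / xp.size)
    return np.array(results)

r = perturbed_runs(np.full(10_000, 1.0e4) + np.arange(10_000) * 1e-4)
mean, spread = r.mean(), r.std()
digits = -np.log10(spread / abs(mean)) if spread > 0 else 16
print(f"result ~ {mean:.6e}, stable significant digits ~ {digits:.1f}")
```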

  15. The contribution of particle swarm optimization to three-dimensional slope stability analysis.

    Science.gov (United States)

    Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.
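
    A minimal PSO sketch of the kind applied here: each particle encodes slip-surface parameters and the swarm seeks the minimum factor of safety. The `factor_of_safety` function is a hypothetical smooth surrogate, not the paper's 3D limit-equilibrium computation.

```python
import numpy as np

rng = np.random.default_rng(5)

def factor_of_safety(x):               # hypothetical smooth surrogate
    return 1.2 + 0.3 * np.sum((x - np.array([3.0, 4.0, 6.0]))**2, axis=-1)

n, dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5
pos = rng.uniform(0, 10, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), factor_of_safety(pos)
gbest = pbest[pval.argmin()]

for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = factor_of_safety(pos)
    better = val < pval                            # update personal bests
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()]                   # update global best

print("critical surface:", gbest.round(2), "F_s =", round(pval.min(), 3))
```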

  16. The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis

    Science.gov (United States)

    A Rashid, Ahmad Safuan; Ali, Nazri

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652

  17. The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis

    Directory of Open Access Journals (Sweden)

    Roohollah Kalatehjari

    2014-01-01

    Full Text Available Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.

  18. A 3D Two-node and One-node HCMFD Algorithm for Pin-wise Reactor Analysis

    International Nuclear Information System (INIS)

    Kim, Jaeha; Kim, Yonghee

    2016-01-01

    To maximize the parallel computational efficiency, an iterative local-global strategy is adopted in the HCMFD algorithm. The global eigenvalue problem is solved by one-node CMFD, and the local fixed-source problems are solved by two-node CMFD based on the pin-wise nodal solutions. In such a local-global scheme, the computational cost is mostly concentrated in solving the local problems, but these can be solved in parallel, so that parallel computing can be applied effectively. Previously, the feasibility of the HCMFD algorithm was evaluated only in a 2-D scheme. In this paper, the 3D HCMFD algorithm, with some possible variations in treating the axial direction, is introduced. The HCMFD algorithm was successfully extended to a 3-D core analysis without any numerical instability, even though the axial mesh size in the local problems is quite different from the x-y node size. We have shown that 3D pin-wise core analysis can be done very effectively within the HCMFD framework. Additionally, it was demonstrated that the parallel efficiency of the new 3D HCMFD scheme can be quite high on a simple OpenMP parallel architecture. It is concluded that 3D HCMFD will enable efficient pin-wise 3D core analysis.

  19. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    Science.gov (United States)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth weighting matrix and other methods. Aiming at the problems brought by big data in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from Vinton Dome, we obtain improved results. It is also shown that the improved inversion algorithm is effective and feasible. The performance of the parallel algorithm we designed is better than that of the other CUDA-based implementations; the maximum speedup can exceed 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger scales of data, and the new analysis method is practical.

  20. Cryptographic analysis on the key space of optical phase encryption algorithm based on the design of discrete random phase mask

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Li, Zengyan

    2013-07-01

    The key space of a phase encryption algorithm using a discrete random phase mask is investigated by numerical simulation in this paper. A random phase mask with finite, discrete phase levels is considered the core component in most practical optical encryption architectures. The key space analysis is based on the design criteria of the discrete random phase mask. The roles of the random amplitude mask and the random phase mask in an optical encryption system are identified from the perspective of confusion and diffusion. The properties of the discrete random phase mask in a practical double random phase encoding scheme working in both amplitude encoding (AE) and phase encoding (PE) modes are comparatively analyzed. The key space of the random phase encryption algorithm is evaluated considering both the encryption quality and the resistance to brute-force attack. A method for enlarging the key space of the phase encryption algorithm is also proposed to enhance the security of optical phase encryption techniques.
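
    A back-of-the-envelope sketch of the key-space size for a discrete random phase mask: with L phase levels on each of N independent pixels there are L^N equally likely masks, i.e. N·log₂(L) bits of key. The mask resolution and level count below are hypothetical.

```python
import math

N = 256 * 256          # hypothetical mask resolution (pixels)
L = 16                 # hypothetical number of discrete phase levels

key_bits = N * math.log2(L)
print(f"key space ~ {L}^{N} keys = 2^{key_bits:.0f} ({key_bits:.0f} bits)")
```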

  1. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis

    International Nuclear Information System (INIS)

    Sun, Y.; Hou, Y.; Yan, Y.

    2004-01-01

    With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving more and more attention. It is well known that in existing cone-beam filtered backprojection reconstruction algorithms the cone angle is restricted to a narrow range. The reason for this limitation is the incompleteness of the projection data when the cone angle increases; thus the size of the tested workpiece is limited. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm taking angular correction into account is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of the projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited, and the algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can modify the wavelet decomposition series adaptively according to the demanded resolution of the local reconstructed area. Therefore the computational cost and reconstruction time can be reduced, and the quality of the reconstructed image can also be improved. (author)

  2. A MapReduce-Based Parallel Frequent Pattern Growth Algorithm for Spatiotemporal Association Analysis of Mobile Trajectory Big Data

    Directory of Open Access Journals (Sweden)

    Dawen Xia

    2018-01-01

    Full Text Available Frequent pattern mining is an effective approach for spatiotemporal association analysis of mobile trajectory big data in data-driven intelligent transportation systems. While existing parallel algorithms have been successfully applied to frequent pattern mining of large-scale trajectory data, two major challenges remain: how to overcome the inherent defects of Hadoop in coping with taxi trajectory big data, including massive small files, and how to discover implicit spatiotemporal frequent patterns with MapReduce. To address these challenges, this paper presents a MapReduce-based Parallel Frequent Pattern growth (MR-PFP) algorithm to analyze the spatiotemporal characteristics of taxi operation using large-scale taxi trajectories, with massive small file processing strategies on a Hadoop platform. More specifically, we first implement three methods, namely Hadoop Archives (HAR), CombineFileInputFormat (CFIF), and Sequence Files (SF), to overcome the existing defects of Hadoop, and then propose two strategies based on their performance evaluations. Next, we incorporate SF into the Frequent Pattern growth (FP-growth) algorithm and implement the optimized FP-growth algorithm on a MapReduce framework. Finally, we analyze the characteristics of taxi operation in both the spatial and temporal dimensions by MR-PFP in parallel. The results demonstrate that MR-PFP is superior to the existing Parallel FP-growth (PFP) algorithm in efficiency and scalability.
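
    A minimal single-machine sketch of the frequent-pattern-mining core that MR-PFP parallelizes, using the FP-growth implementation in the mlxtend library on hypothetical "trajectory cell" transactions.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Each transaction: spatiotemporal cells visited by one taxi trip (toy data).
trips = [["cell_A", "cell_B", "cell_C"],
         ["cell_A", "cell_B"],
         ["cell_A", "cell_D"],
         ["cell_B", "cell_C"],
         ["cell_A", "cell_B", "cell_C"]]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(trips).transform(trips), columns=te.columns_)
patterns = fpgrowth(onehot, min_support=0.4, use_colnames=True)
print(patterns.sort_values("support", ascending=False))
```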

  3. ANALYSIS AND CHALLENGES REGARDING WOMEN CONTRIBUTION TO THE SME DEVELOPMENT IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Ioana LUPASC

    2013-12-01

    Full Text Available In Europe, entrepreneurship is actively promoted, and SMEs account for over 99% of enterprises. SMEs are viewed as the "engine of the economy"; they are the ones generating jobs. Research in the entrepreneurial field has shown a strong relation between the level of entrepreneurial activity (the frequency with which new enterprises are created in an economy) and economic growth: the more intense the entrepreneurial activity, the more positive its effects on the economy in general. In the past years, Europe has shown increased interest in female entrepreneurship. This support has resulted in the creation and implementation of public policies to encourage women to engage in business, and thereby stimulate economic growth. The contribution of women to the labor market is becoming more and more relevant. We present an analysis of the challenges and characteristics of women's contribution to the development of SMEs in Romania.

  4. Analysis of Ant Colony Optimization and Population-Based Evolutionary Algorithms on Dynamic Problems

    DEFF Research Database (Denmark)

    Lissovoi, Andrei

    This thesis presents new running time analyses of nature-inspired algorithms on various dynamic problems. It aims to identify and analyse the features of algorithms and problem classes which allow efficient optimization to occur in the presence of dynamic behaviour. We consider the following settings: λ-MMAS on Dynamic Shortest Path Problems. We investigate how increasing the number of ants simulated per iteration may help an ACO algorithm to track the optimum in a dynamic problem. It is shown that while a constant number of ants per vertex is sufficient to track some oscillations, there also... the dynamic optimum for finite alphabets up to size μ, while MMAS is able to do so for any finite alphabet size. Parallel Evolutionary Algorithms on Maze. We prove that while a (1 + λ) EA is unable to track the optimum of the dynamic fitness function Maze for offspring population sizes up to λ = O(n^(1-ε))...

  5. Analysis of coincidence γ-ray spectra using advanced background elimination, unfolding and fitting algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Morhac, M., E-mail: fyzimiro@savba.sk, fyzimiro@flnr.jinr.ru; Matousek, V., E-mail: matousek@savba.sk; Kliman, J.; Krupa, L.; Jandel, M.

    2003-04-21

    Efficient algorithms to analyze multiparameter γ-ray spectra are presented. They allow one to search for peaks, separate peaks from background, improve the resolution, and fit 1-, 2- and 3-parameter γ-ray spectra.

  6. Performance Analysis of Blind Beamforming Algorithms in Adaptive Antenna Array in Rayleigh Fading Channel Model

    International Nuclear Information System (INIS)

    Yasin, M; Akhtar, Pervez; Pathan, Amir Hassan

    2013-01-01

    In this paper, we analyze the performance of adaptive blind algorithms, i.e. the Kaiser Constant Modulus Algorithm (KCMA) and the Hamming CMA (HAMCMA), against the CMA in a wireless cellular communication system using a digital modulation technique. These blind algorithms are used in the digital signal processor of an adaptive antenna to make it smart and to change the weights of the antenna array system dynamically. The simulation results reveal that KCMA and HAMCMA provide minimum mean square error (MSE) with 1.247 dB and 1.077 dB antenna gain enhancement and a 75% reduction in bit error rate (BER), respectively, over the CMA. Therefore, the KCMA and HAMCMA algorithms give a cost-effective solution for a communication system.
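
    A minimal sketch of the constant-modulus update that all three algorithms build on: the weight vector is adapted so the array output has constant modulus R, without a training sequence. The array geometry, signal and step size are hypothetical, and the Kaiser/Hamming weighting variants are not shown.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ant, n_samp, mu, R = 4, 5000, 1e-3, 1.0

s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_samp)))  # QPSK source
steer = np.exp(-1j * np.pi * np.arange(n_ant) * np.sin(np.deg2rad(20)))
X = np.outer(steer, s) + 0.1 * (rng.normal(size=(n_ant, n_samp))
                                + 1j * rng.normal(size=(n_ant, n_samp)))

w = np.zeros(n_ant, complex); w[0] = 1.0      # initial weight vector
for k in range(n_samp):
    y = np.vdot(w, X[:, k])                   # array output y = w^H x
    e = y * (np.abs(y)**2 - R)                # constant-modulus error
    w -= mu * np.conj(e) * X[:, k]            # stochastic-gradient update

out = np.array([np.vdot(w, X[:, k]) for k in range(n_samp - 500, n_samp)])
print("output modulus spread after convergence:", np.abs(out).std().round(4))
```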

  7. Analysis of Different Feature Selection Criteria Based on a Covariance Convergence Perspective for a SLAM Algorithm

    Science.gov (United States)

    Auat Cheein, Fernando A.; Carelli, Ricardo

    2011-01-01

    This paper introduces several non-arbitrary feature selection techniques for a Simultaneous Localization and Mapping (SLAM) algorithm. The feature selection criteria are based on the determination of the most significant features from a SLAM convergence perspective. The SLAM algorithm implemented in this work is a sequential EKF (Extended Kalman Filter) SLAM. The feature selection criteria are applied in the correction stage of the SLAM algorithm, restricting it to correcting the state estimate with only the most significant features. This restriction also reduces the processing time of the SLAM algorithm. Several experiments with a mobile robot are shown in this work. The experiments concern map reconstruction and a comparison of the performance of the different proposed techniques. The experiments were carried out in an outdoor environment composed of trees, although the results shown herein are not restricted to a special type of feature. PMID:22346568

  8. Meta-analysis: how does posterior parietal cortex contribute to reasoning?

    Science.gov (United States)

    Wendelken, Carter

    2015-01-01

    Reasoning depends on the contribution of posterior parietal cortex (PPC). But PPC is involved in many basic operations—including spatial attention, mathematical cognition, working memory, long-term memory, and language—and the nature of its contribution to reasoning is unclear. Psychological theories of the processes underlying reasoning make divergent claims about the neural systems that are likely to be involved, and better understanding the specific contribution of PPC can help to inform these theories. We set out to address several competing hypotheses, concerning the role of PPC in reasoning: (1) reasoning involves application of formal logic and is dependent on language, with PPC activation for reasoning mainly reflective of linguistic processing; (2) reasoning involves probabilistic computation and is thus dependent on numerical processing mechanisms in PPC; and (3) reasoning is built upon the representation and processing of spatial relations, and PPC activation associated with reasoning reflects spatial processing. We conducted two separate meta-analyses. First, we pooled data from our own studies of reasoning in adults, and examined activation in PPC regions of interest (ROI). Second, we conducted an automated meta-analysis using Neurosynth, in which we examined overlap between activation maps associated with reasoning and maps associated with other key functions of PPC. In both analyses, we observed reasoning-related activation concentrated in the left Inferior Parietal Lobe (IPL). Reasoning maps demonstrated the greatest overlap with mathematical cognition. Maintenance, visuospatial, and phonological processing also demonstrated some overlap with reasoning, but a large portion of the reasoning map did not overlap with the map for any other function. This evidence suggests that the PPC’s contribution to reasoning may be most closely related to its role in mathematical cognition, but that a core component of this contribution may be specific to

  9. Meta-analysis: How does posterior parietal cortex contribute to reasoning?

    Directory of Open Access Journals (Sweden)

    Carter eWendelken

    2015-01-01

    Full Text Available Reasoning depends on the contribution of posterior parietal cortex (PPC). But PPC is involved in many basic operations -- including spatial attention, mathematical cognition, working memory, long-term memory, and language -- and the nature of its contribution to reasoning is unclear. Psychological theories of the processes underlying reasoning make divergent claims about the neural systems that are likely to be involved, and better understanding the specific contribution of PPC can help to inform these theories. We set out to address several competing hypotheses, concerning the role of PPC in reasoning: (1) reasoning involves application of formal logic and is dependent on language, with PPC activation for reasoning mainly reflective of linguistic processing; (2) reasoning involves probabilistic computation and is thus dependent on numerical processing mechanisms in PPC; and (3) reasoning is built upon the representation and processing of spatial relations, and PPC activation associated with reasoning reflects spatial processing. We conducted two separate meta-analyses. First, we pooled data from our own studies of reasoning in adults, and examined activation in PPC regions of interest. Second, we conducted an automated meta-analysis using Neurosynth, in which we examined overlap between activation maps associated with reasoning and maps associated with other key functions of PPC. In both analyses, we observed reasoning-related activation concentrated in the left Inferior Parietal Lobe (IPL). Reasoning maps demonstrated the greatest overlap with mathematical cognition. Maintenance, visuospatial, and phonological processing also demonstrated some overlap with reasoning, but a large portion of the reasoning map did not overlap with the map for any other function. This evidence suggests that the PPC's contribution to reasoning may be most closely related to its role in mathematical cognition, but that a core component of this contribution may be specific

  10. Direct calculation of modal contributions to thermal conductivity via Green–Kubo modal analysis

    International Nuclear Information System (INIS)

    Lv, Wei; Henry, Asegun

    2016-01-01

    We derived a new method for direct calculation of the modal contributions to thermal conductivity, which is termed Green–Kubo modal analysis (GKMA). The GKMA method combines the lattice dynamics formalism with the Green–Kubo formula for thermal conductivity, such that the thermal conductivity becomes a direct summation of modal contributions, where one need not define the phonon velocity. As a result, the GKMA method can be applied to any material or group of atoms where the atoms vibrate around stable equilibrium positions, which includes non-stoichiometric compounds, random alloys, amorphous materials and even rigid molecules. By using molecular dynamics simulations to obtain the time history of each mode's contribution to the heat current, one naturally includes anharmonicity to full order and can obtain insight into the interactions between different modes through the cross-correlations. As an example, we applied the GKMA method to crystalline and amorphous silicon. The modal contributions at each frequency result from the analysis and thereby allow one to apply a quantum correction to the mode heat capacity to determine the temperature dependence of thermal conductivity. The predicted temperature-dependent thermal conductivity for amorphous silicon shows the best agreement with experiments to date. The GKMA method provides new insight into the nature of phonon transport, as it casts the problem in terms of mode–mode correlation instead of scattering, and provides a general unified formalism that can be used to understand phonon–phonon interactions in essentially any class of materials or structures where the atoms vibrate around stable equilibrium sites. (paper)
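
    A minimal sketch of the Green–Kubo relation that GKMA decomposes mode by mode: thermal conductivity as the integral of the heat-flux autocorrelation, κ = V/(k_B T²) ∫⟨J(0)J(t)⟩dt. The heat-flux series below is a synthetic stand-in for molecular dynamics output, so the resulting number is illustrative only.

```python
import numpy as np

kB = 1.380649e-23                      # Boltzmann constant, J/K
T, V, dt = 300.0, 1e-26, 1e-15         # hypothetical temperature, volume, timestep

rng = np.random.default_rng(7)
# Synthetic heat flux with a short correlation time (AR(1) process).
J = np.empty(100_000)
J[0] = 0.0
for i in range(1, J.size):
    J[i] = 0.99 * J[i - 1] + rng.normal(scale=1e-10)

n_corr = 2000                          # correlation window (steps)
acf = np.array([np.mean(J[:J.size - k] * J[k:]) for k in range(n_corr)])
kappa = V / (kB * T**2) * np.trapz(acf, dx=dt)
print(f"kappa ~ {kappa:.3e} W/(m K)  (synthetic data, illustrative only)")
```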

  11. A canonical correlation analysis based EMG classification algorithm for eliminating electrode shift effect.

    Science.gov (United States)

    Zhe Fan; Zhong Wang; Guanglin Li; Ruomei Wang

    2016-08-01

    Motion classification based on surface electromyography (sEMG) pattern recognition has achieved good results under experimental conditions, but it remains a challenge for clinical implementation and practical application. Many factors contribute to the difficulty of clinical use of EMG-based dexterous control. The most obvious and important is the noise in the EMG signal caused by electrode shift, muscle fatigue, motion artifact, the inherent instability of the signal, and biological signals such as the electrocardiogram. In this paper, a novel method based on Canonical Correlation Analysis (CCA) was developed to eliminate the reduction in classification accuracy caused by electrode shift. The average classification accuracy of our method was above 95% for the healthy subjects. In the process, we validated the influence of electrode shift on motion classification accuracy and discovered a strong correlation (correlation coefficient > 0.9) between shifted-position data and normal-position data.
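
    A minimal sketch of the CCA idea, assuming synthetic feature matrices in place of sEMG features: scikit-learn's CCA learns paired projections that maximally correlate shifted-electrode data with normal-position data, providing a shared space in which the shift effect is reduced.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(8)
latent = rng.normal(size=(300, 4))                  # shared muscle activity
X_normal = latent @ rng.normal(size=(4, 10)) + 0.1 * rng.normal(size=(300, 10))
X_shift = latent @ rng.normal(size=(4, 10)) + 0.1 * rng.normal(size=(300, 10))

cca = CCA(n_components=4).fit(X_shift, X_normal)
U, Vc = cca.transform(X_shift, X_normal)            # canonical variates
corr = [np.corrcoef(U[:, i], Vc[:, i])[0, 1] for i in range(4)]
print("canonical correlations:", np.round(corr, 3))
```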

  12. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    OpenAIRE

    Jonathan Bruce Shepherd; Tomohito Wada; David Rowlands; Daniel Arthur James

    2016-01-01

    With the increasing rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though there are a variety of algorithms that have been proposed to monito...

  13. Analysis of longitudinal variations in North Pacific alkalinity to improve predictive algorithms

    Science.gov (United States)

    Fry, Claudia H.; Tyrrell, Toby; Achterberg, Eric P.

    2016-10-01

    The causes of natural variation in alkalinity in the North Pacific surface ocean need to be investigated to understand the carbon cycle and to improve predictive algorithms. We used GLODAPv2 to test hypotheses on the causes of three longitudinal phenomena in Alk*, a tracer of calcium carbonate cycling. These phenomena are (a) an increase from east to west between 45°N and 55°N, (b) an increase from west to east between 25°N and 40°N, and (c) a minor increase from west to east in the equatorial upwelling region. Between 45°N and 55°N, Alk* is higher on the western than on the eastern side, and this is associated with denser isopycnals with higher Alk* lying at shallower depths. Between 25°N and 40°N, upwelling along the North American continental shelf causes higher Alk* in the east. Along the equator, a strong east-west trend was not observed, even though the upwelling on the eastern side of the basin is more intense, because the water brought to the surface is not high in Alk*. We created two algorithms to predict alkalinity, one for the entire Pacific Ocean north of 30°S and one for the eastern margin. The Pacific Ocean algorithm is more accurate than the commonly used algorithm published by Lee et al. (2006), of similar accuracy to the best previously published algorithm by Sasse et al. (2013), and is less biased with longitude than other algorithms in the subpolar North Pacific. Our eastern margin algorithm is more accurate than previously published algorithms.

  14. An improved parametrization and analysis of the EXP3++ algorithm for stochastic and adversarial bandits

    DEFF Research Database (Denmark)

    Seldin, Yevgeny; Lugosi, Gábor

    We present a new strategy for gap estimation in randomized algorithms for multiarmed bandits and combine it with the EXP3++ algorithm of Seldin and Slivkins (2014). In the stochastic regime the strategy reduces the dependence of regret on the time horizon from $(\ln t)^3$ to $(\ln t)^2$ and eliminates an additive factor of order $\Delta e^{\Delta^2}$, where $\Delta$ is the minimal gap of a problem instance. In the adversarial regime the regret guarantee remains unchanged.

  15. An improved parametrization and analysis of the EXP3++ algorithm for stochastic and adversarial bandits

    DEFF Research Database (Denmark)

    Seldin, Yevgeny; Lugosi, Gábor

    2017-01-01

    We present a new strategy for gap estimation in randomized algorithms for multiarmed bandits and combine it with the EXP3++ algorithm of Seldin and Slivkins (2014). In the stochastic regime the strategy reduces the dependence of regret on the time horizon from $(\ln t)^3$ to $(\ln t)^2$ and eliminates an additive factor of order $\Delta e^{\Delta^2}$, where $\Delta$ is the minimal gap of a problem instance. In the adversarial regime the regret guarantee remains unchanged.

  16. Experimental analysis of the performance of machine learning algorithms in the classification of navigation accident records

    Directory of Open Access Journals (Sweden)

    REIS, M V. S. de A.

    2017-06-01

    Full Text Available This paper aims to evaluate the use of machine learning techniques on a database of marine accidents. We analyzed and evaluated the main causes and types of marine accidents in the Northern Fluminense region. For this, machine learning techniques were used. The study showed that the modeling can be done satisfactorily using different configurations of classification algorithms, varying the activation functions and training parameters. The SMO (Sequential Minimal Optimization) algorithm showed the best performance result.

  17. An experimental analysis of design choices of multi-objective ant colony optimization algorithms

    OpenAIRE

    Lopez-Ibanez, Manuel; Stutzle, Thomas

    2012-01-01

    There have been several proposals on how to apply the ant colony optimization (ACO) metaheuristic to multi-objective combinatorial optimization problems (MOCOPs). This paper proposes a new formulation of these multi-objective ant colony optimization (MOACO) algorithms. This formulation is based on adding specific algorithm components for tackling multiple objectives to the basic ACO metaheuristic. Examples of these components are how to represent multiple objectives using pheromone and heuris...

  18. A hospital-level cost-effectiveness analysis model for toxigenic Clostridium difficile detection algorithms.

    Science.gov (United States)

    Verhoye, E; Vandecandelaere, P; De Beenhouwer, H; Coppens, G; Cartuyvels, R; Van den Abeele, A; Frans, J; Laffut, W

    2015-10-01

    Despite thorough analyses of the analytical performance of Clostridium difficile tests and test algorithms, the financial impact at hospital level has not been well described. Such a model should take institution-specific variables into account, such as incidence, request behaviour and infection control policies. The aim of this study was to calculate the total hospital costs of different test algorithms, accounting for days on which infected patients with toxigenic strains were not isolated and therefore posed an infectious risk for new/secondary nosocomial infections. A mathematical algorithm was developed to gather the above parameters (number of tests, local prevalence and hospital hygiene measures) using data from seven Flemish hospital laboratories (Bilulu Microbiology Study Group). Measures of sensitivity and specificity for the evaluated tests were taken from the literature. List prices and costs of assays were provided by the manufacturer or the institutions. The calculated cost included reagent costs, personnel costs and the financial burden following due and undue isolations and antibiotic therapies. Five different test algorithms were compared. A dynamic calculation model was constructed to evaluate the cost:benefit ratio of each algorithm for a set of institution- and time-dependent input variables (prevalence, cost fluctuations and test performances), making it possible to choose the most advantageous algorithm for a given setting. A two-step test algorithm with concomitant glutamate dehydrogenase and toxin testing, followed by a rapid molecular assay, was found to be the most cost-effective algorithm. This enabled resolution of almost all cases on the day of arrival, minimizing the number of unnecessary or missed isolations. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
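
    A minimal sketch of this kind of hospital-level cost model: the expected cost per tested patient follows from test sensitivity and specificity, local prevalence, reagent cost, and the costs of undue and missed isolations. All figures below are hypothetical placeholders, not the study's data.

```python
def expected_cost_per_patient(sens, spec, prevalence,
                              test_cost, isolation_cost, missed_case_cost):
    tp = prevalence * sens                  # correctly isolated cases
    fn = prevalence * (1.0 - sens)          # infectious patients not isolated
    fp = (1.0 - prevalence) * (1.0 - spec)  # undue isolations
    return (test_cost
            + (tp + fp) * isolation_cost
            + fn * missed_case_cost)

algorithms = {                              # hypothetical performance/cost data
    "toxin EIA only":       dict(sens=0.60, spec=0.99, test_cost=8.0),
    "GDH+toxin, then NAAT": dict(sens=0.95, spec=0.98, test_cost=18.0),
}
for name, a in algorithms.items():
    c = expected_cost_per_patient(a["sens"], a["spec"], prevalence=0.08,
                                  test_cost=a["test_cost"],
                                  isolation_cost=150.0, missed_case_cost=900.0)
    print(f"{name:24s} expected cost/patient: EUR {c:.2f}")
```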

  19. Statistical algorithm for automated signature analysis of power spectral density data

    International Nuclear Information System (INIS)

    Piety, K.R.

    1977-01-01

    A statistical algorithm has been developed and implemented on a minicomputer system for on-line surveillance applications. Power spectral density (PSD) measurements on process signals are the performance signatures that characterize the ''health'' of the monitored equipment. Statistical methods provide a quantitative basis for automating the detection of anomalous conditions. The surveillance algorithm has been tested on signals from neutron sensors, proximeter probes, and accelerometers to determine its potential for monitoring nuclear reactors and rotating machinery
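
    A minimal sketch of PSD-signature surveillance, assuming synthetic signals: the power spectral density of the monitored signal is estimated with Welch's method and compared bin-by-bin against a baseline signature, flagging bins that exceed a statistical threshold.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(9)
fs, n = 1000.0, 60_000

baseline = rng.normal(size=n)                        # healthy condition
t = np.arange(n) / fs
faulty = rng.normal(size=n) + 0.3 * np.sin(2 * np.pi * 120.0 * t)  # new tone

f, p_base = welch(baseline, fs=fs, nperseg=2048)     # baseline signature
_, p_test = welch(faulty, fs=fs, nperseg=2048)       # current signature

ratio = p_test / p_base
alarm = ratio > 3.0                                  # hypothetical threshold
print("alarmed frequencies (Hz):", np.round(f[alarm], 1))
```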

  20. Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface

    International Nuclear Information System (INIS)

    Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho

    2005-01-01

    While the SIMPLE algorithm is the most widely used for simulations of the flow phenomena that take place in industrial equipment or manufacturing processes, it is less often adopted for simulations of free surface flow. Though the SIMPLE algorithm is free from the time-step limitation, the free surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme includes a numerical method for solving simultaneous equations in its procedure. If the computation time of the SIMPLE algorithm can be reduced when it is applied to unsteady free surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm is presented for free surface flow. The broken water column problem is adopted for validation of the modified algorithm (MoSIMPLE) and for comparison to the conventional SIMPLE algorithm.