WorldWideScience

Sample records for multiple step algorithms

  1. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
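
    The following toy sketch (not the authors' implementation; the one-dimensional potentials, parameter values, and function names are illustrative assumptions) shows the hybrid MD-MC idea at the heart of the scheme: trajectories are propagated with the inexpensive Hamiltonian, and a Metropolis test on the reference-Hamiltonian energy change restores exact Boltzmann sampling.

      import numpy as np

      rng = np.random.default_rng(0)
      kT = 1.0

      # Reference ("expensive") and surrogate ("cheap") potentials: toy 1D stand-ins.
      def U_ref(x):   return 0.25 * x**4 - 0.5 * x**2   # reference Hamiltonian
      def F_cheap(x): return -x                          # force of cheap Hamiltonian U = x^2/2

      def hybrid_mc_step(x, dt=0.05, n_inner=10):
          """One hybrid MD-MC move: propagate with the cheap force, then
          accept/reject with the reference energy to stay exactly Boltzmann."""
          v = rng.normal(scale=np.sqrt(kT))              # fresh Maxwell velocity
          E_old = U_ref(x) + 0.5 * v**2
          x_new, v_new = x, v
          for _ in range(n_inner):                       # velocity Verlet on the cheap force
              v_new += 0.5 * dt * F_cheap(x_new)
              x_new += dt * v_new
              v_new += 0.5 * dt * F_cheap(x_new)
          E_new = U_ref(x_new) + 0.5 * v_new**2
          # Metropolis test absorbs the discretization error as external work.
          if np.log(rng.random()) < -(E_new - E_old) / kT:
              return x_new
          return x

      x, samples = 0.0, []
      for _ in range(20000):
          x = hybrid_mc_step(x)
          samples.append(x)
      print("mean, var:", np.mean(samples), np.var(samples))

    Because acceptance uses the reference energy, errors of the cheap propagator only lower the acceptance rate; they do not bias the sampled distribution.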

  2. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Zhou Hao

    2015-06-01

    The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for the Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. As such, a novel MUSIC approach is proposed on the basis of the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. Virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and establish a full-order correlation matrix. ASGSO then optimizes the objective function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely without loss of effective radar aperture.

  3. Self-Adaptive Step Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Shuhao Yu

    2013-01-01

    In the standard firefly algorithm, every firefly uses the same step-size setting, whose value decreases from iteration to iteration, so the search may fall into a local optimum. Furthermore, the decrease of the step size is tied to the maximum number of iterations, which affects the convergence speed and precision. In order to avoid premature convergence to a local optimum and reduce the impact of the iteration limit, a self-adaptive step firefly algorithm is proposed in this paper. Its core idea is to let the step size of each firefly vary over the iterations according to that firefly's historical information and current situation. Experiments on sixteen standard benchmark functions compare the performance of our approach with the standard FA. The results reveal that our method can prevent premature convergence and improve both convergence speed and accuracy.
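
    A minimal sketch of the idea, in Python: each firefly carries its own step size that adapts to its recent success instead of following a global schedule tied to the iteration cap. The 0.97/1.03 adaptation factors and the sphere benchmark are illustrative assumptions, not the rule from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def sphere(x):
          return float(np.sum(x**2))        # benchmark objective

      def firefly(n=25, dim=10, iters=200, beta0=1.0, gamma=1.0):
          X = rng.uniform(-5, 5, (n, dim))
          fit = np.array([sphere(x) for x in X])
          alpha = np.full(n, 0.5)           # per-firefly step size
          for _ in range(iters):
              for i in range(n):
                  improved = False
                  for j in range(n):
                      if fit[j] < fit[i]:   # move toward the brighter firefly j
                          r2 = np.sum((X[i] - X[j])**2)
                          beta = beta0 * np.exp(-gamma * r2)
                          X[i] += beta * (X[j] - X[i]) + alpha[i] * rng.uniform(-0.5, 0.5, dim)
                          f_new = sphere(X[i])
                          improved = f_new < fit[i]
                          fit[i] = f_new
                  # Illustrative self-adaptive rule: shrink the step after an
                  # improvement to refine, grow it after stagnation to escape.
                  alpha[i] *= 0.97 if improved else 1.03
                  alpha[i] = np.clip(alpha[i], 1e-3, 1.0)
          return fit.min()

      print("best value:", firefly())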

  4. CSA: An efficient algorithm to improve circular DNA multiple alignment

    Directory of Open Access Journals (Sweden)

    Pereira Luísa

    2009-07-01

    Background: The comparison of homologous sequences from different species is an essential approach to reconstruct the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis, and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot handle circular DNA directly. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process before they can use multiple sequence alignment tools. Results: In this paper we propose an efficient algorithm that identifies the most interesting region at which to cut circular genomes in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms. This algorithm identifies the largest chain of non-repeated longest subsequences common to a set of circular mitochondrial DNA sequences. All the sequences are then rotated and made linear for multiple alignment purposes. To evaluate the effectiveness of this new tool, three different sets of mitochondrial DNA sequences were considered. Other tests considering randomly rotated sequences were also performed. The software package Arlequin was used to evaluate the standard genetic measures of the alignments obtained with and without the use of the CSA algorithm with two well-known multiple alignment algorithms, the CLUSTALW and the MAVID tools, and also the visualization tool SinicView. Conclusion: The results show that a circularization and rotation pre-processing step significantly improves the efficiency of publicly available multiple sequence alignment algorithms.
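
    The sketch below illustrates the rotate-then-linearize pre-processing on toy data. It substitutes a single longest common block (found on doubled strings, which expose every rotation) for CSA's chain of non-repeated common subsequences, so it is a simplification; the function name and the anchor-selection shortcut are assumptions.

      from difflib import SequenceMatcher

      def rotate_to_anchor(seqs):
          """Rotate circular sequences to a common start before linear MSA.
          Doubling a sequence makes every rotation visible as a substring."""
          if len(seqs) < 2:
              return list(seqs)
          ref = seqs[0]
          # Longest block of the reference shared with each doubled sequence;
          # keep the shortest such block as a conservative common anchor
          # (a simplified stand-in for CSA's chain of common subsequences).
          best = None
          for s in seqs[1:]:
              m = SequenceMatcher(None, ref, s + s).find_longest_match(
                  0, len(ref), 0, 2 * len(s))
              if best is None or m.size < best.size:
                  best = m
          anchor = ref[best.a:best.a + best.size]
          rotated = []
          for s in seqs:
              i = (s + s).find(anchor)      # anchor position on the circle
              rotated.append(s[i:] + s[:i] if 0 <= i < len(s) else s)
          return rotated

      # Three rotations of the same circular sequence line up after rotation
      seqs = ["GATTACACGT", "ACGTGATTAC", "TTACACGTGA"]
      print(rotate_to_anchor(seqs))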

  5. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods; we observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  6. Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5

    International Nuclear Information System (INIS)

    Wilderman, S.J.; Bielajew, A.F.

    2005-01-01

    The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes, and hence shorter computation times, than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making the selection of step sizes a daunting task for novice users. Further contributing to this problem, because multiple scattering and continuous energy loss are decoupled in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Moreover, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, EGS5 permits the fractional energy loss values that determine the multiple scattering and continuous energy loss step sizes to vary with energy, in order to increase performance by decreasing the effort expended simulating lower-energy particles. As a result, the user must specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material-dependent input related to the size of the problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)

  7. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows, speedups on the order of five with respect to single time stepping are obtained.
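
    A toy illustration of local time stepping (not the authors' scheme): two zones of an explicit 1D diffusion solve advance with different local time steps, exchange frozen ghost values in between, and meet at common synchronization levels. Grid sizes and the 4:1 step ratio are arbitrary choices.

      import numpy as np

      # Two zones of a 1D explicit diffusion solve advance with different
      # local time steps and meet at common synchronization levels.
      nx, L, nu = 200, 1.0, 0.05
      dx = L / nx
      x = np.linspace(0.0, L, nx)
      u = np.exp(-200.0 * (x - 0.3)**2)          # initial heat pulse

      dt_sync = 0.4 * dx**2 / nu                 # coarse step = sync interval
      ratio = 4                                  # fine zone takes 4 substeps
      mid = nx // 2

      def diffuse(v, dt):
          """One explicit step of u_t = nu*u_xx; end values act as frozen ghosts."""
          w = v.copy()
          w[1:-1] += nu * dt / dx**2 * (v[2:] - 2.0 * v[1:-1] + v[:-2])
          return w

      for _ in range(500):                       # march sync level to sync level
          left = u[:mid + 2].copy()              # one ghost cell beyond the interface
          for _ in range(ratio):                 # fine substeps in the left zone
              left = diffuse(left, dt_sync / ratio)
          right = diffuse(u[mid:], dt_sync)      # one coarse step in the right zone
          u = np.concatenate([left[:-1], right[1:]])   # synchronize the zones
      print("peak after marching:", u.max())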

  8. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    With the rapid growth of Internet-based recruiting, recruiting systems hold a great number of personal resumes. To gain more attention from recruiters, most resumes are written in diverse formats, with varying font sizes, font colours, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they rely strongly on hierarchical structure information and large amounts of labelled data, which are hard to collect in practice. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this, we design a novel feature, Writing Style, to model sentence syntax information; besides a word index and a punctuation index, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify the different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

  9. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows, speedups on the order of five with respect to single time stepping are obtained.

  10. One-Step Leapfrog LOD-BOR-FDTD Algorithm with CPML Implementation

    Directory of Open Access Journals (Sweden)

    Yi-Gang Wang

    2016-01-01

    An unconditionally stable one-step leapfrog locally one-dimensional finite-difference time-domain (LOD-FDTD) algorithm for bodies of revolution (BOR) is presented. The equations of the proposed algorithm are obtained by algebraic manipulation of those used in the conventional LOD-BOR-FDTD algorithm; the equations for the z-direction electric and magnetic fields require special treatment. The new algorithm achieves higher computational efficiency while preserving the properties of the conventional LOD-BOR-FDTD algorithm. Moreover, the convolutional perfectly matched layer (CPML) is introduced into the one-step leapfrog LOD-BOR-FDTD algorithm. The equation of the one-step leapfrog CPML is concise, and numerical results show that its reflection error is small. It can be concluded that a similar CPML scheme can also be easily applied to the one-step leapfrog LOD-FDTD algorithm in the Cartesian coordinate system.

  11. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition, and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate how much the mainstream signal decomposing algorithms improve Extreme Learning Machines in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) Extreme Learning Machines are suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machines; (3) among the decomposing algorithms in the proposed hybrid architecture, Fast Ensemble Empirical Mode Decomposition performs best in the three-step forecasts while Wavelet Packet Decomposition performs best in the one- and two-step forecasts, and Wavelet Packet Decomposition and Fast Ensemble Empirical Mode Decomposition outperform Wavelet Decomposition and Empirical Mode Decomposition, respectively, at all prediction steps; and (4) the proposed algorithms are effective for accurate wind speed prediction.
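
    A compact sketch of the decompose-then-predict architecture, assuming the PyWavelets package for the decomposition stage and a tiny extreme learning machine (random hidden layer, least-squares output weights) per component; the synthetic series, network sizes, and lag count are illustrative.

      import numpy as np
      import pywt  # PyWavelets

      rng = np.random.default_rng(2)
      t = np.arange(600)
      wind = (8 + 2*np.sin(2*np.pi*t/48) + 0.8*np.sin(2*np.pi*t/12)
              + rng.normal(0, 0.4, t.size))

      def wavelet_components(x, wavelet="db4", level=3):
          """Split x into one sub-series per wavelet band (they sum back to x)."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          comps = []
          for i in range(len(coeffs)):
              keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
              comps.append(pywt.waverec(keep, wavelet)[:len(x)])
          return comps

      def elm_forecast(series, lags=12, hidden=64):
          """Train a tiny extreme learning machine and predict one step ahead."""
          X = np.array([series[i:i+lags] for i in range(len(series) - lags)])
          y = series[lags:]
          W = rng.normal(size=(lags, hidden)); b = rng.normal(size=hidden)
          H = np.tanh(X @ W + b)                        # random hidden layer
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
          h_last = np.tanh(series[-lags:] @ W + b)
          return float(h_last @ beta)

      comps = wavelet_components(wind[:-1])
      pred = sum(elm_forecast(c) for c in comps)        # recombine component forecasts
      print(f"predicted {pred:.2f}, actual {wind[-1]:.2f}")

    Multi-step forecasts follow by feeding each prediction back as an input, and the other three decomposers named above could replace the wavelet stage.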

  12. Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems

    Directory of Open Access Journals (Sweden)

    Zahoor Uddin

    2018-01-01

    Independent component analysis (ICA) is a blind source separation (BSS) technique used to separate mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios but have high computational complexity, while batch algorithms have better separation performance in quasistatic channels with low computational complexity. Among batch algorithms, gradient-based ICA algorithms perform well, but step-size selection is critical for them. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from step-size parameter selection and offers improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Its performance is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasistatic and time-varying wireless channels. Simulations are performed with quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms over a wide range of signal-to-noise ratios (SNR) and input data block lengths.
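
    A sketch of gradient-ascent ICA with an adaptive step size on a whitened two-source mixture. The kurtosis objective is standard for BSS, but the grow-on-progress/shrink-on-regress rule here is an illustrative stand-in for the ASS-GAICA update, and all sizes are arbitrary.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 5000
      s = np.vstack([np.sign(rng.standard_normal(n)),      # BPSK-like source
                     rng.uniform(-1.7, 1.7, n)])           # uniform source
      A = rng.standard_normal((2, 2))
      x = A @ s                                            # observed mixtures

      # Whitening: z = E d^(-1/2) E^T x
      x -= x.mean(axis=1, keepdims=True)
      d, E = np.linalg.eigh(np.cov(x))
      z = (E / np.sqrt(d)) @ E.T @ x

      def extract(z, mu=0.1, iters=200):
          """Gradient ascent on |kurtosis(w^T z)| with an adaptive step size."""
          w = rng.standard_normal(z.shape[0]); w /= np.linalg.norm(w)
          last = -np.inf
          for _ in range(iters):
              y = w @ z
              k = np.mean(y**4) - 3.0                      # kurtosis (unit variance)
              grad = 4 * np.sign(k) * (z * y**3).mean(axis=1)
              w = w + mu * grad
              w /= np.linalg.norm(w)
              # Illustrative adaptive rule: grow the step while improving,
              # cut it back sharply on a regression.
              mu = mu * 1.05 if abs(k) > last else mu * 0.5
              last = abs(k)
          return w

      y = extract(z) @ z
      print("recovered-source kurtosis:", np.mean(y**4) - 3)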

  13. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Science.gov (United States)

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  14. SU-C-BRF-07: A Pattern Fusion Algorithm for Multi-Step Ahead Prediction of Surrogate Motion

    International Nuclear Information System (INIS)

    Zawisza, I; Yan, H; Yin, F

    2014-01-01

    Purpose: To assure that tumor motion stays within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods provide real-time tumor/surrogate motion but no future information. In order to anticipate future tumor/surrogate motion and track the target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data, from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the succeeding parts of the subsequences and combining them together with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data was collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction
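
    The three steps map naturally onto a few lines of array code. In this sketch the surrogate signal is synthetic, the window lengths mirror those quoted above (100-sample input, 50-sample output), and the inverse-distance weights are one plausible reading of the "relative-weighting" strategy rather than the authors' exact formula.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(1200)
      breath = np.sin(2*np.pi*t/40) + 0.05*rng.standard_normal(t.size)  # surrogate signal

      def pattern_fusion_predict(signal, n_in=100, n_out=50, n_best=5):
          train = signal[:-n_in]                 # (a) training component
          query = signal[-n_in:]                 # (b) input subsequence
          # Step 1: best-matching historical subsequences (with a future to copy)
          dists = []
          for s in range(len(train) - n_in - n_out):
              dists.append((np.linalg.norm(train[s:s+n_in] - query), s))
          dists.sort()
          top = dists[:n_best]
          # Step 2: similarity weights (relative weighting)
          w = np.array([1.0 / (d + 1e-9) for d, _ in top]); w /= w.sum()
          # Step 3: fuse the continuations of the matched subsequences
          futures = np.array([train[s+n_in:s+n_in+n_out] for _, s in top])
          return w @ futures

      pred = pattern_fusion_predict(breath[:-50])
      actual = breath[-50:]                      # (c) known output, for validation
      print("correlation:", np.corrcoef(pred, actual)[0, 1])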

  15. Multiple time step integrators in ab initio molecular dynamics

    International Nuclear Information System (INIS)

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-01-01

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
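
    For reference, a minimal r-RESPA loop on a toy 1D system: the soft, "expensive" force kicks only at the outer step, while the stiff, cheap force is integrated with small inner steps. The force definitions and step sizes are illustrative; in the paper the slow force is the ab initio remainder left after fragment or range separation.

      import numpy as np

      # r-RESPA: slow force kicks at the outer step, fast force integrated inside.
      def f_fast(x): return -100.0 * x          # stiff bond-like force (cheap)
      def f_slow(x): return -np.sin(x)          # soft long-range force ("expensive")

      def respa_step(x, v, dt_outer=0.05, n_inner=10):
          dt = dt_outer / n_inner
          v += 0.5 * dt_outer * f_slow(x)       # half kick with the slow force
          for _ in range(n_inner):              # velocity Verlet on the fast force
              v += 0.5 * dt * f_fast(x)
              x += dt * v
              v += 0.5 * dt * f_fast(x)
          v += 0.5 * dt_outer * f_slow(x)       # closing half kick
          return x, v

      def energy(x, v):                         # U_fast = 50 x^2, U_slow = -cos(x)
          return 0.5*v**2 + 50.0*x**2 - np.cos(x)

      x, v = 0.3, 0.0
      E0 = energy(x, v)
      for _ in range(2000):
          x, v = respa_step(x, v)
      print("relative energy drift:", (energy(x, v) - E0) / abs(E0))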

  16. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2017-03-01

    This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate time-varying additive elevator faults of hypersonic flight vehicles. Finally, simulations are conducted for both the modeling and the fault estimation; the validity and effectiveness of the method are verified by a series of comparisons of numerical simulation results.

  17. Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Grigore ROŞCA

    2015-06-01

    The goal of this paper was to develop an education-oriented application based on the data mining Apriori algorithm which facilitates both the research and the study of data mining by graduate students. The application can be used to discover interesting patterns in a corpus of data and to measure the impact on execution speed as a function of problem constraints (the values of the support and confidence variables, or the size of the transactional database). The paper presents a brief overview of the Apriori algorithm, aspects of its implementation using a step-by-step process, a discussion of the education-oriented user interface, and the process of data mining a test transactional database. The impact of some constraints on the speed of the algorithm is also experimentally measured, without a systematic review of different approaches to increase execution speed. Possible applications of the implementation, as well as its limits, are briefly reviewed.
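
    Since the record describes a teaching implementation, a minimal Apriori in Python may help fix ideas: candidates of size k are joined from frequent (k-1)-itemsets, pruned by the Apriori property, and then filtered by support. The transactions and the 0.6 support threshold are toy values.

      from itertools import combinations

      transactions = [
          {"bread", "milk"}, {"bread", "diapers", "beer", "eggs"},
          {"milk", "diapers", "beer", "cola"}, {"bread", "milk", "diapers", "beer"},
          {"bread", "milk", "diapers", "cola"},
      ]

      def apriori(transactions, min_support=0.6):
          n = len(transactions)
          support = lambda items: sum(items <= t for t in transactions) / n
          # L1: frequent single items
          items = {i for t in transactions for i in t}
          frequent = [{frozenset([i]) for i in items
                       if support(frozenset([i])) >= min_support}]
          k = 2
          while frequent[-1]:
              prev = frequent[-1]
              # Join step, then prune by the Apriori property (all (k-1)-subsets
              # of a frequent k-itemset must themselves be frequent).
              candidates = {a | b for a in prev for b in prev if len(a | b) == k}
              candidates = {c for c in candidates
                            if all(frozenset(s) in prev for s in combinations(c, k - 1))}
              frequent.append({c for c in candidates if support(c) >= min_support})
              k += 1
          return [s for level in frequent for s in level]

      for itemset in apriori(transactions):
          print(set(itemset))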

  18. Invited Review Article: Measurement uncertainty of linear phase-stepping algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Erwin [EMPA, Laboratory Electronics/Metrology/Reliability, Ueberlandstrasse 129, CH-8600 Duebendorf (Switzerland); Burke, Jan [Australian Centre for Precision Optics, CSIRO (Commonwealth Scientific and Industrial Research Organisation) Materials Science and Engineering, P.O. Box 218, Lindfield, NSW 2070 (Australia)

    2011-06-15

    Phase retrieval techniques are widely used in optics, imaging, and electronics. Originating in signal theory, they were introduced to interferometry around 1970. Over the years, many robust phase-stepping techniques have been developed that minimize specific experimental influence quantities such as phase step errors or higher harmonic components of the signal. However, optimizing a technique for a specific influence quantity can compromise its performance with regard to others. We present a consistent quantitative analysis of phase measurement uncertainty for the generalized linear phase-stepping algorithm with nominally equal phase-stepping angles, thereby reviewing and generalizing several results that have been reported in the literature. All influence quantities are treated on an equal footing, and correlations between them are described in a consistent way. For the special case of classical N-bucket algorithms, we present analytical formulae that describe the combined variance as a function of the phase angle values. For the general arctan algorithms, we derive expressions for the measurement uncertainty averaged over the full 2π range of phase angles. We also give an upper bound for the measurement uncertainty which can be expressed as being proportional to an algorithm-specific factor. Tabular compilations help the reader to quickly assess the uncertainties that are involved with his or her technique.
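
    As a worked example of a classical N-bucket algorithm, the sketch below recovers a phase from four intensity samples at nominal 90° steps and estimates the resulting measurement uncertainty by Monte Carlo over detector noise (the noise level and signal parameters are arbitrary assumptions).

      import numpy as np

      rng = np.random.default_rng(5)
      phi_true, A, B = 1.234, 2.0, 1.0          # phase (rad), background, modulation
      steps = np.array([0.0, np.pi/2, np.pi, 3*np.pi/2])

      def four_bucket(noise_sigma):
          """Classical 4-bucket estimator: phi = atan2(I4 - I2, I1 - I3)."""
          I = A + B*np.cos(phi_true + steps) + rng.normal(0.0, noise_sigma, 4)
          return np.arctan2(I[3] - I[1], I[0] - I[2])

      estimates = np.array([four_bucket(0.02) for _ in range(10000)])
      print(f"true phase       : {phi_true:.5f}")
      print(f"mean estimate    : {estimates.mean():.5f}")
      print(f"std (uncertainty): {estimates.std():.5f}")

    The estimator works because I1 - I3 = 2B cos(phi) and I4 - I2 = 2B sin(phi), so background and modulation cancel out of the arctangent.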

  19. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in the hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using the compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction (SFD) algorithm. However, a parallel GPU implementation of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first strategy, which has been used in the existing parallel SFD algorithm on a GPU, suffers from computing redundancy; we therefore designed a second parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculating flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU implementations.

  20. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  1. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    Science.gov (United States)

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step in winning the initiative in fierce market competition is to improve their R&D ability so as to meet the various demands of customers more promptly and at lower cost. This paper discusses the features of multiple R&D project environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.

  2. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Science.gov (United States)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size was proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summons, and siege. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines a self-adaptive variable step-size search strategy with chaos optimization. The CWOA was applied to the parameter optimization of twelve typical, complex nonlinear functions, and the results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The investigation indicates that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.

  3. Effective arithmetic in finite fields based on Chudnovsky's multiplication algorithm

    OpenAIRE

    Atighehchi , Kévin; Ballet , Stéphane; Bonnecaze , Alexis; Rolland , Robert

    2016-01-01

    Thanks to a new construction of the Chudnovsky and Chudnovsky multiplication algorithm, we design efficient algorithms for both exponentiation and multiplication in finite fields. They are tailored to hardware implementation and allow computations to be parallelized, while maintaining a low number of bilinear multiplications.

  4. Outcome of a 4-step treatment algorithm for depressed inpatients

    NARCIS (Netherlands)

    Birkenhäger, T.K.; Broek, W.W. van den; Moleman, P.; Bruijn, J.A.

    2006-01-01

    Objective: The aim of this study was to examine the efficacy and the feasibility of a 4-step treatment algorithm for inpatients with major depressive disorder. Method: Depressed inpatients, meeting DSM-IV criteria for major depressive disorder, were enrolled in the algorithm that consisted of

  5. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2017-10-01

    To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size was proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summons, and siege. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines a self-adaptive variable step-size search strategy with chaos optimization. The CWOA was applied to the parameter optimization of twelve typical, complex nonlinear functions, and the results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The investigation indicates that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.

  6. Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2016-06-01

    In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. The algorithm preserves the computational simplicity of the classical proximal gradient algorithm, in which a singular value decomposition is involved in the proximal operator. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
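
    A sketch of the iteration under the stated structure: the proximal operator of the nuclear norm is singular-value soft-thresholding, and each update combines the previous point, the current iterate, and the proximal gradient point. The step size, shrinkage weight, and combination coefficient below are illustrative, not the paper's tuned values.

      import numpy as np

      rng = np.random.default_rng(6)
      m, n, r = 60, 50, 3
      M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low-rank target
      mask = rng.random((m, n)) < 0.4                                # observed entries

      def svt(Y, tau):
          """Proximal operator of tau*||.||_* : singular value soft-thresholding."""
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def two_step_pg(M, mask, lam=0.5, step=1.0, beta=0.3, iters=300):
          X = np.zeros_like(M)
          X_prev = X.copy()
          for _ in range(iters):
              grad = mask * (X - M)                   # gradient of 0.5*||P(X) - P(M)||^2
              P = svt(X - step * grad, step * lam)    # proximal gradient point
              X_new = P + beta * (X - X_prev)         # combine the latest three points
              X_prev, X = X, X_new
          return X

      X = two_step_pg(M, mask)
      err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
      print("relative error on unobserved entries:", round(err, 3))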

  7. New management algorithms in multiple sclerosis

    DEFF Research Database (Denmark)

    Sorensen, Per Soelberg

    2014-01-01

    PURPOSE OF REVIEW: Our current treatment algorithms include only IFN-β and glatiramer as available first-line disease-modifying drugs and natalizumab and fingolimod as second-line therapies. Today, 10 drugs have been approved in Europe and nine in the United States, making the choice of therapy more complex. The purpose of the review has been to work out new management algorithms for treatment of relapsing-remitting multiple sclerosis including new oral therapies and therapeutic monoclonal antibodies. RECENT FINDINGS: Recent large placebo-controlled trials in relapsing-remitting multiple sclerosis

  8. Robust and unobtrusive algorithm based on position independence for step detection

    Science.gov (United States)

    Qiu, KeCheng; Li, MengYang; Luo, YiHan

    2018-04-01

    Running is becoming one of the most popular exercises, and monitoring steps can help users better understand their running process and improve exercise efficiency. In this paper, we design and implement a robust and unobtrusive position-independent step detection algorithm for real environments. It applies a Butterworth filter to suppress high-frequency interference and then employs a mathematical projection to transform the coordinate system, solving the problem of the unknown position of the smartphone. Finally, a sliding window is used to suppress false peaks. The algorithm was tested with eight participants on the Android 7.0 platform. In our experiments, the results show that the proposed algorithm achieves the desired effect regardless of device pose.
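
    A sketch of the filter-then-peak-pick pipeline using SciPy, on a synthetic accelerometer-magnitude trace. Using the signal magnitude is one common way to obtain position independence (a stand-in for the paper's projection step), and the cut-off frequency, peak threshold, and window length are illustrative guesses rather than the paper's values.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      rng = np.random.default_rng(7)
      fs = 50.0                                  # sampling rate (Hz)
      t = np.arange(0, 20, 1/fs)
      # Synthetic accelerometer magnitude: ~2 Hz gait plus gravity and noise
      acc = 9.81 + 2.0*np.sin(2*np.pi*2.0*t) + 0.6*rng.standard_normal(t.size)

      # Low-pass Butterworth filter removes high-frequency jitter;
      # subtracting the mean removes the gravity offset.
      b, a = butter(4, 3.0, btype="low", fs=fs)
      smooth = filtfilt(b, a, acc - acc.mean())

      # Peak picking with a minimum inter-step distance (~0.3 s sliding window)
      peaks, _ = find_peaks(smooth, height=0.5, distance=int(0.3 * fs))
      print("steps detected:", len(peaks), "(expected ~40)")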

  9. The bilinear complexity and practical algorithms for matrix multiplication

    Science.gov (United States)

    Smirnov, A. V.

    2013-12-01

    A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of the exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the border rank of multiplying 3 × 3 matrices is improved, and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n^2.7743).
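
    The best-known bilinear matrix multiplication algorithm is Strassen's 2×2 scheme, which uses 7 multiplications instead of 8 and yields O(n^2.807); results like the one above push the exponent lower by finding smaller bilinear ranks for small formats. A recursive Strassen sketch (the cutoff value is an arbitrary choice):

      import numpy as np

      def strassen(A, B, cutoff=64):
          """Bilinear 2x2 block scheme with 7 multiplications (Strassen)."""
          n = A.shape[0]
          if n <= cutoff or n % 2:               # fall back on small or odd sizes
              return A @ B
          h = n // 2
          A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
          B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
          M1 = strassen(A11 + A22, B11 + B22)
          M2 = strassen(A21 + A22, B11)
          M3 = strassen(A11, B12 - B22)
          M4 = strassen(A22, B21 - B11)
          M5 = strassen(A11 + A12, B22)
          M6 = strassen(A21 - A11, B11 + B12)
          M7 = strassen(A12 - A22, B21 + B22)
          return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                           [M2 + M4, M1 - M2 + M3 + M6]])

      A = np.random.rand(128, 128); B = np.random.rand(128, 128)
      print("max error vs. numpy:", np.abs(strassen(A, B) - A @ B).max())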

  10. Local multiplicative Schwarz algorithms for convection-diffusion equations

    Science.gov (United States)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as of the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  11. Can Reduced-Step Polishers Be as Effective as Multiple-Step Polishers in Enhancing Surface Smoothness?

    Science.gov (United States)

    Kemaloglu, Hande; Karacolak, Gamze; Turkun, L Sebnem

    2017-02-01

    The aim of this study was to evaluate the effects of various finishing and polishing (F/P) systems on the final surface roughness of a resin composite. The hypotheses tested were: (1) reduced-step polishing systems are as effective as multiple-step systems in reducing the surface roughness of a resin composite, and (2) the number of application steps in an F/P system has no effect on surface roughness. Ninety discs of a nano-hybrid resin composite were fabricated and divided into nine groups (n = 10). Except for the control, all specimens were roughened prior to being polished with: Enamel Plus Shiny, Venus Supra, One-gloss, Sof-Lex Wheels, Super-Snap, Enhance/PoGo, Clearfil Twist Dia, or rubber cups. The surface roughness was measured and the surfaces were examined under a scanning electron microscope. Results were analyzed with analysis of variance and Holm-Sidak's multiple comparisons test (p < 0.05); [...] One-gloss, Enamel Plus Shiny, and Venus Supra groups. (1) The number of application steps has no effect on the performance of F/P systems. (2) Reduced-step polishers used after a finisher can be preferable to multiple-step systems when used on nano-hybrid resin composites. (3) The effect of F/P systems on surface roughness seems to be material-dependent rather than instrument- or system-dependent. Reduced-step systems used after a pre-polisher can be an acceptable alternative to multiple-step systems for enhancing the surface smoothness of a nano-hybrid composite; however, their effectiveness depends on the material's properties. (J Esthet Restor Dent 29:31-40, 2017).

  12. Perinatal Depression Algorithm: A Home Visitor Step-by-Step Guide for Advanced Management of Perinatal Depressive Symptoms

    Science.gov (United States)

    Laszewski, Audrey; Wichman, Christina L.; Doering, Jennifer J.; Maletta, Kristyn; Hammel, Jennifer

    2016-01-01

    Early childhood professionals do many things to support young families. This is true now more than ever, as researchers continue to discover the long-term benefits of early, healthy, nurturing relationships. This article provides an overview of the development of an advanced practice perinatal depression algorithm created as a step-by-step guide…

  13. An efficient parallel algorithm for matrix-vector multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n × n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.

  14. Identifying multiple influential spreaders by a heuristic clustering algorithm

    International Nuclear Information System (INIS)

    Bao, Zhong-Kui; Liu, Jian-Guo; Zhang, Hai-Feng

    2017-01-01

    The problem of influence maximization in social networks has attracted much attention. However, traditional centrality indices are suitable only for the case where a single spreader is chosen as the spreading source; often, the spreading process is initiated by choosing multiple nodes simultaneously. In this situation, choosing the top-ranked nodes as the multiple spreaders is not an optimal strategy, since the chosen nodes are not sufficiently scattered across the network. The ideal situation for the multiple-spreader case is that the spreaders are not only influential themselves but also dispersively distributed in the network, yet it is difficult to meet these two conditions together. In this paper, we propose a heuristic clustering (HC) algorithm based on a similarity index to classify nodes into different clusters; the center nodes of the clusters are then chosen as the multiple spreaders. The HC algorithm not only ensures that the multiple spreaders are dispersively distributed in the network but also avoids selecting "negligible" nodes. Compared with traditional methods, our experimental results on synthetic and real networks indicate that the performance of the HC method on influence maximization is significantly better. - Highlights: • A heuristic clustering algorithm is proposed to identify multiple influential spreaders in complex networks. • The algorithm guarantees that the selected spreaders are sufficiently scattered while avoiding "insignificant" nodes. • The performance of our algorithm is generally better than that of other methods, on both real and synthetic networks.

  15. Identifying multiple influential spreaders by a heuristic clustering algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Zhong-Kui [School of Mathematical Science, Anhui University, Hefei 230601 (China); Liu, Jian-Guo [Data Science and Cloud Service Research Center, Shanghai University of Finance and Economics, Shanghai, 200133 (China); Zhang, Hai-Feng, E-mail: haifengzhang1978@gmail.com [School of Mathematical Science, Anhui University, Hefei 230601 (China); Department of Communication Engineering, North University of China, Taiyuan, Shan' xi 030051 (China)

    2017-03-18

    The problem of influence maximization in social networks has attracted much attention. However, traditional centrality indices are suitable only for the case where a single spreader is chosen as the spreading source; often, the spreading process is initiated by choosing multiple nodes simultaneously. In this situation, choosing the top-ranked nodes as the multiple spreaders is not an optimal strategy, since the chosen nodes are not sufficiently scattered across the network. The ideal situation for the multiple-spreader case is that the spreaders are not only influential themselves but also dispersively distributed in the network, yet it is difficult to meet these two conditions together. In this paper, we propose a heuristic clustering (HC) algorithm based on a similarity index to classify nodes into different clusters; the center nodes of the clusters are then chosen as the multiple spreaders. The HC algorithm not only ensures that the multiple spreaders are dispersively distributed in the network but also avoids selecting "negligible" nodes. Compared with traditional methods, our experimental results on synthetic and real networks indicate that the performance of the HC method on influence maximization is significantly better. - Highlights: • A heuristic clustering algorithm is proposed to identify multiple influential spreaders in complex networks. • The algorithm guarantees that the selected spreaders are sufficiently scattered while avoiding "insignificant" nodes. • The performance of our algorithm is generally better than that of other methods, on both real and synthetic networks.

  16. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line.

  17. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones.

    Science.gov (United States)

    Kang, Xiaomin; Huang, Baoqi; Qi, Guodong

    2018-01-19

    Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, energy saving, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than both several well-known counterparts and commercial products.
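
    A sketch of the frequency-domain test on a synthetic gyroscope-magnitude trace: each window is declared "walking" when the dominant FFT bin falls in a plausible gait band with enough power, and the step count accumulates as step frequency times window length. The band edges, thresholds, and window size are illustrative assumptions, not the paper's values.

      import numpy as np

      rng = np.random.default_rng(8)
      fs = 50.0
      t = np.arange(0, 10, 1/fs)
      # Gyroscope magnitude: periodic during walking (~1.8 Hz), noise when idle
      gyro = np.where(t < 6, 1.2*np.sin(2*np.pi*1.8*t), 0.0) + 0.2*rng.standard_normal(t.size)

      def walking_and_steps(sig, fs, win=2.0, band=(0.6, 3.0), power_thresh=5.0):
          n = int(win * fs)
          steps = 0.0
          for start in range(0, len(sig) - n, n):
              seg = sig[start:start+n] - np.mean(sig[start:start+n])
              spec = np.abs(np.fft.rfft(seg))
              freqs = np.fft.rfftfreq(n, 1/fs)
              k = np.argmax(spec[1:]) + 1                  # dominant nonzero bin
              walking = band[0] <= freqs[k] <= band[1] and spec[k] > power_thresh
              if walking:
                  steps += freqs[k] * win                  # step frequency x window
          return round(steps)

      print("estimated steps:", walking_and_steps(gyro, fs), "(expected ~11)")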

  18. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    Science.gov (United States)

    Kodali, Anuradha

    facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on single fault (class)-single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the

  19. Protein alignment algorithms with an efficient backtracking routine on multiple GPUs

    Directory of Open Access Journals (Sweden)

    Kierzynka Michal

    2011-05-01

    Background: Pairwise sequence alignment methods are widely used in biological research. The increasing number of sequences is perceived as one of the upcoming challenges for sequence alignment methods in the near future. To overcome this challenge, several GPU (Graphics Processing Unit) computing approaches have been proposed lately. These solutions show the great potential of the GPU platform but in most cases address the problem of sequence database scanning and compute only the alignment score, whereas the alignment itself is omitted. Thus, the need arose to implement the global and semiglobal Needleman-Wunsch and Smith-Waterman algorithms with a backtracking procedure, which is needed to construct the alignment. Results: In this paper we present a solution that performs the alignment of every given sequence pair, which is a required step for progressive multiple sequence alignment methods, as well as for DNA recognition at the DNA assembly stage. The tests performed show that the implementation, with performance up to 6.3 GCUPS on a single GPU for affine gap penalties, is very efficient in comparison to other CPU- and GPU-based solutions. Moreover, multiple-GPU support with load balancing makes the application very scalable. Conclusions: The article shows that the backtracking procedure of sequence alignment algorithms may be designed to fit the GPU architecture. Therefore, our algorithm is able to compute pairwise alignments, not only scores. This opens a wide range of new possibilities, allowing other methods from the area of molecular biology to take advantage of the new computational architecture. The tests performed show that the efficiency of the implementation is excellent, and that the speed of our GPU-based algorithms can be increased almost linearly when using more than one graphics card.

  20. Genetic Algorithms for Multiple-Choice Problems

    Science.gov (United States)

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.

  1. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    NARCIS (Netherlands)

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  2. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones

    Directory of Open Access Journals (Sweden)

    Xiaomin Kang

    2018-01-01

    Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, energy saving, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than both several well-known counterparts and commercial products.

  3. Conjugate gradient algorithms using multiple recursions

    Energy Technology Data Exchange (ETDEWEB)

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.

  4. Optimization design for the stepped impedance transformer based on the genetic algorithm

    International Nuclear Information System (INIS)

    Zou Dehui; Lai Wanchang; Qiu Dong

    2007-01-01

    This paper introduces the basic principle and mathematical model of the stepped impedance transformer, then puts the emphasis on comparing two design methods for the stepped impedance transformer. The design results are simulated by EDA, which indicates that the genetic algorithm design is better than the Chebyshev synthesis design in terms of the maximum reflection coefficient magnitude. (authors)

  5. Dynamic multiple thresholding breast boundary detection algorithm for mammograms

    International Nuclear Information System (INIS)

    Wu, Yi-Ta; Zhou Chuan; Chan, Heang-Ping; Paramagul, Chintana; Hadjiiski, Lubomir M.; Daly, Caroline Plowden; Douglas, Julie A.; Zhang Yiheng; Sahiner, Berkman; Shi Jiazheng; Wei Jun

    2010-01-01

    Purpose: Automated detection of breast boundary is one of the fundamental steps for computer-aided analysis of mammograms. In this study, the authors developed a new dynamic multiple thresholding based breast boundary (MTBB) detection method for digitized mammograms. Methods: A large data set of 716 screen-film mammograms (442 CC view and 274 MLO view) obtained from consecutive cases of an Institutional Review Board approved project was used. An experienced breast radiologist manually traced the breast boundary on each digitized image using a graphical interface to provide a reference standard. The initial breast boundary (MTBB-Initial) was obtained by dynamically adapting the threshold to the gray level range in local regions of the breast periphery. The initial breast boundary was then refined by using gradient information from horizontal and vertical Sobel filtering to obtain the final breast boundary (MTBB-Final). The accuracy of the breast boundary detection algorithm was evaluated by comparison with the reference standard using three performance metrics: the Hausdorff distance (HDist), the average minimum Euclidean distance (AMinDist), and the area overlap measure (AOM). Results: In comparison with the authors' previously developed gradient-based breast boundary (GBB) algorithm, it was found that 68%, 85%, and 94% of images had HDist errors less than 6 pixels (4.8 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 89%, 90%, and 96% of images had AMinDist errors less than 1.5 pixels (1.2 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 96%, 98%, and 99% of images had AOM values larger than 0.9 for GBB, MTBB-Initial, and MTBB-Final, respectively. The improvement by the MTBB-Final method was statistically significant for all the evaluation measures by the Wilcoxon signed rank test (p<0.0001). Conclusions: The MTBB approach that combined dynamic multiple thresholding and gradient information provided better performance than the breast boundary

  6. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance
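
    The paper's O(n log n) algorithm is intricate, but the objective is easy to state in code. The following is a straightforward O(k*n^2) dynamic-programming baseline for the same problem, shown only as a correctness reference, not the paper's method: points are sorted by x, each of the k steps covers a contiguous run, and the best level for a run (minimizing the maximum weighted vertical distance) is found by ternary search on a convex function.

```python
import numpy as np

def seg_cost(ys, ws):
    """min over level v of max_i w_i * |y_i - v|; the objective is convex in v."""
    lo, hi = float(ys.min()), float(ys.max())
    f = lambda v: float(np.max(ws * np.abs(ys - v)))
    for _ in range(100):                           # ternary search
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return f(0.5 * (lo + hi))

def fit_step_function(x, y, w, k):
    """dp[j][i]: best achievable max error using j steps on the first i points."""
    order = np.argsort(x)
    y, w = np.asarray(y, float)[order], np.asarray(w, float)[order]
    n = len(y)
    cost = {(a, b): seg_cost(y[a:b], w[a:b])
            for a in range(n) for b in range(a + 1, n + 1)}
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            dp[j][i] = min(max(dp[j - 1][a], cost[(a, i)]) for a in range(i))
    return dp[k][n]
```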

  7. Multiple depots vehicle routing based on the ant colony with the genetic algorithm

    Directory of Open Access Journals (Sweden)

    ChunYing Liu

    2013-09-01

    Purpose: The number of possible distribution routing plans in the multi-depot vehicle scheduling problem grows exponentially with the number of customers, so solving the vehicle scheduling problem with heuristic algorithms has become an important research direction. On the basis of a model of the multi-depot vehicle scheduling problem, and in order to improve the efficiency of multiple-depot vehicle routing, the paper puts forward a fusion algorithm for multiple-depot vehicle routing based on the ant colony algorithm with a genetic algorithm. Design/methodology/approach: To achieve this objective, the genetic algorithm optimizes the parameters of the ant colony algorithm. The fusion algorithm for multiple-depot vehicle routing based on the ant colony algorithm with a genetic algorithm is proposed. Findings: Simulation experiments indicate that the result of the fusion algorithm is better than that of the other algorithms, and the improved algorithm has better convergence and global search ability. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the model, such as the pheromone volatility factor, the heuristic factor in each period, and the selected multiple depots. These assumptions can be relaxed in future work. Originality/value: In this research, a new method for multiple-depot vehicle routing is proposed. The fusion algorithm eliminates the influence of parameter selection by optimizing the heuristic factor, evaporation factor and initial pheromone distribution, and has strong global searching ability. The ant colony algorithm imports crossover and mutation operators to operate on the first-best and second-best solutions in every iteration, and preserves the best solution. The crossover and mutation operators extend the solution space and improve the convergence and global search ability. This research shows that considering both the ant colony and genetic algorithm

  8. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

    2011-01-01

    The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. This algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear equations and exploiting the nature of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and is especially suitable for high signal-to-noise systems.

  9. Alternate mutation based artificial immune algorithm for step fixed charge transportation problem

    Directory of Open Access Journals (Sweden)

    Mahmoud Moustafa El-Sherbiny

    2012-07-01

    The step fixed charge transportation problem (SFCTP) is considered a special version of the fixed-charge transportation problem (FCTP). In SFCTP, the fixed cost is incurred for every route that is used in the solution and is proportional to the amount shipped. This cost structure causes the value of the objective function to behave like a step function. Both FCTP and SFCTP are considered to be NP-hard problems. While a lot of research has been carried out concerning FCTP, not much has been done concerning SFCTP. This paper introduces an alternate Mutation based Artificial Immune (MAI) algorithm for solving SFCTPs. The proposed MAI algorithm solves both balanced and unbalanced SFCTP without introducing a dummy supplier or a dummy customer. In the MAI algorithm a coding schema is designed and procedures are developed for decoding such schema and shipping units. The MAI algorithm guarantees the feasibility of all the generated solutions. Due to the significant role of the mutation function in the MAI algorithm's quality, 16 mutation functions are presented and their performances are compared to select the best one. For this purpose, forty problems with different sizes have been generated at random and then a robust calibration is applied using the relative percentage deviation (RPD) method. Through two illustrative problems of different sizes the performance of the MAI algorithm has been compared with the most recent methods.

  10. ESPRIT-like algorithm for computational-efficient angle estimation in bistatic multiple-input multiple-output radar

    Science.gov (United States)

    Gong, Jian; Lou, Shuntian; Guo, Yiduo

    2016-04-01

    An estimation of signal parameters via a rotational invariance techniques-like (ESPRIT-like) algorithm is proposed to estimate the direction of arrival and direction of departure for bistatic multiple-input multiple-output (MIMO) radar. The properties of a noncircular signal and Euler's formula are first exploited to establish a real-valued bistatic MIMO radar array data, which is composed of sine and cosine data. Then the receiving/transmitting selective matrices are constructed to obtain the receiving/transmitting rotational invariance factors. Since the rotational invariance factor is a cosine function, symmetrical mirror angle ambiguity may occur. Finally, a maximum likelihood function is used to avoid the estimation ambiguities. Compared with the existing ESPRIT, the proposed algorithm can save about 75% of computational load owing to the real-valued ESPRIT algorithm. Simulation results confirm the effectiveness of the ESPRIT-like algorithm.

  11. Use of multiple objective evolutionary algorithms in optimizing surveillance requirements

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Villanueva, J.F.; Sanchez, A.I.; Galvan, B.; Salazar, D.; Cepin, M.

    2006-01-01

    This paper presents the development and application of a double-loop Multiple Objective Evolutionary Algorithm that uses a Multiple Objective Genetic Algorithm to perform the simultaneous optimization of periodic Test Intervals (TI) and Test Planning (TP). It takes into account the time-dependent effect of TP performed on stand-by safety-related equipment. TI and TP are part of the Surveillance Requirements within Technical Specifications at Nuclear Power Plants. It addresses the problem of multi-objective optimization in the space of decision variables, i.e. TI and TP, using a novel flexible structure of the optimization algorithm. Lessons learnt from the cases of application of the methodology to optimize TI and TP for the High-Pressure Injection System are given. The results show that the double-loop Multiple Objective Evolutionary Algorithm is able to find the Pareto set of solutions that represents a surface of non-dominated solutions satisfying all the constraints imposed on the objective functions and decision variables. Decision makers can then adopt the best solution found depending on their particular preference, e.g. minimum cost or minimum unavailability.

  12. A scalable parallel algorithm for multiple objective linear programs

    Science.gov (United States)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  13. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the mean square deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
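
    The flavour of the update is easy to show. In the sketch below, the Gaussian kernel exp(-e^2 / (2*sigma^2)) automatically shrinks updates caused by impulsive errors; the step-size rule shown (scaling by a smoothed error power) is a simple illustrative stand-in, not the paper's MSD-optimal derivation.

```python
import numpy as np

def mcc_adaptive_filter(x, d, order=8, sigma=1.0, mu_min=0.001, mu_max=0.05):
    """MCC-LMS system-identification sketch with a heuristic variable step."""
    w = np.zeros(order)
    p = 0.0                                   # smoothed squared error
    y_hat = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]              # regressor, most recent first
        y_hat[n] = w @ u
        e = d[n] - y_hat[n]
        kernel = np.exp(-e * e / (2.0 * sigma * sigma))   # correntropy weight
        p = 0.95 * p + 0.05 * min(e * e, 10 * sigma * sigma)
        mu = float(np.clip(mu_max * p / (p + 1.0), mu_min, mu_max))
        w += mu * kernel * e * u              # robust, variable-step update
    return w, y_hat
```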

  14. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRN. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step

  15. Multi-step wind speed forecasting based on a hybrid forecasting architecture and an improved bat algorithm

    International Nuclear Information System (INIS)

    Xiao, Liye; Qian, Feng; Shao, Wei

    2017-01-01

    Highlights: • Propose a hybrid architecture based on a modified bat algorithm for multi-step wind speed forecasting. • Improve the accuracy of multi-step wind speed forecasting. • Modify the bat algorithm with CG to improve optimization performance. - Abstract: As one of the most promising sustainable energy sources, wind energy plays an important role in energy development because it is clean and non-polluting. Generally, wind speed forecasting, which has an essential influence on wind power systems, is regarded as a challenging task. Analyses based on single-step wind speed forecasting have been widely used, but their results are insufficient for ensuring the reliability and controllability of wind power systems. In this paper, a new forecasting architecture based on decomposing algorithms and modified neural networks is successfully developed for multi-step wind speed forecasting. Four different hybrid models are contained in this architecture, and to further improve the forecasting performance, a modified bat algorithm (BA) with the conjugate gradient (CG) method is developed to optimize the initial weights between layers and the thresholds of the hidden layer of the neural networks. To investigate the forecasting abilities of the four models, wind speed data collected from four different wind power stations in Penglai, China, were used as a case study. The numerical experiments showed that the hybrid model including singular spectrum analysis and a general regression neural network with CG-BA (SSA-CG-BA-GRNN) achieved the most accurate forecasting results in one-step to three-step wind speed forecasting.

  16. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Science.gov (United States)

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.

  17. HMC algorithm with multiple time scale integration and mass preconditioning

    Science.gov (United States)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
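
    The multiple time scale integration referred to here is the nested (RESPA-style) leapfrog: the cheap, preconditioning part of the force is integrated with a fine step inside each coarse step of the expensive part. A minimal sketch, with the force split and all names purely illustrative:

```python
def respa_step(q, p, force_cheap, force_expensive, dt, n_inner):
    """One multiple-time-scale leapfrog step (RESPA-style nesting).

    force_cheap / force_expensive return -dH/dq for the two parts of the
    Hamiltonian; the expensive force is applied on the outer scale dt, the
    cheap force on the inner scale dt / n_inner.
    """
    p = p + 0.5 * dt * force_expensive(q)      # outer half-kick
    h = dt / n_inner
    for _ in range(n_inner):                   # inner velocity-Verlet loop
        p = p + 0.5 * h * force_cheap(q)
        q = q + h * p
        p = p + 0.5 * h * force_cheap(q)
    p = p + 0.5 * dt * force_expensive(q)      # outer half-kick
    return q, p
```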

  18. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    Directory of Open Access Journals (Sweden)

    Zhenghao Xi

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
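
    The core routine is the queue-based Bellman-Ford variant known as the Shortest Path Faster Algorithm; in the tracking formulation it is applied to the (possibly negative-weight) arcs of the association flow network. A self-contained sketch:

```python
from collections import deque

def spfa(n, edges, source):
    """Shortest Path Faster Algorithm (queue-based Bellman-Ford).

    n: number of nodes; edges: list of (u, v, weight) with negative weights
    allowed but no negative cycles; returns shortest distances from source.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [float("inf")] * n
    in_queue = [False] * n
    dist[source] = 0.0
    q = deque([source])
    in_queue[source] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:        # relax and re-enqueue
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    q.append(v)
                    in_queue[v] = True
    return dist
```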

  19. Detecting free-living steps and walking bouts: validating an algorithm for macro gait analysis.

    Science.gov (United States)

    Hickey, Aodhán; Del Din, Silvia; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems; a wearable accelerometer on the lower-back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho ≥ 0.941) and absolute (ICC(2,1) ≥ 0.975) agreement with no presence of bias were identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho ≥ 0.909) and absolute agreement (ICC(2,1) ≥ 0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower-back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.

  20. Through-Wall Multiple Targets Vital Signs Tracking Based on VMD Algorithm

    Directory of Open Access Journals (Sweden)

    Jiaming Yan

    2016-08-01

    Targets located at the same distance are easily neglected in most through-wall multiple-target detection applications that use a single-input single-output (SISO) ultra-wideband (UWB) radar system. In this paper, a novel multiple targets vital signs tracking algorithm for through-wall detection using SISO UWB radar has been proposed. Taking advantage of the high-resolution decomposition of the Variational Mode Decomposition (VMD) based algorithm, the respiration signals of different targets can be decomposed into different sub-signals, and then we can track the time-varying respiration signals accurately even when human targets are located at the same distance. Intensive evaluation has been conducted to show the effectiveness of our scheme with a 0.15 m thick concrete brick wall. Constant, piecewise-constant and time-varying vital signs could be separated and tracked successfully with the proposed VMD based algorithm for two targets, and even up to three targets. For multiple-target vital sign tracking issues such as urban search and rescue missions, our algorithm has superior capability in most detection applications.

  1. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    Science.gov (United States)

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
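
    A minimal 1D split-step Fourier beam-propagation sketch with a super-Gaussian absorbing boundary layer is given below. The step-size rule here (shrink the step when significant power approaches the window edge) is a crude stand-in for the paper's shape-based adjustment, and the absorber profile and thresholds are assumptions.

```python
import numpy as np

def bpm_propagate(E0, dx, wavelength, z_total, dz0=1e-4, absorber_width=0.1):
    """1D Fourier beam propagation with an absorbing boundary layer."""
    E = E0.astype(complex).copy()
    n = len(E)
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    x = (np.arange(n) - n / 2) * dx
    half = n * dx / 2
    # super-Gaussian absorber: ~1 in the interior, strong damping at the edges
    absorber = np.exp(-((np.abs(x) / half) / (1 - absorber_width)) ** 20)
    z = 0.0
    while z < z_total:
        edge = max(1, int(0.1 * n))
        edge_power = (np.abs(E[:edge]) ** 2).sum() + (np.abs(E[-edge:]) ** 2).sum()
        frac = edge_power / ((np.abs(E) ** 2).sum() + 1e-30)
        dz = dz0 * (0.1 if frac > 1e-3 else 1.0)   # adapt the step to the beam shape
        dz = min(dz, z_total - z)
        E *= absorber                              # absorb before the spectral step
        E = np.fft.ifft(np.fft.fft(E) * np.exp(-1j * kx ** 2 * dz / (2 * k0)))
        z += dz
    return E
```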

  2. Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix

    Directory of Open Access Journals (Sweden)

    Zhengyan Zhang

    2018-03-01

    In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar.

  3. Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix.

    Science.gov (United States)

    Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo

    2018-03-07

    In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar.

  4. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    Science.gov (United States)

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when the graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to the increase of data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Last, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
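
    For context, the generic graph-regularized NMF multiplicative updates that this class of algorithms builds on look as follows (X ≈ UVᵀ with an affinity graph W on the samples). This is the standard recipe only; the paper's improved cost function modifies it, so treat the sketch as a baseline.

```python
import numpy as np

def gnmf(X, W, k, lam=1.0, iters=200, eps=1e-9):
    """Graph-regularized NMF with multiplicative updates.

    X: (m, n) nonnegative data matrix (n samples as columns),
    W: (n, n) nonnegative sample-affinity matrix, k: factorization rank.
    Minimizes ||X - U V^T||_F^2 + lam * tr(V^T L V) with L = D - W.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```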

  5. Genomic multiple sequence alignments: refinement using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2005-08-01

    Background: Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To improve the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results: We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as to refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that initially had been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only

  6. A matrix-free, implicit, incompressible fractional-step algorithm for fluid–structure interaction applications

    CSIR Research Space (South Africa)

    Oxtoby, Oliver F

    2012-05-01

    In this paper we detail a fast, fully-coupled, partitioned fluid–structure interaction (FSI) scheme. For the incompressible fluid, new fractional-step algorithms are proposed which make possible the fully implicit, but matrix-free, parallel solution...

  7. Somatic embryogenesis and in-vitro regeneration of rice (Oryza sativa L.) cultivars under one-step and multiple-step salinity stresses

    DEFF Research Database (Denmark)

    Khattak, Mohammad S. K.; Abiri, Rambod; Valdiani, Alireza

    2017-01-01

    The present study aimed to examine the effect of one-step and multiple-step salinity stress on the somatic embryogenesis of rice cultivars within the solid and liquid (cell suspension) culture media conditions. Five rice cultivars, including Puteh Perak, Mahsuri, Basmati-370, Nona Bokra and Khari......, and significant morphological changes were observed. In contrast, the multiple-step NaCl treatment of the calli and cell suspensions led to higher growth of the cultures in the presence of NaCl compared to the controls. The solid MS media, containing 3 μM IAA and 40 μM Kinetin performed as the best media...

  8. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Science.gov (United States)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation was proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and GF(2^8) multiplication, which are necessary to correctly implement AES [1]. This method can be applied on processors with word length 32 or above, on FPGAs and elsewhere, and correspondingly it can be implemented in VHDL, Verilog, VB and other languages.
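
    The GF(2^8) arithmetic is exactly what such tables remove from the run time. A minimal sketch of generating multiplication tables is shown below; the paper's five tables additionally fold in the S-box, which is omitted here.

```python
def xtime(a):
    """Multiply by x (i.e. by 2) in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def gf_mul(a, b):
    """Russian-peasant multiplication in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = xtime(a)
        b >>= 1
    return r

# Precomputed tables for the MixColumns constants: a table lookup replaces
# the field multiplication at run time, the essence of table-driven AES.
MUL2 = [gf_mul(2, x) for x in range(256)]
MUL3 = [gf_mul(3, x) for x in range(256)]

assert gf_mul(0x57, 0x83) == 0xC1   # worked example from the AES specification
```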

  9. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Science.gov (United States)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for removal of random noise, like muscle artifact, from the ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size has been achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses the symmetry property of the signal to represent the signal in the frequency domain with a smaller number of frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented for an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform domain LMS (TLMS) algorithm, both in the presence of white/colored observation noise. The reduction in convergence error achieved by the new algorithm with desired signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than the traditional adaptive filter using the LMS algorithm in the aspect of retaining the geometrical characteristics of the ECG signal.
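
    The distinguishing feature of Griffiths' algorithm is that the update uses an a priori cross-correlation vector p = E[d(n)x(n)] instead of the noisy instantaneous error, which is what makes it robust when the desired signal is buried in noise. A minimal time-domain sketch with a fixed step size follows; the DCT-domain transformation and the variable-step rule of the paper are omitted.

```python
import numpy as np

def griffiths_lms(x, p, order=16, mu=0.01):
    """Griffiths' LMS sketch.

    x: input samples; p: length-`order` estimate of the cross-correlation
    vector E[d(n) x(n)] between the desired signal and the input regressor.
    """
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]        # regressor, most recent sample first
        y = w @ u
        w += mu * (p - u * y)           # Griffiths' gradient estimate
    return w
```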

  10. The Effects of Multiple-Step and Single-Step Directions on Fourth and Fifth Grade Students' Grammar Assessment Performance

    Science.gov (United States)

    Mazerik, Matthew B.

    2006-01-01

    The mean scores of English Language Learners (ELL) and English Only (EO) students in 4th and 5th grade (N = 110), across the teacher-administered Grammar Skills Test, were examined for differences in participants' scores on assessments containing single-step directions and assessments containing multiple-step directions. The results indicated no…

  11. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    Science.gov (United States)

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow using four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engines, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there are a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values and the final claims in the research manuscript. Even though protein

  12. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.
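
    The rigid initialization rests on classic point-to-point ICP, alternating nearest-neighbour matching with the SVD-based (Kabsch) least-squares rigid fit. The sketch below shows only that core; the conditional matching and auto-labeling that define TACICP are not reproduced here.

```python
import numpy as np

def rigid_icp(src, dst, iters=50):
    """Point-to-point ICP: estimate rotation R and translation t aligning
    src (n, 3) to dst (m, 3)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # nearest neighbour in dst for every transformed source point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[np.argmin(d2, axis=1)]
        # Kabsch: closed-form least-squares rigid fit src -> matched
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                    # reflection-safe rotation
        t = mu_m - R @ mu_s
    return R, t
```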

  13. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Science.gov (United States)

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of the events and have better accuracy of localization compared with other algorithms. PMID:22163972

  14. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Directory of Open Access Journals (Sweden)

    Jian Wan

    2011-06-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of the events and have better accuracy of localization compared with other algorithms.

  15. On using the Multiple Signal Classification algorithm to study microbaroms

    Science.gov (United States)

    Marcillo, O. E.; Blom, P. S.; Euler, G. G.

    2016-12-01

    Multiple Signal Classification (MUSIC) (Schmidt, 1986) is a well-known high-resolution algorithm used in array processing for parameter estimation. We report on the application of MUSIC to infrasonic array data in a study of the structure of microbaroms. Microbaroms can be globally observed and display energy centered around 0.2 Hz. Microbaroms are an infrasonic signal generated by the non-linear interaction of ocean surface waves that radiate into the ocean and atmosphere as well as the solid earth in the form of microseisms. Microbaroms sources are dynamic and, in many cases, distributed in space and moving in time. We assume that the microbarom energy detected by an infrasonic array is the result of multiple sources (with different back-azimuths) in the same bandwidth and apply the MUSIC algorithm accordingly to recover the back-azimuth and trace velocity of the individual components. Preliminary results show that the multiple component assumption in MUSIC allows one to resolve the fine structure in the microbarom band that can be related to multiple ocean surface phenomena.
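
    The subspace mechanics are compact enough to sketch. The version below is for a narrowband uniform linear array and scans back-azimuth only; the microbarom study applies the same eigendecomposition machinery to its own array geometry and also recovers trace velocity.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, scan=None):
    """Narrowband MUSIC for a uniform linear array.

    X: (n_sensors, n_snapshots) complex snapshot matrix; d: element spacing
    in wavelengths. Returns (angles in degrees, pseudospectrum); peaks of
    the pseudospectrum give the arrival-angle estimates.
    """
    if scan is None:
        scan = np.linspace(-90, 90, 361)
    m = X.shape[0]
    R = (X @ X.conj().T) / X.shape[1]          # sample covariance
    eigval, eigvec = np.linalg.eigh(R)         # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]            # noise subspace
    P = np.empty(len(scan))
    for i, theta in enumerate(scan):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.radians(theta)))
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return scan, P
```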

  16. Murasaki: a fast, parallelizable algorithm to find anchors from multiple genomes.

    Directory of Open Access Journals (Sweden)

    Kris Popendorf

    BACKGROUND: With the number of available genome sequences increasing rapidly, the magnitude of sequence data required for multiple-genome analyses is a challenging problem. When large-scale rearrangements break the collinearity of gene orders among genomes, genome comparison algorithms must first identify sets of short well-conserved sequences present in each genome, termed anchors. Previously, anchor identification among multiple genomes has been achieved using pairwise alignment tools like BLASTZ through progressive alignment tools like TBA, but the computational requirements for sequence comparisons of multiple genomes quickly become a limiting factor as the number and scale of genomes grows. METHODOLOGY/PRINCIPAL FINDINGS: Our algorithm, named Murasaki, makes it possible to identify anchors within multiple large sequences on the scale of several hundred megabases in a few minutes using a single CPU. Two advanced features of Murasaki are (1) adaptive hash function generation, which enables efficient use of arbitrary mismatch patterns (spaced seeds) and therefore the comparison of multiple mammalian genomes in a practical amount of computation time, and (2) parallelizable execution that decreases the required wall-clock and CPU times. Murasaki can perform a sensitive anchoring of eight mammalian genomes (human, chimp, rhesus, orangutan, mouse, rat, dog, and cow) in 21 hours CPU time (42 minutes wall time). This is the first single-pass in-core anchoring of multiple mammalian genomes. We evaluated Murasaki by comparing it with the genome alignment programs BLASTZ and TBA. We show that Murasaki can anchor multiple genomes in near linear time, compared to the quadratic time requirements of BLASTZ and TBA, while improving overall accuracy. CONCLUSIONS/SIGNIFICANCE: Murasaki provides an open source platform to take advantage of long patterns, cluster computing, and novel hash algorithms to produce accurate anchors across multiple genomes with

  17. The Non-Symmetric s-Step Lanczos Algorithm: Derivation of Efficient Recurrences and Synchronization-Reducing Variants of BiCG and QMR

    Directory of Open Access Journals (Sweden)

    Feuerriegel Stefan

    2015-12-01

    The Lanczos algorithm is among the most frequently used iterative techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. At the same time, it serves as a building block within biconjugate gradient (BiCG) and quasi-minimal residual (QMR) methods for solving large sparse non-symmetric systems of linear equations. It is well known that, when implemented on distributed-memory computers with a huge number of processes, the synchronization time spent on computing dot products increasingly limits the parallel scalability. Therefore, we propose synchronization-reducing variants of the Lanczos, as well as BiCG and QMR methods, in an attempt to mitigate these negative performance effects. These so-called s-step algorithms are based on grouping dot products for joint execution and replacing time-consuming matrix operations by efficient vector recurrences. The purpose of this paper is to provide a rigorous derivation of the recurrences for the s-step Lanczos algorithm, introduce s-step BiCG and QMR variants, and compare the parallel performance of these new s-step versions with previous algorithms.

  18. Convergence Analysis for the Multiplicative Schwarz Preconditioned Inexact Newton Algorithm

    KAUST Repository

    Liu, Lulu

    2016-10-26

    The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm, based on decomposition by field type rather than by subdomain, was recently introduced to improve the convergence of systems with unbalanced nonlinearities. This paper provides a convergence analysis of the MSPIN algorithm. Under reasonable assumptions, it is shown that MSPIN is locally convergent, and desired superlinear or even quadratic convergence can be obtained when the forcing terms are picked suitably.

  19. Convergence Analysis for the Multiplicative Schwarz Preconditioned Inexact Newton Algorithm

    KAUST Repository

    Liu, Lulu; Keyes, David E.

    2016-01-01

    The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm, based on decomposition by field type rather than by subdomain, was recently introduced to improve the convergence of systems with unbalanced nonlinearities. This paper provides a convergence analysis of the MSPIN algorithm. Under reasonable assumptions, it is shown that MSPIN is locally convergent, and desired superlinear or even quadratic convergence can be obtained when the forcing terms are picked suitably.

  20. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    Science.gov (United States)

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming of the illumination of data arrays, any complex logic operations of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.

  1. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major alternative renewable energy options owing to its abundance and accessibility. Because of the intermittent nature of sunlight, Maximum Power Point Tracking (MPPT) techniques are in high demand when a Photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at realistic operating circumstances. First, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, exploiting input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and detailed analysis of sharp insolation changes, low insolation conditions and continuous insolation variation.
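
    A single iteration of an adaptive-step P&O loop can be sketched as below. The specific step-scaling rule (step proportional to |dP/dV|, clamped between bounds) is an illustrative choice in the same spirit as the thesis; the duty-ratio limits and gains are assumptions.

```python
def adaptive_po_step(v, p, state, k_scale=0.05, d_min=0.005, d_max=0.05):
    """One perturb-and-observe iteration on the converter duty ratio.

    v, p: measured PV voltage and power this cycle.
    state: (prev_v, prev_p, duty, direction) carried between cycles.
    """
    prev_v, prev_p, duty, direction = state
    dp, dv = p - prev_p, v - prev_v
    if dp < 0:
        direction = -direction               # power fell: reverse perturbation
    slope = abs(dp / dv) if dv else 0.0      # |dP/dV| near the MPP shrinks
    step = min(max(k_scale * slope, d_min), d_max)
    duty = min(max(duty + direction * step, 0.05), 0.95)
    return duty, (v, p, duty, direction)

# usage: each control cycle, measure v and p, then
#   duty, state = adaptive_po_step(v, p, state)
```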

  2. Multiple Walkers in the Wang-Landau Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Brown, G

    2005-12-28

    The mean cost for converging an estimated density of states using the Wang-Landau algorithm is measured for the Ising and Heisenberg models. The cost increases in a power-law fashion with the number of spins, with an exponent near 3 for one-dimensional models, and closer to 2.4 for two-dimensional models. The effect of multiple, simultaneous walkers on the cost is also measured. For the one-dimensional Ising model the cost can increase with the number of walkers for large systems. For both the Ising and Heisenberg models in two-dimensions, no adverse impact on the cost is observed. Thus multiple walkers is a strategy that should scale well in a parallel computing environment for many models of magnetic materials.
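
    For concreteness, a single-walker Wang-Landau sketch for the 1D Ising ring is given below; the multiple-walker strategy measured here amounts to running several such walkers over the same (or windowed) energy range and combining their ln g estimates.

```python
import numpy as np

def wang_landau_ising1d(L=16, f_final=1e-6, flatness=0.8, seed=0):
    """Estimate the density of states g(E) of a 1D Ising ring of L spins
    (L even). Energies take the values -L, -L+4, ..., +L."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=L)
    energy = int(-np.sum(spins * np.roll(spins, 1)))
    n_bins = L // 2 + 1
    idx = lambda e: (e + L) // 4
    ln_g = np.zeros(n_bins)
    ln_f = 1.0
    while ln_f > f_final:
        hist = np.zeros(n_bins)
        while True:                              # sweep until histogram is flat
            for _ in range(1000 * L):
                i = int(rng.integers(L))
                dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % L])
                e_new = energy + dE
                # accept with min(1, g(old)/g(new))
                if np.log(rng.random()) < ln_g[idx(energy)] - ln_g[idx(e_new)]:
                    spins[i] *= -1
                    energy = e_new
                ln_g[idx(energy)] += ln_f
                hist[idx(energy)] += 1
            if hist.min() > flatness * hist.mean():
                break
        ln_f /= 2.0                              # refine the modification factor
    return ln_g - ln_g[0]                        # ln g relative to the ground state
```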

  3. A Novel Algorithm for Efficient Downlink Packet Scheduling for Multiple-Component-Carrier Cellular Systems

    Directory of Open Access Journals (Sweden)

    Yao-Liang Chung

    2016-11-01

    The simultaneous aggregation of multiple component carriers (CCs) for use by a base station constitutes one of the more promising strategies for providing substantially enhanced bandwidths for packet transmissions in 4th and 5th generation cellular systems. To the best of our knowledge, however, few previous studies have undertaken a thorough investigation of various performance aspects of the use of a simple yet effective packet scheduling algorithm in which multiple CCs are aggregated for transmission in such systems. Consequently, the present study presents an efficient packet scheduling algorithm designed on the basis of the proportional fair criterion for use in multiple-CC systems for downlink transmission. The proposed algorithm includes a focus on providing simultaneous transmission support for both real-time (RT) and non-RT traffic. This algorithm can, when applied with sufficiently efficient designs, provide adequate utilization of spectrum resources for the purposes of transmissions, while also improving energy efficiency to some extent. According to simulation results, the performance of the proposed algorithm in terms of system throughput, mean delay, and fairness constitutes a substantial improvement over that of an algorithm in which the CCs are used independently instead of being aggregated.
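
    The per-carrier decision reduces to the familiar proportional-fair metric. A minimal sketch with one schedulable resource per component carrier per interval is shown below; real LTE-A/5G scheduling works per resource block and adds the RT/non-RT differentiation this paper focuses on.

```python
import numpy as np

def pf_schedule(rates, avg_thr, tc=100.0):
    """One interval of proportional-fair scheduling with carrier aggregation.

    rates[u][c]: instantaneous achievable rate of user u on component
    carrier c; avg_thr[u]: exponentially smoothed throughput of user u.
    """
    rates = np.asarray(rates, float)
    n_users, n_ccs = rates.shape
    alloc = np.zeros(n_users)
    for c in range(n_ccs):
        metric = rates[:, c] / np.maximum(avg_thr, 1e-9)
        u = int(np.argmax(metric))       # PF winner on this carrier
        alloc[u] += rates[u, c]
    # smoothed-throughput update: everyone decays, winners add served rate
    avg_thr = (1 - 1 / tc) * avg_thr + (1 / tc) * alloc
    return alloc, avg_thr
```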

  4. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    Science.gov (United States)

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
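
    Step one, the correlation-based pre-alignment, is a few lines with FFTs; the sketch below finds the integer offset maximizing the circular cross-correlation of two chromatographic profiles in O(n log n). The dynamic-programming path search of step two is not shown.

```python
import numpy as np

def profile_offset(reference, sample):
    """Temporal offset (in scans) that best aligns sample to reference,
    via FFT-based circular cross-correlation of the two profiles."""
    n = len(reference)
    R = np.fft.rfft(reference - np.mean(reference))
    S = np.fft.rfft(sample - np.mean(sample))
    xcorr = np.fft.irfft(R * np.conj(S), n)
    shift = int(np.argmax(xcorr))
    if shift > n // 2:
        shift -= n                       # interpret large lags as negative offsets
    return shift
```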

  5. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin

    2012-12-01

    Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommendation, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit exactly for the problem, and the tests justify that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the Constrained NMF improves the recall rate by 3% compared to SVD50 and by 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.

  6. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters and do not exploit massive parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm, but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.

  7. Application of multiple tabu search algorithm to solve dynamic economic dispatch considering generator constraints

    International Nuclear Information System (INIS)

    Pothiya, Saravuth; Ngamroo, Issarachai; Kongprawechnon, Waree

    2008-01-01

    This paper presents a new optimization technique based on a multiple tabu search algorithm (MTS) to solve the dynamic economic dispatch (ED) problem with generator constraints. In the constrained dynamic ED problem, the load demand and spinning reserve capacity, as well as practical operational constraints of generators such as ramp rate limits and prohibited operating zones, are taken into consideration. The MTS algorithm introduces additional mechanisms such as initialization, adaptive search, multiple searches, crossover and a restarting process. To show its efficiency, the MTS algorithm is applied to solve constrained dynamic ED problems for power systems with 6 and 15 units. The results obtained from the MTS algorithm are compared to those achieved by conventional approaches, such as simulated annealing (SA), the genetic algorithm (GA), the tabu search (TS) algorithm and particle swarm optimization (PSO). The experimental results show that the proposed MTS algorithm is able to obtain higher-quality solutions efficiently and with less computational time than the conventional approaches.
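    To make the search skeleton concrete, here is a bare-bones single tabu search applied to a toy two-unit dispatch; the MTS of the paper layers multiple parallel searches, adaptive search, crossover and restarting on top of such a loop. The cost curve, limits and tabu scheme are invented for illustration.

    ```python
    def tabu_search(cost, neighbours, x0, iters=300, tenure=15):
        """Minimal tabu search: move to the best admissible neighbour,
        forbidding recently visited states for `tenure` iterations."""
        x, best, best_cost = x0, x0, cost(x0)
        tabu = {}                                   # state key -> expiry
        for it in range(iters):
            admissible = []
            for key, y in neighbours(x):
                c = cost(y)
                # Tabu states are skipped unless they beat the best (aspiration).
                if tabu.get(key, -1) <= it or c < best_cost:
                    admissible.append((c, key, y))
            if not admissible:
                continue
            c, key, x = min(admissible)
            tabu[key] = it + tenure
            if c < best_cost:
                best, best_cost = x, c
        return best, best_cost

    # Toy dispatch: two units share a 300 MW demand; the variable is P1.
    def fuel_cost(p1):
        p2 = 300.0 - p1
        if not (50.0 <= p1 <= 250.0 and 50.0 <= p2 <= 250.0):
            return 1e9                              # generator limits violated
        return 0.004 * p1**2 + 2.0 * p1 + 0.006 * p2**2 + 1.5 * p2

    def moves(p1):
        for step in (-25.0, -5.0, -1.0, 1.0, 5.0, 25.0):
            yield round(p1 + step, 3), p1 + step    # key = rounded new state

    print(tabu_search(fuel_cost, moves, 60.0))      # -> near P1 = 155 MW
    ```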

  8. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each has advantages and disadvantages compared with the others. One notion is that the advantages of the different methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector based on feature similarity is proposed to define the degree of complementarity and synergy. This index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the differing degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of NMF initialization. Finally, the fused images of the different algorithms are integrated using NMF because of its excellent performance in fusing data with independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.

  9. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
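    The paper's isokinetic integrators build on the standard RESPA splitting; for orientation, here is a plain r-RESPA step on a toy system (unit mass, one degree of freedom). This is the conventional scheme whose resonance barrier the cited methods remove, not the authors' algorithm, and all parameters are illustrative assumptions.

    ```python
    import numpy as np

    def respa_step(x, v, f_fast, f_slow, dt, n_inner):
        """One r-RESPA multiple time-step update (unit mass): the expensive
        slow force is evaluated once per outer step dt, the cheap fast
        force n_inner times with the small step dt / n_inner."""
        v += 0.5 * dt * f_slow(x)            # half kick from the slow force
        h = dt / n_inner
        for _ in range(n_inner):             # velocity-Verlet inner loop
            v += 0.5 * h * f_fast(x)
            x += h * v
            v += 0.5 * h * f_fast(x)
        v += 0.5 * dt * f_slow(x)            # closing half kick
        return x, v

    # Toy system: stiff spring (fast force) plus weak spring (slow force).
    f_fast = lambda x: -100.0 * x
    f_slow = lambda x: -1.0 * x
    x, v = np.array([1.0]), np.array([0.0])
    for _ in range(1000):
        x, v = respa_step(x, v, f_fast, f_slow, dt=0.05, n_inner=10)
    print(x, v)
    ```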

  10. Electrohydraulic linear actuator with two stepping motors controlled by overshoot-free algorithm

    Science.gov (United States)

    Milecki, Andrzej; Ortmann, Jarosław

    2017-11-01

    The paper describes electrohydraulic spool valves with stepping motors used as electromechanical transducers. A new concept of a proportional valve in which two stepping motors work differentially is introduced. Such a valve changes the fluid flow proportionally to the sum of, or difference between, the motors' step counts. The valve design and the principle of its operation are described. Theoretical equations and simulation models are proposed for all elements of the drive, i.e., the stepping motor units, the hydraulic valve and the cylinder. The main features of the valve and of the drive operation are described; some specific problem areas concerning the nature of stepping motors and their differential work in the valve are also considered. A non-linear model of the whole servo drive is proposed and used further for simulation investigations. Initial simulation investigations of the drive with the new valve showed a significant overshoot in the drive step response, which is not allowed in a positioning process. Therefore, additional effort was spent on reducing the overshoot and, in consequence, the settling time. A special predictive algorithm is proposed to this end. The proposed control method is then tested and further improved in simulations. Finally, the design is implemented in a real system and the whole servo drive is tested. The investigation results presented in this paper show an overshoot-free positioning process that enables high positioning accuracy.

  11. Modeling and Design of MPPT Controller Using Stepped P&O Algorithm in Solar Photovoltaic System

    OpenAIRE

    R. Prakash; B. Meenakshipriya; R. Kumaravelan

    2014-01-01

    This paper presents modeling and simulation of a Grid Connected Photovoltaic (PV) system using an improved mathematical model. The model is used to study the effects of different parameter variations on the PV array, including operating temperature and solar irradiation level. In this paper, a stepped P&O algorithm is proposed for MPPT control. This algorithm identifies the suitable duty ratio at which the DC-DC converter should be operated to maximize the power output. Photovoltaic array with pro...
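    Since the abstract is truncated before the algorithm details, the fragment below shows only the generic perturb-and-observe decision that a "stepped" variant would refine by adapting the perturbation size (fixed here). The toy P-V curve and all names are invented.

    ```python
    def perturb_and_observe(v, p, v_prev, p_prev, step):
        """One P&O decision: keep perturbing the operating voltage in the
        direction that increased power, reverse otherwise."""
        if p > p_prev:                       # last perturbation helped
            direction = 1 if v > v_prev else -1
        else:                                # it hurt: go back the other way
            direction = -1 if v > v_prev else 1
        return v + direction * step          # new voltage (or duty) reference

    pv_power = lambda v: 60.0 - (v - 17.0) ** 2   # toy P-V curve, MPP at 17 V
    v_prev, v = 10.0, 10.5
    for _ in range(50):
        v_prev, v = v, perturb_and_observe(v, pv_power(v), v_prev,
                                           pv_power(v_prev), 0.2)
    print(v)                                  # oscillates around the 17 V MPP
    ```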

  12. A NEW HEURISTIC ALGORITHM FOR MULTIPLE TRAVELING SALESMAN PROBLEM

    Directory of Open Access Journals (Sweden)

    F. NURIYEVA

    2017-06-01

    Full Text Available The Multiple Traveling Salesman Problem (mTSP) is a combinatorial optimization problem in the NP-hard class. The mTSP aims to find the minimum cost of traveling a given set of cities by assigning each of them to one of m salesmen so as to create m tours. This paper presents a new heuristic algorithm based on the shortest path algorithm to find a solution to the mTSP. The proposed method has been programmed in the C language and its performance analysis has been carried out on library instances. The computational results show the efficiency of this method.

  13. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.

  14. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    Science.gov (United States)

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to extend the limits of system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to the CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.

  15. Near-optimal power allocation with PSO algorithm for MIMO cognitive networks using multiple AF two-way relays

    KAUST Repository

    Alsharoa, Ahmad M.

    2014-06-01

    In this paper, the problem of power allocation for a multiple-input multiple-output two-way system is investigated in an underlay Cognitive Radio (CR) setup. In the CR underlay mode, secondary users are allowed to exploit the spectrum allocated to primary users in an opportunistic manner by respecting a tolerated interference temperature limit. The secondary networks employ an amplify-and-forward two-way relaying technique in order to maximize the sum rate under power budget and interference constraints. In this context, we formulate an optimization problem that is solved in two steps. First, we derive a closed-form expression for the optimal power allocated to the terminals. Then, we employ a strong optimization tool based on the particle swarm optimization algorithm to find the power allocated to the secondary relays. Simulation results demonstrate the efficiency of the proposed solution and analyze the impact of some system parameters on the achieved performance. © 2014 IEEE.
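    The paper's second step uses particle swarm optimization for the relay powers; its exact objective is not reproduced here. Below is a generic PSO minimizer over a box constraint (which could stand in for a power budget); all parameter values are common defaults, not the authors' settings.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
            w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer with inertia and the usual
        cognitive (personal best) and social (global best) pulls."""
        rng = np.random.default_rng(0)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.apply_along_axis(objective, 1, x)
        g = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)             # stay inside the box
            val = np.apply_along_axis(objective, 1, x)
            better = val < pbest_val
            pbest[better], pbest_val[better] = x[better], val[better]
            g = pbest[pbest_val.argmin()].copy()
        return g

    print(pso(lambda p: np.sum(p ** 2), dim=4))    # -> close to the origin
    ```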

  16. Improvement of arm solutions via step width self-tuning algorithm

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1993-09-01

    This paper is concerned with the significant numerical problems encountered in solving the manipulator inverse kinematics. Essential difficulties that occur in linearized calculations, such as dependence on the initial guess or a narrow search region, are overcome with great success by means of a step width self-tuning algorithm. In a practical optimization model based on the reduction of dimensionality and linearized approximation, it is shown that the desired arm solutions are found at a faster rate over a wider application range. Also, the capability of finding solutions via a traditional Newton method is enhanced to a large extent by the combined application of the proposed idea and the simplex method. (author)
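    The abstract does not spell the scheme out; one plausible reading of "step width self-tuning" is a damped Newton iteration that halves the step until the residual decreases. The sketch below applies that idea to a toy two-link arm inverse-kinematics problem; the link lengths, target point and halving rule are assumptions, not the author's method.

    ```python
    import numpy as np

    def damped_newton(f, jac, x0, iters=100, tol=1e-10):
        """Newton iteration with self-tuning step width: the full step is
        halved until the residual norm actually decreases, which widens
        the region of convergence."""
        x = np.asarray(x0, float)
        for _ in range(iters):
            r = f(x)
            if np.linalg.norm(r) < tol:
                break
            dx = np.linalg.solve(jac(x), -r)       # full Newton direction
            step = 1.0
            while step > 1e-4 and np.linalg.norm(f(x + step * dx)) >= np.linalg.norm(r):
                step *= 0.5                        # shrink until improvement
            x = x + step * dx
        return x

    # Toy 2-link arm: find joint angles q reaching the point (1.2, 0.6).
    l1, l2, target = 1.0, 1.0, np.array([1.2, 0.6])
    f = lambda q: np.array([l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),
                            l1*np.sin(q[0]) + l2*np.sin(q[0]+q[1])]) - target
    jac = lambda q: np.array([
        [-l1*np.sin(q[0]) - l2*np.sin(q[0]+q[1]), -l2*np.sin(q[0]+q[1])],
        [ l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),  l2*np.cos(q[0]+q[1])]])
    print(damped_newton(f, jac, [0.5, 0.5]))       # joint angles in radians
    ```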

  17. Double evolutional artificial bee colony algorithm for multiple traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Xue Ming Hao

    2016-01-01

    Full Text Available The double evolutional artificial bee colony algorithm (DEABC) is proposed for solving the single-depot multiple traveling salesman problem (MTSP). The proposed DEABC algorithm, which takes advantage of the strength of upgraded operators, is characterized by its guidance in exploitation search and its diversity in exploration search. The double evolutional process for exploitation search is composed of two phases of half-stochastic optimal search, and the diversity-generating operator for exploration search is applied to solutions that cannot be improved after a limited number of attempts. The computational results demonstrate the superiority of our algorithm over previous state-of-the-art methods.

  18. A comparison of step-and-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)

    2004-07-21

    The performances of three recently published leaf sequencing algorithms for step-and-shoot intensity-modulated radiation therapy delivery that eliminate tongue-and-groove underdosage are evaluated. Proofs are given to show that the algorithm of Que et al (2004 Phys. Med. Biol. 49 399-405) generates leaf sequences free of tongue-and-groove underdosage and interdigitation. However, the total beam-on times could be up to n times those of the sequences generated by the algorithms of Kamath et al (2004 Phys. Med. Biol. 49 N7-N19), which are optimal in beam-on time for unidirectional leaf movement under the same constraints, where n is the total number of involved leaf pairs. Using 19 clinical fluence matrices and 100 000 randomly generated 15 x 15 matrices, the average monitor units and number of segments of the leaf sequences generated using the algorithm of Que et al are about two to four times those generated by the algorithm of Kamath et al.

  19. A comparison of step-and-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Ranka, Sanjay; Li, Jonathan; Palta, Jatinder

    2004-01-01

    The performances of three recently published leaf sequencing algorithms for step-and-shoot intensity-modulated radiation therapy delivery that eliminate tongue-and-groove underdosage are evaluated. Proofs are given to show that the algorithm of Que et al (2004 Phys. Med. Biol. 49 399-405) generates leaf sequences free of tongue-and-groove underdosage and interdigitation. However, the total beam-on times could be up to n times those of the sequences generated by the algorithms of Kamath et al (2004 Phys. Med. Biol. 49 N7-N19), which are optimal in beam-on time for unidirectional leaf movement under the same constraints, where n is the total number of involved leaf pairs. Using 19 clinical fluence matrices and 100 000 randomly generated 15 x 15 matrices, the average monitor units and number of segments of the leaf sequences generated using the algorithm of Que et al are about two to four times those generated by the algorithm of Kamath et al.

  20. Optimization of Multiple Traveling Salesman Problem Based on Simulated Annealing Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xu Mingji

    2017-01-01

    Full Text Available Hierarchical genetic algorithms are very effective for solving multi-variable optimization problems. This thesis analyzes both the advantages and disadvantages of the hierarchical genetic algorithm and puts forward an improved simulated annealing genetic algorithm. The new algorithm is applied to solve the multiple traveling salesman problem and improves the quality of the solution. First, it improves the design of the hierarchical chromosome structure by removing redundancy and suggests a suffix design for the chromosomes. Second, concerning the premature convergence problems of genetic algorithms, it proposes a self-identifying crossover operator and mutation. Third, to address the weak local search ability of genetic algorithms, it stretches the fitness by mixing the genetic algorithm with a simulated annealing algorithm. Fourth, it simulates the problem of N traveling salesmen and M cities to verify feasibility. The simulation and calculation show that this improved algorithm converges quickly to a good global solution, which means the algorithm is encouraging for practical use.
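    As a minimal illustration of the simulated-annealing ingredient mixed into the genetic search, here is the Metropolis acceptance rule with geometric cooling, exercised on a trivial one-dimensional minimization; the constants and schedule are arbitrary and the GA machinery is omitted.

    ```python
    import math, random

    def anneal_accept(delta, temperature):
        """Metropolis rule: always accept improvements (delta <= 0 for
        minimization); accept deteriorations with probability
        exp(-delta / temperature), so early, hot iterations explore."""
        return delta <= 0 or random.random() < math.exp(-delta / temperature)

    random.seed(0)
    x, temperature = 10.0, 100.0
    for _ in range(500):
        candidate = x + random.uniform(-1.0, 1.0)
        if anneal_accept(candidate**2 - x**2, temperature):
            x = candidate
        temperature *= 0.99                    # geometric cooling schedule
    print(x)                                   # typically close to 0
    ```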

  1. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  2. DOA Estimation of Multiple LFM Sources Using a STFT-based and FBSS-based MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    K. B. Cui

    2017-12-01

    Full Text Available Direction of arrival (DOA) estimation is an important problem in array signal processing. An effective multiple signal classification (MUSIC) method based on the short-time Fourier transform (STFT) and forward/backward spatial smoothing (FBSS) techniques is presented for the DOA estimation problem of multiple time-frequency (t-f) joint LFM sources. Previous work in the area, e.g., the STFT-MUSIC algorithm, cannot resolve sources whose t-f signatures are completely or largely overlapping, because it can only select single-source t-f points. The proposed method constructs the spatial t-f distributions (STFDs) by selecting multiple-source t-f points and uses the FBSS techniques to solve the problem of rank loss. In this way, the STFT-FBSS-MUSIC algorithm can resolve largely or completely t-f joint LFM sources. In addition, the proposed algorithm has fairly low computational complexity when resolving multiple LFM sources, because it reduces the number of eigendecompositions and spectrum searches. The performance of the proposed method is compared with that of existing t-f based MUSIC algorithms through computer simulations, and the results show its good performance.
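    For reference, the core MUSIC spectrum computation is sketched below for a uniform linear array, starting from a covariance matrix assumed to have already been built from selected t-f points and repaired by forward/backward spatial smoothing; the smoothing and STFT point selection themselves are not shown, and the array parameters are invented.

    ```python
    import numpy as np

    def music_spectrum(R, n_sources, d=0.5, n_grid=361):
        """MUSIC pseudospectrum for an m-element half-wavelength-spaced
        uniform linear array, given a full-rank covariance matrix R."""
        m = R.shape[0]
        _, vecs = np.linalg.eigh(R)                # eigenvalues ascending
        En = vecs[:, :m - n_sources]               # noise subspace
        theta = np.linspace(-90.0, 90.0, n_grid)
        a = np.exp(2j * np.pi * d * np.arange(m)[:, None]
                   * np.sin(np.radians(theta)))    # steering vectors
        return theta, 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

    # Two uncorrelated narrowband sources at -20 and 30 degrees, 8 sensors.
    rng = np.random.default_rng(0)
    m, snap = 8, 400
    A = np.exp(2j * np.pi * 0.5 * np.arange(m)[:, None]
               * np.sin(np.radians([-20.0, 30.0])))
    S = rng.standard_normal((2, snap)) + 1j * rng.standard_normal((2, snap))
    N = 0.1 * (rng.standard_normal((m, snap)) + 1j * rng.standard_normal((m, snap)))
    X = A @ S + N
    theta, P = music_spectrum(X @ X.conj().T / snap, n_sources=2)
    print(theta[P.argmax()])                       # -> close to -20 or 30
    ```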

  3. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    Science.gov (United States)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  4. A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems

    Science.gov (United States)

    Chan, Tony; Szeto, Tedd

    1994-03-01

    We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which itself is a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, a situation that causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as to produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.

  5. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours, with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions, named descriptors, of the different regions. With this new Eulerian method, based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with successively three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  6. NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms.

    Directory of Open Access Journals (Sweden)

    Joeri Ruyssinck

    Full Text Available One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network, in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene, and a high feature importance is considered as putative evidence of a regulatory link existing between the two genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that it outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made
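    The subsampling idea that turns any feature-ranking method into an ensemble importance score can be sketched generically, as below; this illustrates the strategy rather than the NIMEFI code, and the toy correlation ranker merely stands in for the regression-based rankers compared in the paper.

    ```python
    import numpy as np

    def ensemble_importance(rank_once, X, y, n_rounds=50, frac=0.8, seed=0):
        """Wrap a feature-ranking function rank_once(X, y) -> one score per
        column into an ensemble: scores on random subsamples are converted
        to ranks and averaged (rankwise aggregation)."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        ranks = np.zeros(p)
        for _ in range(n_rounds):
            idx = rng.choice(n, int(frac * n), replace=False)
            scores = rank_once(X[idx], y[idx])
            ranks += scores.argsort().argsort()    # 0 = least important
        return ranks / n_rounds                    # higher = more important

    # Toy ranker: absolute covariance between each feature and the target.
    corr_ranker = lambda X, y: np.abs((X - X.mean(0)).T @ (y - y.mean()))
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))
    y = 3 * X[:, 2] + rng.standard_normal(200)     # only feature 2 matters
    print(ensemble_importance(corr_ranker, X, y).argmax())   # -> 2
    ```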

  7. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    Science.gov (United States)

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

    The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations are derived and the corresponding linear system of equations is set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear system can be solved with iterative or direct solvers. However, the runtime of direct methods, or of iterative solvers without a suitable preconditioner, increases tremendously; this is why a more sophisticated method is needed to solve linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method that splits the normal matrix of the system into several smaller overlapping submatrices. In each iteration step, it successively solves the linear systems associated with the submatrices obtained from the splitting. This reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations from both the accuracy and the runtime points of view. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low
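    As a small-scale illustration of the method, the sketch below sweeps over overlapping index blocks of a dense symmetric positive definite system, solving each restricted system against the freshest residual; the real application partitions a 14641-unknown normal matrix, whereas all sizes and blocks here are toy assumptions.

    ```python
    import numpy as np

    def multiplicative_schwarz(A, b, blocks, sweeps=50):
        """Multiplicative Schwarz alternating iteration: within each sweep,
        every overlapping block system is solved in turn and immediately
        folded back into the running iterate."""
        x = np.zeros_like(b)
        for _ in range(sweeps):
            for idx in blocks:
                r = b - A @ x                      # residual with current x
                x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        return x

    rng = np.random.default_rng(0)
    n = 40
    G = rng.standard_normal((n, n))
    A = G @ G.T + n * np.eye(n)                    # SPD test matrix
    b = rng.standard_normal(n)
    blocks = [np.arange(0, 25), np.arange(15, 40)] # two overlapping subdomains
    x = multiplicative_schwarz(A, b, blocks)
    print(np.linalg.norm(A @ x - b))               # small residual
    ```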

  8. Reduction rules-based search algorithm for opportunistic replacement strategy of multiple life-limited parts

    Directory of Open Access Journals (Sweden)

    Xuyun FU

    2018-01-01

    Full Text Available The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely existing in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective and that the time consumed by the algorithm is less than 38 s when the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions that are much better than the result of the traditional method, and that it can provide support for determining the LLPs to be replaced when defining the maintenance workscope of an aircraft engine. Therefore, the algorithm is applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.

  9. Multiple stage miniature stepping motor

    International Nuclear Information System (INIS)

    Niven, W.A.; Shikany, S.D.; Shira, M.L.

    1981-01-01

    A stepping motor is described comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor and which are mounted along a common rotor shaft to achieve a considerable reduction in motor size and a minimum diameter, whereby sequential activation of the stages results in successive rotor steps, with the direction determined by the particular activating sequence followed.

  10. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.

  11. New time-saving predictor algorithm for multiple breath washout in adolescents

    DEFF Research Database (Denmark)

    Grønbæk, Jonathan; Hallas, Henrik Wegener; Arianto, Lambang

    2016-01-01

    BACKGROUND: Multiple breath washout (MBW) is an informative but time-consuming test. This study evaluates the uncertainty of a time-saving predictor algorithm in adolescents. METHODS: Adolescents were recruited from the Copenhagen Prospective Study on Asthma in Childhood (COPSAC2000) birth cohort...

  12. Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem.

    Science.gov (United States)

    Zhao, Ying; Shi, Luoyi

    2017-01-01

    This paper introduces a new extragradient-type method to solve the multiple-sets split equality problem (MSSEP). Under suitable conditions, the strong convergence of the algorithm can be established in infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of our algorithm.

  13. A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing

    CERN Document Server

    Annovi, A; The ATLAS collaboration; Castegnaro, A; Gatta, M

    2012-01-01

    We present a fast general-purpose algorithm for high-throughput clustering of data with a two-dimensional organization. The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes the algorithm especially well suited for problems where the data have high density, e.g. in the case of tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of application. In ...

  14. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Directory of Open Access Journals (Sweden)

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high-speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple-model approach for wear-depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple-model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants, and the method is applicable to other data-driven PHM problems.

  15. Robust multiple frequency multiple power localization schemes in the presence of multiple jamming attacks.

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulqader Hussein

    Full Text Available Localization in wireless sensor networks is a vital area that has attracted impressive research interest and that is called upon to expand further with the rise of its applications. As localization gains prominence in wireless sensor networks, it becomes vulnerable to jamming attacks. Jamming attacks disrupt the communication opportunity between sender and receiver and deeply impact the localization process, leading to huge errors in the estimated sensor node positions. Therefore, detection and elimination of the jamming influence are absolutely indispensable. Range-based techniques, especially those based on Received Signal Strength (RSS), face a severe impact from these attacks. This paper proposes algorithms based on Combination Multiple Frequency Multiple Power Localization (C-MFMPL) and Step Function Multiple Frequency Multiple Power Localization (SF-MFMPL). The algorithms have been tested in the presence of multiple types of jamming attacks, including capture-and-replay, random and constant jammers, over a log-normal shadow fading propagation model. In order to overcome the impact of random and constant jammers, the proposed method uses two sets of frequencies shared by the implemented anchor nodes to obtain RSS readings averaged over all transmitted frequencies. In addition, three stages of filters are used to cope with the beacons replayed by the capture-and-replay jammers. In this paper, the localization performance of the proposed algorithms in the ideal case, defined as the absence of jamming attacks, is compared with that in the case of jamming attacks. The main contribution of this paper is the achievement of robust localization performance in the presence of multiple jamming attacks under a log-normal shadow fading environment with different simulation conditions and scenarios.

  16. Magnetoresistance in hybrid organic spin valves at the onset of multiple-step tunneling.

    Science.gov (United States)

    Schoonus, J J H M; Lumens, P G E; Wagemans, W; Kohlhepp, J T; Bobbert, P A; Swagten, H J M; Koopmans, B

    2009-10-02

    By combining experiments with simple model calculations, we obtain new insight into spin transport through hybrid CoFeB/Al2O3(1.5 nm)/tris(8-hydroxyquinoline)aluminium (Alq3)/Co spin valves. We have measured the characteristic changes in the I-V behavior as well as the intrinsic loss of magnetoresistance at the onset of multiple-step tunneling. In the regime of multiple-step tunneling, under the condition of low hopping rates, spin precession in the presence of hyperfine coupling is conjectured to be the relevant source of spin relaxation. A quantitative analysis leads to the prediction of a symmetric magnetoresistance around zero magnetic field in addition to the hysteretic magnetoresistance curves, which are indeed observed in our experiments.

  17. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    Science.gov (United States)

    Chacón, L.; Chen, G.

    2016-07-01

    We extend a recently proposed fully implicit PIC algorithm for the Vlasov-Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (ϕ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments on mapped meshes in 1D-3V and 2D-3V.

  18. Improvement of the temporal resolution of cardiac CT reconstruction algorithms using an optimized filtering step

    International Nuclear Information System (INIS)

    Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.

    2005-01-01

    In this paper we study a property of the filtering step of multi-cycle reconstruction algorithms used in the field of cardiac CT. We show that the common filtering procedure is not optimal in the case of divergent geometry and slightly decreases the temporal resolution. We propose to use the filtering procedure related to the work of Noo et al (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo. Image reconstruction from fan-beam projections on less than a short-scan. Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative allows the optimal temporal resolution to be reached with the same computational effort. (N.C.)

  19. Research on Multiple Particle Swarm Algorithm Based on Analysis of Scientific Materials

    Directory of Open Access Journals (Sweden)

    Zhao Hongwei

    2017-01-01

    Full Text Available This paper proposes an improved particle swarm optimization algorithm based on an analysis of scientific materials. The core idea of MPSO (Multiple Particle Swarm Algorithm) is to improve the single-population PSO into interacting multiple swarms, which addresses the problem of being trapped in local minima during later iterations due to a lack of diversity. The simulation results show that the convergence rate is fast, the search performance is good, and very good results have been achieved.

  20. SU-F-J-66: Anatomy Deformation Based Comparison Between One-Step and Two-Step Optimization for Online ART

    International Nuclear Information System (INIS)

    Feng, Z; Yu, G; Qin, S; Li, D; Ma, C; Zhu, J; Yin, Y

    2016-01-01

    Purpose: This study investigated how the quality of the adapted plan is affected by inter-fractional anatomy deformation when using one-step and two-step optimization in an online adaptive radiotherapy (ART) procedure. Methods: 10 lung carcinoma patients were chosen randomly to produce IMRT plans with one-step and two-step algorithms, respectively, and the prescribed dose was set to 60 Gy on the planning target volume (PTV) for all patients. To simulate inter-fractional target deformation, four specific cases were created by systematic anatomy variation, including a 0.5 cm superior target shift, 0.3 cm contraction, 0.3 cm expansion and 45-degree rotation. Based on these four anatomy deformations, adapted, regenerated and non-adapted plans were created to evaluate the quality of adaptation. Adapted plans were generated automatically by using the one-step and two-step algorithms, respectively, to optimize the original plans, and regenerated plans were created manually by experienced physicists. Non-adapted plans were produced by recalculating the dose distribution based on the corresponding original plans. The deviations among these three plans were statistically analyzed by paired t-test. Results: In the PTV superior shift case, adapted plans had significantly better PTV coverage with the two-step algorithm than with the one-step one, and there was a significant difference in V95 between adapted and non-adapted plans (p=0.0025). In the target contraction deformation, with almost the same PTV coverage, the total lung received a lower dose with the one-step algorithm than with the two-step algorithm (p=0.0143 and 0.0126 for V20 and Dmean, respectively). In the other two deformation cases, no significant differences were observed between the two optimization algorithms. Conclusion: In geometry deformations such as target contraction, with comparable PTV coverage, the one-step algorithm gave better OAR sparing than the two-step algorithm. Conversely, adaptation using the two-step algorithm had higher efficiency.

  1. SU-F-J-66: Anatomy Deformation Based Comparison Between One-Step and Two-Step Optimization for Online ART

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Z; Yu, G; Qin, S; Li, D [Shandong Normal University, Jinan, Shandong (China); Ma, C; Zhu, J; Yin, Y [Shandong Cancer Hospital and Institute, Jinan, Shandong (China)

    2016-06-15

    Purpose: This study investigated how the quality of the adapted plan is affected by inter-fractional anatomy deformation when using one-step and two-step optimization in an online adaptive radiotherapy (ART) procedure. Methods: 10 lung carcinoma patients were chosen randomly to produce IMRT plans with one-step and two-step algorithms, respectively, and the prescribed dose was set to 60 Gy on the planning target volume (PTV) for all patients. To simulate inter-fractional target deformation, four specific cases were created by systematic anatomy variation, including a 0.5 cm superior target shift, 0.3 cm contraction, 0.3 cm expansion and 45-degree rotation. Based on these four anatomy deformations, adapted, regenerated and non-adapted plans were created to evaluate the quality of adaptation. Adapted plans were generated automatically by using the one-step and two-step algorithms, respectively, to optimize the original plans, and regenerated plans were created manually by experienced physicists. Non-adapted plans were produced by recalculating the dose distribution based on the corresponding original plans. The deviations among these three plans were statistically analyzed by paired t-test. Results: In the PTV superior shift case, adapted plans had significantly better PTV coverage with the two-step algorithm than with the one-step one, and there was a significant difference in V95 between adapted and non-adapted plans (p=0.0025). In the target contraction deformation, with almost the same PTV coverage, the total lung received a lower dose with the one-step algorithm than with the two-step algorithm (p=0.0143 and 0.0126 for V20 and Dmean, respectively). In the other two deformation cases, no significant differences were observed between the two optimization algorithms. Conclusion: In geometry deformations such as target contraction, with comparable PTV coverage, the one-step algorithm gave better OAR sparing than the two-step algorithm. Conversely, adaptation using the two-step algorithm had higher efficiency.

  2. Principles of a new treatment algorithm in multiple sclerosis

    DEFF Research Database (Denmark)

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being considered for approval. With the arrival of these oral agents, a key question is where they may fit into the existing MS treatment algorithm. This article aims to help answer this question by analyzing the trial data for the new oral therapies, as well as for existing MS treatments, by applying practical clinical experience, and through consideration of our increased understanding of how to define treatment success in MS. This article also provides a speculative look at what the treatment algorithm may look like in 5 years, with the availability of new data, greater experience and, potentially, other novel

  3. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    Directory of Open Access Journals (Sweden)

    Zhongliang Deng

    2018-01-01

    Full Text Available The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition of the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finitely many steps and always produce better results than the original LLL reduction algorithm for n > 2. The simulations clearly show that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with a 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm.
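    For context, the sketch below is the classical LLL loop with size reduction and the pairwise Lovász condition that n-LLL generalizes; it naively recomputes the Gram-Schmidt data for clarity and includes neither the paper's n-dimensional test nor its pivoted Householder reflections.

    ```python
    import numpy as np

    def lll_reduce(B, delta=0.75):
        """Textbook LLL reduction of the row basis B."""
        B = np.array(B, dtype=float)
        n = B.shape[0]

        def gso(B):
            # Gram-Schmidt vectors Q and projection coefficients mu.
            Q, mu = np.zeros_like(B), np.zeros((n, n))
            for i in range(n):
                Q[i] = B[i]
                for j in range(i):
                    mu[i, j] = B[i] @ Q[j] / (Q[j] @ Q[j])
                    Q[i] -= mu[i, j] * Q[j]
            return Q, mu

        Q, mu = gso(B)
        k = 1
        while k < n:
            for j in range(k - 1, -1, -1):         # size reduction
                q = round(mu[k, j])
                if q:
                    B[k] -= q * B[j]
                    Q, mu = gso(B)
            # Lovász condition on consecutive Gram-Schmidt vectors.
            if Q[k] @ Q[k] >= (delta - mu[k, k - 1] ** 2) * (Q[k - 1] @ Q[k - 1]):
                k += 1
            else:
                B[[k - 1, k]] = B[[k, k - 1]]      # swap and step back
                Q, mu = gso(B)
                k = max(k - 1, 1)
        return B

    print(lll_reduce([[1.0, 1.0, 1.0], [-1.0, 0.0, 2.0], [3.0, 5.0, 6.0]]))
    ```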

  4. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamidreza Mousavi

    2017-01-01

    Full Text Available Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable under multiple-fault conditions. The algorithm uses a calibration model based on an AANN. The AANN can reconstruct a faulty sensor from the non-faulty sensors owing to the correlation between the process variables, and the mean of the difference between the reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with a low computational time, which makes the algorithm an appropriate candidate for online applications.

  5. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  6. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  7. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  8. Algorithm of axial fuel optimization based in progressive steps of turned search

    International Nuclear Information System (INIS)

    Martin del Campo, C.; Francois, J.L.

    2003-01-01

    The development of an algorithm for the axial fuel optimization of boiling water reactors (BWRs) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization advances through the stages, the fineness of the evaluation of the investigated designs increases. The algorithm consists of three stages: genetic algorithms are used in the first, and tabu search in the two that follow. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle, without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), taking as constraints the average enrichment obtained for the best design of the first stage together with the remaining restrictions. The third stage, very similar to the previous one, begins with the design of the previous stage but carries out a search of the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of the Unit 1 reactor of the Laguna Verde power plant (U1-CLV) is presented. The obtained results show an advance in the handling of optimization methods and in the construction of the objective functions that should be used for the different design stages of the fuel assemblies. (Author)

  9. Assembly and benign step-by-step post-treatment of oppositely charged reduced graphene oxides for transparent conductive thin films with multiple applications

    Science.gov (United States)

    Zhu, Jiayi; He, Junhui

    2012-05-01

    We report a new approach for the fabrication of flexible and transparent conducting thin films via the layer-by-layer (LbL) assembly of oppositely charged reduced graphene oxide (RGO) and a benign step-by-step post-treatment on substrates with a low glass-transition temperature, such as glass and poly(ethylene terephthalate) (PET). The RGO dispersions and films were characterized by means of atomic force microscopy, UV-visible absorption spectrophotometry, Raman spectroscopy, transmission electron microscopy, contact angle/interface systems and a four-point probe. It was found that the graphene thin films exhibited a significant increase in electrical conductivity after the step-by-step post-treatments. The graphene thin film on the PET substrate retained its conductivity well after multiple cycles (30 cycles) of excessive bending (bending angle: 180°), while tin-doped indium oxide (ITO) thin films on PET showed a significant decrease in electrical conductivity. In addition, the graphene thin film had a smooth surface with tunable wettability.

  10. FELIX: an algorithm for indexing multiple crystallites in X-ray free-electron laser snapshot diffraction images

    DEFF Research Database (Denmark)

    Beyerlein, Kenneth R.; White, Thomas A.; Yefanov, Oleksandr

    2017-01-01

    A novel algorithm for indexing multiple crystals in snapshot X-ray diffraction images, especially suited for serial crystallography data, is presented. The algorithm, FELIX, utilizes a generalized parametrization of the Rodrigues-Frank space, in which all crystal systems can be represented without...

  11. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell positions. • Flexible to add new parameters in the structure. • Applies a novel structure to INP file processing, conveniently evaluating cell locations. - Abstract: MCNP (Monte Carlo N-Particle) is a general-purpose transport code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, is complicated in form and error-prone when used to describe geometric models. Hence, a conversion algorithm that converts a general geometric model into an MCNP model during MCNP-aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed after the STEP file and INP file formats were analyzed. Experimental results show that the revised algorithm is more applicable and efficient than the previous work, owing to optimized extraction of the geometry and topology information from the STEP file as well as improved efficiency in producing the output INP file. This research is promising and serves as a valuable reference for researchers involved in MCNP-related work
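
    To make the target format concrete, the toy emitter below writes the cell and surface blocks of an INP deck, assuming the STEP geometry has already been reduced to signed-surface Boolean expressions. The surface and cell data are invented for illustration; real STEP (ISO 10303-21) parsing is far more involved.

    ```python
    # Illustrative sketch only: emitting MCNP INP cell and surface cards from
    # an already-extracted geometry description (hypothetical data).

    surfaces = {
        1: "so 10.0",            # sphere at origin, radius 10 cm
        2: "pz 0.0",             # plane z = 0
    }

    # cell id -> (material, density, Boolean expression over signed surface ids)
    cells = {
        10: (1, -2.7, "-1 2"),   # inside sphere, above plane
        20: (0, None, "1"),      # outside world (void)
    }

    def write_inp(path, title="converted geometry"):
        with open(path, "w") as f:
            f.write(title + "\n")
            for cid, (mat, rho, expr) in sorted(cells.items()):
                if mat == 0:
                    f.write(f"{cid} 0 {expr}\n")            # void cell card
                else:
                    f.write(f"{cid} {mat} {rho} {expr}\n")  # material cell card
            f.write("\n")                                   # blank line ends cell block
            for sid, card in sorted(surfaces.items()):
                f.write(f"{sid} {card}\n")
            f.write("\n")

    write_inp("model.inp")
    ```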

  12. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Science.gov (United States)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction of arrival (DOA) parameters assuming multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and noncentral moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of the directional changes of the moving sources. The covariance-fitting-based algorithm and Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least square-estimation of signal parameters via rotational invariance technique (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method together with subspace updating via the FAPI algorithm. It is shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. Moreover, the performances of both methods improve as the SNR increases, and this improvement is more prominent for the FAPI-TLS-ESPRIT method. However, their performances degrade when the number of sources increases. It is also shown that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, the more widely the sources are spaced, the more exactly the proposed method can track the DOAs.
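
    The coupling between estimator and tracker is the standard one: each snapshot's covariance-fitting DOA estimate enters a linear Kalman filter as a scalar measurement. A minimal sketch with the covariance-fitting front end stubbed out by a noisy estimate:

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [DOA, DOA rate]
    H = np.array([[1.0, 0.0]])                # we measure the DOA only
    Q = 0.01 * np.eye(2)                      # process noise
    R = np.array([[1.0]])                     # measurement noise (deg^2)

    x = np.array([[0.0], [0.0]])
    P = np.eye(2)

    def kalman_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the covariance-fitting DOA estimate z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    true_doa = lambda t: 10.0 + 0.5 * t       # source moving 0.5 deg/snapshot
    for t in range(50):
        z = true_doa(t) + np.random.randn()   # noisy stand-in for the
        x, P = kalman_step(x, P, z)           # covariance-fitting front end
    print("tracked DOA:", x[0, 0])
    ```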

  13. Security Analysis of a Block Encryption Algorithm Based on Dynamic Sequences of Multiple Chaotic Systems

    Science.gov (United States)

    Du, Mao-Kang; He, Bo; Wang, Yong

    2011-01-01

    Recently, cryptosystems based on chaos have attracted much attention. Wang and Yu (Commun. Nonlin. Sci. Numer. Simulat. 14 (2009) 574) proposed a block encryption algorithm based on dynamic sequences of multiple chaotic systems. We analyze the potential flaws in the algorithm. Then, a chosen-plaintext attack is presented. Some remedial measures are suggested to avoid the flaws effectively. Furthermore, an improved encryption algorithm is proposed to resist the attacks and to keep all the merits of the original cryptosystem.

  14. Learning-based meta-algorithm for MRI brain extraction.

    Science.gov (United States)

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    The multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through the exemplars. We further develop a level-set based fusion method to combine the multiple candidate extractions into a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method produces more accurate and robust brain extraction results, with a Jaccard index of 0.956 ± 0.010 over a total of 340 subjects under 6-fold cross-validation, compared to those of BET and BSE even using their best parameter combinations.

  15. Noise effect in an improved conjugate gradient algorithm to invert particle size distribution and the algorithm amendment.

    Science.gov (United States)

    Wei, Yongjie; Ge, Baozhen; Wei, Yaolin

    2009-03-20

    In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert the particle size distribution (PSD) from diffraction data is presented. By using the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. The ICGA is therefore amended by introducing an iteration step-adjusting parameter and is applied experimentally to simulated data and some samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.

  16. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  17. The PARAFAC-MUSIC Algorithm for DOA Estimation with Doppler Frequency in a MIMO Radar System

    Directory of Open Access Journals (Sweden)

    Nan Wang

    2014-01-01

    The PARAFAC-MUSIC algorithm is proposed in this paper to estimate the direction-of-arrival (DOA) of targets with Doppler frequency in a monostatic MIMO radar system. To estimate the Doppler frequency, the PARAFAC (parallel factor) algorithm is first utilized; after compensation of the Doppler frequency, the MUSIC (multiple signal classification) algorithm is applied to estimate the DOA. By these two steps, the DOA of moving targets can be estimated successfully. Simulation results show that the proposed PARAFAC-MUSIC algorithm has a higher accuracy than the PARAFAC algorithm and the MUSIC algorithm in DOA estimation.
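
    The second step is plain MUSIC on the Doppler-compensated snapshots. A self-contained sketch for a uniform linear array (the array geometry, source angles, and the deliberately crude peak pick are illustrative, not the paper's setup):

    ```python
    import numpy as np

    M, d, wavelength = 8, 0.5, 1.0            # sensors, spacing in wavelengths
    def steering(theta_deg):
        k = 2 * np.pi * d / wavelength
        return np.exp(1j * k * np.arange(M) * np.sin(np.radians(theta_deg)))

    # simulate two sources at -10 and 20 degrees
    angles, N = [-10.0, 20.0], 200
    A = np.column_stack([steering(a) for a in angles])
    S = (np.random.randn(2, N) + 1j * np.random.randn(2, N)) / np.sqrt(2)
    X = A @ S + 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))

    Rxx = X @ X.conj().T / N                  # sample covariance
    w, V = np.linalg.eigh(Rxx)                # ascending eigenvalues
    En = V[:, : M - len(angles)]              # noise subspace

    grid = np.linspace(-90, 90, 721)
    p = [1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid]
    print("largest pseudospectrum values near:", grid[np.argsort(p)[-2:]])
    ```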

  18. Fast matrix multiplication and its algebraic neighbourhood

    Science.gov (United States)

    Pan, V. Ya.

    2017-11-01

    Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion — even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
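
    Strassen's 1969 scheme itself is compact enough to state in full: seven recursive block products replace the classical eight, giving the exponent log2 7 ≈ 2.807 mentioned above. A sketch for power-of-two sizes with a cut-off below which the classical product takes over:

    ```python
    import numpy as np

    def strassen(A, B, cutoff=64):
        n = A.shape[0]
        if n <= cutoff:
            return A @ B                      # classical product for small blocks
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C

    A = np.random.randn(256, 256); B = np.random.randn(256, 256)
    print(np.allclose(strassen(A, B), A @ B))
    ```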

  19. On the construction of elliptic Chudnovsky-type algorithms for multiplication in large extensions of finite fields

    OpenAIRE

    Ballet, Stéphane; Bonnecaze, Alexis; Tukumuli, Mila

    2013-01-01

    We indicate a strategy in order to construct bilinear multiplication algorithms of type Chudnovsky in large extensions of any finite field. In particular, using the symmetric version of the generalization of Randriambololona specialized on the elliptic curves, we show that it is possible to construct such algorithms with low bilinear complexity. More precisely, if we only consider the Chudnovsky-type algorithms of type symmetric elliptic, we show that the symmetric bil...

  20. A Voxel-Based Filtering Algorithm for Mobile LiDAR Data

    Science.gov (United States)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are first partitioned, in the xy-plane, into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
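
    The upward-growing step can be caricatured in a few lines: bin the points into vertical voxel columns and grow occupancy upward from each column floor while a local height threshold holds. All parameters and the simplified column-wise growth below are illustrative guesses, not the paper's procedure:

    ```python
    import numpy as np
    from collections import defaultdict

    def rough_terrain(points, voxel=0.5, local_h=1.0):
        cols = defaultdict(list)                 # (ix, iy) -> point indices
        idx = np.floor(points[:, :2] / voxel).astype(int)
        for i, key in enumerate(map(tuple, idx)):
            cols[key].append(i)
        terrain = np.zeros(len(points), dtype=bool)
        for key, ids in cols.items():
            z = points[ids, 2]
            iz = np.floor(z / voxel).astype(int)
            floor = iz.min()
            grown = {floor}                      # start from the column floor
            for level in sorted(set(iz)):
                # grow only through adjacent occupied voxels near the floor
                if level - 1 in grown and (level - floor) * voxel <= local_h:
                    grown.add(level)
            for i, lv in zip(ids, iz):
                terrain[i] = lv in grown
        return terrain

    pts = np.random.rand(1000, 3) * [20, 20, 5]
    mask = rough_terrain(pts)
    print(mask.sum(), "terrain candidates of", len(pts))
    ```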

  1. Development of the Multiple Gene Knockout System with One-Step PCR in Thermoacidophilic Crenarchaeon Sulfolobus acidocaldarius

    Directory of Open Access Journals (Sweden)

    Shoji Suzuki

    2017-01-01

    Multiple gene knockout systems developed in the thermoacidophilic crenarchaeon Sulfolobus acidocaldarius are powerful genetic tools. However, plasmid construction typically requires several steps. Alternatively, PCR tailing for high-throughput gene disruption was also developed in S. acidocaldarius, but repeated gene knockout based on PCR tailing has been limited due to the lack of a genetic marker system. In this study, we demonstrated an efficient homologous recombination frequency (2.8 × 10^4 ± 6.9 × 10^3 colonies/μg DNA) by optimizing the transformation conditions. This optimized protocol made it possible to develop reliable gene knockout via double crossover using short homologous arms and to establish the multiple gene knockout system with one-step PCR (MONSTER). In the MONSTER, a multiple gene knockout cassette is simply and rapidly constructed by one-step PCR without plasmid construction, and the PCR product can be immediately used for target gene deletion. As an example of the applications of this strategy, we successfully made a DNA photolyase- (phr-) and arginine decarboxylase- (argD-) deficient strain of S. acidocaldarius. In addition, an agmatine selection system consisting of an agmatine-auxotrophic strain and an argD marker was also established. The MONSTER provides an alternative strategy that enables the very simple construction of multiple gene knockout cassettes for genetic studies in S. acidocaldarius.

  2. Sequential multiple-step europium ion implantation and annealing of GaN

    KAUST Repository

    Miranda, S. M C; Edwards, Paul R.; O'Donnell, Kevin Peter; Boćkowski, Michał X.; Alves, Eduardo Jorge; Roqan, Iman S.; Vantomme, André ; Lorenz, Katharina

    2014-01-01

    Sequential multiple Eu ion implantations at low fluence (1×10^13 cm^-2 at 300 keV) and subsequent rapid thermal annealing (RTA) steps (30 s at 1000 °C or 1100 °C) were performed on high quality nominally undoped GaN films grown by metal organic chemical vapour deposition (MOCVD) and medium quality GaN:Mg grown by hydride vapour phase epitaxy (HVPE). Compared to samples implanted in a single step, multiple implantation/annealing shows only marginal structural improvement for the MOCVD samples, but a significant improvement of crystal quality and optical activation of Eu was achieved in the HVPE films. This improvement is attributed to the lower crystalline quality of the starting material, which probably enhances the diffusion of defects and acts to facilitate the annealing of implantation damage and the effective incorporation of the Eu ions in the crystal structure. Optical activation of Eu^3+ ions in the HVPE samples was further improved by high temperature and high pressure annealing (HTHP) up to 1400 °C. After HTHP annealing the main room temperature cathodo- and photoluminescence line in Mg-doped samples lies at ∼ 619 nm, characteristic of a known Mg-related Eu^3+ centre, while after RTA treatment the dominant line lies at ∼ 622 nm, typical for undoped GaN:Eu. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    Science.gov (United States)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

    Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations in the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer, rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation, and then measures the contribution of diffused energy from previous layers based on the transmission of the current level, to produce a cumulative radiance that is reflected from a surface and measured at the aperture of the observer. A unique set of asymmetry and backscattering phase function parameter calculations is then made, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows a more accurate characterization of diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  5. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    Science.gov (United States)

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.

  6. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  7. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array

    Directory of Open Access Journals (Sweden)

    Yankui Zhang

    2018-05-01

    Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramer–Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  9. Performance Analyses of IDEAL Algorithm on Highly Skewed Grid System

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2014-03-01

    IDEAL is an efficient segregated algorithm for fluid flow and heat transfer problems. The algorithm has now been extended to 3D nonorthogonal curvilinear coordinates. Highly skewed grids in nonorthogonal curvilinear coordinates can decrease the convergence rate and degrade computational stability. In this study, the feasibility of the IDEAL algorithm on highly skewed grid systems is analyzed by investigating lid-driven flow in an inclined cavity. It can be concluded that the IDEAL algorithm is more robust and more efficient than the traditional SIMPLER algorithm, especially for highly skewed and fine grid systems. For example, at θ = 5° and a grid of 70 × 70 × 70, the convergence rate of the IDEAL algorithm is 6.3 times faster than that of the SIMPLER algorithm, and the IDEAL algorithm converges at almost any time step multiple.

  10. Fast algorithms for coordinate processors in Galois fields for multiplicity t = 4, 5 and t > 5

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1989-01-01

    Fast algorithms for solving the coordinate equations for special-purpose processors at multiplicity t = 4, 5 and t > 5 are described. A block diagram of a coordinate processor for t = 4 in the Galois field GF(2^m) is presented, in which the equations are solved by a table method. Economical algorithms for solving the coordinate equations by serial methods at t > 5 are described. The algorithms and devices proposed could be applied when creating fast processors for high energy physics spectrometers. 9 refs.; 3 figs

  11. On The Effective Construction of Asymmetric Chudnovsky Multiplication Algorithms in Finite Fields Without Derivated Evaluation

    OpenAIRE

    Ballet, Stéphane; Baudru, Nicolas; Bonnecaze, Alexis; Tukumuli, Mila

    2016-01-01

    The Chudnovsky and Chudnovsky algorithm for multiplication in extensions of finite fields provides a bilinear complexity which is uniformly linear with respect to the degree of the extension. Recently, Randriambololona has generalized the method, allowing asymmetry in the interpolation procedure and leading to new upper bounds on the bilinear complexity. We describe the effective algorithm of this asymmetric method, without derivated evaluation. Finally, we give examples with the finite ...

  12. Field theoretical approach to proton-nucleus reactions: II-Multiple-step excitation process

    International Nuclear Information System (INIS)

    Eiras, A.; Kodama, T.; Nemes, M.

    1989-01-01

    A field theoretical formulation of the multiple-step excitation process in proton-nucleus collisions within the context of a relativistic eikonal approach is presented. A closed form expression for the double differential cross section can be obtained, whose structure is very simple and makes the physics transparent. Glauber's formulation of the same process is obtained as a limit of ours, and the necessary approximations are studied and discussed. (author) [pt

  13. Three-dimensional weight-accumulation algorithm for generating multiple excitation spots in fast optical stimulation

    Science.gov (United States)

    Takiguchi, Yu; Toyoda, Haruyoshi

    2017-11-01

    We report here an algorithm for calculating a hologram to be employed in a high-access speed microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in an intact cerebral cortex. The system is based on holographic multi-beam generation using a two-dimensional phase-only spatial light modulator to excite multiple locations in three dimensions with a single hologram. The hologram was calculated with a three-dimensional weighted iterative Fourier transform method using the Ewald sphere restriction to increase the calculation speed. Our algorithm achieved good uniformity of three dimensionally generated excitation spots; the standard deviation of the spot intensities was reduced by a factor of two compared with a conventional algorithm.
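
    The 2-D core of a weighted iterative Fourier transform loop fits in a few lines; the paper's 3-D method additionally applies the Ewald-sphere restriction and per-spot axial phase terms, both omitted here. Spot positions, weights, and iteration count are arbitrary:

    ```python
    import numpy as np

    N = 256
    targets = [(100, 80), (150, 200), (60, 180)]       # desired spot pixels
    w = np.ones(len(targets))                          # adaptive spot weights

    phase = 2 * np.pi * np.random.rand(N, N)           # initial SLM phase
    for _ in range(30):
        field = np.exp(1j * phase)                     # unit-amplitude SLM plane
        focal = np.fft.fft2(field)
        amps = np.array([abs(focal[y, x]) for y, x in targets])
        w *= amps.mean() / amps                        # boost weak spots
        constrained = np.zeros_like(focal)
        for (y, x), wk in zip(targets, w):             # keep phase, set weighted
            constrained[y, x] = wk * np.exp(1j * np.angle(focal[y, x]))
        phase = np.angle(np.fft.ifft2(constrained))    # back to the SLM plane

    print("relative spot spread:", amps.std() / amps.mean())
    ```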

  14. A Hybrid Genetic Algorithm for the Multiple Crossdocks Problem

    Directory of Open Access Journals (Sweden)

    Zhaowei Miao

    2012-01-01

    We study a multiple crossdocks problem with supplier and customer time windows, where any violation of time windows will incur a penalty cost and the flows through the crossdock are constrained by fixed transportation schedules and crossdock capacities. We prove this problem to be NP-hard in the strong sense and therefore focus on developing efficient heuristics. Based on the problem structure, we propose a hybrid genetic algorithm (HGA) integrating a greedy technique and a variable neighborhood search method to solve the problem. Extensive experiments under different scenarios were conducted, and results show that HGA outperforms the CPLEX solver, providing solutions in realistic timescales.

  15. An algorithm to compute a rule for division problems with multiple references

    Directory of Open Access Journals (Sweden)

    Sánchez Sánchez, Francisca J.

    2012-01-01

    In this paper we consider an extension of the classic division problem with claims: the division problem with multiple references. Hinojosa et al. (2012) provide a solution for this type of problem. The aim of this work is to extend their results by proposing an algorithm that calculates allocations based on these results. All computational details are provided in the paper.

  16. Investigating the Variation of Volatile Compound Composition in Maotai-Flavoured Liquor During Its Multiple Fermentation Steps Using Statistical Methods

    Directory of Open Access Journals (Sweden)

    Zheng-Yun Wu

    2016-01-01

    The use of multiple fermentation steps is one of the most distinctive characteristics of Maotai-flavoured liquor production. In this research, the variation of the volatile composition of Maotai-flavoured liquor during its multiple fermentations is investigated using statistical approaches. Cluster analysis shows that the samples group mainly according to the fermentation step rather than the distillery they originate from, and the samples from the first two fermentation steps show the greatest difference, suggesting that the multiple fermentation and distillation steps eventually lead to a similar volatile composition of the liquor. Back-propagation neural network (BNN) models were developed that satisfactorily predict the number of fermentation steps and the organoleptic evaluation scores of liquor samples from their volatile compositions. Mean impact value (MIV) analysis shows that ethyl lactate, furfural and some high-boiling-point acids play important roles, while pyrazine contributes much less to the improvement of the flavour and taste of Maotai-flavoured liquor during its production. This study contributes to further understanding of the mechanisms of Maotai-flavoured liquor production.

  17. On the Convexity of Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2016-01-01

    The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an

  18. Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

    International Nuclear Information System (INIS)

    Salazar, Daniel; Rocco, Claudio M.; Galvan, Blas J.

    2006-01-01

    This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature

  20. Phase-step retrieval for tunable phase-shifting algorithms

    Science.gov (United States)

    Ayubi, Gastón A.; Duarte, Ignacio; Perciante, César D.; Flores, Jorge L.; Ferrari, José A.

    2017-12-01

    Phase-shifting (PS) is a well-known technique for phase retrieval in interferometry, with applications in deflectometry and 3D-profiling, which requires a series of intensity measurements with certain phase-steps. Usually the phase-steps are evenly spaced, and its knowledge is crucial for the phase retrieval. In this work we present a method to extract the phase-step between consecutive interferograms. We test the proposed technique with images corrupted by additive noise. The results were compared with other known methods. We also present experimental results showing the performance of the method when spatial filters are applied to the interferograms and the effect that they have on their relative phase-steps.

  1. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    Science.gov (United States)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
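
    The single large object placement problem solved for each pattern is, in the classical column-generation view, a knapsack over the current dual prices: the pattern maximizing total dual value on a given stock length is a candidate to enter the pattern set. A dynamic-programming sketch of that pricing step with made-up lengths and duals:

    ```python
    def best_pattern(stock_len, item_lens, duals):
        """Unbounded knapsack: pattern of items maximizing total dual value."""
        best = [0.0] * (stock_len + 1)
        take = [-1] * (stock_len + 1)          # item chosen at this capacity
        for cap in range(1, stock_len + 1):
            best[cap], take[cap] = best[cap - 1], -1   # option: waste one unit
            for i, (L, d) in enumerate(zip(item_lens, duals)):
                if L <= cap and best[cap - L] + d > best[cap]:
                    best[cap], take[cap] = best[cap - L] + d, i
        pattern = [0] * len(item_lens)         # reconstruct the chosen pattern
        cap = stock_len
        while cap > 0:
            if take[cap] == -1:
                cap -= 1
            else:
                pattern[take[cap]] += 1
                cap -= item_lens[take[cap]]
        return pattern, best[stock_len]

    # items of length 3, 5, 7 with current duals; one stock size of length 16
    print(best_pattern(16, [3, 5, 7], [1.1, 1.9, 2.4]))   # -> ([2, 2, 0], 6.0)
    ```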

  2. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    Science.gov (United States)

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.

  3. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    Science.gov (United States)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
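
    Because the cost is the maximum distance traveled rather than the sum, the matching step is a bottleneck assignment. One simple way to solve it, sketched with random stand-in positions, is a binary search over the sorted pairwise distances using a min-sum matching as the feasibility test:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    robots = rng.uniform(0, 100, size=(100, 2))
    slots = np.stack(np.meshgrid(np.linspace(10, 90, 10),
                                 np.linspace(10, 90, 10)), -1).reshape(-1, 2)
    D = np.linalg.norm(robots[:, None, :] - slots[None, :, :], axis=-1)

    def feasible(d):
        # a perfect matching exists using only edges with distance <= d
        cost = np.where(D <= d, 0.0, 1.0)
        r, c = linear_sum_assignment(cost)
        return cost[r, c].sum() == 0

    vals = np.unique(D)                  # candidate bottleneck values, sorted
    lo, hi = 0, len(vals) - 1
    while lo < hi:                       # feasibility is monotone in d
        mid = (lo + hi) // 2
        if feasible(vals[mid]):
            hi = mid
        else:
            lo = mid + 1
    print("minimal worst-case travel distance:", vals[lo])
    ```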

  4. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    International Nuclear Information System (INIS)

    Kwok, K.S.; Driessen, B.J.; Phillips, C.A.; Tovey, C.A.

    1997-01-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. The authors wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which they must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. They have found these mobile robot problems to be a very interesting application of network optimization methods, and they expect this to be a fruitful area for future research

  5. Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs.

    Science.gov (United States)

    Hemming, Karla; Lilford, Richard; Girling, Alan J

    2015-01-30

    Stepped-wedge cluster randomised trials (SW-CRTs) are being used with increasing frequency in health service evaluation. Conventionally, these studies are cross-sectional in design with equally spaced steps, with an equal number of clusters randomised at each step and data collected at each and every step. Here we introduce several variations on this design and consider the implications for power. One modification we consider is the incomplete cross-sectional SW-CRT, where the number of clusters varies at each step or where at some steps, for example, implementation or transition periods, data are not collected. We show that the parallel CRT with staggered but balanced randomisation can be considered a special case of the incomplete SW-CRT, as can the parallel CRT with baseline measures. We also extend these designs to allow for multiple layers of clustering, for example, wards within a hospital. Building on results for complete designs, power and detectable difference are derived using a Wald test, obtaining the variance-covariance matrix of the treatment effect under a generalised linear mixed model. These variations are illustrated by several real examples. We recommend that whilst the impact of transition periods on power is likely to be small, where they are a feature of the design they should be incorporated. We also show examples in which the power of a SW-CRT increases as the intra-cluster correlation (ICC) increases, and demonstrate that the impact of the ICC is likely to be smaller in a SW-CRT than in a parallel CRT, especially where there are multiple levels of clustering. Finally, through this unified framework, the efficiency of the SW-CRT and the parallel CRT can be compared. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  6. A transport-based condensed history algorithm

    International Nuclear Information System (INIS)

    Tolar, D. R. Jr.

    1999-01-01

    Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s0. For example, the PENELOPE method splits each step into two substeps: one with length ξs0 and one with length (1 − ξ)s0, where ξ is a random number with 0 < ξ < 1. Because s0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power

  7. Application of multislice spiral CT (MSCT) in multiple injured patients and its effect on diagnostic and therapeutic algorithms

    International Nuclear Information System (INIS)

    Boehm, T.; Alkadhi, H.; Schertler, T.; Baumert, B.; Roos, J.; Marincek, B.; Wildermuth, S.

    2004-01-01

    The initial diagnostic work-up of trauma victims with multiple injuries is currently a combination of conventional radiography (CR), ultrasound (US), and computed tomography (CT). This article reviews the diagnostic quality of the different imaging modalities regarding the detection and classification of injuries. CT performs better than US in detecting traumatic lesions of abdominal parenchymal organs. Furthermore, CT is better than CR in detecting therapeutically relevant chest and bone injuries. MSCT may replace CR and US under the condition that it is faster than, or at least as fast as, the conventional approach in diagnosing life-threatening injuries. This can be achieved only by changing the workflow of the entire trauma team, including the radiologist. Furthermore, certain prerequisites must be fulfilled, including the integration of an MSCT scanner into the emergency room. An optimized whole-body CT protocol for the assessment of trauma victims using MSCT, as well as a two-step algorithm for reporting the imaging findings depending on their clinical significance, is presented. (orig.)

  8. A trust region interior point algorithm for optimal power flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation

    2005-05-01

    This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through the modification of the trust region sub-problem. Numerical results for standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favourably with the successive linear programming (SLP) method. Comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)

  9. Validation of the Welch Allyn SureBP (inflation) and StepBP (deflation) algorithms by AAMI standard testing and BHS data analysis.

    Science.gov (United States)

    Alpert, Bruce S

    2011-04-01

    We evaluated two new Welch Allyn automated blood pressure (BP) algorithms. The first, SureBP, estimates BP during cuff inflation; the second, StepBP, does so during deflation. We followed the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard for testing and data analysis. The data were also analyzed using the British Hypertension Society analysis strategy. We tested children, adolescents, and adults. The requirements of the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard were fulfilled with respect to BP levels, arm sizes, and ages. Association for the Advancement of Medical Instrumentation SP10 Method 1 data analysis was used. The mean±standard deviation for the device readings compared with auscultation by paired, trained, blinded observers in the SureBP mode were -2.14±7.44 mmHg for systolic BP (SBP) and -0.55±5.98 mmHg for diastolic BP (DBP). In the StepBP mode, the differences were -3.61±6.30 mmHg for SBP and -2.03±5.30 mmHg for DBP. Both algorithms achieved an A grade for both SBP and DBP by British Hypertension Society analysis. The SureBP inflation-based algorithm will be available in many new-generation Welch Allyn monitors. Its use will reduce the time it takes to estimate BP in critical patient care circumstances. The device will not need to inflate to excessive suprasystolic BPs to obtain the SBP values. Deflation is rapid once SBP has been determined, thus reducing the total time of cuff inflation and reducing patient discomfort. If the SureBP fails to obtain a BP value, the StepBP algorithm is activated to estimate BP by traditional deflation methodology.

  10. A constrained tracking algorithm to optimize plug patterns in multiple isocenter Gamma Knife radiosurgery planning

    International Nuclear Information System (INIS)

    Li Kaile; Ma Lijun

    2005-01-01

    We developed a source blocking optimization algorithm for Gamma Knife radiosurgery, which is based on tracking individual source contributions to arbitrarily shaped target and critical structure volumes. A scalar objective function and a direct search algorithm were used to produce near real-time calculation results. The algorithm allows the user to set and vary the total number of plugs for each shot to limit the total beam-on time. We implemented and tested the algorithm for several multiple-isocenter Gamma Knife cases. It was found that the use of limited number of plugs significantly lowered the integral dose to the critical structures such as an optical chiasm in pituitary adenoma cases. The main effect of the source blocking is the faster dose falloff in the junction area between the target and the critical structure. In summary, we demonstrated a useful source-plugging algorithm for improving complex multi-isocenter Gamma Knife treatment planning cases
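
    A toy rendering of the plug-selection idea, assuming precomputed per-source dose contributions (all values hypothetical): greedily block the K sources with the worst critical-to-target contribution ratio and compare a scalar objective before and after. The paper's direct search over plug patterns is richer than this greedy pick:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sources = 192
    target = rng.uniform(0.5, 1.0, n_sources)     # per-source target dose
    critical = rng.uniform(0.0, 0.6, n_sources)   # per-source critical-organ dose

    def objective(blocked, lam=2.0):
        # scalar trade-off: target coverage minus weighted critical dose
        keep = np.ones(n_sources, dtype=bool)
        keep[np.fromiter(blocked, dtype=int, count=len(blocked))] = False
        return target[keep].sum() - lam * critical[keep].sum()

    K = 16                                         # plug budget per shot
    ratio = critical / target
    blocked = set(np.argsort(ratio)[-K:])          # greedy pick of worst sources
    print("objective unblocked:", objective(set()))
    print("objective with plugs:", objective(blocked))
    ```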

  11. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  12. Step Detection Robust against the Dynamics of Smartphones

    Science.gov (United States)

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
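
    A bare-bones version of the peak-valley logic is sketched below; the smoothing window, initial thresholds, and adaptation rule are illustrative guesses rather than the paper's parameters, and a step is counted at each peak that clears the thresholds relative to the preceding valley:

    ```python
    import numpy as np

    def detect_steps(acc_mag, fs=50.0):
        x = np.convolve(acc_mag, np.ones(5) / 5, mode="same")   # light smoothing
        steps = []
        mag_thr = 0.5                      # initial peak-to-valley threshold
        min_gap, last_step_t = 0.25, -1.0  # temporal threshold (seconds)
        last_valley = x[0]
        for i in range(1, len(x) - 1):
            if x[i - 1] > x[i] < x[i + 1]:             # valley
                last_valley = x[i]
            elif x[i - 1] < x[i] > x[i + 1]:           # peak
                amp, t = x[i] - last_valley, i / fs
                if amp > mag_thr and t - last_step_t > min_gap:
                    steps.append(i)
                    last_step_t = t
                    mag_thr = 0.7 * mag_thr + 0.3 * (amp / 2)   # adapt threshold
        return steps

    # synthetic 2 Hz walking signal sampled at 50 Hz for 5 s
    sig = 9.8 + np.sin(2 * np.pi * 2 * np.arange(0, 5, 1 / 50.0)) \
        + 0.1 * np.random.randn(250)
    print(len(detect_steps(sig)), "steps detected")
    ```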

  13. A new multiple robot path planning algorithm: dynamic distributed particle swarm optimization.

    Science.gov (United States)

    Ayari, Asma; Bouamama, Sadok

    2017-01-01

    Multiple robot systems have become a major study concern in the field of robotic research. Their control becomes unreliable and even infeasible as the number of robots increases. In this paper, a new dynamic distributed particle swarm optimization (D2PSO) algorithm is proposed for trajectory path planning of multiple robots, in order to find a collision-free optimal path for each robot in the environment. The proposed approach consists in calculating two local optima detectors, LODpBest and LODgBest. Particles which are unable to improve their personal best and global best for a predefined number of successive iterations are replaced with restructured ones. Stagnation and local optima problems are avoided by adding diversity to the population, without losing the fast convergence characteristic of PSO. Experiments with multiple robots are provided and prove the effectiveness of this approach compared with the distributed PSO.

  14. A branch-and-cut algorithm for the vehicle routing problem with multiple use of vehicles

    Directory of Open Access Journals (Sweden)

    İsmail Karaoğlan

    2015-06-01

    Full Text Available This paper addresses the vehicle routing problem with multiple use of vehicles (VRPMUV), an important variant of the classic vehicle routing problem (VRP). Unlike in the classical VRP, vehicles are allowed to use more than one route in the VRPMUV. We propose a branch-and-cut algorithm for solving the VRPMUV. The proposed algorithm includes several valid inequalities from the literature for the purpose of improving its lower bounds, and a heuristic algorithm based on simulated annealing and a mixed-integer-programming-based intensification procedure for obtaining the upper bounds. The algorithm is evaluated on test problems derived from the literature. The computational results show that instances with up to 120 customers can be solved optimally in a reasonable amount of time.

  15. Medical chart validation of an algorithm for identifying multiple sclerosis relapse in healthcare claims.

    Science.gov (United States)

    Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V

    2010-01-01

    Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following an MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value; kappa 0.373), supporting the content validity of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
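
    The two claims-based criteria translate directly into a simple decision rule. The sketch below assumes a hypothetical record schema (keys 'setting', 'primary_dx', 'drug' and 'date'); the study's actual database fields are not specified in the abstract.

```python
def flags_relapse(claims):
    """Apply the study's two claims-based relapse criteria (a sketch only)."""
    for c in claims:
        # Criterion 1: a primary claim for MS during hospitalization.
        if c.get("setting") == "inpatient" and c.get("primary_dx") == "MS":
            return True
        # Criterion 2: a corticosteroid claim following an MS-related
        # outpatient visit.
        if c.get("drug") == "corticosteroid" and any(
                p.get("setting") == "outpatient" and p.get("primary_dx") == "MS"
                and p["date"] <= c["date"] for p in claims):
            return True
    return False
```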

  16. Multi-Feature Based Multiple Landmine Detection Using Ground Penetration Radar

    Directory of Open Access Journals (Sweden)

    S. Park

    2014-06-01

    Full Text Available This paper presents a novel method for the detection of multiple landmines using a ground penetrating radar (GPR). Conventional algorithms mainly focus on the detection of a single landmine and cannot be extended linearly to the multiple-landmine case. The proposed algorithm is composed of four steps: estimation of the number of multiple objects buried in the ground, isolation of each object, feature extraction, and detection of landmines. The number of objects in the GPR signal is estimated by using the energy projection method. Then signals for the objects are extracted by using the symmetry filtering method. Each signal is then processed for features, which are given as input to a support vector machine (SVM) for landmine detection. Three landmines buried in various ground conditions are considered for the test of the proposed method. The results demonstrate that the proposed method can successfully detect multiple landmines.
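
    The last two stages of the pipeline (feature extraction followed by SVM classification) can be sketched with scikit-learn as below. The feature set and the placeholder data are illustrative assumptions; the energy-projection and symmetry-filtering stages are taken as already having produced one isolated signal per buried object.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(signal):
    # Illustrative features, not the paper's feature set.
    return np.array([signal.max(), signal.mean(), signal.std(),
                     np.abs(np.diff(signal)).mean()])

# Placeholder isolated signals and labels (1 = landmine, 0 = clutter).
rng = np.random.default_rng(0)
signals = rng.standard_normal((40, 256))
labels = rng.integers(0, 2, size=40)

features = np.array([extract_features(s) for s in signals])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(features, labels)
print(clf.predict(features[:5]))
```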

  17. Validation of a Step Detection Algorithm during Straight Walking and Turning in Patients with Parkinson’s Disease and Older Adults Using an Inertial Measurement Unit at the Lower Back

    Directory of Open Access Journals (Sweden)

    Minh H. Pham

    2017-09-01

    Full Text Available Introduction: Inertial measurement units (IMUs) positioned on various body locations allow detailed gait analysis even under unconstrained conditions. From a medical perspective, the assessment of vulnerable populations is of particular relevance, especially in the daily-life environment. Gait analysis algorithms need thorough validation, as many chronic diseases show specific and even unique gait patterns. The aim of this study was therefore to validate an acceleration-based step detection algorithm for patients with Parkinson’s disease (PD) and older adults in both a lab-based and home-like environment. Methods: In this prospective observational study, data were captured from a single 6-degrees-of-freedom IMU (APDM; 3DOF accelerometer and 3DOF gyroscope) worn on the lower back. Detection of heel strike (HS) and toe off (TO) on a treadmill was validated against an optoelectronic system (Vicon) (11 PD patients and 12 older adults). A second independent validation study in the home-like environment was performed against video observation (20 PD patients and 12 older adults) and included step counting during turning and non-turning, defined with a previously published algorithm. Results: A continuous wavelet transform (cwt)-based algorithm was developed for step detection with very high agreement with the optoelectronic system. HS detection in PD patients/older adults, respectively, reached 99/99% accuracy. Similar results were obtained for TO (99/100%). In HS detection, Bland–Altman plots showed a mean difference of 0.002 s [95% confidence interval (CI) −0.09 to 0.10] between the algorithm and the optoelectronic system. The Bland–Altman plot for TO detection showed mean differences of 0.00 s (95% CI −0.12 to 0.12). In the home-like assessment, the algorithm for detection of occurrence of steps during turning reached 90% (PD patients)/90% (older adults) sensitivity, 83/88% specificity, and 88/89% accuracy. The detection of steps during non-turning phases

  18. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple-scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant, which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm, with the worst case being a source/implant located well within a patient.

  19. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards the detection of stars and galaxies, completely ignoring the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
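
    The final detection stage can be illustrated with OpenCV's probabilistic Hough transform. The sketch below replaces the paper's object-removal and line-enhancement steps with a plain Canny edge detector, and the file name and thresholds are assumptions for illustration.

```python
import cv2
import numpy as np

img = cv2.imread("survey_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
assert img is not None, "image not found"
# A plain edge detector stands in for the paper's star/galaxy removal and
# line-enhancement steps.
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=80, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"candidate trail from ({x1},{y1}) to ({x2},{y2})")
```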

  20. Improvement on LiFePO4 Cell Balancing Algorithm

    OpenAIRE

    Vencislav C. Valchev; Plamen V. Yankov; Dimo D. Stefanov

    2018-01-01

    The paper presents an improvement in the operation time of a cell balancing algorithm compared to the conventional multiple-cell LiFePO4 charge methodology. A flowchart is synthesised to explain the main steps of the software design, which is afterwards implemented in a microcontroller. Experimental results are provided to clarify the transition between the charge and balance processes. Graphical data for the voltage equalization of eight cells are presented to verify the proposed improvement.

  1. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for $L_p$-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both $L_{1/2}$ and $L_{2/3}$ are typical non-convex regularizations of $L_p$ ($0 < p < 1$), which can be employed to obtain a sparser solution than the $L_1$ regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in $L_1$ regularization for sparse signal recovery, combined with iterative reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts a multiple-dictionary sparse transform strategy for the two typical cases $p \in \{1/2, 2/3\}$ based on an iterative $L_p$ thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted $L_p$ thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based $L_p$ regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding $L_1$ algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based $L_p$ case. Moreover, we conduct some applications on sparse image recovery and obtain good results by comparison with related work.

  2. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle, where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow.

  3. FMRQ-A Multiagent Reinforcement Learning Algorithm for Fully Cooperative Tasks.

    Science.gov (United States)

    Zhang, Zhen; Zhao, Dongbin; Gao, Junwei; Wang, Dongqing; Dai, Yujie

    2017-06-01

    In this paper, we propose a multiagent reinforcement learning algorithm for fully cooperative tasks, called frequency of the maximum reward Q-learning (FMRQ). FMRQ aims to achieve one of the optimal Nash equilibria so as to optimize the performance index in multiagent systems. The frequency of obtaining the highest global immediate reward, instead of the immediate reward itself, is used as the reinforcement signal. With FMRQ each agent does not need to observe the other agents' actions and only shares its state and reward at each step. We validate FMRQ through case studies of repeated games: four cases of two-player two-action games and one case of a three-player two-action game. It is demonstrated that FMRQ can converge to one of the optimal Nash equilibria in these cases. Moreover, comparison experiments on tasks with multiple states and finite steps are conducted: one is box-pushing and the other is a distributed sensor network problem. Experimental results show that the proposed algorithm outperforms others with higher performance.

  4. Privacy Protection on Multiple Sensitive Attributes

    Science.gov (United States)

    Li, Zhen; Ye, Xiaojun

    In recent years, a privacy model called k-anonymity has gained popularity in microdata releasing. As microdata may contain multiple sensitive attributes about an individual, the protection of multiple sensitive attributes has become an important problem. Unlike in the existing models for a single sensitive attribute, extra associations among multiple sensitive attributes should be investigated. Two kinds of disclosure scenarios may happen because of logical associations. The Q&S Diversity is checked to prevent the foregoing disclosure risks, with an α-Requirement definition used to ensure the diversity requirement. Finally, a two-step greedy generalization algorithm is used to carry out the processing of multiple sensitive attributes, dealing with quasi-identifiers and sensitive attributes respectively. We reduce the overall distortion by the measure of Masking SA.

  5. Application of Multiple-Population Genetic Algorithm in Optimizing the Train-Set Circulation Plan Problem

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-01-01

    Full Text Available The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, the TCPP involves additional complexity due to the maintenance constraint of train-sets: train-sets must conduct maintenance tasks after running for a certain time and distance. The TCPP is nondeterministic polynomial hard (NP-hard). There is no available algorithm that can obtain the globally optimal solution, and many factors such as the utilization mode and the maintenance mode impact the solution of the TCPP. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and then a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.

  6. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The central question for such multimodal optimization problems is how to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters; it also offers strong robustness, fast convergence and high flexibility. However, it suffers from premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favourably with other local optimization methods. Based on the advantages of both, this paper proposes a hybrid of the artificial bee colony algorithm and the BFGS algorithm to solve multimodal optimization problems. In the first step, the ABC algorithm is run to find a promising point; in the second step, that point is used as the initial point for the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
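
    The two-step structure is easy to sketch: a population-based global phase followed by BFGS refinement via SciPy. In the sketch below, a plain random-perturbation search stands in for the full ABC algorithm (employed, onlooker and scout bees are omitted), so only the hybrid structure matches the paper.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_minimize(f, bounds, n_foragers=30, n_iters=50, seed=0):
    """Global population search (ABC stand-in), then BFGS refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_foragers, len(lo)))
    for _ in range(n_iters):
        # Random perturbation search standing in for the ABC bee phases.
        trial = np.clip(pop + rng.normal(scale=0.1, size=pop.shape), lo, hi)
        better = np.array([f(t) < f(p) for t, p in zip(trial, pop)])
        pop[better] = trial[better]
    x0 = min(pop, key=f)                    # best point from the global phase
    return minimize(f, x0, method="BFGS")   # local refinement step

# Example on a simple multimodal function.
res = hybrid_minimize(lambda x: np.sum(x**2) + 10 * np.sum(1 - np.cos(x)),
                      bounds=[(-5.0, 5.0)] * 2)
print(res.x, res.fun)
```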

  7. Evaluation of Multiple Kernel Learning Algorithms for Crop Mapping Using Satellite Image Time-Series Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2017-09-01

    Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from different images of the SITS data and are then combined into a composite kernel using the MKL algorithms. The composite kernel, once constructed, can be used for the classification of the data using kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL. The experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that this strategy provides better performance than the standard classification algorithm. The results also showed that the optimization method of the MKL algorithm used affects both the computational time and the classification accuracy of this strategy.
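
    The simplest variant of this strategy, a fixed-weight kernel sum close in spirit to MKL-Sum, can be sketched as follows: one RBF kernel per acquisition date, combined into a composite kernel for a precomputed-kernel SVM. The weights and kernel parameters are illustrative and are not learned as in SimpleMKL, LPMKL or Group-Lasso MKL.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(images_a, images_b, weights, gamma=0.5):
    """Weighted sum of per-date RBF kernels (fixed weights, not learned)."""
    return sum(w * rbf_kernel(Xa, Xb, gamma=gamma)
               for w, Xa, Xb in zip(weights, images_a, images_b))

# Placeholder SITS data: one feature matrix per acquisition date.
rng = np.random.default_rng(0)
n_dates, n_train = 5, 60
train = [rng.standard_normal((n_train, 4)) for _ in range(n_dates)]
y = rng.integers(0, 3, n_train)                 # placeholder crop labels
w = np.full(n_dates, 1.0 / n_dates)

clf = SVC(kernel="precomputed").fit(composite_kernel(train, train, w), y)
```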

  8. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy the goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudo code and doing written exercises. Students cannot see clearly how each actual step works and might miss some steps because of their confusion. ...

  9. Multiple dual mode counter-current chromatography with variable duration of alternating phase elution steps.

    Science.gov (United States)

    Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N

    2014-06-20

    The multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of alternating phase elution steps has been developed and validated. The MDM separation processes with variable duration of phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of the MDM separation are analyzed: (1) one-step solute elution, where the separation is conducted so that the sample is transferred forward and back with upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; and (2) multi-step solute elution, where the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared to the predictions of the theory, and a good agreement between theory and experiment has been demonstrated.

  10. An Interference-Aware Traffic-Priority-Based Link Scheduling Algorithm for Interference Mitigation in Multiple Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Thien T. T. Le

    2016-12-01

    Full Text Available Currently, wireless body area networks (WBANs) are effectively used for health monitoring services. However, where WBANs are densely deployed, interference among WBANs can cause serious degradation of network performance and reliability. Inter-WBAN interference can be reduced by scheduling the communication links of interfering WBANs. In this paper, we propose an interference-aware traffic-priority-based link scheduling (ITLS) algorithm to overcome inter-WBAN interference in densely deployed WBANs. First, we model a network with multiple WBANs as an interference graph where node-level interference and traffic priority are taken into account. Second, we formulate link scheduling for multiple WBANs as an optimization model whose objective is to maximize the throughput of the entire network while ensuring the traffic priority of sensor nodes. Finally, we propose the ITLS algorithm for multiple WBANs on the basis of the optimization model. The proposed ITLS achieves high spatial reuse while considering traffic priority, packet length, and the number of interfered sensor nodes. Our simulation results show that the proposed ITLS significantly increases spatial reuse and network throughput with lower delay by mitigating inter-WBAN interference.

  11. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    Directory of Open Access Journals (Sweden)

    Le Zuo

    2018-04-01

    Full Text Available This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superposition of phase measurements from multiple sources, into separated groups and to estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort the multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realizes an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramér-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations.

  12. Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

    Directory of Open Access Journals (Sweden)

    K. K. L. B. Adikaram

    2014-01-01

    Full Text Available With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Since the bit reversal process requires considerable processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that it may seriously impact the cache performance of the computer if implemented. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and four-vector versions of the BRA with the new index mapping reported 34% and 16% improvements in performance, respectively, relative to similar versions of the linear BRA of Elster, which uses a single one-dimensional memory structure.
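
    For reference, the baseline computation that all of these variants optimize is the bit-reversal permutation itself, sketched below; the paper's contribution, splitting the permutation across multiple one-dimensional arrays for cache efficiency, is not reproduced here.

```python
def bit_reverse_permutation(n_bits):
    """Return the bit-reversal permutation of 0..2**n_bits - 1 (baseline BRA)."""
    n = 1 << n_bits
    perm = [0] * n
    for i in range(n):
        r, x = 0, i
        for _ in range(n_bits):       # reverse the n_bits bits of i
            r = (r << 1) | (x & 1)
            x >>= 1
        perm[i] = r
    return perm

print(bit_reverse_permutation(3))  # [0, 4, 2, 6, 1, 5, 3, 7]
```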

  13. Algorithms for Some Euler-Type Identities for Multiple Zeta Values

    Directory of Open Access Journals (Sweden)

    Shifeng Ding

    2013-01-01

    Full Text Available Multiple zeta values are the numbers defined by the convergent series $\zeta(s_1,s_2,\ldots,s_k)=\sum_{n_1>n_2>\cdots>n_k>0} 1/(n_1^{s_1} n_2^{s_2}\cdots n_k^{s_k})$, where $s_1, s_2, \ldots, s_k$ are positive integers with $s_1>1$. For $k\le n$, let $E(2n,k)$ be the sum of all multiple zeta values with even arguments whose weight is $2n$ and whose depth is $k$. The well-known result $E(2n,2)=3\zeta(2n)/4$ was extended to $E(2n,3)$ and $E(2n,4)$ by Z. Shen and T. Cai. Applying the theory of symmetric functions, Hoffman gave an explicit generating function for the numbers $E(2n,k)$ and then a direct formula for $E(2n,k)$ for arbitrary $k\le n$. In this paper we apply a technique introduced by Granville to present an algorithm to calculate $E(2n,k)$ and prove that the direct formula can also be deduced from Eisenstein's double product.
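
    The identity $E(2n,2)=3\zeta(2n)/4$ can be checked numerically at weight 4, where the only even-argument depth-2 term is $\zeta(2,2)$. The truncation bound N in the snippet below is an arbitrary illustrative choice.

```python
from math import pi

# zeta(2,2) = sum over m > n >= 1 of 1/(m^2 * n^2), truncated at N.
N = 200_000
inner, zeta22 = 0.0, 0.0          # inner tracks sum_{n < m} 1/n^2
for m in range(2, N + 1):
    inner += 1.0 / (m - 1) ** 2
    zeta22 += inner / m**2

print(zeta22)                 # ~0.81173, approaching pi**4/120
print(3 * (pi**4 / 90) / 4)   # 3*zeta(4)/4 = pi**4/120 ~ 0.81174
```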

  14. Testing for direct genetic effects using a screening step in family-based association studies

    Directory of Open Access Journals (Sweden)

    Sharon M Lutz

    2013-11-01

    Full Text Available In genome-wide association studies (GWAS), family-based studies tend to have less power to detect genetic associations than population-based studies, such as case-control studies. This can be an issue when testing whether genes in a family-based GWAS have a direct effect on the phenotype of interest or act indirectly through a secondary phenotype. When multiple SNPs are tested for a direct effect in the family-based study, a screening step can be used to minimize the burden of multiple comparisons in the causal analysis. We propose a two-stage screening step that can be incorporated into the family-based association test (FBAT) approach, similar to the conditional mean model approach in the Van Steen algorithm [1]. Simulations demonstrate that the type I error is preserved and that this method is advantageous when multiple markers are tested. The method is illustrated by an application to the Framingham Heart Study.

  15. Improvement on LiFePO4 Cell Balancing Algorithm

    Directory of Open Access Journals (Sweden)

    Vencislav C. Valchev

    2018-02-01

    Full Text Available The paper presents an improvement in the operation time of a cell balancing algorithm compared to the conventional multiple-cell LiFePO4 charge methodology. A flowchart is synthesised to explain the main steps of the software design, which is afterwards implemented in a microcontroller. Experimental results are provided to clarify the transition between the charge and balance processes. Graphical data for the voltage equalization of eight cells are presented to verify the proposed improvement.

  16. An Improved Image Encryption Algorithm Based on Cyclic Rotations and Multiple Chaotic Sequences: Application to Satellite Images

    Directory of Open Access Journals (Sweden)

    MADANI Mohammed

    2017-10-01

    Full Text Available In this paper, a new satellite image encryption algorithm based on the combination of multiple chaotic systems and a random cyclic rotation technique is proposed. Our contribution consists in implementing three different chaotic maps (logistic, sine, and standard) combined to improve the security of satellite images. Besides enhancing the encryption, the proposed algorithm also focuses on the efficiency of the ciphered images. Compared with classical encryption schemes based on multiple chaotic maps and the Rubik's cube rotation, our approach has not only the same merits of chaos systems, like high sensitivity to initial values, unpredictability, and pseudo-randomness, but also other advantages, like a higher number of permutations and better performance in Peak Signal to Noise Ratio (PSNR) and Maximum Deviation (MD).
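
    The chaotic-sequence ingredient of such schemes can be illustrated with a logistic-map keystream XORed onto an image. This sketch shows only the diffusion idea; the paper's combination of three maps with cyclic Rubik's-cube-style rotations is not reproduced, and the key values x0 and r are arbitrary.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x -> r*x*(1-x); (x0, r) act as the key."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # placeholder image
ks = logistic_keystream(x0=0.3141, r=3.9999, n=img.size).reshape(img.shape)
cipher = img ^ ks                        # XOR diffusion step
assert np.array_equal(cipher ^ ks, img)  # decryption recovers the image
```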

  17. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    Science.gov (United States)

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998

  18. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jiaxi Wang

    2016-01-01

    Full Text Available The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm with respect to optimality.

  19. Multiple Sclerosis Identification Based on Fractional Fourier Entropy and a Modified Jaya Algorithm

    Directory of Open Access Journals (Sweden)

    Shui-Hua Wang

    2018-04-01

    Full Text Available Aim: Currently, identification of multiple sclerosis (MS) by human experts may come across the problem of “normal-appearing white matter”, which causes low sensitivity. Methods: In this study, we present a computer-vision-based approach to identify MS automatically. The proposed method first extracts the fractional Fourier entropy map from a specified brain image. Afterwards, it sends the features to a multilayer perceptron trained by a proposed improved parameter-free Jaya algorithm. We use cost-sensitive learning to handle the imbalanced-data problem. Results: The 10 × 10-fold cross validation showed our method yielded a sensitivity of 97.40 ± 0.60%, a specificity of 97.39 ± 0.65%, and an accuracy of 97.39 ± 0.59%. Conclusions: We validated by experiments that the proposed improved Jaya performs better than the plain Jaya algorithm and other recent bio-inspired algorithms in terms of classification performance and training speed. In addition, our method is superior to four state-of-the-art MS identification approaches.

  20. Multiple Representation Instruction First versus Traditional Algorithmic Instruction First: Impact in Middle School Mathematics Classrooms

    Science.gov (United States)

    Flores, Raymond; Koontz, Esther; Inan, Fethi A.; Alagic, Mara

    2015-01-01

    This study examined the impact of the order of two teaching approaches on students' abilities and on-task behaviors while learning how to solve percentage problems. Two treatment groups were compared: MR-first received multiple representation instruction followed by traditional algorithmic instruction, and TA-first received these teaching…

  1. Adaptive Waveform Design for Cognitive Radar in Multiple Targets Situations

    Directory of Open Access Journals (Sweden)

    Xiaowen Zhang

    2018-02-01

    Full Text Available In this paper, the problem of cognitive radar (CR) waveform optimization design for target detection and estimation in multiple extended target situations is investigated. This problem is analyzed under signal-dependent interference, as well as additive channel noise, for extended targets with unknown target impulse response (TIR). To address this problem, an improved algorithm is employed for target detection by maximizing the detection probability of the received echo while ensuring the TIR estimation precision. In this algorithm, an additional weight vector is introduced to achieve a trade-off among different targets. Both the estimate of the TIR and the transmit waveform can be updated at each step based on the previous step. Under the same constraints on waveform energy and bandwidth, the information-theoretic approach is also considered, and the relationship between the waveforms designed based on the two criteria is discussed. Unlike most existing works, which only consider a single target with temporally correlated characteristics, this method considers waveform design for multiple extended targets. Simulation results demonstrate that, compared with the linear frequency modulated (LFM) signal, waveforms designed based on the maximum detection probability and maximum mutual information (MI) criteria make radar echoes contain more multiple-target information and improve radar performance as a result.

  2. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.

  3. SINGLE VERSUS MULTIPLE TRIAL VECTORS IN CLASSICAL DIFFERENTIAL EVOLUTION FOR OPTIMIZING THE QUANTIZATION TABLE IN JPEG BASELINE ALGORITHM

    Directory of Open Access Journals (Sweden)

    B Vinoth Kumar

    2017-07-01

    Full Text Available The quantization table is responsible for the compression/quality trade-off in the baseline Joint Photographic Experts Group (JPEG) algorithm, and its design is therefore viewed as an optimization problem. In the literature, Classical Differential Evolution (CDE) has been found to be a promising algorithm for generating the optimal quantization table. However, the searching capability of CDE can be limited by the generation of a single trial vector per iteration, which in turn reduces the convergence speed. This paper studies the performance of CDE when employing multiple trial vectors in a single iteration. An extensive performance analysis has been made between CDE and CDE with multiple trial vectors in terms of the optimization process, accuracy, convergence speed and reliability. The analysis reveals that CDE with multiple trial vectors improves the convergence speed of CDE, which is confirmed using a statistical hypothesis test (t-test).
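
    The idea of multiple trial vectors per target is a small change to the classical DE/rand/1/bin generation step, sketched generically below (the JPEG quantization-table encoding is omitted); F, CR and n_trials are illustrative settings.

```python
import numpy as np

def de_step_multi_trial(pop, f, F=0.5, CR=0.9, n_trials=4, rng=None):
    """One DE/rand/1/bin generation with several trial vectors per target."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        best_trial, best_val = pop[i], f(pop[i])
        for _ in range(n_trials):
            idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True    # at least one component crosses over
            trial = np.where(cross, mutant, pop[i])
            val = f(trial)
            if val < best_val:                     # target competes against the
                best_trial, best_val = trial, val  # best of n_trials candidates
        new_pop[i] = best_trial
    return new_pop
```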

  4. A color based face detection system using multiple templates

    Institute of Scientific and Technical Information of China (English)

    王涛; 卜佳俊; 陈纯

    2003-01-01

    A color based system using multiple templates was developed and implemented for detecting human faces in color images. The algorithm consists of three image processing steps. The first step is human skin color statistics. Then it separates skin regions from non-skin regions. After that, it locates the frontal human face(s) within the skin regions. In the first step, 250 skin samples from persons of different ethnicities are used to determine the color distribution of human skin in chromatic color space in order to get a chroma chart showing likelihoods of skin colors. This chroma chart is used to generate, from the original color image, a gray scale image whose gray value at a pixel shows its likelihood of representing skin. The algorithm uses an adaptive thresholding process to achieve the optimal threshold value for dividing the gray scale image into separate skin regions and non-skin regions. Finally, multiple face template matching is used to determine whether a given skin region represents a frontal human face or not. Tests of the system with more than 400 color images showed a detection rate of 83%, which is better than most color-based face detection systems. The average speed for face detection is 0.8 second/image (400×300 pixels) on a Pentium 3 (800 MHz) PC.

  5. A color based face detection system using multiple templates

    Institute of Scientific and Technical Information of China (English)

    王涛; 卜佳俊; 陈纯

    2003-01-01

    A color based system using multiple templates was developed and implemented for detecting human faces in color images. The algorithm consists of three image processing steps. The first step is human skin color statistics. Then it separates skin regions from non-skin regions. After that, it locates the frontal human face(s) within the skin regions. In the first step, 250 skin samples from persons of different ethnicities are used to determine the color distribution of human skin in chromatic color space in order to get a chroma chart showing likelihoods of skin colors. This chroma chart is used to generate, from the original color image, a gray scale image whose gray value at a pixel shows its likelihood of representing the skin. The algorithm uses an adaptive thresholding process to achieve the optimal threshold value for dividing the gray scale image into separate skin regions and non-skin regions. Finally, multiple face template matching is used to determine whether a given skin region represents a frontal human face or not. Tests of the system with more than 400 color images showed a detection rate of 83%, which is better than most color-based face detection systems. The average speed for face detection is 0.8 second/image (400×300 pixels) on a Pentium 3 (800 MHz) PC.

  6. Subroutine MLTGRD: a multigrid algorithm based on multiplicative correction and implicit non-stationary iteration

    International Nuclear Information System (INIS)

    Barry, J.M.; Pollard, J.P.

    1986-11-01

    A FORTRAN subroutine MLTGRD is provided to solve efficiently the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels

  7. Joint optimization of algorithmic suites for EEG analysis.

    Science.gov (United States)

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps, each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous, with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture remain interpretable.

  8. An improved affine projection algorithm for active noise cancellation

    Science.gov (United States)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves smaller steady-state error and faster convergence speed. Simulation results prove that its performance is superior to that of the traditional affine projection algorithm, and in active noise control (ANC) applications the new algorithm obtains very good results.

  9. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful applications. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, its history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  10. Variable depth recursion algorithm for leaf sequencing

    International Nuclear Information System (INIS)

    Siochi, R. Alfredo C.

    2007-01-01

    The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of the tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms for 1400 random 15×15 test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution.

  11. Kalman Filtering for Discrete Stochastic Systems with Multiplicative Noises and Random Two-Step Sensor Delays

    Directory of Open Access Journals (Sweden)

    Dongyan Chen

    2015-01-01

    Full Text Available This paper is concerned with the optimal Kalman filtering problem for a class of discrete stochastic systems with multiplicative noises and random two-step sensor delays. Three Bernoulli distributed random variables with known conditional probabilities are introduced to characterize the phenomenon of random two-step sensor delays which may happen during data transmission. By using the state augmentation approach and the innovation analysis technique, an optimal Kalman filter is constructed for the augmented system in the sense of the minimum mean square error (MMSE). Subsequently, the optimal Kalman filtering is derived for the corresponding augmented system at initial instants. Finally, a simulation example is provided to demonstrate the feasibility and effectiveness of the proposed filtering method.

  12. A multiobjective non-dominated sorting genetic algorithm (NSGA-II) for the Multiple Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Rubén Iván Bolaños

    2015-06-01

    Full Text Available This paper considers a multi-objective version of the Multiple Traveling Salesman Problem (MOmTSP). In particular, two objectives are considered: the minimization of the total traveled distance and the balance of the working times of the traveling salesmen. The problem is formulated as an integer multi-objective optimization model. A non-dominated sorting genetic algorithm (NSGA-II) is proposed to solve the MOmTSP. The solution scheme allows one to find a set of ordered solutions in Pareto fronts by considering the concept of dominance. Tests on real world instances and instances adapted from the literature show the effectiveness of the proposed algorithm.
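
    At the core of NSGA-II's non-dominated sorting is the Pareto dominance test, which for the MOmTSP compares tours on total distance and working-time imbalance. A minimal sketch:

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

# Example with (total distance, working-time imbalance) objective pairs:
print(dominates((120.0, 3.5), (150.0, 7.0)))  # True: better on both objectives
print(dominates((120.0, 9.0), (150.0, 7.0)))  # False: a trade-off, incomparable
```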

  13. 3-D Image Encryption Based on Rubik's Cube and RC6 Algorithm

    Science.gov (United States)

    Helmy, Mai; El-Rabaie, El-Sayed M.; Eldokany, Ibrahim M.; El-Samie, Fathi E. Abd

    2017-12-01

    A novel encryption algorithm based on the 3-D Rubik's cube is proposed in this paper to achieve 3D encryption of a group of images. The proposed encryption algorithm begins with RC6 as a first step for encrypting multiple images separately. After that, the obtained encrypted images are further encrypted with the 3-D Rubik's cube, the RC6-encrypted images being used as the faces of the cube. In image-encryption terms, the RC6 algorithm adds a degree of diffusion, while the Rubik's cube algorithm adds a degree of permutation. The simulation results demonstrate that the proposed encryption algorithm is efficient and exhibits strong robustness and security. The encrypted images are further transmitted over a wireless Orthogonal Frequency Division Multiplexing (OFDM) system and decrypted at the receiver side. Evaluation of the quality of the decrypted images at the receiver side reveals good results.

  14. Simulation study of multi-step model algorithmic control of the nuclear reactor thermal power tracking system

    International Nuclear Information System (INIS)

    Shi Xiaoping; Xu Tianshu

    2001-01-01

    Classical control methods can hardly ensure thermal power tracking accuracy, because the nuclear reactor system is a complex nonlinear system with uncertain parameters and disturbances. A non-parametric model is constructed from the open-loop impulse response of the system, and a thermal power tracking digital control law is presented using the multi-step model algorithmic control principle. The presented control method has good tracking performance and robustness, and it works despite the existence of unmeasurable disturbances. The simulation experiment verifies the correctness and effectiveness of the method: high-accuracy matching between the thermal power and the reference load is achieved.

  15. Application of multiple signal classification algorithm to frequency estimation in coherent dual-frequency lidar

    Science.gov (United States)

    Li, Ruixiao; Li, Kun; Zhao, Changming

    2018-01-01

    Coherent dual-frequency Lidar (CDFL) is a new development of Lidar which dramatically reduces the influence of atmospheric interference by using a dual-frequency laser to measure range and velocity with high precision. Based on the nature of CDFL signals, we propose to apply the multiple signal classification (MUSIC) algorithm in place of the fast Fourier transform (FFT) to estimate the phase differences in dual-frequency Lidar. In the presence of Gaussian white noise, the simulation results show that the signal peaks are more evident when using the MUSIC algorithm instead of the FFT under conditions of low signal-to-noise ratio (SNR), which helps to improve the precision of range and velocity detection, especially for long-distance measurement systems.
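
    The MUSIC pseudospectrum itself is straightforward to compute from the sample covariance and its noise subspace. The sketch below is a generic frequency-estimation example in Python, not the dual-frequency lidar processing chain; the two test tones and all constants are illustrative.

```python
import numpy as np

def music_spectrum(X, n_sources, freqs, fs):
    """MUSIC pseudospectrum over a frequency grid; X holds one snapshot per column."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]      # sample covariance
    _, V = np.linalg.eigh(R)             # eigenvectors, ascending eigenvalues
    En = V[:, :m - n_sources]            # noise subspace
    P = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(m) / fs)   # steering vector
        P[i] = 1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)
    return P

# Two complex tones at 50 and 80 Hz with random phases across snapshots.
rng = np.random.default_rng(0)
fs, m, snaps = 1000.0, 20, 200
k = np.arange(m)[:, None]
X = sum(np.exp(1j * (2 * np.pi * f0 * k / fs
                     + rng.uniform(0, 2 * np.pi, snaps)[None, :]))
        for f0 in (50.0, 80.0))
X += 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))

freqs = np.linspace(10, 200, 400)
P = music_spectrum(X, n_sources=2, freqs=freqs, fs=fs)
print(freqs[np.argmax(P)])   # the pseudospectrum peaks near 50 Hz and 80 Hz
```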

  16. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  17. Alignment of Short Reads: A Crucial Step for Application of Next-Generation Sequencing Data in Precision Medicine

    Directory of Open Access Journals (Sweden)

    Hao Ye

    2015-11-01

    Full Text Available Precision medicine or personalized medicine has been proposed as a modernized and promising medical strategy. Genetic variants of patients are the key information for the implementation of precision medicine. Next-generation sequencing (NGS) is an emerging technology for deciphering genetic variants. Alignment of raw reads to a reference genome is one of the key steps in NGS data analysis, and many algorithms have been developed for the alignment of short read sequences since 2008. Users have to decide which alignment algorithm to use in their studies; selecting the right aligner involves choosing not only the algorithm itself but also a suitable set of parameters for it. Understanding these algorithms helps in selecting the appropriate alignment algorithm for different applications in precision medicine. Here, we review currently available algorithms and their major strategies, such as seed-and-extend and the q-gram filter. We also discuss the challenges in current alignment algorithms, including alignment in multiple repeated regions, long-read alignment and alignment facilitated by known genetic variants.

  18. Generalized Grover's Algorithm for Multiple Phase Inversion States

    Science.gov (United States)

    Byrnes, Tim; Forster, Gary; Tessler, Louis

    2018-02-01

    Grover's algorithm is a quantum search algorithm that proceeds by repeated applications of the Grover operator and the Oracle until the state evolves to one of the target states. In the standard version of the algorithm, the Grover operator inverts the sign on only one state. Here we provide an exact solution to the problem of performing Grover's search where the Grover operator inverts the sign on N states. We show the underlying structure in terms of the eigenspectrum of the generalized Hamiltonian, and derive an appropriate initial state to perform the Grover evolution. This allows us to use the quantum phase estimation algorithm to solve the search problem in this generalized case, completely bypassing the Grover algorithm altogether. We obtain a time complexity for this case of $\sqrt{D/M^{\alpha}}$, where $D$ is the search space dimension, $M$ is the number of target states, and $\alpha \approx 1$, which is close to the optimal scaling.

  19. An improved VSS NLMS algorithm for active noise cancellation

    Science.gov (United States)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a contradiction between the convergence speed and the steady-state error that affects its performance. We therefore propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and effectively resolves the contradiction in NLMS. The simulation results show that the proposed algorithm has good tracking ability, fast convergence rate and low steady-state error simultaneously.
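
    One illustrative variable-step-size rule (not the paper's exact formula) shrinks the step with the iteration count while letting it grow with the instantaneous error magnitude. A minimal NLMS sketch with such a rule:

```python
import numpy as np

def vss_nlms(x, d, order=16, mu_max=1.0, mu_min=0.05, decay=500.0, eps=1e-8):
    """NLMS with an illustrative variable step size rule."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                 # regressor, most recent first
        e[n] = d[n] - w @ u
        # Step size shrinks with iteration count and grows with |error|;
        # this specific rule is an assumption, not the paper's formula.
        mu = mu_min + (mu_max - mu_min) * (1 - np.exp(-abs(e[n]))) * np.exp(-n / decay)
        w += mu * e[n] * u / (u @ u + eps)       # normalized update
    return w, e
```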

  20. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Directory of Open Access Journals (Sweden)

    Böhme Johann F

    2005-01-01

    Full Text Available We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model, and the proposed recursion updates the polynomial coefficients as new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross each other, the procedure designed for the linear polynomial model performs better than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  1. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing k_eff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been applied to optimize reactor core loading patterns. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions: it uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, as a first step, the GSA is compared with other meta-heuristic algorithms on the Shekel's Foxholes problem. In the second step, to find the best core, the GSA is applied to three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimizations with the following goals are considered: increasing the multiplication factor (k_eff), decreasing the power peaking factor (PPF) and flattening the power density. For the neutronic calculations, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
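
    For readers unfamiliar with GSA, the following is a minimal continuous-domain sketch of its core update (masses derived from fitness, a decaying gravitational constant, force-driven velocities), demonstrated on a toy sphere function rather than on a core loading pattern; all parameter values are illustrative assumptions.

      import numpy as np

      def gsa(f, dim=2, agents=20, iters=200, g0=100.0, decay=20.0, seed=1):
          """Bare-bones Gravitational Search Algorithm for minimization."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5.0, 5.0, (agents, dim))
          v = np.zeros_like(x)
          for t in range(iters):
              fit = np.array([f(xi) for xi in x])
              best, worst = fit.min(), fit.max()
              m = (fit - worst) / (best - worst - 1e-12)  # best agent -> heaviest
              m = m / (m.sum() + 1e-12)
              g = g0 * np.exp(-decay * t / iters)         # decaying gravity
              a = np.zeros_like(x)
              for i in range(agents):
                  for j in range(agents):
                      if i != j:
                          r = np.linalg.norm(x[i] - x[j]) + 1e-12
                          a[i] += rng.random() * g * m[j] * (x[j] - x[i]) / r
              v = rng.random((agents, dim)) * v + a       # stochastic inertia
              x = x + v
          fit = np.array([f(xi) for xi in x])
          return x[fit.argmin()], float(fit.min())

      xbest, fbest = gsa(lambda z: float(np.sum(z ** 2)))  # sphere test function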

  2. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    Science.gov (United States)

    Zhang, J; Nawata, K

    2018-05-01

    Influenza results in approximately 3-5 million annual cases of severe illness and 250,000-500,000 deaths. We urgently need an accurate multi-step-ahead time-series forecasting model to help hospitals dynamically assign beds to influenza patients in each year's varying influenza season, and to aid pharmaceutical companies in formulating flexible vaccine-manufacturing plans for each year's influenza vaccine. In this study, we utilised four different multi-step prediction strategies in the long short-term memory (LSTM) framework. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy: the mean absolute percentage errors from two- to 13-step-ahead prediction for the US influenza-like illness rates were all low. An adjusted LSTM has thus been applied and refined to perform multi-step-ahead prediction for influenza outbreaks; hopefully, this modelling methodology can be applied in other countries and thereby help prevent and control influenza worldwide.
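
    The "multiple single-output" strategy trains one model per forecast horizon. The sketch below illustrates that strategy with a linear model standing in for the paper's six-layer LSTM; the window length, horizons and synthetic weekly series are all our assumptions.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      def direct_multistep(series, window=52, horizons=range(2, 14)):
          """Direct (one-model-per-horizon) multi-step forecasting."""
          models = {}
          for h in horizons:
              n = len(series) - window - h + 1
              X = np.array([series[i:i + window] for i in range(n)])
              y = np.array([series[i + window + h - 1] for i in range(n)])
              models[h] = LinearRegression().fit(X, y)
          last = np.asarray(series[-window:]).reshape(1, -1)
          return {h: float(m.predict(last)[0]) for h, m in models.items()}

      # Synthetic weekly "ILI rate" with an annual cycle.
      rng = np.random.default_rng(0)
      t = np.arange(520)
      ili = 2.0 + np.sin(2 * np.pi * t / 52) + 0.1 * rng.standard_normal(t.size)
      print(direct_multistep(ili))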

  3. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    National Research Council Canada - National Science Library

    Williams, Jason P

    2007-01-01

    .... Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two dimensional dataset in order to discover potential problems with the algorithms...

  4. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
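
    A compact CPU reference of the kernel being accelerated may help: full-search SAD block matching over an integer displacement grid. This is our illustration of the classical BMA, not the paper's CUDA code; the block size, search radius and synthetic frames are arbitrary.

      import numpy as np

      def full_search_sad(ref, cur, block=8, radius=4):
          """Full-search block matching with the SAD criterion (integer grid)."""
          h, w = cur.shape
          vectors = np.zeros((h // block, w // block, 2), dtype=int)
          for by in range(0, h - block + 1, block):
              for bx in range(0, w - block + 1, block):
                  patch = cur[by:by + block, bx:bx + block].astype(int)
                  best, best_v = np.inf, (0, 0)
                  for dy in range(-radius, radius + 1):
                      for dx in range(-radius, radius + 1):
                          y, x = by + dy, bx + dx
                          if y < 0 or x < 0 or y + block > h or x + block > w:
                              continue  # candidate window falls outside frame
                          cand = ref[y:y + block, x:x + block].astype(int)
                          sad = np.abs(cand - patch).sum()
                          if sad < best:
                              best, best_v = sad, (dy, dx)
                  vectors[by // block, bx // block] = best_v
          return vectors

      rng = np.random.default_rng(0)
      ref = rng.integers(0, 256, (64, 64))
      cur = np.roll(ref, (2, -1), axis=(0, 1))    # frame shifted by (2, -1)
      print(full_search_sad(ref, cur)[2, 2])      # interior block -> [-2  1]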

  5. Improved Harmony Search Algorithm for Truck Scheduling Problem in Multiple-Door Cross-Docking Systems

    Directory of Open Access Journals (Sweden)

    Zhanzhong Wang

    2018-01-01

    Full Text Available The key to realizing cross-docking is coordinating inbound and outbound trucks, so a proper truck sequence makes the cross-docking system much more efficient and reduces the makespan. A cross-docking system with multiple receiving and shipping dock doors is proposed. The objective is to find the best door assignments and truck sequences, respecting the product distribution requirements, so as to minimize the total makespan of cross docking. To solve the problem, which is formulated as a mixed integer linear programming (MILP) model, three metaheuristics, namely harmony search (HS), improved harmony search (IHS) and a genetic algorithm (GA), are proposed. Furthermore, the fixed parameters are tuned by Taguchi experiments to further improve the accuracy of the solutions. Finally, several numerical examples are presented to evaluate the performance of the proposed algorithms.
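
    As background, here is a bare-bones harmony search loop on a continuous toy objective. The truck-scheduling problem itself is discrete and would additionally need an encoding of door assignments and truck orders; all parameters below (HMCR, PAR, bandwidth, bounds) are illustrative assumptions.

      import numpy as np

      def harmony_search(f, dim=4, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                         iters=2000, lo=-5.0, hi=5.0, seed=0):
          """Generic harmony search for minimizing f over a box."""
          rng = np.random.default_rng(seed)
          hm = rng.uniform(lo, hi, (hms, dim))            # harmony memory
          cost = np.array([f(h) for h in hm])
          for _ in range(iters):
              new = np.empty(dim)
              for k in range(dim):
                  if rng.random() < hmcr:                 # consider memory
                      new[k] = hm[rng.integers(hms), k]
                      if rng.random() < par:              # pitch adjustment
                          new[k] += bw * rng.uniform(-1, 1)
                  else:                                   # random improvisation
                      new[k] = rng.uniform(lo, hi)
              c = f(new)
              worst = cost.argmax()
              if c < cost[worst]:                         # replace worst harmony
                  hm[worst], cost[worst] = new, c
          return hm[cost.argmin()], float(cost.min())

      best, val = harmony_search(lambda z: float(np.sum(z ** 2)))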

  6. Hybrid Multiple Soft-Sensor Models of Grinding Granularity Based on Cuckoo Searching Algorithm and Hysteresis Switching Strategy

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-01-01

    Full Text Available According to the characteristics of the grinding process and the accuracy requirements of its technical indicators, a hybrid multiple soft-sensor modeling method for grinding granularity is proposed based on the cuckoo search (CS) algorithm and a hysteresis switching (HS) strategy. First, a mechanism soft-sensor model of grinding granularity is deduced from the technical characteristics and a large body of experimental data on the grinding process, and BP neural network and wavelet neural network (WNN) soft-sensor models are set up. Then, the hybrid multiple soft-sensor model based on the hysteresis switching strategy is realized: the optimal model is selected as the current predictive model according to the switching performance index at each sampling instant. Finally, the cuckoo search algorithm is adopted to optimize the performance parameters of the hysteresis switching strategy. Simulation results show that the proposed model generalizes better and predicts more precisely, satisfying the real-time control requirements of the grinding classification process.

  7. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  8. Validation of patient determined disease steps (PDDS) scale scores in persons with multiple sclerosis.

    Science.gov (United States)

    Learmonth, Yvonne C; Motl, Robert W; Sandroff, Brian M; Pula, John H; Cadavid, Diego

    2013-04-25

    The Patient Determined Disease Steps (PDDS) is a promising patient-reported outcome (PRO) of disability in multiple sclerosis (MS). To date, there is limited evidence regarding the validity of PDDS scores, despite its sound conceptual development and broad inclusion in MS research. This study examined the validity of the PDDS based on (1) the association with Expanded Disability Status Scale (EDSS) scores and (2) the pattern of associations between PDDS and EDSS scores with Functional System (FS) scores as well as ambulatory and other outcomes. 96 persons with MS provided demographic/clinical information, completed the PDDS and other PROs including the Multiple Sclerosis Walking Scale-12 (MSWS-12), and underwent a neurological examination for generating FS and EDSS scores. Participants completed assessments of cognition and ambulation including the 6-minute walk (6MW), and wore an accelerometer during waking hours over seven days. There was a strong correlation between EDSS and PDDS scores (ρ = .783). PDDS and EDSS scores were strongly correlated with Pyramidal (ρ = .578 and ρ = .647, respectively) and Cerebellar (ρ = .501 and ρ = .528, respectively) FS scores as well as 6MW distance (ρ = .704 and ρ = .805, respectively), MSWS-12 scores (ρ = .801 and ρ = .729, respectively), and accelerometer steps/day (ρ = -.740 and ρ = -.717, respectively). This study provides novel evidence supporting the PDDS as a valid PRO of disability in MS.

  9. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    Science.gov (United States)

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  10. Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent

    Directory of Open Access Journals (Sweden)

    Petruseva Silvana

    2006-01-01

    Full Text Available This paper compares the efficiency of two algorithms by estimating their complexity. For solving the problem, the Neural Network Crossbar Adaptive Array (NN-CAA) is used as the agent architecture, implementing a model of an emotion. The problem discussed is how to find the shortest path in an environment with n states, one of which is the starting state, one is the goal state, and some states are undesirable and should be avoided. It is shown that finding one path (one solution) is efficient, i.e. achievable in polynomial time, by both algorithms. One of the algorithms is faster than the other only in the multiplicative constant, and it represents a step toward the optimality of the learning process. However, finding the optimal solution (the shortest path) by either algorithm takes exponential time, which is asserted by two theorems. It might be concluded that the concept of subgoal is a step toward the optimality of the agent's learning process. Yet it should be explored further in order to obtain an efficient, polynomial algorithm.

  11. Fast and Rigorous Assignment Algorithm Multiple Preference and Calculation

    Directory of Open Access Journals (Sweden)

    Ümit Çiftçi

    2010-03-01

    Full Text Available The goal of this paper is to develop an algorithm that evaluates students and then places them according to their desired choices and dependent preferences. The developed algorithm is also used to implement software. The success and accuracy of the software, as well as the algorithm, were tested by applying them to the ability test at Beykent University. This ability test is repeated several times in order to fill all available places at the Fine Art Faculty departments in every academic year. Applications in the 2008-2009 and 2009-2010 academic years show that this algorithm is very fast and rigorous. Key Words: Assignment algorithm, student placement, ability test

  12. A multiple objective magnet sorting algorithm for the ALS insertion devices

    International Nuclear Information System (INIS)

    Humphries, D.; Goetz, F.; Kownacki, P.; Marks, S.; Schlueter, R.

    1994-07-01

    Insertion devices for the Advanced Light Source (ALS) incorporate large numbers of permanent magnets which have a variety of magnetization orientation errors. These orientation errors can produce field errors which affect both the spectral brightness of the insertion devices and the storage ring electron beam dynamics. A perturbation study was carried out to quantify the effects of orientation errors acting in a hybrid magnetic structure. The results of this study were used to develop a multiple stage sorting algorithm which minimizes undesirable integrated field errors and essentially eliminates pole excitation errors. When applied to a measured magnet population for an existing insertion device, an order of magnitude reduction in integrated field errors was achieved while maintaining near zero pole excitation errors

  13. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Science.gov (United States)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, multifractal methods based on Brownian motion Hölder regularity functions (polynomial, periodic (sine) and exponential) for 2D images were applied to MR brain images, aiming to easily identify distressed regions in MS patients. From these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  14. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization.

    Directory of Open Access Journals (Sweden)

    Xiangrong Li

    Full Text Available It is generally acknowledged that the conjugate gradient (CG method achieves global convergence--with at most a linear convergence rate--because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.

  15. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization.

    Science.gov (United States)

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence--with at most a linear convergence rate--because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.
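
    A plain PRP loop with the classic non-negativity restart (PRP+) conveys the flavor of such methods. The paper's variant additionally folds function-value information into the direction update, which is not reproduced in this sketch; the Armijo constants and the Rosenbrock test problem are our choices.

      import numpy as np

      def prp_cg(f, grad, x0, tol=1e-8, max_iter=1000):
          """PRP+ conjugate gradient with Armijo backtracking line search."""
          x = np.asarray(x0, dtype=float)
          g = grad(x)
          d = -g
          for _ in range(max_iter):
              if np.linalg.norm(g) < tol:
                  break
              if g @ d >= 0:                    # safeguard: restart downhill
                  d = -g
              t, fx = 1.0, f(x)
              while f(x + t * d) > fx + 1e-4 * t * (g @ d):
                  t *= 0.5                      # Armijo backtracking
              x_new = x + t * d
              g_new = grad(x_new)
              beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ restart rule
              x, g, d = x_new, g_new, -g_new + beta * d
          return x

      # Rosenbrock test problem.
      f = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
      grad = lambda z: np.array([
          -2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0] ** 2),
          200 * (z[1] - z[0] ** 2),
      ])
      print(prp_cg(f, grad, [-1.2, 1.0]))       # approaches [1, 1]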

  16. ON A NUMERICAL ALGORITHM FOR UNCERTAIN SYSTEM ...

    African Journals Online (AJOL)

    Administrator

    Science World Journal Vol 7 (No 1) 2012, www.scienceworldjournal.org, ISSN 1597-6343. On a Numerical Algorithm for Uncertain System. Newton's Algorithm. Step 1: Calculate F(x_k), g(x_k), A(x_k). Step 2: Check if ||g(x_k)|| < ε for a predetermined ε; if so, stop; else, Step 3: Set A(x_k) P_k = -g(x_k). Step 4: Set x_{k+1} = x_k + P_k.
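
    Read as a classical Newton iteration, with g the gradient and A the Hessian of the objective F (our reading of the garbled source), the steps above translate directly into code:

      import numpy as np

      def newton(g, A, x0, eps=1e-10, max_iter=50):
          """Newton's algorithm as reconstructed above."""
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              gk = g(x)
              if np.linalg.norm(gk) < eps:      # Step 2: stop when ||g(x_k)|| < eps
                  break
              p = np.linalg.solve(A(x), -gk)    # Step 3: A(x_k) P_k = -g(x_k)
              x = x + p                         # Step 4: x_{k+1} = x_k + P_k
          return x

      # Minimize F(x, y) = (x - 1)^2 + 2(y + 2)^2.
      g = lambda z: np.array([2 * (z[0] - 1), 4 * (z[1] + 2)])
      A = lambda z: np.array([[2.0, 0.0], [0.0, 4.0]])
      print(newton(g, A, [10.0, 10.0]))         # -> [1, -2]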

  17. Design and selection of load control strategies using a multiple objective model and evolutionary algorithms

    International Nuclear Information System (INIS)

    Gomes, Alvaro; Antunes, Carlos Henggeler; Martins, Antonio Gomes

    2005-01-01

    This paper aims at presenting a multiple objective model to evaluate the attractiveness of the use of demand resources (through load management control actions) by different stakeholders and in diverse structure scenarios in electricity systems. For the sake of model flexibility, the multiple (and conflicting) objective functions of technical, economical and quality of service nature are able to capture distinct market scenarios and operating entities that may be interested in promoting load management activities. The computation of compromise solutions is made by resorting to evolutionary algorithms, which are well suited to tackle multiobjective problems of combinatorial nature herein involving the identification and selection of control actions to be applied to groups of loads. (Author)

  18. A genetic algorithm for multiple relay selection in two-way relaying cognitive radio networks

    KAUST Repository

    Alsharoa, Ahmad M.

    2013-09-01

    In this paper, we investigate a multiple relay selection scheme for two-way relaying cognitive radio networks where primary users and secondary users operate on the same frequency band. More specifically, cooperative relays using the Amplify-and-Forward (AF) protocol are optimally selected to maximize the sum rate of the secondary users without degrading the Quality of Service (QoS) of the primary users, by respecting a tolerated interference threshold. A strong optimization tool based on a genetic algorithm is employed to solve the formulated optimization problem, where discrete relay power levels are considered. Our simulation results show that this practical heuristic approach achieves almost the same performance as the optimal multiple relay selection scheme with either discrete or continuous power distributions. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc.

  19. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, implementing the SURF algorithm with a KINECT sensor for static sign language recognition can produce mismatched pairs in the palm area. This paper proposes a feature point selection algorithm: by filtering the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between pairs of points, it not only greatly improves the recognition rate but also ensures robustness to environmental factors such as skin color, illumination intensity, complex backgrounds, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.

  20. One-step isothermal detection of multiple KRAS mutations by forming SNP specific hairpins on a gold nanoshell.

    Science.gov (United States)

    Chung, Chan Ho; Kim, Joong Hyun

    2018-04-24

    We developed a one-step isothermal method for typing multiple KRAS mutations using a designed set of primers to form a hairpin on a gold nanoshell upon being ligated by a SNP specific DNA ligase after binding of targets. As a result, we could detect as low as 20 attomoles of KRAS mutations within 1 h.

  1. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time below target despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a randomized clinical trial to test the clinical procedure guideline algorithm against current standard clinical practice.

  2. An energy efficient distance-aware routing algorithm with multiple mobile sinks for wireless sensor networks.

    Science.gov (United States)

    Wang, Jin; Li, Bin; Xia, Feng; Kim, Chang-Seob; Kim, Jeong-Uk

    2014-08-18

    Traffic patterns in wireless sensor networks (WSNs) usually follow a many-to-one model. Sensor nodes close to static sinks will deplete their limited energy more rapidly than other sensors, since they have more data to forward during multihop transmission. This causes network partition, isolated nodes and a much shortened network lifetime, so balancing energy consumption across sensor nodes is an important research issue. In recent years, exploiting sink mobility in WSNs has attracted much research attention because it can not only improve energy efficiency but also prolong network lifetime. In this paper, we propose an energy efficient distance-aware routing algorithm with multiple mobile sinks for WSNs, where sink nodes move at a certain speed along the network boundary to collect monitored data. We study the influence of multiple mobile sink nodes on energy consumption and network lifetime, focusing on the selection of the number of mobile sink nodes and of their parking positions, as well as their impact on the performance metrics above. Both the number of mobile sink nodes and the selection of parking positions have an important influence on network performance. Simulation results show that our proposed routing algorithm outperforms traditional routing algorithms in terms of energy consumption.

  3. An Energy Efficient Distance-Aware Routing Algorithm with Multiple Mobile Sinks for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jin Wang

    2014-08-01

    Full Text Available Traffic patterns in wireless sensor networks (WSNs) usually follow a many-to-one model. Sensor nodes close to static sinks will deplete their limited energy more rapidly than other sensors, since they have more data to forward during multihop transmission. This causes network partition, isolated nodes and a much shortened network lifetime, so balancing energy consumption across sensor nodes is an important research issue. In recent years, exploiting sink mobility in WSNs has attracted much research attention because it can not only improve energy efficiency but also prolong network lifetime. In this paper, we propose an energy efficient distance-aware routing algorithm with multiple mobile sinks for WSNs, where sink nodes move at a certain speed along the network boundary to collect monitored data. We study the influence of multiple mobile sink nodes on energy consumption and network lifetime, focusing on the selection of the number of mobile sink nodes and of their parking positions, as well as their impact on the performance metrics above. Both the number of mobile sink nodes and the selection of parking positions have an important influence on network performance. Simulation results show that our proposed routing algorithm outperforms traditional routing algorithms in terms of energy consumption.

  4. The Noise Clinic: a Blind Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2015-01-01

    Full Text Available This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In a first step, the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and scans of old photographs.

  5. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R^2 magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.

  6. A quality and efficiency analysis of the IMFASTTM segmentation algorithm in head and neck 'step and shoot' IMRT treatments

    International Nuclear Information System (INIS)

    Potter, Larry D.; Chang, Sha X.; Cullip, Timothy J.; Siochi, Alfredo C.

    2002-01-01

    The performance of the segmentation algorithms used in IMFAST for 'step and shoot' IMRT treatment delivery is evaluated for three head and neck clinical treatments with different optimization objectives. The segmentation uses the intensity maps generated by the in-house TPS PLANUNC using the index-dose minimization algorithm. The dose optimization objectives include PTV dose uniformity and dose-volume-histogram-specified critical structure sparing. The optimized continuous intensity maps were truncated into five and ten intensity levels and exported to IMFAST for MLC segment optimization. The MLC segments were imported back to PLANUNC for dose optimization quality calculation. The five basic segmentation algorithms included in IMFAST were evaluated alone and in combination with either tongue-and-groove/match-line correction, fluence correction, or both. Two criteria were used in the evaluation: treatment efficiency, represented by the total number of MLC segments, and optimization quality, represented by a clinically relevant optimization quality factor. We found that treatment efficiency depends first on the number of intensity levels used in the intensity map and second on the segmentation technique used. The standard optimal segmentation with fluence correction is a consistently good performer for all treatment plans studied. All segmentation techniques evaluated produced treatments with similar dose optimization quality values, especially when ten-level intensity maps are used.

  7. Parallel algorithms for boundary value problems

    Science.gov (United States)

    Lin, Avi

    1991-01-01

    A general approach to solve boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step where all the P available processors work in parallel, and the global step where one processor solves a tridiagonal linear system of the order P. The main advantages of this approach are twofold. First, this suggested approach is very flexible, especially in the local step and thus the algorithm can be used with any number of processors and with any of the SIMD or MIMD machines. Secondly, the communication complexity is very small and thus can be used as easily with shared memory machines. Several examples for using this strategy are discussed.

  8. Algorithmic Foundation of Spectral Rarefaction for Measuring Satellite Imagery Heterogeneity at Multiple Spatial Scales

    Science.gov (United States)

    Rocchini, Duccio

    2009-01-01

    Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600

  9. Planning paths through a spatial hierarchy - Eliminating stair-stepping effects

    Science.gov (United States)

    Slack, Marc G.

    1989-01-01

    Stair-stepping effects result from the loss of spatial continuity caused by the decomposition of space into a grid. This paper presents a path planning algorithm which eliminates the stair-stepping effects induced by a grid-based spatial representation. The algorithm exploits a hierarchical spatial model to efficiently plan paths for a mobile robot operating in dynamic domains. The spatial model and path planning algorithm map to a parallel machine, allowing the system to operate incrementally and thereby account for unexpected events in the operating space.

  10. An algorithm for gluinos on the lattice

    International Nuclear Information System (INIS)

    Montvay, I.

    1995-10-01

    Luescher's local bosonic algorithm for Monte Carlo simulations of quantum field theories with fermions is applied to the simulation of a possibly supersymmetric Yang-Mills theory with a Majorana fermion in the adjoint representation. Combined with a correction step in a two-step polynomial approximation scheme, the obtained algorithm seems to be promising and could be competitive with more conventional algorithms based on discretized classical (''molecular dynamics'') equations of motion. The application of the considered polynomial approximation scheme to optimized hopping parameter expansions is also discussed. (orig.)

  11. Performance Optimization of a Solar-Driven Multi-Step Irreversible Brayton Cycle Based on a Multi-Objective Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmadi Mohammad Hosein

    2016-01-01

    Full Text Available An applicable approach for a multi-step regenerative irreversible Brayton cycle, based on thermodynamics and on the optimization of thermal efficiency and normalized output power, is presented in this work. In the present study, thermodynamic analysis and an NSGA-II algorithm are coupled to determine the optimum values of thermal efficiency and normalized power output for a Brayton cycle system. Moreover, three well-known decision-making methods are employed to extract definite answers from the outputs of this approach. Finally, for error analysis, the average and maximum errors of the results are also calculated.

  12. Imbalanced multi-modal multi-label learning for subcellular localization prediction of human proteins with both single and multiple sites.

    Directory of Open Access Journals (Sweden)

    Jianjun He

    Full Text Available It is well known that an important step toward understanding the functions of a protein is to determine its subcellular location. Although numerous prediction algorithms have been developed, most of them focus on proteins with only one location. In recent years, researchers have begun to pay attention to subcellular localization prediction for proteins with multiple sites. However, almost all the existing approaches fail to take into account the correlations among locations caused by proteins with multiple sites, which may be important information for improving prediction accuracy for such proteins. In this paper, a new algorithm which can effectively exploit the correlations among locations is proposed, using a Gaussian process model. The algorithm also realizes an optimal linear combination of various feature extraction technologies and is robust to imbalanced data sets. Experimental results on a human protein data set show that the proposed algorithm is valid and achieves better performance than existing approaches.

  13. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    Science.gov (United States)

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  14. Phase Grouping Line Extraction Algorithm Using Overlapped Partition

    Directory of Open Access Journals (Sweden)

    WANG Jingxue

    2015-07-01

    Full Text Available Aiming to solve the problem of line fractures at discontinuities and the challenge of line fitting within each partition, an innovative line extraction algorithm based on phase grouping with overlapped partitions is proposed. The algorithm adopts dual partitioning steps, which generate eight overlapped partitions; between the two steps, the middle axes of the first step coincide with the border lines of the other step. Firstly, connected edge points that share the same phase gradient are merged into line candidates and fitted into line segments. Then, to remedy the broken lines at the border areas, the broken segments in the second partition step are refitted. The proposed algorithm is robust and does not need any parameter tuning. Experiments with various datasets have confirmed that the method is not only capable of handling linear features but also powerful in handling curved features.

  15. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Science.gov (United States)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.

  16. Optimizing multiple sequence alignments using a genetic algorithm based on three objectives: structural information, non-gaps percentage and totally conserved columns.

    Science.gov (United States)

    Ortuño, Francisco M; Valenzuela, Olga; Rojas, Fernando; Pomares, Hector; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio

    2013-09-01

    Multiple sequence alignments (MSAs) are widely used in bioinformatics to support other tasks such as structure prediction, biological function analysis and phylogenetic modeling. However, current tools usually provide only partially optimal alignments, as each is focused on specific biological features; thus, the same set of sequences can produce different alignments, above all when the sequences are less similar. Consequently, researchers and biologists do not agree about the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores including further biological features, and among them 3D structures are increasingly being used. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluating more distant relationships between sequences. The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, jointly optimizes three objectives: the STRIKE score, the non-gaps percentage and the number of totally conserved columns. It was significantly assessed on the BAliBASE benchmark according to the Kruskal-Wallis test. The algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), with the advantage of being able to use fewer structures. Structural information is included within the objective function to evaluate the obtained alignments more accurately. The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip.

  17. Fuzzy Logic-Based Perturb and Observe Algorithm with Variable Step of a Reference Voltage for Solar Permanent Magnet Synchronous Motor Drive System Fed by Direct-Connected Photovoltaic Array

    Directory of Open Access Journals (Sweden)

    Mohamed Redha Rezoug

    2018-02-01

    Full Text Available Photovoltaic pumping is the most widely used photovoltaic application in isolated sites, and the technology continues to develop so that photovoltaic systems can operate at their maximum power. This work introduces a modified perturb and observe (P&O) algorithm that overcomes the limitations of the conventional P&O algorithm and improves its overall performance under abrupt weather changes. The most significant restriction of the conventional P&O algorithm is the difficulty of choosing the step of the reference voltage: a good compromise between swift dynamic response and stability in the steady state. To adjust the reference-voltage step according to the position of the operating point relative to the maximum power point (MPP), a fuzzy logic controller (FLC) block is added to the P&O algorithm; this improves the tracking speed and eliminates steady-state oscillation. The suggested method was evaluated by simulation using MATLAB/SimPowerSystems blocks and compared with the classical P&O under different irradiation levels. The results obtained show the effectiveness of the proposed technique and its capacity for practical and efficient tracking of maximum power.
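
    The core P&O logic with a variable step is compact. In the sketch below, an error-proportional step stands in for the paper's fuzzy controller, and the PV panel model is a hypothetical power curve peaking near 35 V; both are our assumptions.

      # Perturb-and-observe MPPT with a variable reference-voltage step.
      def po_variable_step(measure, v_ref=30.0, step_max=0.5, gain=0.05,
                           iters=200):
          p_prev, v_prev = 0.0, v_ref
          for _ in range(iters):
              v, i = measure(v_ref)                 # sample PV voltage/current
              p = v * i
              dp, dv = p - p_prev, v - v_prev
              step = min(step_max, gain * abs(dp))  # bigger dP -> bigger step
              if dp * dv > 0:                       # power rose: keep direction
                  v_ref += step
              else:                                 # power fell: reverse
                  v_ref -= step
              p_prev, v_prev = p, v
          return v_ref

      # Toy PV panel: parabolic power curve with its maximum near 35 V.
      def measure(v_ref):
          p = max(0.0, 200.0 * (1.0 - ((v_ref - 35.0) / 20.0) ** 2))
          return v_ref, p / max(v_ref, 1e-9)

      print(po_variable_step(measure))              # settles near 35 V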

  18. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Full Text Available Because localization algorithms in large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared with traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm obtains schematic node coordinates through fast mapping initialization, fast mapping and a coordinate transform, uses them to initialize the MDS algorithm, produces accurate estimates of the node coordinates, and then applies Procrustes analysis to align the coordinates and obtain the final node positions. The paper gives the specific implementation steps of this four-step algorithm. Finally, the proposed algorithm is compared experimentally with stochastic algorithms and the classical MDS algorithm on concrete examples. Experimental results show that the proposed localization algorithm greatly improves running speed while maintaining positioning accuracy.
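
    The classical MDS step at the heart of such schemes can be sketched in a few lines: recover relative coordinates from pairwise distances by double centering, leaving the global rotation/translation to a subsequent Procrustes alignment. The square of four nodes below is our toy example.

      import numpy as np

      def classical_mds(dist, dim=2):
          """Classical MDS: coordinates from a pairwise distance matrix."""
          n = dist.shape[0]
          j = np.eye(n) - np.ones((n, n)) / n    # centering matrix
          b = -0.5 * j @ (dist ** 2) @ j         # double-centered squared distances
          vals, vecs = np.linalg.eigh(b)
          idx = np.argsort(vals)[::-1][:dim]     # largest eigenpairs
          return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

      # Four nodes on a unit square; the result matches up to rotation and
      # translation, which Procrustes alignment would then resolve.
      pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
      d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      print(classical_mds(d))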

  19. An improved ASIFT algorithm for indoor panorama image matching

    Science.gov (United States)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models of indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in a single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for implementing these functions. Compared with the SIFT algorithm, ASIFT generates more feature points and matches them more accurately, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it does not perform well for some indoor scenes with poor light or little texture. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images; secondly, the original ASIFT algorithm is simplified from affine transformations of both tilt and rotation to tilt-only affine transformations; finally, the results are re-projected back into the panoramic image space. Experiments in different environments show that this method not only preserves the precision of feature point extraction and matching, but also greatly reduces the computing time.

  20. Ab initio multiple cloning algorithm for quantum nonadiabatic molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Makhov, Dmitry V.; Shalashilin, Dmitrii V. [Department of Chemistry, University of Leeds, Leeds LS2 9JT (United Kingdom); Glover, William J.; Martinez, Todd J. [Department of Chemistry and The PULSE Institute, Stanford University, Stanford, California 94305, USA and SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)

    2014-08-07

    We present a new algorithm for ab initio quantum nonadiabatic molecular dynamics that combines the best features of ab initio Multiple Spawning (AIMS) and Multiconfigurational Ehrenfest (MCE) methods. In this new method, ab initio multiple cloning (AIMC), the individual trajectory basis functions (TBFs) follow Ehrenfest equations of motion (as in MCE). However, the basis set is expanded (as in AIMS) when these TBFs become sufficiently mixed, preventing prolonged evolution on an averaged potential energy surface. We refer to the expansion of the basis set as “cloning,” in analogy to the “spawning” procedure in AIMS. This synthesis of AIMS and MCE allows us to leverage the benefits of mean-field evolution during periods of strong nonadiabatic coupling while simultaneously avoiding mean-field artifacts in Ehrenfest dynamics. We explore the use of time-displaced basis sets, “trains,” as a means of expanding the basis set for little cost. We also introduce a new bra-ket averaged Taylor expansion (BAT) to approximate the necessary potential energy and nonadiabatic coupling matrix elements. The BAT approximation avoids the necessity of computing electronic structure information at intermediate points between TBFs, as is usually done in saddle-point approximations used in AIMS. The efficiency of AIMC is demonstrated on the nonradiative decay of the first excited state of ethylene. The AIMC method has been implemented within the AIMS-MOLPRO package, which was extended to include Ehrenfest basis functions.

  1. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Science.gov (United States)

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrichs-Lewy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.

  2. Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low Observable Targets

    Science.gov (United States)

    Tartakovsky, A.; Brown, A.; Brown, J.

    The paper describes the development and evaluation of a suite of advanced algorithms which provide significantly-improved capabilities for finding, fixing, and tracking multiple ballistic and flying low observable objects in highly stressing cluttered environments. The algorithms have been developed for use in satellite-based staring and scanning optical surveillance suites for applications including theatre and intercontinental ballistic missile early warning, trajectory prediction, and multi-sensor track handoff for midcourse discrimination and intercept. The functions performed by the algorithms include electronic sensor motion compensation providing sub-pixel stabilization (to 1/100 of a pixel), as well as advanced temporal-spatial clutter estimation and suppression to below sensor noise levels, followed by statistical background modeling and Bayesian multiple-target track-before-detect filtering. The multiple-target tracking is performed in physical world coordinates to allow for multi-sensor fusion, trajectory prediction, and intercept. Output of detected object cues and data visualization are also provided. The algorithms are designed to handle a wide variety of real-world challenges. Imaged scenes may be highly complex and infinitely varied -- the scene background may contain significant celestial, earth limb, or terrestrial clutter. For example, when viewing combined earth limb and terrestrial scenes, a combination of stationary and non-stationary clutter may be present, including cloud formations, varying atmospheric transmittance and reflectance of sunlight and other celestial light sources, aurora, glint off sea surfaces, and varied natural and man-made terrain features. The targets of interest may also appear to be dim, relative to the scene background, rendering much of the existing deployed software useless for optical target detection and tracking. Additionally, it may be necessary to detect and track a large number of objects in the threat cloud

  3. Multiple Harmonics Fitting Algorithms Applied to Periodic Signals Based on Hilbert-Huang Transform

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2013-01-01

    Full Text Available A new generation of multipurpose measurement equipment is transforming the role of computers in instrumentation. The new features involve mixed devices, such as various sensors, analog-to-digital and digital-to-analog converters, and digital signal processing techniques, which can substitute for typical discrete instruments like multimeters and analyzers. Signal-processing applications frequently use least-squares (LS) sine-fitting algorithms, and periodic signals may be interpreted as a sum of sine waves with multiple frequencies: the Fourier series. This paper describes a new sine-fitting algorithm that is able to fit a multiharmonic acquired periodic signal. By means of a “sinusoidal wave” whose amplitude and phase are both transient, a “triangular wave” can be reconstructed on the basis of the Hilbert-Huang transform (HHT). This method can be used to test the effective number of bits (ENOB) of an analog-to-digital converter (ADC), avoiding the trouble of selecting initial parameter values and solving nonlinear equations. The simulation results show that the algorithm is precise and efficient: with enough sampling points, even for a low-resolution signal with harmonic distortion, the root mean square (RMS) error between the sampled data of the original “triangular wave” and the corresponding points of the fitted “sinusoidal wave” is remarkably small. This suggests that, for any periodic signal, the ENOB of a high-resolution ADC can be tested accurately.
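
    The LS building block that multiharmonic fitting repeats for each Fourier component is the three-parameter sine fit at a known frequency, sketched below; the test signal and frequency are arbitrary choices of ours.

      import numpy as np

      def sine_fit(t, y, freq):
          """Three-parameter LS sine fit at a known frequency."""
          m = np.column_stack([np.cos(2 * np.pi * freq * t),
                               np.sin(2 * np.pi * freq * t),
                               np.ones_like(t)])
          (a, b, c), *_ = np.linalg.lstsq(m, y, rcond=None)
          amp, phase = np.hypot(a, b), np.arctan2(a, b)
          return amp, phase, c        # y ~ amp * sin(2*pi*freq*t + phase) + c

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 1000)
      y = 1.5 * np.sin(2 * np.pi * 50 * t + 0.7) + 0.2 \
          + 0.01 * rng.standard_normal(t.size)
      print(sine_fit(t, y, 50.0))     # ~ (1.5, 0.7, 0.2)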

  4. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computers in which the whole sequence to be sorted can fit in the

  5. Comparison of Metabolic Pathways in Escherichia coli by Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Patricia Ortegon

    2015-01-01

    Full Text Available In order to understand how cellular metabolism has taken its modern form, the conservation and variations between metabolic pathways were evaluated by using a genetic algorithm (GA). The GA approach considered information on the complete metabolism of the bacterium Escherichia coli K-12, as deposited in the KEGG database, and the enzymes belonging to a particular pathway were transformed into enzymatic step sequences by using the breadth-first search algorithm. These sequences represent contiguous enzymes linked to each other, based on their catalytic activities as they are encoded in the Enzyme Commission numbers. In a subsequent step, these sequences were compared using a GA in an all-against-all (pairwise comparisons) approach. Individual reactions were chosen based on their measure of fitness to act as parents of offspring, which constitute the new generation. The sequences compared were used to construct a similarity matrix (of fitness values) that was then clustered by using a k-medoids algorithm. A total of 34 clusters of conserved reactions were obtained, and their sequences were finally aligned with a multiple-sequence alignment GA optimized to align all the reaction sequences included in each group or cluster. From these comparisons, maps associated with the metabolism of similar compounds were also found to contain similar enzymatic step sequences, reinforcing the Patchwork Model for the evolution of metabolism in E. coli K-12, an observation that can be expanded to other organisms for which metabolic information is available. Finally, our mapping of these reactions is discussed, with illustrations from a particular case.
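
    The flattening of a pathway into an enzymatic step sequence is an ordinary breadth-first search over a reaction graph. A minimal sketch of that step follows, using a made-up toy pathway rather than real KEGG data.

        from collections import deque

        def bfs_step_sequence(pathway, start):
            """Breadth-first traversal of a pathway graph whose nodes are enzymes
            labeled by EC numbers; returns contiguous enzymatic steps in visit order."""
            visited, order = {start}, [start]
            queue = deque([start])
            while queue:
                enzyme = queue.popleft()
                for nxt in pathway.get(enzyme, []):
                    if nxt not in visited:
                        visited.add(nxt)
                        order.append(nxt)
                        queue.append(nxt)
            return order

        # Toy pathway: each enzyme (EC number) points to the enzymes that
        # consume its products.  Purely illustrative, not real KEGG content.
        toy_pathway = {
            "2.7.1.1": ["5.3.1.9"],
            "5.3.1.9": ["2.7.1.11"],
            "2.7.1.11": ["4.1.2.13"],
            "4.1.2.13": [],
        }
        print(bfs_step_sequence(toy_pathway, "2.7.1.1"))
        # ['2.7.1.1', '5.3.1.9', '2.7.1.11', '4.1.2.13']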

  6. Improved hybridization of Fuzzy Analytic Hierarchy Process (FAHP) algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW)

    Science.gov (United States)

    Zaiwani, B. E.; Zarlis, M.; Efendi, S.

    2018-03-01

    In this research, we improve on an earlier hybridization of the Fuzzy Analytic Hierarchy Process (FAHP) algorithm with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS), which selected the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve performance, a hybridization of the FAHP algorithm with the Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) algorithm was adopted, applying FAHP to the weighting process and SAW to the ranking process to determine employee promotion at a government institution. The improved average Efficiency Rate (ER) is 85.24%, surpassing the 77.82% of the previous research. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
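
    The SAW ranking step adopted above is easy to state concretely. The following sketch uses invented scores and weights (not the paper's data): each criterion column is normalized, weighted, and summed, and alternatives are ranked by the total.

        import numpy as np

        def saw_rank(scores, weights, benefit):
            """Simple Additive Weighting: normalize each criterion column,
            weight it, and rank alternatives by their weighted sums."""
            scores = np.asarray(scores, dtype=float)
            norm = np.empty_like(scores)
            for j in range(scores.shape[1]):
                col = scores[:, j]
                # benefit criteria: larger is better; cost criteria: smaller is better
                norm[:, j] = col / col.max() if benefit[j] else col.min() / col
            totals = norm @ np.asarray(weights, dtype=float)
            return totals, np.argsort(-totals)

        # Three employees scored on three criteria (weights as if from FAHP).
        scores = [[80, 70, 3], [75, 85, 2], [90, 60, 4]]
        weights = [0.5, 0.3, 0.2]         # hypothetical FAHP-derived weights
        benefit = [True, True, False]     # third criterion is a cost (e.g., absences)
        totals, ranking = saw_rank(scores, weights, benefit)
        print(totals, ranking)            # highest total is promoted first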

  7. Microprocessor controller for stepping motors

    International Nuclear Information System (INIS)

    Strait, B.G.; Thuot, M.E.

    1977-01-01

    A new concept for digital computer control of multiple stepping motors which operate in a severe electromagnetic pulse environment is presented. The motors position mirrors in the beam-alignment system of a 100-kJ CO2 laser. An asynchronous communications channel of a computer is used to send coded messages, containing the motor address and stepping-command information, to the stepping-motor controller in a bit-serial format over a fiber-optics communications link. The addressed controller responds by transmitting to the computer its address and other motor information, thus confirming the received message. Each controller is capable of controlling three stepping motors. The controller contains the fiber-optics interface, a microprocessor, and the stepping-motor drive circuits. The microprocessor program, which resides in an EPROM, decodes the received messages, transmits responses, performs the stepping-motor sequence logic, maintains motor-position information, and monitors the motor's reference switch. For multiple stepping-motor applications, the controllers are connected in a daisy chain, providing control of many motors from one asynchronous communications channel of the computer.

  8. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28

  9. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    Science.gov (United States)

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
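
    As a schematic of the recursive time-stepping procedure described above (omitting the regridding and flux-correction details of the actual method), one coarse step on a two-level hierarchy with refinement ratio 2 can be organized as follows; step_grid and synchronize are stand-ins for the real single-grid advance and synchronization.

        def advance(level, t, dt, max_level, ratio=2):
            """Recursively advance an AMR hierarchy: this level takes one step
            of size dt, then each finer level takes `ratio` steps of size
            dt/ratio to catch up, after which the levels are synchronized."""
            step_grid(level, t, dt)                    # advance this level's grids
            if level < max_level:
                for k in range(ratio):
                    advance(level + 1, t + k * dt / ratio, dt / ratio, max_level, ratio)
                synchronize(level, level + 1, t + dt)  # average down, fix up fluxes

        def step_grid(level, t, dt):
            print(f"level {level}: step from t={t:.3f} by dt={dt:.3f}")

        def synchronize(coarse, fine, t):
            print(f"sync levels {coarse}/{fine} at t={t:.3f}")

        advance(level=0, t=0.0, dt=0.1, max_level=2)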

  10. SIMSAS - a window based software package for simulation and analysis of multiple small-angle scattering data

    International Nuclear Information System (INIS)

    Jayaswal, B.; Mazumder, S.

    1998-09-01

    Small-angle scattering data from strong scattering systems, e.g. porous materials, cannot be analysed by invoking the single scattering approximation, as specimens needed to replicate the bulk matrix in essential properties are too thick for the approximation to be valid. The presence of multiple scattering is indicated by the invalidity of the functional invariance property of the observed scattering profile with variation of sample thickness and/or wavelength of the probing radiation. This article delineates how failing to account for multiple scattering affects the results of analysis, and then how to correct the data for its effect. It deals with an algorithm to extract the single scattering profile from small-angle scattering data affected by multiple scattering. The algorithm can process the scattering data and deduce the single scattering profile in absolute scale. A software package, SIMSAS, is introduced for executing this inversion step. This package is useful both to simulate and to analyse multiple small-angle scattering data. (author)

  11. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
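
    The substructure search rests on ordinary binary search over a suffix array. The generic sketch below works on plain structure strings rather than the [Formula: see text] representation: two binary searches bracket the block of suffixes beginning with the query pattern.

        def suffix_array(text):
            """Indices of all suffixes of `text` in lexicographic order
            (quadratic toy construction; real tools build this faster)."""
            return sorted(range(len(text)), key=lambda i: text[i:])

        def find_range(text, sa, pattern):
            """Binary-search the suffix array for the half-open range of suffixes
            that begin with `pattern`; each hit is one substructure occurrence."""
            def lower(strict):
                lo, hi = 0, len(sa)
                while lo < hi:
                    mid = (lo + hi) // 2
                    prefix = text[sa[mid]:sa[mid] + len(pattern)]
                    if prefix < pattern or (strict and prefix == pattern):
                        lo = mid + 1
                    else:
                        hi = mid
                return lo
            return lower(strict=False), lower(strict=True)

        text = "(((..))).((..))"          # a toy dot-bracket-like structure string
        sa = suffix_array(text)
        lo, hi = find_range(text, sa, "((..")
        print([sa[k] for k in range(lo, hi)])   # start positions of the pattern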

  12. A three-dimensional reconstruction algorithm for an inverse-geometry volumetric CT system

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat; Fahrig, Rebecca; Pelc, Norbert J.

    2005-01-01

    An inverse-geometry volumetric computed tomography (IGCT) system has been proposed that is capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method, which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing the feasibility of the IGCT system.

  13. A fully automated contour detection algorithm the preliminary step for scatter and attenuation compensation in SPECT

    International Nuclear Information System (INIS)

    Younes, R.B.; Mas, J.; Bidet, R.

    1988-01-01

    Contour detection is an important step in information extraction from nuclear medicine images. In order to perform accurate quantitative studies in single photon emission computed tomography (SPECT), a new procedure is described which can rapidly derive the best-fit contour of an attenuated medium. Some authors evaluate the influence of the detected contour on the reconstructed images with various attenuation correction techniques; most of the methods are strongly affected by inaccurately detected contours. This approach uses the Compton window to redetermine the convex contour: it seems to be simpler and more practical in clinical SPECT studies. The main advantages of this procedure are the high speed of computation, the accuracy of the contour found and the programme's automation. Results obtained using computer-simulated and real phantoms or clinical studies demonstrate the reliability of the present algorithm. (orig.)

  14. An efficient algorithm for incompressible N-phase flows

    International Nuclear Information System (INIS)

    Dong, S.

    2014-01-01

    We present an efficient algorithm within the phase field framework for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids, with possibly very different physical properties such as densities, viscosities, and pairwise surface tensions. The algorithm employs a physical formulation for the N-phase system that honors the conservations of mass and momentum and the second law of thermodynamics. We present a method for uniquely determining the mixing energy density coefficients involved in the N-phase model based on the pairwise surface tensions among the N fluids. Our numerical algorithm has several attractive properties that make it computationally very efficient: (i) it has completely de-coupled the computations for different flow variables, and has also completely de-coupled the computations for the (N−1) phase field functions; (ii) the algorithm only requires the solution of linear algebraic systems after discretization, and no nonlinear algebraic solve is needed; (iii) for each flow variable the linear algebraic system involves only constant and time-independent coefficient matrices, which can be pre-computed during pre-processing, despite the variable density and variable viscosity of the N-phase mixture; (iv) within a time step the semi-discretized system involves only individual de-coupled Helmholtz-type (including Poisson) equations, despite the strongly-coupled phase–field system of fourth spatial order at the continuum level; (v) the algorithm is suitable for large density contrasts and large viscosity contrasts among the N fluids. Extensive numerical experiments have been presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts. In particular, we compare our simulations with the de Gennes theory, and demonstrate that our method produces physically accurate results for multiple fluid phases. We also demonstrate the significant and sometimes dramatic effects of the

  15. InfoRoute: the CISMeF Context-specific Search Algorithm.

    Science.gov (United States)

    Merabti, Tayeb; Lelong, Romain; Darmoni, Stefan

    2015-01-01

    The aim of this paper is to present the practical InfoRoute algorithm and the applications developed by CISMeF to perform contextual information retrieval across multiple medical websites in different health domains. The algorithm was developed to treat multiple types of queries: natural, Boolean and advanced. The algorithm also generates multiple types of queries: Boolean queries, PubMed queries or advanced queries. Each query can be extended via inter-alignment relationships from the UMLS and the HeTOP portal. A web service and two web applications have been developed based on the InfoRoute algorithm to generate link-queries across multiple websites, e.g. "PubMed" or "ClinicalTrials.org". The InfoRoute algorithm is a useful tool to perform contextual information retrieval across multiple medical websites in both English and French.

  16. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower for the two-step algorithm, and the results showed that the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
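
    Once each parameter setting is scored, extracting the Pareto front is a simple dominance filter. The sketch below uses made-up (sensitivity, false-positive rate) operating points, keeping only those not dominated by any other point.

        def pareto_front(points):
            """Keep the non-dominated operating points, where higher sensitivity
            and a lower false-positive rate are both preferred.  Exact ties
            dominate each other and are both dropped (fine for a sketch)."""
            front = []
            for sens, fpr in points:
                if not any(s >= sens and f <= fpr and (s, f) != (sens, fpr)
                           for s, f in points):
                    front.append((sens, fpr))
            return sorted(front)

        # Hypothetical (sensitivity, false positives per scan) operating points
        points = [(0.80, 4.0), (0.85, 6.0), (0.90, 9.0), (0.82, 7.0), (0.85, 5.0)]
        print(pareto_front(points))
        # [(0.8, 4.0), (0.85, 5.0), (0.9, 9.0)]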

  17. Joint User Scheduling and MU-MIMO Hybrid Beamforming Algorithm for mmWave FDMA Massive MIMO System

    Directory of Open Access Journals (Sweden)

    Jing Jiang

    2016-01-01

    Full Text Available The large bandwidth and multipath in a millimeter wave (mmWave) cellular system assure the existence of frequency-selective channels; it is therefore necessary that an mmWave system retain frequency division multiple access (FDMA) and user scheduling. But in a hybrid beamforming system, the analog beamforming is implemented by the same phase shifts over the entire frequency band, and the wideband phase shifts may not suit all users scheduled in the frequency resources. This paper proposes a joint user scheduling and multiuser hybrid beamforming algorithm for downlink massive multiple input multiple output (MIMO) orthogonal frequency division multiple access (OFDMA) systems. In the first step of user scheduling, the users with identical optimal beams form an OFDMA user group and multiplex the entire frequency resource. Then the base station (BS) allocates the frequency resources to each member of the OFDMA user group. An OFDMA user group can be regarded as a virtual user; thus it can support arbitrary MU-MIMO user selection and beamforming algorithms. Further, the analog beamforming vectors employ the best beam of each selected MU-MIMO user, and the digital beamforming algorithm is solved by weighted MMSE to acquire the best performance gain and mitigate the interuser interference. Simulation results show that hybrid beamforming together with user scheduling can greatly improve the performance of an mmWave OFDMA massive MU-MIMO system.

  18. Efficient sequential and parallel algorithms for planted motif search.

    Science.gov (United States)

    Nicolae, Marius; Rajasekaran, Sanguthevar

    2014-01-31

    Motif searching is an important step in the detection of rare events occurring in a set of DNA or protein sequences. One formulation of the problem is known as (l,d)-motif search or Planted Motif Search (PMS). In PMS we are given two integers l and d and n biological sequences. We want to find all sequences of length l that appear in each of the input sequences with at most d mismatches. The PMS problem is NP-complete. PMS algorithms are typically evaluated on certain instances considered challenging. Despite ample research in the area, a considerable performance gap exists because many state of the art algorithms have large runtimes even for moderately challenging instances. This paper presents a fast exact parallel PMS algorithm called PMS8. PMS8 is the first algorithm to solve the challenging (l,d) instances (25,10) and (26,11). PMS8 is also efficient on instances with larger l and d such as (50,21). We include a comparison of PMS8 with several state of the art algorithms on multiple problem instances. This paper also presents necessary and sufficient conditions for 3 l-mers to have a common d-neighbor. The program is freely available at http://engr.uconn.edu/~man09004/PMS8/. We present PMS8, an efficient exact algorithm for Planted Motif Search. PMS8 introduces novel ideas for generating common neighborhoods. We have also implemented a parallel version for this algorithm. PMS8 can solve instances not solved by any previous algorithms.
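
    PMS8's neighborhood-generation machinery is beyond a short example, but the (l,d) condition itself is simple to check. The naive sketch below tests whether a candidate l-mer occurs in every input sequence with at most d mismatches.

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def occurs_with_mismatches(motif, seq, d):
            """True if some length-|motif| window of seq is within d mismatches."""
            l = len(motif)
            return any(hamming(motif, seq[i:i + l]) <= d
                       for i in range(len(seq) - l + 1))

        def is_planted_motif(motif, sequences, d):
            """The (l,d) condition: the motif appears in *every* input sequence."""
            return all(occurs_with_mismatches(motif, s, d) for s in sequences)

        seqs = ["ACGTACGTGA", "TTACGAACGT", "CGTTACGTAC"]
        print(is_planted_motif("ACGT", seqs, d=1))   # True for this toy instance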

  19. Fuzzy Rules for Ant Based Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Amira Hamdi

    2016-01-01

    Full Text Available This paper provides a new intelligent technique for the semisupervised data clustering problem that combines the Ant System (AS) algorithm with the fuzzy c-means (FCM) clustering algorithm. Our proposed approach, called the F-ASClass algorithm, is a distributed algorithm inspired by the foraging behavior observed in ant colonies. The ability of ants to find the shortest path forms the basis of our proposed approach. In the first step, several colonies of cooperating entities, called artificial ants, are used to find shortest paths in a complete graph that we call the graph-data. The number of colonies used in F-ASClass is equal to the number of clusters in the dataset. The partition matrix of the dataset found by the artificial ants is then given, in the second step, to the fuzzy c-means technique in order to assign the unclassified objects generated in the first step. The proposed approach is tested on artificial and real datasets, and its performance is compared with those of the K-means, K-medoid, and FCM algorithms. The experimental section shows that F-ASClass performs better in terms of classification error rate, accuracy, and separation index.

  20. Creating ensembles of oblique decision trees with evolutionary algorithms and sampling

    Science.gov (United States)

    Cantu-Paz, Erick [Oakland, CA; Kamath, Chandrika [Tracy, CA

    2006-06-13

    A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.

  1. An efficient multiple exposure image fusion in JPEG domain

    Science.gov (United States)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time and aperture for low-light image capture results in noise amplification, motion blur and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
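
    The paper's fusion operates on JPEG blocks; as a hedged sketch of the same idea in the pixel domain, with invented parameter values, sigmoidal boosting of the short exposure followed by a saturation-aware weighted blend could look like this.

        import numpy as np

        def sigmoid_boost(img, gain=8.0, midpoint=0.35):
            """Brighten a short-exposure image with a sigmoidal tone curve
            so its sharp details can be blended with a longer exposure."""
            x = img.astype(float) / 255.0
            y = 1.0 / (1.0 + np.exp(-gain * (x - midpoint)))
            lo = 1.0 / (1.0 + np.exp(gain * midpoint))            # curve value at x = 0
            hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))   # curve value at x = 1
            return (y - lo) / (hi - lo)                           # rescale to [0, 1]

        def fuse(short_exp, long_exp, sat_thresh=0.92):
            """Favor the long exposure (better SNR) except where it saturates,
            where the boosted short exposure supplies the detail."""
            s = sigmoid_boost(short_exp)
            l = long_exp.astype(float) / 255.0
            w = np.clip((sat_thresh - l) / sat_thresh, 0.0, 1.0)  # 0 where saturated
            return np.clip((w * l + (1.0 - w) * s) * 255.0, 0, 255).astype(np.uint8)

        short_exp = np.random.randint(0, 80, (4, 4), dtype=np.uint8)    # dark, sharp
        long_exp = np.random.randint(100, 256, (4, 4), dtype=np.uint8)  # bright, clean
        print(fuse(short_exp, long_exp))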

  2. A rapid two-step algorithm detects and identifies clinical macrolide and beta-lactam antibiotic resistance in clinical bacterial isolates.

    Science.gov (United States)

    Lu, Xuedong; Nie, Shuping; Xia, Chengjing; Huang, Lie; He, Ying; Wu, Runxiang; Zhang, Li

    2014-07-01

    Aiming to identify macrolide and beta-lactam resistance in clinical bacterial isolates rapidly and accurately, a two-step algorithm was developed based on the detection of eight antibiotic resistance genes. Targeting genes linked to bacterial macrolide (msrA, ermA, ermB, and ermC) and beta-lactam (blaTEM, blaSHV, blaCTX-M-1, blaCTX-M-9) antibiotic resistance, this method includes a multiplex real-time PCR, a melting temperature profile analysis, and a liquid bead microarray assay. The liquid bead microarray assay is applied only when an indistinguishable Tm profile is observed. The clinical validity of this method was assessed on clinical bacterial isolates. Among the 580 isolates that were examined by our diagnostic method, 75% were identified by the multiplex real-time PCR with melting temperature analysis alone, while the remaining 25% required both the multiplex real-time PCR with melting temperature analysis and the liquid bead microarray assay for identification. Compared with the traditional phenotypic antibiotic susceptibility test, an overall agreement of 81.2% (kappa=0.614, 95% CI=0.550-0.679) was observed, with a sensitivity and specificity of 87.7% and 73%, respectively. In addition, the average test turnaround time is 3.9 h, much shorter than the more than 24 h required for traditional phenotypic tests. With the advantages of a shorter operating time and sensitivity and specificity comparable to the traditional phenotypic test, our two-step algorithm provides an efficient tool for the rapid determination of macrolide and beta-lactam antibiotic resistance in clinical bacterial isolates. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. A molecular dynamics-based algorithm for evaluating the glycosaminoglycan mimicking potential of synthetic, homogenous, sulfated small molecules.

    Directory of Open Access Journals (Sweden)

    Balaji Nagarajan

    Full Text Available Glycosaminoglycans (GAGs) are key natural biopolymers that exhibit a range of biological functions including growth and differentiation. Despite this multiplicity of function, natural GAG sequences have not yielded drugs because of problems of heterogeneity and synthesis. Recently, several homogenous non-saccharide glycosaminoglycan mimetics (NSGMs) have been reported as agents displaying major therapeutic promise. Yet, it remains unclear whether sulfated NSGMs structurally mimic sulfated GAGs. To address this, we developed a three-step molecular dynamics (MD)-based algorithm to compare sulfated NSGMs with GAGs. In the first step of this algorithm, parameters related to the range of conformations sampled by the two highly sulfated molecules as free entities in water were compared. The second step compared the identity of binding-site geometries, and the final step evaluated comparable dynamics and interactions in the protein-bound state. Using a test case of interactions with fibroblast growth factor-related proteins, we show that this three-step algorithm effectively predicts the GAG structure mimicking property of NSGMs. Specifically, we show that two unique dimeric NSGMs mimic hexameric GAG sequences in the protein-bound state. In contrast, closely related monomeric and trimeric NSGMs do not mimic GAG in either the free or bound states. These results correspond well with the functional properties of NSGMs. The results show for the first time that appropriately designed sulfated NSGMs can be good structural mimetics of GAGs and that the incorporation of an MD-based strategy at the NSGM library screening stage can identify promising mimetics of targeted GAG sequences.

  4. A quasi-Newton algorithm for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Linghua Huang

    2017-02-01

    Full Text Available In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to get the step length α_k. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the (1+q)-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
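
    The CG warm start and the new nonmonotone line search are the paper's contributions; for orientation, a minimal Broyden-type quasi-Newton iteration for F(x) = 0, which likewise never forms the exact Jacobian, is sketched below.

        import numpy as np

        def broyden(F, x0, tol=1e-10, max_iter=50):
            """Broyden's (good) method: maintain an approximate Jacobian B and
            update it from each step, so no analytic Jacobian is ever computed."""
            x = np.asarray(x0, dtype=float)
            B = np.eye(len(x))                      # crude initial Jacobian guess
            Fx = F(x)
            for _ in range(max_iter):
                if np.linalg.norm(Fx) < tol:
                    break
                s = np.linalg.solve(B, -Fx)         # quasi-Newton step
                x_new = x + s
                F_new = F(x_new)
                y = F_new - Fx
                B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
                x, Fx = x_new, F_new
            return x

        # Toy system: x0^2 + x1^2 = 4,  x0 * x1 = 1
        F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
        print(broyden(F, [2.0, 0.0]))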

  5. Novel Adaptive Bacteria Foraging Algorithms for Global Optimization

    Directory of Open Access Journals (Sweden)

    Ahmad N. K. Nasir

    2014-01-01

    Full Text Available This paper presents improved versions of the bacterial foraging algorithm (BFA). The chemotaxis feature of bacteria through random motion is an effective strategy for exploring the optimum point in a search area. The selection of a small step size in the bacteria motion leads to high accuracy in the solution but offers slow convergence. Conversely, a large step size provides faster convergence but the bacteria may be unable to locate the optimum point, reducing the fitness accuracy. To overcome such problems, novel linear and nonlinear mathematical relationships based on the index of iteration, index of bacteria, and fitness cost are adopted, which can dynamically vary the step size of the bacteria movement. The proposed algorithms are tested on several unimodal and multimodal benchmark functions in comparison with the original BFA. Moreover, the application of the proposed algorithms to modelling a twin rotor system is presented. The results show that the proposed algorithms outperform the predecessor algorithm on all test functions and acquire a better model of the twin rotor system.
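
    The paper's specific step-size laws are its own; the sketch below is only an illustration of the idea, shrinking the chemotaxis step linearly with the iteration index while keeping it larger for bacteria whose fitness is far from the current best.

        def adaptive_step(iteration, max_iter, fitness, best_fitness,
                          step_max=0.1, step_min=1e-4):
            """Illustrative adaptive chemotaxis step size: decays with the
            iteration index, but stays larger for bacteria whose fitness is
            far from the current best (exploration) and shrinks for those
            close to it (fine-grained exploitation)."""
            decay = (step_max - step_min) * (1.0 - iteration / max_iter)
            gap = abs(fitness - best_fitness) / (abs(best_fitness) + 1e-12)
            return step_min + decay * min(1.0, gap)

        for it in (0, 50, 99):
            print(it, adaptive_step(it, 100, fitness=2.0, best_fitness=1.0))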

  6. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    Science.gov (United States)

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution; hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.

  7. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Directory of Open Access Journals (Sweden)

    Yidong Xu

    2017-10-01

    Full Text Available A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment using an electric dipole-receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by use of a matrix equation. The voltage of each dipole pair is used as the spatial-temporal localization data; unlike conventional field-based localization methods, it does not need to obtain the field component in each direction, so it can be easily implemented in practical engineering applications. Then, a global-multiple region-conjugate gradient (CG) hybrid search method is used to reduce the computational burden and improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment result provide accurate positioning performance, which helps verify the effectiveness of the proposed localization method in underwater environments.
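
    The boundary-element forward model is specific to the confined underwater setting, but the MUSIC step itself is generic. The sketch below applies MUSIC to a uniform linear array under a narrowband plane-wave model (not the dipole-field model of the paper).

        import numpy as np

        def music_spectrum(X, n_sources, angles_deg, spacing=0.5):
            """MUSIC pseudospectrum for an M-element uniform linear array.
            X: M x T snapshot matrix; spacing in wavelengths."""
            M = X.shape[0]
            R = X @ X.conj().T / X.shape[1]           # sample covariance
            w, V = np.linalg.eigh(R)                  # eigenvalues ascending
            En = V[:, :M - n_sources]                 # noise subspace
            out = []
            for th in np.deg2rad(angles_deg):
                a = np.exp(-2j * np.pi * spacing * np.arange(M) * np.sin(th))
                out.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.array(out)

        # Two sources at -20 and +30 degrees, 8 sensors, 200 snapshots
        M, T = 8, 200
        rng = np.random.default_rng(0)
        ang = np.deg2rad([-20.0, 30.0])
        A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(ang)))
        S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
        X = A @ S + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
        grid = np.arange(-90, 90.5, 0.5)
        P = music_spectrum(X, n_sources=2, angles_deg=grid)
        loc = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
        top = sorted(loc, key=lambda i: P[i])[-2:]
        print(sorted(grid[i] for i in top))           # close to [-20.0, 30.0]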

  8. Fostering Autonomy through Syllabus Design: A Step-by-Step Guide for Success

    Science.gov (United States)

    Ramírez Espinosa, Alexánder

    2016-01-01

    Promoting learner autonomy is relevant in the field of applied linguistics due to the multiple benefits it brings to the process of learning a new language. However, despite the vast array of research on how to foster autonomy in the language classroom, it is difficult to find step-by-step processes to design syllabi and curricula focused on the…

  9. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Science.gov (United States)

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. But future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear in nature; achieving an accurate solution to such a complex problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for the feasible and preliminary solutions used to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision.

  10. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Science.gov (United States)

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. But future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear in nature; achieving an accurate solution to such a complex problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for the feasible and preliminary solutions used to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663

  11. Multiple-Threshold Event Detection and Other Enhancements to the Virtual Seismologist (VS) Earthquake Early Warning Algorithm

    Science.gov (United States)

    Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.

    2009-12-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes’ theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association/location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events, for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event, reducing false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks, as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW

  12. Incorporating a multi-criteria decision procedure into the combined dynamic programming/production simulation algorithm for generation expansion planning

    International Nuclear Information System (INIS)

    Yang, H.T.; Chen, S.L.

    1989-01-01

    A multi-objective optimization approach to generation expansion planning is presented. The approach is designed by adding a new multi-criteria decision (MCD) procedure to the conventional algorithm, which combines dynamic programming with a production simulation method. The MCD procedure can help decision makers weigh the relative importance of multiple attributes associated with the decision alternatives, and find the near-best compromise solution efficiently at each optimization step of the conventional algorithm. A practical application of the proposed approach to the feasibility evaluation of the fourth nuclear power plant of Taiwan is also presented, demonstrating the effectiveness and limitations of the approach.

  13. Structural Damage Detection using Frequency Response Function Index and Surrogate Model Based on Optimized Extreme Learning Machine Algorithm

    Directory of Open Access Journals (Sweden)

    R. Ghiasi

    2017-09-01

    Full Text Available Utilizing surrogate models based on artificial intelligence methods for detecting structural damage has attracted the attention of many researchers in recent decades. In this study, a new kernel based on the Littlewood-Paley Wavelet (LPW) is proposed for the Extreme Learning Machine (ELM) algorithm to improve the accuracy of detecting multiple damages in structural systems. ELM is used as a metamodel (surrogate model) of exact finite element analysis of structures in order to efficiently reduce the computational cost during the updating process. In the proposed two-step method, a damage index based on the Frequency Response Function (FRF) of the structure is first used to identify the location of damage. In the second step, the severity of damage in the identified elements is detected using ELM. In order to evaluate the efficacy of ELM, the results obtained with the proposed kernel were compared with other kernels proposed for ELM as well as with the Least Squares Support Vector Machine algorithm. The solved numerical problems indicate that the accuracy of the ELM algorithm in detecting structural damage increases drastically when the LPW kernel is used.

  14. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. An additional preprocessing of data with constant and linear sections and a weighted overlap of signals transformed step-by-step into the spectral domain improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data is fed into synchronous digital signal processing.

  15. Improved quantum backtracking algorithms using effective resistance estimates

    Science.gov (United States)

    Jarret, Michael; Wan, Kianna

    2018-02-01

    We investigate quantum backtracking algorithms of the type introduced by Montanaro (Montanaro, arXiv:1509.02374). These algorithms explore trees of unknown structure and in certain settings exponentially outperform their classical counterparts. Some of the previous work focused on obtaining a quantum advantage for trees in which a unique marked vertex is promised to exist. We remove this restriction by recharacterizing the problem in terms of the effective resistance of the search space. In this paper, we present a generalization of one of Montanaro's algorithms to trees containing k marked vertices, where k is not necessarily known a priori. Our approach involves using amplitude estimation to determine a near-optimal weighting of a diffusion operator, which can then be applied to prepare a superposition state with support only on marked vertices and ancestors thereof. By repeatedly sampling this state and updating the input vertex, a marked vertex is reached in a logarithmic number of steps. The algorithm thereby achieves the conjectured bound of Õ(√(T R_max)) for finding a single marked vertex and Õ(k√(T R_max)) for finding all k marked vertices, where T is an upper bound on the tree size and R_max is the maximum effective resistance encountered by the algorithm. This constitutes a speedup over Montanaro's original procedure in both the case of finding one and the case of finding multiple marked vertices in an arbitrary tree.

  16. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    Directory of Open Access Journals (Sweden)

    JingRui Zhang

    2015-03-01

    Full Text Available In this article, we focus on the safe and effective completion of a rendezvous and docking task, looking at planning approaches and control for fuel-optimal rendezvous with a target spacecraft running on a near-circular reference orbit. A variety of existing practical path constraints are considered, including constraints on the field of view, impulses, and passive safety. A rendezvous approach is calculated by using a hybrid genetic algorithm under these constraints. Furthermore, a trajectory-tracking control method is adopted to overcome external disturbances. Based on the Clohessy–Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of the hybrid genetic algorithm, with both stronger global searching ability and local searching ability, and explain the application of this algorithm to the problem of trajectory planning. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory with the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified through numerical simulation, with the optimal trajectory above as the control objective.

  17. A Two-Step Strategy for System Identification of Civil Structures for Structural Health Monitoring Using Wavelet Transform and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Carlos Andres Perez-Ramirez

    2017-01-01

    Full Text Available Nowadays, the accurate identification of natural frequencies and damping ratios plays an important role in smart civil engineering, since they can be used for seismic design, vibration control, and condition assessment, among others. To achieve this in a practical way, it is required to instrument the structure and apply techniques which are able to deal with noise-corrupted and non-linear signals, as these are common features of real-life civil structures. In this article, a two-step strategy is proposed for performing accurate modal parameter identification in an automated manner. In the first step, the measured signals are obtained and decomposed using the natural excitation technique and the synchrosqueezed wavelet transform, respectively. The second step then estimates the modal parameters by solving an optimization problem with a genetic algorithm-based approach, where the micropopulation concept is used to improve the convergence speed as well as the accuracy of the estimated values. The accuracy and effectiveness of the proposal are tested using both the simulated response of a benchmark structure and the measurements of a real eight-story building. The obtained results show that the proposed strategy can estimate the modal parameters accurately, indicating that the proposal can be considered as an alternative for performing the abovementioned task.

  18. Algorithms for the process management of sealed source brachytherapy

    International Nuclear Information System (INIS)

    Engler, M.J.; Ulin, K.; Sternick, E.S.

    1996-01-01

    Incidents and misadministrations suggest that brachytherapy may benefit from clarification of the quality management program and other mandates of the US Nuclear Regulatory Commission. To that end, flowcharts of step-by-step subprocesses were developed and formatted with dedicated software. The overall process was similarly organized in a complex flowchart termed a general process map. Procedural and structural indicators associated with each flowchart and map were critiqued and pre-existing documentation was revised. "Step-regulation tables" were created to refer steps and subprocesses to Nuclear Regulatory Commission rules and recommendations in their sequences of applicability. Brachytherapy algorithms were specified as programmable, recursive processes, including therapeutic dose determination and monitoring doses to the public. These algorithms are embodied in flowcharts and step-regulation tables. A general algorithm is suggested as a template from which other facilities may derive tools to facilitate the process management of sealed source brachytherapy. 11 refs., 9 figs., 2 tabs

  19. Applications of Fast Truncated Multiplication in Cryptography

    Directory of Open Access Journals (Sweden)

    Laszlo Hars

    2006-12-01

    Full Text Available Truncated multiplications compute truncated products, contiguous subsequences of the digits of integer products. For an n-digit multiplication algorithm of time complexity O(n^α), with 1 < α ≤ 2, there is a truncated multiplication algorithm which is a constant times faster when computing a short enough truncated product. Applying these fast truncated multiplications, several cryptographic long-integer arithmetic algorithms are improved, including integer reciprocals, divisions, Barrett and Montgomery multiplications, and 2n-digit modular multiplication on hardware for n-digit half products. For example, Montgomery multiplication is performed in 2.6 Karatsuba multiplication time.
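
    The speedups in the paper come from computing only the needed digits; the semantics of the truncated products themselves is simple to state. The sketch below defines the low and high halves of a product (computed here via the full product, which is exactly the work a fast truncated multiplication avoids).

        def low_product(a, b, n, base=10):
            """Least-significant n digits of a*b: the 'low' truncated product
            that a fast algorithm computes without the full 2n-digit result."""
            return (a * b) % base**n

        def high_product(a, b, n, base=10):
            """Most-significant digits of a*b, as used e.g. in Barrett
            reduction where only the top half of the product is needed."""
            return (a * b) // base**n

        a, b, n = 123456, 654321, 6
        print(low_product(a, b, n), high_product(a, b, n))
        # 853376 80779   (since 123456 * 654321 = 80779853376)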

  20. Mosaic crystal algorithm for Monte Carlo simulations

    CERN Document Server

    Seeger, P A

    2002-01-01

    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  1. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    Science.gov (United States)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter as a time step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms based not only on the Euler method but also on lower-order Runge-Kutta methods for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
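
    For reference, the multiplicative update that such a discretization produces can be written down directly. The sketch below implements a simultaneous variant (SMART) rather than the paper's block-iterative form, with the relaxation parameter lam playing the role of the time step.

        import numpy as np

        def smart(A, b, n_iter=200, lam=1.0):
            """Simultaneous multiplicative ART: each unknown is updated by a
            geometric mean of measurement ratios, keeping iterates positive."""
            x = np.ones(A.shape[1])
            col = A.sum(axis=0)
            for _ in range(n_iter):
                ratio = b / (A @ x)
                # multiplicative update, exponent weighted by the system matrix
                x *= np.exp(lam * (A.T @ np.log(ratio)) / col)
            return x

        # Tiny consistent system: recover a non-negative x from b = A x
        A = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 0.0, 1.0]])
        x_true = np.array([1.0, 2.0, 3.0])
        b = A @ x_true
        print(np.round(smart(A, b), 4))   # converges toward [1, 2, 3]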

  2. Fast Simulation of 3-D Surface Flanging and Prediction of the Flanging Lines Based On One-Step Inverse Forming Algorithm

    International Nuclear Information System (INIS)

    Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping

    2005-01-01

    A fast simulation scheme for 3D curved-binder flanging and blank shape prediction of sheet metal based on the one-step inverse finite element method is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can be used to simulate 3D flanging with a complex curved binder shape, and it is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods such as the analytic algorithm and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines when simulating the flanging process. Therefore, the prediction time for the flanging lines is markedly decreased. Two typical 3D curved-binder flanging cases, including stretch and shrink characteristics, are simulated using both the present scheme and an incremental non-inverse FE algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme.

  3. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  4. A low complexity algorithm for multiple relay selection in two-way relaying Cognitive Radio networks

    KAUST Repository

    Alsharoa, Ahmad M.

    2013-06-01

    In this paper, a multiple relay selection scheme for two-way relaying cognitive radio network is investigated. We consider a cooperative Cognitive Radio (CR) system with spectrum sharing scenario using Amplify-and-Forward (AF) protocol, where licensed users and unlicensed users operate on the same frequency band. The main objective is to maximize the sum rate of the unlicensed users allowed to share the spectrum with the licensed users by respecting a tolerated interference threshold. A practical low complexity heuristic approach is proposed to solve our formulated optimization problem. Selected numerical results show that the proposed algorithm reaches a performance close to the performance of the optimal multiple relay selection scheme either with discrete or continuous power distributions while providing a considerable saving in terms of computational complexity. In addition, these results show that our proposed scheme significantly outperforms the single relay selection scheme. © 2013 IEEE.

  5. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Science.gov (United States)

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  6. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
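
    The partitioning strategy is the authors' contribution; for orientation, the base EM iteration being decomposed, shown here for the textbook two-component 1-D Gaussian mixture, alternates an E-step (responsibilities) and an M-step (weighted parameter updates).

        import numpy as np

        def em_gmm_1d(x, n_iter=100):
            """Textbook EM for a two-component 1-D Gaussian mixture."""
            rng = np.random.default_rng(1)
            mu = rng.choice(x, 2, replace=False)
            sigma = np.array([x.std(), x.std()])
            pi = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: posterior probability of each component per point
                dens = np.stack([pi[k] / (sigma[k] * np.sqrt(2 * np.pi)) *
                                 np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                                 for k in range(2)])
                resp = dens / dens.sum(axis=0)
                # M-step: weighted maximum-likelihood parameter updates
                nk = resp.sum(axis=1)
                mu = (resp * x).sum(axis=1) / nk
                sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
                pi = nk / len(x)
            return pi, mu, sigma

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
        print(em_gmm_1d(x))   # weights ~ (0.3, 0.7), means ~ (-2, 3), in some order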

  7. Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation

    International Nuclear Information System (INIS)

    Boer, J. de; Dannhaueser, G.

    1982-01-01

    The way in which the multiple E2 steps in the Coulomb-excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays for the excitation of a given final state is measured by a quantity named ''importance function''. It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)

  8. A multiple objective magnet sorting algorithm for the Advanced Light Source insertion devices

    International Nuclear Information System (INIS)

    Humphries, D.; Goetz, F.; Kownacki, P.; Marks, S.; Schlueter, R.

    1995-01-01

    Insertion devices for the Advanced Light Source (ALS) incorporate large numbers of permanent magnets which have a variety of magnetization orientation errors. These orientation errors can produce field errors which affect both the spectral brightness of the insertion devices and the storage ring electron beam dynamics. A perturbation study was carried out to quantify the effects of orientation errors acting in a hybrid magnetic structure. The results of this study were used to develop a multiple stage sorting algorithm which minimizes undesirable integrated field errors and essentially eliminates pole excitation errors. When applied to a measured magnet population for an existing insertion device, an order of magnitude reduction in integrated field errors was achieved while maintaining near zero pole excitation errors.

  9. An efficient quantum algorithm for spectral estimation

    Science.gov (United States)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.

  10. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    Science.gov (United States)

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to accurately analyse the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics, and the multiple run approach for analyses where chromatographic resolution requires improvement.

  11. A parallel algorithm for Hamiltonian matrix construction in electron-molecule collision calculations: MPI-SCATCI

    Science.gov (United States)

    Al-Refaie, Ahmed F.; Tennyson, Jonathan

    2017-12-01

    Construction and diagonalization of the Hamiltonian matrix is the rate-limiting step in most low-energy electron–molecule collision calculations. Tennyson (1996) implemented a novel algorithm for Hamiltonian construction which took advantage of the structure of the wavefunction in such calculations. This algorithm is re-engineered to make use of modern computer architectures and the use of appropriate diagonalizers is considered. Test calculations demonstrate that significant speed-ups can be gained using multiple CPUs. This opens the way to calculations which consider higher collision energies, larger molecules and/or more target states. The methodology, which is implemented as part of the UK molecular R-matrix codes (UKRMol and UKRMol+) can also be used for studies of bound molecular Rydberg states, photoionization and positron-molecule collisions.

  12. The Viterbi Algorithm expressed in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    The Viterbi algorithm is a classical example of a dynamic programming algorithm, in which pruning reduces the search space drastically, so that an otherwise exponential time complexity is reduced to linearity. The central steps of the algorithm, expansion and pruning, can be expressed in a concis...
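
    Since the record is truncated, here is a minimal textbook Viterbi sketch (in Python rather than Constraint Handling Rules) illustrating the expansion and pruning steps the abstract refers to; the HMM parameter layout and the toy arrays are illustrative, not taken from the paper.

```python
# A minimal Viterbi decoder for a discrete HMM.
def viterbi(obs, start, trans, emit):
    """start[s]: initial prob of state s; trans[s][t]: s -> t transition
    prob; emit[s][o]: prob that state s emits symbol o."""
    n_states = len(start)
    # expansion: probability of the best path ending in each state at time 0
    v = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    back = []
    for o in obs[1:]:
        prev, step, v = v, [], []
        for t in range(n_states):
            # pruning: keep only the best predecessor of state t
            best_s = max(range(n_states), key=lambda s: prev[s] * trans[s][t])
            v.append(prev[best_s] * trans[best_s][t] * emit[t][o])
            step.append(best_s)
        back.append(step)
    # backtrack from the best final state
    state = max(range(n_states), key=lambda s: v[s])
    path = [state]
    for step in reversed(back):
        state = step[state]
        path.append(state)
    return list(reversed(path))
```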

  13. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    Science.gov (United States)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
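
    A toy sketch of only the first of the three classification principles, picking the channel with the dominant EMG energy in a window; the spectral and cross-channel correlation checks described above would refine this decision, and the array layout is an assumption.

```python
# Pick the EMG channel with the largest energy in a window; only the
# energy principle is modeled, the spectral checks are omitted.
import numpy as np

def dominant_channel(window):
    """window: (n_channels, n_samples) array of EMG samples."""
    energy = np.sum(window.astype(float) ** 2, axis=1)  # per-channel energy
    return int(np.argmax(energy)), energy
```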

  14. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n^3/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n^2/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source-code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
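
    For context, the baseline recurrence that both the Four-Russians and two-vector variants accelerate is the plain O(n^3) Nussinov algorithm; a minimal version is sketched below, assuming standard Watson-Crick and wobble pairs.

```python
# Plain O(n^3) Nussinov dynamic program for non-crossing base-pair
# maximization; dp[i][j] = max pairs in seq[i..j].
def nussinov(seq, min_loop=0):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # j left unpaired
            for k in range(i, j - min_loop):     # try pairing k with j
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1]
```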

  15. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.

  16. A Novel Sensor Selection and Power Allocation Algorithm for Multiple-Target Tracking in an LPI Radar Network

    Directory of Open Access Journals (Sweden)

    Ji She

    2016-12-01

    Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. It is found that this algorithm can minimize the total transmitted power of a radar network on the basis of a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate target parameters, and it can be calculated predictively from the estimated target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex, and it can be solved by separating the power allocation problem from the sensor selection problem. To be specific, the optimization problem of power allocation can be solved by using the bisection method for each sensor selection scheme. Also, the optimization problem of sensor selection can be solved by a lower-complexity algorithm based on the allocated powers. According to the simulation results, it can be found that the proposed algorithm can effectively reduce the total transmitted power of a radar network, which is conducive to improving LPI performance.
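
    A minimal sketch of the inner power-allocation step as described: for a fixed sensor selection, bisect on transmit power until the predicted mutual information reaches the threshold. `mi_of_power` is a hypothetical stand-in for the paper's predictive MI model, and MI is assumed to increase monotonically with power.

```python
# Bisect on transmit power for one sensor-selection candidate; assumes
# the predicted MI grows monotonically with power on [p_lo, p_hi].
def min_power(mi_of_power, mi_threshold, p_lo=0.0, p_hi=1.0, tol=1e-6):
    if mi_of_power(p_hi) < mi_threshold:
        return None                      # threshold unreachable at max power
    while p_hi - p_lo > tol:
        mid = 0.5 * (p_lo + p_hi)
        if mi_of_power(mid) >= mi_threshold:
            p_hi = mid                   # enough MI: try less power
        else:
            p_lo = mid
    return p_hi
```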

  17. Higher-order force gradient symplectic algorithms

    Science.gov (United States)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of force gradient in addition to three evaluations of the force, when iterated to higher order, yielded algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.

  18. A Synthetic Algorithm for Tracking a Moving Object in a Multiple-Dynamic Obstacles Environment Based on Kinematically Planar Redundant Manipulators

    Directory of Open Access Journals (Sweden)

    Hongzhe Jin

    2017-01-01

    This paper presents a synthetic algorithm for tracking a moving object in a multiple-dynamic obstacles environment based on kinematically planar manipulators. By observing the motions of the object and obstacles, a spline filter combined with polynomial fitting is utilized to predict their moving paths for a period of time in the future. Several feasible paths for the manipulator in Cartesian space can be planned according to the predicted moving paths and the defined feasibility criterion. The shortest one among these feasible paths is selected as the optimized path. Then the real-time path along the optimized path is planned for the manipulator to track the moving object in real time. To improve the convergence rate of tracking, a virtual controller based on a PD controller is designed to adaptively adjust the real-time path. In the process of tracking, the null space of the inverse kinematics and the local rotation coordinate method (LRCM) are utilized for the arms and the end-effector to avoid obstacles, respectively. Finally, the moving object in a multiple-dynamic obstacles environment is tracked by updating the joint angles of the manipulator in real time according to the iterative method. Simulation results show that the proposed algorithm can feasibly track a moving object in a multiple-dynamic obstacles environment.

  19. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    Science.gov (United States)

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a similar procedure to real-life control and can be easily extended to a physical model platform.

  20. Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells

    Directory of Open Access Journals (Sweden)

    Carolina Wählby

    2002-01-01

    Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm provides a lot more challenge. A new combination of image analysis algorithms for segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre‐processing step, a general segmentation and merging step followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape descriptive features. Objects that have features that differ from those of correctly segmented single cells can be further processed by a splitting step. By statistical analysis we therefore get a feedback system for separation of clustered cells. After the segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, the algorithm is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation as compared to manual segmentation.

  1. Distributed Containment Control for Multiple Unknown Second-Order Nonlinear Systems With Application to Networked Lagrangian Systems.

    Science.gov (United States)

    Mei, Jie; Ren, Wei; Li, Bing; Ma, Guangfu

    2015-09-01

    In this paper, we consider the distributed containment control problem for multiagent systems with unknown nonlinear dynamics. More specifically, we focus on multiple second-order nonlinear systems and networked Lagrangian systems. We first study the distributed containment control problem for multiple second-order nonlinear systems with multiple dynamic leaders in the presence of unknown nonlinearities and external disturbances under a general directed graph that characterizes the interaction among the leaders and the followers. A distributed adaptive control algorithm with an adaptive gain design based on the approximation capability of neural networks is proposed. We present a necessary and sufficient condition on the directed graph such that the containment error can be made as small as desired. As a byproduct, the leaderless consensus problem is solved with asymptotical convergence. Because relative velocity measurements between neighbors are generally more difficult to obtain than relative position measurements, we then propose a distributed containment control algorithm without using neighbors' velocity information. A two-step Lyapunov-based method is used to study the convergence of the closed-loop system. Next, we apply the ideas to deal with the containment control problem for networked unknown Lagrangian systems under a general directed graph. All the proposed algorithms are distributed and can be implemented using only local measurements in the absence of communication. Finally, simulation examples are provided to show the effectiveness of the proposed control algorithms.

  2. Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells

    OpenAIRE

    Carolina Wählby; Joakim Lindblad; Mikael Vondrus; Ewert Bengtsson; Lennart Björkesten

    2002-01-01

    Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm provides a lot more challenge. A new combination of image analysis algorithms for segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step, a general segmentation and merging step followed by a segmentation quality measurement. The quality measurement consists of a statistical ana...

  3. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented that is applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More important, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.
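
    The record gives no formulas, so the following is only a generic aggressive step-size controller in the same spirit: take a trial step, compare it against two half steps, and grow or shrink Δt without user intervention. `step` is a hypothetical single ADI sweep, and the state is scalar only to keep the error test simple.

```python
# Generic accept/reject step-size controller; `step(u, t, dt)` is a
# hypothetical single sweep returning the advanced (scalar) state.
def adaptive_advance(step, u, t, t_end, dt, tol=1e-4):
    while t < t_end:
        dt = min(dt, t_end - t)
        full = step(u, t, dt)                         # one full step
        half = step(step(u, t, dt / 2), t + dt / 2, dt / 2)
        if abs(full - half) <= tol:                   # error estimate
            u, t = half, t + dt                       # accept the step
            dt *= 1.5                                 # grow aggressively
        else:
            dt *= 0.5                                 # discard and retry
    return u
```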

  4. Hierarchical multiple binary image encryption based on a chaos and phase retrieval algorithm in the Fresnel domain

    International Nuclear Information System (INIS)

    Wang, Zhipeng; Hou, Chenxia; Lv, Xiaodong; Wang, Hongjuan; Gong, Qiong; Qin, Yi

    2016-01-01

    Based on the chaos and phase retrieval algorithm, a hierarchical multiple binary image encryption is proposed. In the encryption process, each plaintext is encrypted into a diffraction intensity pattern by two chaos-generated random phase masks (RPMs). Thereafter, the captured diffraction intensity patterns are partially selected by different binary masks and then combined together to form a single intensity pattern. The combined intensity pattern is saved as ciphertext. For decryption, an iterative phase retrieval algorithm is performed, in which a support constraint in the output plane and a median filtering operation are utilized to achieve a rapid convergence rate without a stagnation problem. The proposed scheme has a simple optical setup and large encryption capacity. In particular, it is well suited for constructing a hierarchical security system. The security and robustness of the proposal are also investigated. (letter)

  5. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    International Nuclear Information System (INIS)

    Wen Fang-Qing; Zhang Gong; Ben De

    2015-01-01

    This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm gives a more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. (paper)

  6. Teaching AI Search Algorithms in a Web-Based Educational System

    Science.gov (United States)

    Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis

    2013-01-01

    In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…

  7. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    International Nuclear Information System (INIS)

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in the non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as neutron coincidence and multiplicity counters currently used by the International Atomic Energy Agency, have proven to be effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations exist within the algorithms that are used to process and analyze the detected signals from these counters that were initially developed approximately 20 years ago based on the technology and computing power that was available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms that are based on fundamental physics principles that should lead to the development of more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, connecting the two main types of analysis techniques used to quantify the data (Shift register analysis and Feynman variance to mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed.

  8. Fast intersection detection algorithm for PC-based robot off-line programming

    Science.gov (United States)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG-model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the Simplex algorithm, known from linear optimization, is used. It computes a point which is common to two convex polyhedra. The polyhedra intersect if such a point exists. With the simplified geometrical model of Ropsus, the algorithm also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover it computes the resulting intersection polyhedron using the dual transformation.
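
    The second step can be illustrated with an LP feasibility check: two convex polyhedra given by linear inequalities intersect exactly when the combined inequality system has a feasible point. The sketch below uses scipy's LP solver as a stand-in for the paper's own Simplex implementation.

```python
# Polyhedra {x : A1 x <= b1} and {x : A2 x <= b2} intersect iff the
# stacked system is feasible; a zero objective makes the LP a pure
# feasibility check.
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    c = np.zeros(A.shape[1])                     # any feasible point will do
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.success                           # True iff a common point exists
```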

  9. On Chudnovsky-Based Arithmetic Algorithms in Finite Fields

    OpenAIRE

    Atighehchi, Kevin; Ballet, Stéphane; Bonnecaze, Alexis; Rolland, Robert

    2015-01-01

    Thanks to a new construction of the so-called Chudnovsky-Chudnovsky multiplication algorithm, we design efficient algorithms for both the exponentiation and the multiplication in finite fields. They are tailored to hardware implementation and they allow computations to be parallelized while maintaining a low number of bilinear multiplications. We give an example with the finite field ${\mathbb F}_{16^{13}}$.

  10. Application of genetic algorithm - multiple linear regressions to predict the activity of RSK inhibitors

    Directory of Open Access Journals (Sweden)

    Avval Zhila Mohajeri

    2015-01-01

    This paper deals with developing a linear quantitative structure-activity relationship (QSAR) model for predicting the RSK inhibition activity of some new compounds. A dataset consisting of 62 pyrazino [1,2-α] indole, diazepino [1,2-α] indole, and imidazole derivatives with known inhibitory activities was used. The multiple linear regression (MLR) technique combined with the stepwise (SW) and the genetic algorithm (GA) methods as variable selection tools was employed. To further check the stability, robustness and predictability of the proposed models, internal and external validation techniques were used. Comparison of the results obtained indicates that the GA-MLR model is superior to the SW-MLR model and that it is applicable for designing novel RSK inhibitors.

  11. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Optimization algorithms on networks and directed graphs find broad application in practical tasks. However, with the large-scale introduction of information technologies into human activity, the requirements on input data volumes and solution speed keep growing. Although a large number of algorithms have by now been studied and implemented for various models of computers and computing systems, solving key optimization problems at realistic problem sizes remains difficult. The search for new, more efficient computing structures, as well as updates of known algorithms, is therefore of great current interest. This work considers an implementation of a maximum-flow algorithm on a directed graph for the Multiple Instruction and Single Data (MISD) computer system developed at BMSTU. A key feature of this architecture is deep hardware support for operations over sets and data structures. Storage and access functions are realized on a specialized structure-processing processor (SP), which can perform operations such as add, delete, search, intersect, complete and merge at the hardware level. The advantage of such a system is the possibility of executing the parts of a computing task that access the data-structure sets in parallel with the arithmetic and logical processing of information. Previous works present the general principles of organizing the computing process and the features of programs implemented in the MISD system, describe the structure and operating principles of the structure-processing processor, show the general principles of solving graph tasks in such a system, and experimentally study the efficiency of the resulting algorithms. This work gives the command formats of the SP processor, offers a technique for updating the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
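
    As a point of reference for the algorithm being ported, a minimal breadth-first (Edmonds-Karp style) Ford-Fulkerson implementation on an adjacency-matrix capacity graph is sketched below; the MISD-specific set operations discussed above are not modeled.

```python
# Shortest-augmenting-path max flow; `cap` is an n x n residual capacity
# matrix that is modified in place.
from collections import deque

def max_flow(cap, s, t):
    n = len(cap)
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:     # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                 # no augmenting path remains
        bottleneck = float("inf")        # smallest residual capacity on path
        v = t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                    # push flow, add reverse residuals
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck
```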

  12. Structured diagnostic imaging in patients with multiple trauma

    International Nuclear Information System (INIS)

    Linsenmaier, U.; Rieger, J.; Rock, C.; Pfeifer, K.J.; Reiser, M.; Kanz, K.G.

    2002-01-01

    Purpose. Development of a concept for structured diagnostic imaging in patients with multiple trauma. Material and methods. Evaluation of data from a prospective trial with over 2400 documented patients with multiple trauma. All diagnostic and therapeutic steps, primary and secondary death and the 90-day lethality were documented. Structured diagnostic imaging of multiply injured patients requires the integration of an experienced radiologist in an interdisciplinary trauma team consisting of anesthesia, radiology and trauma surgery. Radiology itself requires standardized concepts for equipment, personnel and logistics to perform diagnostic imaging with constant quality on a 24-h basis. Results. This paper describes criteria for initiation of shock room or emergency room treatment, strategies for documentation and interdisciplinary algorithms for early clinical care, coordinating diagnostic imaging and therapeutic procedures following standardized guidelines. Diagnostic imaging consists of a basic diagnosis, the radiological ABC-rule, radiological follow-up and structured organ diagnosis using CT. Radiological trauma scoring allows improved quality control of the diagnosis and therapy of multiply injured patients. Conclusion. Structured diagnostic imaging of multiply injured patients leads to a standardization of diagnosis and therapy and ensures constant process quality. (orig.) [de]

  13. Identification of alternative splice variants in Aspergillus flavus through comparison of multiple tandem MS search algorithms

    Directory of Open Access Journals (Sweden)

    Chang Kung-Yen

    2011-07-01

    Background: Database searching is the most frequently used approach for automated peptide assignment and protein inference from tandem mass spectra. The results, however, depend on the sequences in the target databases and on the search algorithms. Recently, by using an alternative splicing database, we identified more proteins than with the annotated proteins in Aspergillus flavus. In this study, we aimed at finding a greater number of eligible splice variants based on newly available transcript sequences and the latest genome annotation. The improved database was then used to compare four search algorithms: Mascot, OMSSA, X! Tandem, and InsPecT. Results: The updated alternative splicing database predicted 15833 putative protein variants, 61% more than the previous results. There was transcript evidence for 50% of the updated genes compared to the previous 35% coverage. Database searches were conducted using the same set of spectral data, search parameters, and protein database but with different algorithms. The false discovery rates of the peptide-spectrum matches were estimated. Conclusions: We were able to detect dozens of new peptides using the improved alternative splicing database with the recently updated annotation of the A. flavus genome. Unlike the identifications of the peptides and the RefSeq proteins, large variations existed between the putative splice variants identified by different algorithms. 12 candidates of putative isoforms were reported based on the consensus peptide-spectrum matches. This suggests that applying multiple search engines effectively reduced the possible false positive results and validated the protein identifications from tandem mass spectra using an alternative splicing database.

  14. An energy efficient multiple mobile sinks based routing algorithm for wireless sensor networks

    Science.gov (United States)

    Zhong, Peijun; Ruan, Feng

    2018-03-01

    With the fast development of wireless sensor networks (WSNs), more and more energy-efficient routing algorithms have been proposed. However, one of the research challenges is how to alleviate the hot spot problem, since nodes close to a static sink (or base station) tend to die earlier than other sensors. The introduction of a mobile sink node can effectively alleviate this problem, since the sink node can move along certain trajectories, making the hot spot nodes more evenly distributed. In this paper, we mainly study an energy-efficient routing method with support for multiple mobile sinks. We divide the whole network into several clusters and study the influence of the number of mobile sinks on network lifetime. Simulation results show that the best network performance appears when the number of mobile sinks is about 3 under our simulation environment.

  15. Bio-inspired step-climbing in a hexapod robot

    International Nuclear Information System (INIS)

    Chou, Ya-Cheng; Yu, Wei-Shun; Huang, Ke-Jung; Lin, Pei-Chun

    2012-01-01

    Inspired by the observation that the cockroach changes from a tripod gait to a different gait for climbing high steps, we report on the design and implementation of a novel, fully autonomous step-climbing maneuver, which enables a RHex-style hexapod robot to reliably climb a step up to 230% higher than the length of its leg. Similar to the climbing strategy most used by cockroaches, the proposed maneuver is composed of two stages. The first stage is the ‘rearing stage,’ inclining the body so the front side of the body is raised and it is easier for the front legs to catch the top of the step, followed by the ‘rising stage,’ maneuvering the body's center of mass to the top of the step. Two infrared range sensors are installed on the front of the robot to detect the presence of the step and its orientation relative to the robot's heading, so that the robot can perform automatic gait transition, from walking to step-climbing, as well as correct its initial tilt approaching posture. An inclinometer is utilized to measure body inclination and to compute step height, thus enabling the robot to adjust its gait automatically, in real time, and to climb steps of different heights and depths successfully. The algorithm is applicable for the robot to climb various rectangular obstacles, including a narrow bar, a bar and a step (i.e. a bar of infinite width). The performance of the algorithm is evaluated experimentally, and the comparison of climbing strategies and climbing behaviors in biological and robotic systems is discussed. (paper)

  16. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    Science.gov (United States)

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that an atypical sequence of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and also discuss results on real-world data sets. Our algorithm uncovers operationally significant events in high dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.

  17. Cost-effectiveness of a modified two-step algorithm using a combined glutamate dehydrogenase/toxin enzyme immunoassay and real-time PCR for the diagnosis of Clostridium difficile infection.

    Science.gov (United States)

    Vasoo, Shawn; Stevens, Jane; Portillo, Lena; Barza, Ruby; Schejbal, Debra; Wu, May May; Chancey, Christina; Singh, Kamaljit

    2014-02-01

    The analytical performance and cost-effectiveness of the Wampole Toxin A/B EIA, the C. Diff. Quik Chek Complete (CdQCC) (a combined glutamate dehydrogenase antigen/toxin enzyme immunoassay), two RT-PCR assays (Progastro Cd and BD GeneOhm) and a modified two-step algorithm using the CdQCC reflexed to RT-PCR for indeterminate results were compared. The sensitivity of the Wampole Toxin A/B EIA, CdQCC (GDH antigen), BD GeneOhm and Progastro Cd RT-PCR were 85.4%, 95.8%, 100% and 93.8%, respectively. The algorithm provided rapid results for 86% of specimens and the remaining indeterminate results were resolved by RT-PCR, offering the best balance of sensitivity and cost savings per test (algorithm ∼US$13.50/test versus upfront RT-PCR ∼US$26.00/test). Copyright © 2012. Published by Elsevier B.V.

  18. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)

  19. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
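
    The linear special case the algorithms reduce to can be sketched compactly: an additive Schwarz iteration for A x = b sums local corrections from overlapping index blocks. The damping factor below is an assumption added to keep the plain iteration stable with overlap, not a detail taken from the paper.

```python
# A minimal damped additive Schwarz iteration for A x = b; each block
# solves its local restricted subproblem and the corrections are summed.
import numpy as np

def additive_schwarz(A, b, blocks, iters=50, damping=0.5):
    """blocks: index arrays covering, possibly with overlap, 0..n-1."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        r = b - A @ x                     # global residual
        dx = np.zeros_like(x)
        for idx in blocks:
            # local solve on the subspace spanned by this block
            dx[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        x += damping * dx                 # additive: corrections are summed
    return x
```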

  20. An ultrafast line-by-line algorithm for calculating spectral transmittance and radiance

    International Nuclear Information System (INIS)

    Tan, X.

    2013-01-01

    An ultrafast line-by-line algorithm for calculating spectral transmittance and radiance of gases is presented. The algorithm is based on fast convolution of the Voigt line profile using Fourier transform and a binning technique. The algorithm breaks a radiative transfer calculation into two steps: a one-time pre-computation step in which a set of pressure independent coefficients are computed using the spectral line information; a normal calculation step in which the Fourier transform coefficients of the optical depth are calculated using the line of sight information and the coefficients pre-computed in the first step, the optical depth is then calculated using an inverse Fourier transform and the spectral transmittance and radiance are calculated. The algorithm is significantly faster than line-by-line algorithms that do not employ special speedup techniques by a factor of 10^3–10^6. A case study of the 2.7 μm band of H2O vapor is presented. -- Highlights: •An ultrafast line-by-line model based on FFT and a binning technique is presented. •Computationally expensive calculations are factored out into a pre-computation step. •It is 10^3–10^8 times faster than LBL algorithms that do not employ speedup techniques. •Good agreement with experimental data for the 2.7 μm band of H2O
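
    A toy version of the idea: bin the line strengths onto a uniform wavenumber grid once, then obtain the optical depth by a fast Fourier-space convolution with a line-shape kernel. A Lorentzian stands in for the Voigt profile here, and the pressure-dependent coefficient machinery is omitted.

```python
# Stick spectrum binned once, then convolved with a Lorentzian kernel via
# FFT (circular convolution; zero-pad in practice to avoid wrap-around).
import numpy as np

def optical_depth_fft(grid, line_pos, line_strength, hwhm):
    sticks = np.zeros_like(grid)
    idx = np.clip(np.searchsorted(grid, line_pos), 0, len(grid) - 1)
    np.add.at(sticks, idx, line_strength)          # binning step
    centre = grid[len(grid) // 2]                  # kernel centred mid-grid
    kernel = hwhm / (np.pi * ((grid - centre) ** 2 + hwhm ** 2))
    kernel *= grid[1] - grid[0]                    # normalise for grid spacing
    spec = np.fft.rfft(sticks) * np.fft.rfft(np.fft.ifftshift(kernel))
    return np.fft.irfft(spec, n=len(grid))
```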

  1. Distributed consensus for metamorphic systems using a gossip algorithm for CAT(0) metric spaces

    Science.gov (United States)

    Bellachehab, Anass; Jakubowicz, Jérémie

    2015-01-01

    We present an application of distributed consensus algorithms to metamorphic systems. A metamorphic system is a set of identical units that can self-assemble to form a rigid structure. For instance, one can think of a robotic arm composed of multiple links connected by joints. The system can change its shape in order to adapt to different environments via reconfiguration of its constituting units. We assume in this work that several metamorphic systems form a network: two systems are connected whenever they are able to communicate with each other. The aim of this paper is to propose a distributed algorithm that synchronizes all the systems in the network. Synchronizing means that all the systems should end up having the same configuration. This aim is achieved in two steps: (i) we cast the problem as a consensus problem on a metric space and (ii) we use a recent distributed consensus algorithm that only makes use of metrical notions.
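
    A Euclidean stand-in for the gossip step in (i)-(ii): two neighbouring systems picked at random replace their configurations with the midpoint of the segment joining them; in the paper the midpoint is taken along a CAT(0) geodesic instead, so the straight-line average below is an assumption of the sketch.

```python
# Random pairwise midpoint gossip; `edges` lists the pairs of systems that
# can communicate, `configs` their configuration vectors.
import random
import numpy as np

def gossip_consensus(configs, edges, steps=10000):
    configs = [np.asarray(c, dtype=float) for c in configs]
    for _ in range(steps):
        i, j = random.choice(edges)
        mid = 0.5 * (configs[i] + configs[j])   # geodesic midpoint (Euclidean)
        configs[i] = mid
        configs[j] = mid.copy()
    return configs
```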

  2. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  3. A two-step ionospheric modeling algorithm considering the impact of GLONASS pseudo-range inter-channel biases

    Science.gov (United States)

    Zhang, Rui; Yao, Yi-bin; Hu, Yue-ming; Song, Wei-wei

    2017-12-01

    The Global Navigation Satellite System presents a plausible and cost-effective way of computing the total electron content (TEC). But the estimated TEC can be seriously affected by the frequency-dependent differential code biases (DCB) of satellites and receivers. Unlike GPS and other satellite systems, GLONASS adopts a frequency-division multiple access mode to distinguish different satellites. This strategy leads to different wavelengths and inter-frequency biases (IFBs) for both pseudo-range and carrier phase observations, whose impacts are rarely considered in ionospheric modeling. We obtained observations from four groups of co-stations to analyze the characteristics of the GLONASS receiver P1P2 pseudo-range IFB with a double-difference method. The results showed that the GLONASS P1P2 pseudo-range IFB remained stable for a period of time and could reach several meters, which cannot be absorbed by the receiver DCB during ionospheric modeling. Given the characteristics of the GLONASS P1P2 pseudo-range IFB, we propose a two-step ionosphere modeling method with a priori IFB information. The experimental analysis showed that the new algorithm can effectively eliminate the adverse effects on the estimation of the ionospheric model and hardware delay parameters in different space environments. During a high solar activity period, compared to the traditional GPS + GLONASS modeling algorithm, the absolute average deviation of TEC decreased from 2.17 to 2.07 TECu (TEC units); simultaneously, the average RMS of the GPS satellite DCBs decreased from 0.225 to 0.219 ns, and the average deviation of the GLONASS satellite DCBs decreased from 0.253 to 0.113 ns, an improvement of over 55%.

  4. Conjugate-Gradient Algorithms For Dynamics Of Manipulators

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1993-01-01

    Algorithms for serial and parallel computation of the forward dynamics of multiple-link robotic manipulators by the conjugate-gradient method are developed. Parallel algorithms have potential for speedup of computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors could be used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.

  5. "Accelerated Perceptron": A Self-Learning Linear Decision Algorithm

    OpenAIRE

    Zuev, Yu. A.

    2003-01-01

    The class of linear decision rules is studied. A new algorithm for weight correction, called an "accelerated perceptron", is proposed. In contrast to the classical Rosenblatt perceptron, this algorithm modifies the weight vector at each step. The algorithm may be employed both in learning and in self-learning modes. The theoretical aspects of the behaviour of the algorithm are studied when the algorithm is used for the purpose of increasing the decision reliability by means of weighted voting. I...
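
    For contrast with the accelerated variant, a minimal classical Rosenblatt perceptron is sketched below; it corrects the weight vector only on misclassifications, whereas the algorithm above modifies it at every step (the paper's exact update rule is not reproduced here).

```python
# Classical Rosenblatt perceptron baseline; labels are in {-1, +1}.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, target in zip(Xb, y):
            if target * np.dot(w, x) <= 0:         # mistake-driven update
                w += lr * target * x
    return w
```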

  6. Progressive multiple sequence alignments from triplets

    Directory of Open Access Journals (Sweden)

    Stadler Peter F

    2007-07-01

    Background: The quality of progressive sequence alignments strongly depends on the accuracy of the individual pairwise alignment steps, since gaps that are introduced at one step cannot be removed at later aggregation steps. Adjacent insertions and deletions necessarily appear in arbitrary order in pairwise alignments and hence form an unavoidable source of errors. Results: Here we present a modified variant of progressive sequence alignments that addresses both issues. Instead of pairwise alignments we use exact dynamic programming to align sequence or profile triples. This avoids a large fraction of the ambiguities arising in pairwise alignments. In the subsequent aggregation steps we follow the logic of the Neighbor-Net algorithm, which constructs a phylogenetic network by stepwise replacing triples by pairs instead of combining pairs to singletons. To this end the three-way alignments are subdivided into two partial alignments, at which stage all-gap columns are naturally removed. This alleviates the "once a gap, always a gap" problem of progressive alignment procedures. Conclusion: The three-way Neighbor-Net based alignment program aln3nn is shown to compare favorably on both protein sequences and nucleic acid sequences to other progressive alignment tools. In the latter case one can easily include scoring terms that consider secondary structure features. Overall, the quality of the resulting alignments in general exceeds that of clustalw or other multiple alignment tools, even though our software does not include heuristics for context-dependent (mis)match scores.

  7. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of the algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single program, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are output. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
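
    The per-row work assigned to each node in step (2) amounts to a serial IDW evaluation, sketched below for one grid row; the power-2 weights and the function layout are assumptions, and the GRASS-specific details are omitted.

```python
# Serial inverse distance weighting for one grid row; in the parallel
# version each slave node runs this loop over its assigned rows.
import numpy as np

def idw_row(xs, ys, values, row_y, grid_xs, power=2.0):
    """Interpolate one grid row from scattered points (xs, ys, values)."""
    out = np.empty(len(grid_xs))
    for i, gx in enumerate(grid_xs):
        d2 = (xs - gx) ** 2 + (ys - row_y) ** 2
        if np.any(d2 == 0):                  # grid node hits a sample point
            out[i] = values[np.argmin(d2)]
            continue
        w = 1.0 / d2 ** (power / 2.0)        # inverse-distance weights
        out[i] = np.sum(w * values) / np.sum(w)
    return out
```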

  8. Sort-Mid tasks scheduling algorithm in grid computing.

    Science.gov (United States)

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop scheduling algorithm variants that approach optimality, and these have shown good performance in selecting resources for tasks. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is removed and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
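
    One possible reading of the described steps as code is sketched below; the inclusion of the accumulated machine load in the final allocation is an assumption, since the abstract does not spell that detail out.

```python
# A reading of Sort-Mid: repeatedly take the remaining task with the
# largest average completion time and map it to the machine on which it
# finishes earliest.
def sort_mid(completion):
    """completion[t][m]: completion time of task t on machine m."""
    n_machines = len(completion[0])
    tasks = set(range(len(completion)))
    load = [0.0] * n_machines
    schedule = {}
    while tasks:
        # task with the maximum average completion time across machines
        t = max(tasks, key=lambda t_: sum(completion[t_]) / n_machines)
        # machine finishing this task earliest (assumption: account for
        # load already assigned to each machine)
        m = min(range(n_machines), key=lambda m_: load[m_] + completion[t][m_])
        schedule[t] = m
        load[m] += completion[t][m]
        tasks.remove(t)
    return schedule, max(load)          # task-to-machine map and makespan
```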

  9. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  10. Analysis of multiplicities in e+e- interactions using 2-jet rates from different jet algorithms

    International Nuclear Information System (INIS)

    Dahiya, S.; Kaur, M.; Dhamija, S.

    2002-01-01

    The shoulder structure of the charged particle multiplicity distribution measured in full phase space in e+e- interactions at various c.m. energies from 91 to 189 GeV has been analysed in terms of a weighted superposition of two negative binomial distributions associated with 2-jet and multi-jet production. The 2-jet rates have been obtained from various jet algorithms. This phenomenological parametrization reproduces the shoulder structure quantitatively and improves the agreement with the experimental distributions compared with the conventional negative binomial distribution. The analysis at higher energies, where the shoulder structure appears more prominently, is important for understanding the underlying structure. (author)

  11. Parallel algorithms for online trackfinding at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently under construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphics Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  12. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra.

    Science.gov (United States)

    Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin

    2015-02-09

    The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not extract all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information. To make full use of the information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method combines a consensus strategy with the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed a better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and data from other spectroscopic techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. A Comparative Study of Low-dose Step-up Versus Step-down in Polycystic Ovary Syndrome Resistant to Clomiphene

    Directory of Open Access Journals (Sweden)

    S Peivandi

    2010-03-01

    Full Text Available Introduction: Polycystic ovary syndrome (PCOS) is one of the most common causes of infertility in women. Clomiphene is the first line of treatment; however, 20% of patients are resistant to clomiphene. Because of follicular hypersensitivity to gonadotropins in PCOS, multiple follicular growth and development occurs, which is a cause of OHSS and multiple pregnancy. The aim of this randomized clinical study was to compare the step-down and low-dose step-up methods for ovulation induction in clomiphene-resistant patients. Methods: 60 cases were included: 30 women in the low-dose step-up group and 30 women in the step-down group. In the low-dose step-up group HMG 75 u/d, and in the step-down group HMG 225 u/d, was started on the 3rd day of the cycle; monitoring with vaginal sonography was done on the 8th day of the cycle. When a follicle >14 mm in diameter was seen, the HMG dose was continued in the low-dose step-up group and decreased in the step-down group. When a follicle reached 18 mm in diameter, HCG 10,000 units was injected and IUI was performed 36 hours later. Results: The number of HMG ampoules, the number of follicles >14 mm on the day of HCG injection and the level of serum estradiol were greater in the low-dose step-up protocol than in the step-down protocol (p<0.0001). Ovulation rate and pregnancy rate were greater in the low-dose step-up group than in the step-down group, with a significant difference (p<0.0001). Conclusion: Our study showed that the low-dose step-up regimen with HMG is effective for stimulating ovulation and achieving clinical pregnancy, but in view of monofollicular growth, the step-down method was more effective and safe. In our study multifollicular growth in the step-up method was higher than in the step-down method. We can predict the possibility of Ovarian Hyperstimulation Syndrome in highly sensitive PCOS patients.

  14. Identification of novel adhesins of M. tuberculosis H37Rv using integrated approach of multiple computational algorithms and experimental analysis.

    Directory of Open Access Journals (Sweden)

    Sanjiv Kumar

    Full Text Available Pathogenic bacteria interacting with eukaryotic hosts express adhesins on their surface. These adhesins aid in bacterial attachment to the host cell receptors during colonization. A few adhesins such as Heparin binding hemagglutinin adhesin (HBHA), Apa, and Malate Synthase of M. tuberculosis have been identified using specific experimental interaction models based on the biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach using SPAAN for predicting adhesins, PSORTb, SubLoc and LocTree for extracellular localization, and BLAST for verifying non-similarity to human proteins. These steps are among the first of reverse vaccinology. Multiple claims and attacks from different algorithms were processed through an argumentative approach. Additional filtration criteria included selection for proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309 and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.

  15. Modification of MSDR algorithm and its implementation on graph clustering

    Science.gov (United States)

    Prastiwi, D.; Sugeng, K. A.; Siswantining, T.

    2017-07-01

    Maximum Standard Deviation Reduction (MSDR) is a graph clustering algorithm to minimize the distance variation within a cluster. In this paper we propose a modified MSDR by replacing one technical step in MSDR, which uses polynomial regression, with a new and simpler step. This leads to our new algorithm called Modified MSDR (MMSDR). We implement the new algorithm to separate a domestic flight network of an Indonesian airline into two large clusters. Further analysis allows us to discover a weak link in the network, which should be improved by adding more flights.

  16. A theoretical derivation of the condensed history algorithm

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1992-01-01

    Although the Condensed History Algorithm is a successful and widely-used Monte Carlo method for solving electron transport problems, it has been derived only by an ad-hoc process based on physical reasoning. In this paper we show that the Condensed History Algorithm can be justified as a Monte Carlo simulation of an operator-split procedure in which the streaming, angular scattering, and slowing-down operators are separated within each time step. Different versions of the operator-split procedure lead to O(Δs) and O(Δs²) versions of the method, where Δs is the path-length step. Our derivation also indicates that higher-order versions of the Condensed History Algorithm may be developed. (Author)
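
    A rough sketch of one O(Δs) operator-split step is given below, with placeholder physics: a Gaussian small-angle deflection stands in for the angular scattering operator and a linear stopping power for the slowing-down operator. Only the splitting structure, streaming then scattering then slowing down, reflects the derivation; all names and models here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rotate(d, theta, phi):
    """Deflect unit vector d by polar angle theta about azimuth phi."""
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, a)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return np.cos(theta) * d + np.sin(theta) * (np.cos(phi) * u + np.sin(phi) * v)

def stopping_power(energy):
    # Placeholder; a real code would use tabulated stopping-power data.
    return 0.05 * energy

def condensed_history_step(pos, direction, energy, ds):
    """One O(ds) operator-split step: streaming, then angular scattering,
    then slowing down, each applied over the same path-length step ds."""
    pos = pos + ds * direction                      # 1) streaming
    theta = rng.normal(0.0, 0.1 * np.sqrt(ds))      # 2) angular scattering
    phi = rng.uniform(0.0, 2.0 * np.pi)             #    (placeholder model)
    direction = rotate(direction, theta, phi)
    energy -= ds * stopping_power(energy)           # 3) slowing down
    return pos, direction, energy
```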

  17. Firefly Mating Algorithm for Continuous Optimization Problems

    Directory of Open Access Journals (Sweden)

    Amarita Ritthipakdee

    2017-01-01

    Full Text Available This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of other algorithms and the proposed algorithm also required fewer iterations to reach the global optima.
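
    One possible reading of the spermatheca/reservoir mechanism is sketched below. The pairing rule, capacities, and names (select_mating_pairs, spermatheca_cap, reservoir) are illustrative assumptions rather than the published FMA; fitness values are assumed to be positive NumPy arrays so they can serve as selection weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_mating_pairs(fitness_f, fitness_m, spermatheca_cap=3, reservoir=4):
    """Illustrative mating-pair selection: each female mates repeatedly,
    up to her spermatheca capacity, with males drawn in proportion to
    fitness (standing in for attractiveness); a male drops out once his
    sperm reservoir is depleted."""
    supply = np.full(len(fitness_m), reservoir)
    pairs = []
    for f in np.argsort(fitness_f)[::-1]:       # fitter females choose first
        for _ in range(spermatheca_cap):
            available = np.flatnonzero(supply > 0)
            if available.size == 0:
                return pairs                     # all reservoirs depleted
            w = fitness_m[available]
            m = rng.choice(available, p=w / w.sum())
            supply[m] -= 1
            pairs.append((int(f), int(m)))
    return pairs
```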

  18. Algorithm for generating a Brownian motion on a sphere

    International Nuclear Information System (INIS)

    Carlsson, Tobias; Elvingson, Christer; Ekholm, Tobias

    2010-01-01

    We present a new algorithm for generation of a random walk on a two-dimensional sphere. The algorithm is obtained by viewing the 2-sphere as the equator in the 3-sphere surrounded by an infinitesimally thin band with boundary which reflects Brownian particles and then applying known effective methods for generating Brownian motion on the 3-sphere. To test the method, the diffusion coefficient was calculated in computer simulations using the new algorithm and, for comparison, also using a commonly used method in which the particle takes a Brownian step in the tangent plane to the 2-sphere and is then projected back to the spherical surface. The two methods are in good agreement for short time steps, while the method presented in this paper continues to give good results also for larger time steps, when the alternative method becomes unstable.
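
    The comparison method mentioned above (a Brownian step in the tangent plane followed by projection) is simple enough to state in a few lines; the sketch below assumes a unit sphere and diffusion coefficient D, with function and parameter names of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

def tangent_plane_step(x, dt, D=1.0):
    """One step of the comparison method: a Brownian step in the tangent
    plane at point x on the unit 2-sphere, followed by projection back
    onto the spherical surface."""
    g = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=3)  # isotropic 3D step
    step = g - np.dot(g, x) * x       # keep only the tangent-plane part
    y = x + step
    return y / np.linalg.norm(y)      # project back to the sphere
```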

  19. 3D head pose estimation and tracking using particle filtering and ICP algorithm

    KAUST Repository

    Ben Ghorbel, Mahdi; Baklouti, Malek; Couvet, Serge

    2010-01-01

    This paper addresses the issue of 3D head pose estimation and tracking. Existing approaches generally need a huge database, a training procedure, manual initialization, or rely on manually extracted face features. We propose a framework for estimating the 3D head pose at a fine level and tracking it continuously across multiple Degrees of Freedom (DOF) based on ICP and particle filtering. We approach the problem using 3D computational techniques, by aligning a face model to the 3D dense estimation computed by a stereo vision method, and propose a particle filter algorithm to refine and track the posterior estimate of the position of the face. This work comes with two contributions: the first concerns the alignment part, where we propose an extended ICP algorithm using an anisotropic scale transformation. The second contribution concerns the tracking part, where we propose the use of the particle filtering algorithm and constrain the search space using the ICP algorithm in the propagation step. The results show that the system is able to fit and track the head properly, and remains accurate on new individuals without manual adaptation or training. © Springer-Verlag Berlin Heidelberg 2010.

  20. Diagnostic radiology on multiple injured patients: interdisciplinary management

    International Nuclear Information System (INIS)

    Linsenmaier, U.; Pfeifer, K.J.; Kanz, K.G.; Mutschler, W.

    2001-01-01

    The presence of a radiologist within the admitting area of an emergency department and his capability as a member of the trauma team have a major impact on the role of diagnostic radiology in trauma care. Knowledge of clinical decision criteria, algorithms, and standards of patient care is essential for acceptance within a trauma team. We present an interdisciplinary management concept of diagnostic radiology for trauma patients, which comprises basic diagnosis, organ diagnosis, the radiological ABC, and algorithms of early clinical care. It is the result of a prospective study comprising over 2000 documented multiple injured patients. The radiologist on a trauma team should support trauma surgery and anesthesia in the diagnostic and clinical work-up. The radiological ABC provides a structured approach for diagnostic imaging in all steps of the early clinical care of the multiple injured patient. The radiological ABC requires reevaluation in cases of equivocal findings or difficulties in the clinical course. Direct communication of radiological findings with the trauma team enables quick clinical decisions. In addition, the radiologist can influence therapy in a priority-oriented manner by using interventional procedures. The clinical radiologist is an active member of the interdisciplinary trauma team, not only providing diagnostic imaging but also participating in clinical decisions. (orig.) [de

  1. The hybrid Monte Carlo Algorithm and the chiral transition

    International Nuclear Information System (INIS)

    Gupta, R.

    1987-01-01

    In this talk the author describes tests of the Hybrid Monte Carlo Algorithm for QCD done in collaboration with Greg Kilcup and Stephen Sharpe. We find that the acceptance in the global Metropolis step for staggered fermions can be tuned and kept large without having to make the step-size prohibitively small. We present results for the finite temperature transition on 4⁴ and 4 × 6³ lattices using this algorithm.
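
    The global Metropolis step itself is compact; a schematic version is given below, with the Hamiltonians of the old and trial configurations supplied by the molecular-dynamics part of the algorithm (not shown here).

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(H_old, H_new):
    """Global Metropolis accept/reject at the end of a hybrid Monte Carlo
    trajectory: accept with probability min(1, exp(-dH)), which corrects
    the finite step-size error of the molecular-dynamics integration."""
    dH = H_new - H_old
    return dH <= 0.0 or rng.random() < np.exp(-dH)
```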

  2. A New Multi-Step Iterative Algorithm for Approximating Common Fixed Points of a Finite Family of Multi-Valued Bregman Relatively Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Wiyada Kumam

    2016-05-01

    Full Text Available In this article, we introduce a new multi-step iteration for approximating a common fixed point of a finite family of multi-valued Bregman relatively nonexpansive mappings in the setting of reflexive Banach spaces. We prove a strong convergence theorem for the proposed iterative algorithm under certain hypotheses. Additionally, we also apply our results to the solution of variational inequality problems and to finding the zero points of maximal monotone operators. The theorems furnished in this work are new and generalize many well-known recent results in this field.

  3. "What Is a Step?" Differences in How a Step Is Detected among Three Popular Activity Monitors That Have Impacted Physical Activity Research.

    Science.gov (United States)

    John, Dinesh; Morton, Alvin; Arguello, Diego; Lyden, Kate; Bassett, David

    2018-04-15

    (1) Background: This study compared manually-counted treadmill walking steps from the hip-worn DigiwalkerSW200 and OmronHJ720ITC, and hip and wrist-worn ActiGraph GT3X+ and GT9X; determined brand-specific acceleration amplitude (g) and/or frequency (Hz) step-detection thresholds; and quantified key features of the acceleration signal during walking. (2) Methods: Twenty participants (Age: 26.7 ± 4.9 years) performed treadmill walking between 0.89-to-1.79 m/s (2-4 mph) while wearing a hip-worn DigiwalkerSW200, OmronHJ720ITC, GT3X+ and GT9X, and a wrist-worn GT3X+ and GT9X. A DigiwalkerSW200 and OmronHJ720ITC underwent shaker testing to determine device-specific frequency and amplitude step-detection thresholds. Simulated signal testing was used to determine thresholds for the ActiGraph step algorithm. Steps during human testing were compared using bias and confidence intervals. (3) Results: The OmronHJ720ITC was most accurate during treadmill walking. Hip and wrist-worn ActiGraph outputs were significantly different from the criterion. The DigiwalkerSW200 records steps for movements with a total acceleration of ≥1.21 g. The OmronHJ720ITC detects a step when movement has an acceleration ≥0.10 g with a dominant frequency of ≥1 Hz. The step-threshold for the ActiLife algorithm is variable based on signal frequency. Acceleration signals at the hip and wrist have distinctive patterns during treadmill walking. (4) Conclusions: Three common research-grade physical activity monitors employ different step-detection strategies, which causes variability in step output.
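
    Using the thresholds reported above, a toy classifier for a window of total-acceleration samples might look as follows; the function name, the window convention, and the FFT-based dominant-frequency estimate are our own illustrative choices, not the vendors' firmware logic.

```python
import numpy as np

def detects_step(accel, fs, device):
    """Check a window of total-acceleration samples (in g units, sampled
    at fs Hz) against the brand-specific thresholds reported above;
    'digiwalker' and 'omron' are illustrative labels."""
    if device == "digiwalker":
        # Records a step for movements with total acceleration >= 1.21 g.
        return accel.max() >= 1.21
    if device == "omron":
        # Needs acceleration >= 0.10 g with dominant frequency >= 1 Hz.
        spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
        freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
        dominant = freqs[spectrum.argmax()]
        return accel.max() >= 0.10 and dominant >= 1.0
    raise ValueError(f"unknown device: {device}")
```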

  4. An Adaptive Bacterial Foraging Optimization Algorithm with Lifecycle and Social Learning

    Directory of Open Access Journals (Sweden)

    Xiaohui Yan

    2012-01-01

    Full Text Available Bacterial Foraging Optimization (BFO) is a recently proposed swarm intelligence algorithm inspired by the foraging and chemotactic phenomena of bacteria. However, its optimization ability is not as good as that of other classic algorithms, as it has several shortcomings. This paper presents an improved BFO algorithm. In the new algorithm, a lifecycle model of bacteria is established. The bacteria can split, die, or migrate dynamically in the foraging processes, and the population size varies as the algorithm runs. Social learning is also introduced so that the bacteria will tumble towards better directions in the chemotactic steps. Besides, adaptive step lengths are employed in chemotaxis. The new algorithm is named BFOLS and it is tested on a set of benchmark functions with dimensions of 2 and 20. Canonical BFO, PSO, and GA algorithms are employed for comparison. Experiment results and statistical analysis show that the BFOLS algorithm offers significant improvements over the original BFO algorithm. Particularly with dimension of 20, it has the best performance among the four algorithms.

  5. Non-convex polygons clustering algorithm

    Directory of Open Access Journals (Sweden)

    Kruglikov Alexey

    2016-01-01

    Full Text Available A clustering algorithm is proposed, to be used as a preliminary step in motion planning. It is tightly coupled to the applied problem statement, i.e., it uses parameters meaningful only with respect to that problem. The use of geometrical properties for polygon clustering allows for better calculation time compared with general-purpose algorithms. A special form of map optimized for quick motion planning is constructed as a result.

  6. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    Science.gov (United States)

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.

  7. Online algorithms for optimal energy distribution in microgrids

    CERN Document Server

    Wang, Yu; Nelms, R Mark

    2015-01-01

    Presenting an optimal energy distribution strategy for microgrids in a smart grid environment, and featuring a detailed analysis of the mathematical techniques of convex optimization and online algorithms, this book provides readers with essential content on how to achieve multi-objective optimization that takes into consideration power subscribers, energy providers and grid smoothing in microgrids. Featuring detailed theoretical proofs and simulation results that demonstrate and evaluate the correctness and effectiveness of the algorithm, this text explains step-by-step how the problem can be solved.

  8. Prosthetic joint infection: development of an evidence-based diagnostic algorithm.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes

    2017-03-09

    Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PUBMED. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, pcr, maldi-tof, serology, synovial fluid, c-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA) and combinations of these terms. From an initial 768 publications, 156 publications were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm for the diagnostic approach to display the complex diagnosis of PJI in a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles. The algorithm provides a detailed, transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.

  9. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  10. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
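
    A schematic of the stepping/bracketing idea is given below, assuming a scalar event function whose sign flips at an event boundary (e.g., negative in shadow, positive in sunlight); bisection stands in for whatever root refinement the actual tool uses, and all names are our own.

```python
def locate_events(event_fn, t0, t1, step, tol=1e-9):
    """Step across [t0, t1] watching for sign changes of a continuous
    event function, then bisect each bracketing interval to locate the
    event time."""
    roots, t, f_prev = [], t0, event_fn(t0)
    while t < t1:
        t_next = min(t + step, t1)
        f_next = event_fn(t_next)
        if f_prev * f_next < 0.0:            # sign change: event bracketed
            a, b, fa = t, t_next, f_prev
            while b - a > tol:               # bisection refinement
                m = 0.5 * (a + b)
                fm = event_fn(m)
                if fa * fm <= 0.0:
                    b = m
                else:
                    a, fa = m, fm
            roots.append(0.5 * (a + b))
        t, f_prev = t_next, f_next
    return roots
```

    For example, locate_events(math.sin, 0.0, 10.0, 0.5) returns roots near 3.14159, 6.28319 and 9.42478.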

  11. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.

  12. Multiple Charging Station Location-Routing Problem with Time Window of Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Wang Li-ying

    2015-11-01

    Full Text Available This paper presents the electric vehicle (EV) multiple charging station location-routing problem with time windows, which jointly optimizes the routing plan of capacitated EVs and the strategy of charging stations. In particular, the strategy of charging stations includes both infrastructure-type selection and station location decisions. The problem accounts for two critical constraints in logistic practice: the vehicle loading capacity and the customer time windows. A hybrid heuristic that incorporates an adaptive variable neighborhood search (AVNS) with a tabu search algorithm for intensification was developed to address the problem. Specialized neighborhood structures and charging station selection methods used in the shaking step of the AVNS are proposed. In comparison with the commercial solver CPLEX, experimental results demonstrate that the algorithm can find nearly optimal solutions on small-scale test instances. The results on large-scale instances also show the effectiveness of the algorithm.

  13. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    Full Text Available This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user, and thereby cannot be used in a multi-user system. Even when they can be used to track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. Then, it calculates features based on the histogram of oriented gradients for the detected facial region to identify multiple users, and selects the template that best matches each user from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490 compared with benchmark algorithms.

  14. Algorithms for contrast enhancement of electronic portal images

    International Nuclear Information System (INIS)

    Díez, S.; Sánchez, S.

    2015-01-01

    An implementation of two new automated image processing algorithms for contrast enhancement of portal images is presented, as suitable tools to facilitate setup verification and visualization of patients during radiotherapy treatments. In the first algorithm, called Automatic Segmentation and Histogram Stretching (ASHS), the portal image is automatically segmented into two sub-images delimited by the conformed treatment beam: one consisting of the imaged patient obtained directly from the radiation treatment field, and the second composed of the imaged patient outside it. By segmenting the original image, histogram stretching can be performed and tuned independently in both regions. The second algorithm involves a two-step process. In the first step, Normalization to Local Mean (NLM), an inverse restoration filter is applied by dividing the portal image pixel by pixel by its blurred version. In the second step, named Linearly Combined Local Histogram Equalization (LCLHE), the contrast of the original image is strongly improved by a Local Contrast Enhancement (LCE) algorithm, revealing the anatomical structures of the patient. The output image is linearly combined with a portal image of the patient. Finally, the output images of the two previous algorithms (NLM and LCLHE) are linearly combined, once again, in order to obtain a contrast-enhanced image. These two algorithms have been tested on several portal images with great results. - Highlights: • Two algorithms are implemented to improve the contrast of electronic portal images. • The multi-leaf and conformed beam are automatically segmented in portal images. • Hidden anatomical and bony structures in portal images are revealed. • Patient setup verification is facilitated by the achieved contrast enhancement.
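
    The NLM step is simple to express. The sketch below assumes a Gaussian blur (the abstract does not specify the blur kernel) and adds a small constant to guard the division; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_to_local_mean(portal, sigma=25.0, eps=1e-6):
    """Normalization to Local Mean (NLM) as described above: divide the
    portal image pixel by pixel by a blurred version of itself, which
    flattens slow background variation and emphasizes structure."""
    img = portal.astype(float)
    blurred = gaussian_filter(img, sigma)
    return img / (blurred + eps)   # eps guards against division by zero
```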

  15. Pareto-depth for multiple-query image retrieval.

    Science.gov (United States)

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either a single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state-of-the-art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.

  16. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    Science.gov (United States)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
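
    A minimal sketch of such a comparison is shown below, assuming predictor and target arrays are already assembled; scikit-learn models stand in for the paper's ANN, SVM and MLR implementations, and hyperparameters are left at defaults.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

def compare_models(X, y, seed=0):
    """Fit stand-ins for the three algorithms and report (RMSE, MAE)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    models = {
        "MLR": LinearRegression(),
        "SVM": SVR(),
        "ANN": MLPRegressor(max_iter=2000, random_state=seed),
    }
    scores = {}
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        scores[name] = (np.sqrt(mean_squared_error(y_te, pred)),
                        mean_absolute_error(y_te, pred))
    return scores
```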

  17. Multi-time-step domain coupling method with energy control

    DEFF Research Database (Denmark)

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....

  18. M4GB: Efficient Groebner Basis algorithm

    NARCIS (Netherlands)

    R.H. Makarim (Rusydi); M.M.J. Stevens (Marc)

    2017-01-01

    We introduce a new efficient algorithm for computing Groebner-bases named M4GB. Like Faugere's algorithm F4 it is an extension of Buchberger's algorithm that describes: how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction

  19. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.

  20. Bernstein Algorithm for Vertical Normalization to 3NF Using Synthesis

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2013-07-01

    Full Text Available This paper demonstrates the use of the Bernstein algorithm for vertical normalization to 3NF using synthesis. The aim of the paper is to provide an algorithm for database normalization and present a set of steps which minimize redundancy in order to increase database management efficiency, and to specify tests and algorithms for testing and proving reversibility (i.e., proving that the normalization did not cause loss of information). Using Bernstein algorithm steps, the paper gives examples of vertical normalization to 3NF through synthesis and proposes a test and an algorithm to demonstrate decomposition reversibility. This paper also sets out to explain that the reasons for generating normal forms are to facilitate data search, eliminate data redundancy as well as delete, insert and update anomalies, and to explain how anomalies develop, using examples.

  1. [Algorithms for treatment of complex hand injuries].

    Science.gov (United States)

    Pillukat, T; Prommersberger, K-J

    2011-07-01

    The primary treatment strongly influences the course and prognosis of hand injuries. Complex injuries which compromise functional recovery are especially challenging. Despite an apparently unlimited number of injury patterns, it is possible to develop strategies which facilitate a standardized approach to operative treatment. In this situation algorithms can be important guidelines for a rational approach. The following algorithms have been proven in the treatment of complex injuries of the hand by our own experience. They were modified according to the current literature and refer to prehospital care, emergency room management, and basic strategy in general, and to reconstruction of bone and joints, vessels, nerves, tendons and soft tissue coverage in detail. Algorithms facilitate the treatment of severe hand injuries. Applying simple yes/no decisions, complex injury patterns are split into distinct partial problems which can be managed step by step.

  2. Sort-Mid tasks scheduling algorithm in grid computing

    Directory of Open Access Journals (Sweden)

    Naglaa M. Reda

    2015-11-01

    Full Text Available Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim for several researchers is to develop variant scheduling algorithms for achieving optimality, and they have shown good performance for task scheduling regarding resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task that has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan.
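
    The description above translates into a short greedy loop. The sketch below is one plausible reading (names and tie-breaking are ours), with exec_time[i, j] the execution time of task i on machine j.

```python
import numpy as np

def sort_mid(exec_time):
    """Greedy Sort-Mid-style scheduling: repeatedly pick the unscheduled
    task whose average completion time over machines is largest and
    assign it to the machine giving the minimum completion time;
    machine ready times accumulate."""
    n_tasks, n_machines = exec_time.shape
    ready = np.zeros(n_machines)
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        # Completion time of each remaining task on each machine.
        completion = {i: ready + exec_time[i] for i in unscheduled}
        # The task with the maximum average completion time goes next.
        task = max(unscheduled, key=lambda i: completion[i].mean())
        machine = int(np.argmin(completion[task]))
        ready[machine] = completion[task][machine]
        schedule[task] = machine
        unscheduled.remove(task)
    return schedule, ready.max()   # assignment and resulting makespan
```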

  3. Mediator independently orchestrates multiple steps of preinitiation complex assembly in vivo.

    Science.gov (United States)

    Eyboulet, Fanny; Wydau-Dematteis, Sandra; Eychenne, Thomas; Alibert, Olivier; Neil, Helen; Boschiero, Claire; Nevers, Marie-Claire; Volland, Hervé; Cornu, David; Redeker, Virginie; Werner, Michel; Soutourina, Julie

    2015-10-30

    Mediator is a large multiprotein complex conserved in all eukaryotes, which has a crucial coregulator function in transcription by RNA polymerase II (Pol II). However, the molecular mechanisms of its action in vivo remain to be understood. Med17 is an essential and central component of the Mediator head module. In this work, we utilised our large collection of conditional temperature-sensitive med17 mutants to investigate Mediator's role in coordinating preinitiation complex (PIC) formation in vivo at the genome level after a transfer to a non-permissive temperature for 45 minutes. The effect of a yeast mutation proposed to be equivalent to the human Med17-L371P responsible for infantile cerebral atrophy was also analyzed. The ChIP-seq results demonstrate that med17 mutations differentially affected the global presence of several PIC components including Mediator, TBP, TFIIH modules and Pol II. Our data show that Mediator stabilizes TFIIK kinase and TFIIH core modules independently, suggesting that the recruitment or the stability of TFIIH modules is regulated independently on yeast genome. We demonstrate that Mediator selectively contributes to TBP recruitment or stabilization to chromatin. This study provides an extensive genome-wide view of Mediator's role in PIC formation, suggesting that Mediator coordinates multiple steps of a PIC assembly pathway. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Use of genomic recursions and algorithm for proven and young animals for single-step genomic BLUP analyses--a simulation study.

    Science.gov (United States)

    Fragomeni, B O; Lourenco, D A L; Tsuruta, S; Masuda, Y; Aguilar, I; Misztal, I

    2015-10-01

    The purpose of this study was to examine accuracy of genomic selection via single-step genomic BLUP (ssGBLUP) when the direct inverse of the genomic relationship matrix (G) is replaced by an approximation of G(-1) based on recursions for young genotyped animals conditioned on a subset of proven animals, termed algorithm for proven and young animals (APY). With the efficient implementation, this algorithm has a cubic cost with proven animals and linear with young animals. Ten duplicate data sets mimicking a dairy cattle population were simulated. In a first scenario, genomic information for 20k genotyped bulls, divided in 7k proven and 13k young bulls, was generated for each replicate. In a second scenario, 5k genotyped cows with phenotypes were included in the analysis as young animals. Accuracies (average for the 10 replicates) in regular EBV were 0.72 and 0.34 for proven and young animals, respectively. When genomic information was included, they increased to 0.75 and 0.50. No differences between genomic EBV (GEBV) obtained with the regular G(-1) and the approximated G(-1) via the recursive method were observed. In the second scenario, accuracies in GEBV (0.76, 0.51 and 0.59 for proven bulls, young males and young females, respectively) were also higher than those in EBV (0.72, 0.35 and 0.49). Again, no differences between GEBV with regular G(-1) and with recursions were observed. With the recursive algorithm, the number of iterations to achieve convergence was reduced from 227 to 206 in the first scenario and from 232 to 209 in the second scenario. Cows can be treated as young animals in APY without reducing the accuracy. The proposed algorithm can be implemented to reduce computing costs and to overcome current limitations on the number of genotyped animals in the ssGBLUP method. © 2015 Blackwell Verlag GmbH.
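
    For readers unfamiliar with APY, the construction of the approximate G(-1) can be sketched as below. The block formula follows the published APY literature rather than this abstract, and it assumes the core (proven) animals are ordered first in G; names are illustrative.

```python
import numpy as np

def apy_inverse(G, n_core):
    """Approximate inverse of the genomic relationship matrix G via the
    APY recursion: young animals are conditioned on a core of proven
    animals, with diagonal residual variances for the young."""
    Gcc = G[:n_core, :n_core]
    Gyc = G[n_core:, :n_core]
    Gcc_inv = np.linalg.inv(Gcc)
    P = Gyc @ Gcc_inv                    # regression of young on core
    # Diagonal Mendelian-sampling-like variances for the young animals.
    m = np.diag(G)[n_core:] - np.einsum('ij,ij->i', P, Gyc)
    Minv = np.diag(1.0 / m)
    n = G.shape[0]
    Ginv = np.zeros((n, n))
    Ginv[:n_core, :n_core] = Gcc_inv + P.T @ Minv @ P
    Ginv[:n_core, n_core:] = -P.T @ Minv
    Ginv[n_core:, :n_core] = -Minv @ P
    Ginv[n_core:, n_core:] = Minv
    return Ginv
```

    Because Minv is diagonal, the cost is cubic only in the number of core animals and linear in the number of young animals, which is the computational advantage the abstract describes.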

  5. Simulation of Unique Pressure Changing Steps and Situations in PSA Processes

    Science.gov (United States)

    Ebner, Armin D.; Mehrotra, Amal; Knox, James C.; LeVan, Douglas; Ritter, James A.

    2007-01-01

    A more rigorous cyclic adsorption process simulator is being developed for use in the development and understanding of new and existing PSA processes. Unique features of this new version of the simulator that Ritter and co-workers have been developing for the past decade or so include: multiple adsorbent layers in each bed, pressure drop in the column, valves for entering and exiting flows and predicting real-time pressurization and depressurization rates, the ability to account for choked flow conditions, the ability to pressurize and depressurize simultaneously from both ends of the columns, the ability to equalize between multiple pairs of columns, the ability to equalize simultaneously from both ends of pairs of columns, and the ability to handle the very large pressure ratios and hence velocities associated with deep vacuum systems. These changes to the simulator now provide unique opportunities to study the effects of novel pressure changing steps and extreme process conditions on the performance of virtually any commercial or developmental PSA process. This presentation will provide an overview of the cyclic adsorption process simulator equations and algorithms used in the new adaptation. It will focus primarily on the novel pressure changing steps and their effects on the performance of a PSA system that epitomizes the extremes of PSA process design and operation. This PSA process is a sorbent-based atmosphere revitalization (SBAR) system that NASA is developing for new manned exploration vehicles. This SBAR system consists of a 2-bed 3-step 3-layer system that operates between atmospheric pressure and the vacuum of space, evacuates from both ends of the column simultaneously, experiences choked flow conditions during pressure changing steps, and experiences a continuously changing feed composition, as it removes metabolic CO2 and H2O from a closed and fixed volume, i.e., the spacecraft cabin. Important process performance indicators of this SBAR system are size, and the

  6. Accelerating staggered-fermion dynamics with the rational hybrid Monte Carlo algorithm

    International Nuclear Information System (INIS)

    Clark, M. A.; Kennedy, A. D.

    2007-01-01

    Improved staggered-fermion formulations are a popular choice for lattice QCD calculations. Historically, the algorithm used for such calculations has been the inexact R algorithm, which has systematic errors that only vanish as the square of the integration step size. We describe how the exact rational hybrid Monte Carlo (RHMC) algorithm may be used in this context, and show that for parameters corresponding to current state-of-the-art computations it leads to an approximately sevenfold decrease in cost as well as having no step-size errors.

  7. Efficient scheduling request algorithm for opportunistic wireless access

    KAUST Repository

    Nam, Haewoon

    2011-08-01

    An efficient scheduling request algorithm for opportunistic wireless access based on user grouping is proposed in this paper. Similar to the well-known opportunistic splitting algorithm, the proposed algorithm initially adjusts (or lowers) the threshold during a guard period if no user sends a scheduling request. However, if multiple users make requests simultaneously and therefore a collision occurs, the proposed algorithm no longer updates the threshold but narrows down the user search space by splitting the users into multiple groups iteratively, whereas the opportunistic splitting algorithm keeps adjusting the threshold until a single user is found. Since the threshold is only updated when no user sends a request, it is shown that the proposed algorithm significantly alleviates the burden of signaling for the threshold distribution to the users by the scheduler. More importantly, the proposed algorithm requires a smaller number of mini-slots to make a user selection for a given scheduling outage probability. © 2011 IEEE.

  8. The relationship between randomness and power-law distributed move lengths in random walk algorithms

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2014-05-01

    Recently, we proposed a new random walk algorithm, termed the REV algorithm, in which the agent alters the directional rule that governs it using the most recent four random numbers. Here, we examined how a non-bounded number, i.e., "randomness" regarding move direction, was important for optimal searching and power-law distributed step lengths in rule change. We proposed two algorithms: the REV and REV-bounded algorithms. In the REV algorithm, one of the four random numbers used to change the rule is non-bounded. In contrast, all four random numbers in the REV-bounded algorithm are bounded. We showed that the REV algorithm exhibited more consistent power-law distributed step lengths and flexible searching behavior.

  9. Algorithms for optimal dyadic decision trees

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  10. Computational algorithm for molybdenite concentrate annealing

    International Nuclear Information System (INIS)

    Alkatseva, V.M.

    1995-01-01

    A computational algorithm is presented for the annealing of molybdenite concentrate with granulated return dust and for that of granulated molybdenite concentrate. The algorithm differs from known analogues for sulphide raw material annealing by including the calculation of return dust mass in stationary annealing; the latter quantity varies from the return dust mass value obtained in the first iteration step. Masses of solid products are determined by the distribution of concentrate annealing products, including return dust and bentonite. The algorithm is applicable to computations for the annealing of other sulphide materials. 3 refs

  11. Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment

    Science.gov (United States)

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021

  12. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has been recently proved as a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role to efficiently manage emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms of the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifacts removal is an essential step of any UWB radar imaging system and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First of all, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective to achieve good localization accuracy and lower false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which allows to achieve even better performance but with much lower computational complexity.

  13. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. As a result through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  14. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  15. Assessment of a novel mass detection algorithm in mammograms

    Directory of Open Access Journals (Sweden)

    Ehsan Kozegar

    2013-01-01

    Settings and Design: The proposed mass detector consists of two major steps. In the first step, several suspicious regions are extracted from the mammograms using an adaptive thresholding technique. In the second step, false positives originating from the previous stage are reduced by a machine learning approach. Materials and Methods: All modules of the mass detector were assessed on the mini-MIAS database. In addition, the algorithm was tested on the INBreast database for further validation. Results: According to FROC analysis, our mass detection algorithm outperforms other competing methods. Conclusions: We should not insist only on sensitivity in the segmentation phase, because if we ignored the FP rate and our goal was just higher sensitivity, then the learning algorithm would be biased more toward false positives and the sensitivity would decrease dramatically in the false positive reduction phase. Therefore, we should consider the mass detection problem as a cost-sensitive problem, because misclassification costs are not the same in this type of problem.

  16. Study of multiple scattering effects in heavy ion RBS

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Z.; O`Connor, D.J. [Newcastle Univ., NSW (Australia). Dept. of Physics

    1996-12-31

    The multiple scattering effect is normally neglected in conventional Rutherford Backscattering (RBS) analysis. The backscattered particle yield normally agrees well with the theory based on the single scattering model. However, when heavy incident ions are used, such as in heavy ion Rutherford backscattering (HIRBS), or the incident ion energy is reduced, the multiple scattering effect starts to play a role in the analysis. In this paper, experimental data of 6 MeV C ions backscattered from a Au target are presented. In the measured time-of-flight spectrum a small step in front of the Au high energy edge is observed. The high energy edge of the step is about 3.4 ns ahead of the Au signal, which corresponds to an energy {approx} 300 keV higher than the 135 degree single scattering energy. This value coincides with the energy of a C ion undergoing two consecutive 67.5 degree scatterings. Efforts to investigate the origin of the observed high energy step led to a Monte Carlo simulation aimed at reproducing the experimental spectrum on a computer. As a large angle scattering event is rare, two consecutive large angle scatterings are extremely hard to reproduce in a random simulation process. Thus, the simulation has not found a particle scattering into 130-140 deg with an energy higher than the single scattering energy. Obviously faster algorithms and a better physical model are necessary for a successful simulation. 16 refs., 3 figs.
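
    The double-scattering interpretation can be checked with standard elastic-scattering kinematics; this worked form is ours, not the paper's, taking M1 = 12 u for C, M2 = 197 u for Au and E0 = 6 MeV:

```latex
\[
K(\theta)=\left[\frac{\sqrt{M_2^{2}-M_1^{2}\sin^{2}\theta}+M_1\cos\theta}{M_1+M_2}\right]^{2},
\qquad
K(135^{\circ})\approx 0.812,\quad K(67.5^{\circ})^{2}\approx 0.860,
\]
\[
\Delta E \approx \left[K(67.5^{\circ})^{2}-K(135^{\circ})\right]E_0
\approx 0.048\times 6~\text{MeV}\approx 290~\text{keV}.
\]
```

    Since K decreases with scattering angle, two 67.5 degree deflections leave the ion with more energy than a single 135 degree one, consistent with the roughly 300 keV excess reported above.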

  17. Study of multiple scattering effects in heavy ion RBS

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Z; O` Connor, D J [Newcastle Univ., NSW (Australia). Dept. of Physics

    1997-12-31

    The multiple scattering effect is normally neglected in conventional Rutherford Backscattering (RBS) analysis. The backscattered particle yield normally agrees well with the theory based on the single scattering model. However, when heavy incident ions are used, such as in heavy ion Rutherford backscattering (HIRBS), or the incident ion energy is reduced, the multiple scattering effect starts to play a role in the analysis. In this paper, experimental data of 6 MeV C ions backscattered from a Au target are presented. In the measured time-of-flight spectrum a small step in front of the Au high energy edge is observed. The high energy edge of the step is about 3.4 ns ahead of the Au signal, which corresponds to an energy {approx} 300 keV higher than the 135 degree single scattering energy. This value coincides with the energy of a C ion undergoing two consecutive 67.5 degree scatterings. Efforts to investigate the origin of the observed high energy step led to a Monte Carlo simulation aimed at reproducing the experimental spectrum on a computer. As a large angle scattering event is rare, two consecutive large angle scatterings are extremely hard to reproduce in a random simulation process. Thus, the simulation has not found a particle scattering into 130-140 deg with an energy higher than the single scattering energy. Obviously faster algorithms and a better physical model are necessary for a successful simulation. 16 refs., 3 figs.

  18. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    Science.gov (United States)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, the computer-generated hologram, which carries the information of the three primitive images, is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. Then the hologram is encrypted using an MLCA mask. The ciphertext can be decrypted using the fractional orders combined with the rules of MLCA. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
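
    The GS loop at the heart of the scheme is worth seeing in miniature. The sketch below uses the ordinary Fourier transform in place of the paper's fractional-order transforms (NumPy has no built-in fractional FFT), so it shows only the iteration structure, with names of our own choosing.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Plain Gerchberg-Saxton phase retrieval: iterate between object
    and Fourier planes, enforcing the known amplitude in each plane
    while keeping the evolving phase."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(target_amp.shape))
    field = np.ones_like(target_amp) * phase       # unit-amplitude source
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))  # impose target amplitude
        field = np.fft.ifft2(F)
        field = np.exp(1j * np.angle(field))       # impose unit source amplitude
    return np.angle(field)                         # hologram phase pattern
```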

  19. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Full Text Available Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system having a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit drawing conclusions regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support of operations of discrete mathematics, which is reflected in the use of special facilities for processing sets and data structures. Within the framework of the project a special device was designed, i.e., a structure processor (SP), which improved the performance without limiting the scope of applications of such a computer system. Previous works presented the basic principles of the organization of the computational process in the MISD (Multiple Instruction, Single Data) system and showed the structure and features of the structure processor and the general principles of solving discrete optimization problems on graphs. This paper examines two minimum spanning tree search algorithms, namely Kruskal's and Prim's algorithms. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents results of an experimental comparison of MISD system performance in coprocessor mode with mainframes.

  20. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
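
    Since the record describes the construction only in outline, here is a minimal Python sketch of de Casteljau's algorithm under the usual definition: each step replaces the control polygon by pairwise linear interpolations at the parameter value, and the last remaining point lies on the Bezier curve. Names and the example curve are illustrative.

    ```python
    # One de Casteljau step maps n control points to n-1 points by linear
    # interpolation; repeating until one point remains evaluates the curve.
    def de_casteljau(points, t):
        """Evaluate a Bezier curve with the given control points at t in [0, 1]."""
        pts = list(points)
        while len(pts) > 1:
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # Example: a quadratic Bezier curve evaluated at its midpoint.
    print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # (1.0, 1.0)
    ```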

  1. “What Is a Step?” Differences in How a Step Is Detected among Three Popular Activity Monitors That Have Impacted Physical Activity Research

    Science.gov (United States)

    John, Dinesh; Arguello, Diego; Lyden, Kate; Bassett, David

    2018-01-01

    (1) Background: This study compared manually-counted treadmill walking steps from the hip-worn DigiwalkerSW200 and OmronHJ720ITC, and hip and wrist-worn ActiGraph GT3X+ and GT9X; determined brand-specific acceleration amplitude (g) and/or frequency (Hz) step-detection thresholds; and quantified key features of the acceleration signal during walking. (2) Methods: Twenty participants (Age: 26.7 ± 4.9 years) performed treadmill walking between 0.89-to-1.79 m/s (2–4 mph) while wearing a hip-worn DigiwalkerSW200, OmronHJ720ITC, GT3X+ and GT9X, and a wrist-worn GT3X+ and GT9X. A DigiwalkerSW200 and OmronHJ720ITC underwent shaker testing to determine device-specific frequency and amplitude step-detection thresholds. Simulated signal testing was used to determine thresholds for the ActiGraph step algorithm. Steps during human testing were compared using bias and confidence intervals. (3) Results: The OmronHJ720ITC was most accurate during treadmill walking. Hip and wrist-worn ActiGraph outputs were significantly different from the criterion. The DigiwalkerSW200 records steps for movements with a total acceleration of ≥1.21 g. The OmronHJ720ITC detects a step when movement has an acceleration ≥0.10 g with a dominant frequency of ≥1 Hz. The step-threshold for the ActiLife algorithm is variable based on signal frequency. Acceleration signals at the hip and wrist have distinctive patterns during treadmill walking. (4) Conclusions: Three common research-grade physical activity monitors employ different step-detection strategies, which causes variability in step output. PMID:29662048
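
    As an illustration of the amplitude-plus-frequency style of step detection reported above, the following hedged Python sketch applies the two thresholds quoted for the OmronHJ720ITC (acceleration >= 0.10 g with a dominant frequency >= 1 Hz) to a window of samples; the windowing, gravity handling, and function names are assumptions, not any vendor's firmware.

    ```python
    import numpy as np

    # Hypothetical detector in the spirit of the thresholds quoted above;
    # real devices differ in filtering, windowing, and debouncing.
    def looks_like_walking(accel_g, fs_hz, amp_thresh=0.10, freq_thresh=1.0):
        accel = np.asarray(accel_g, dtype=float)
        accel = accel - accel.mean()                   # crude gravity removal
        if np.max(np.abs(accel)) < amp_thresh:
            return False
        spectrum = np.abs(np.fft.rfft(accel))
        freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs_hz)
        dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
        return dominant >= freq_thresh

    # Example: a 2 Hz, 0.3 g oscillation sampled at 50 Hz passes both tests.
    t = np.arange(0, 5, 1.0 / 50)
    print(looks_like_walking(0.3 * np.sin(2 * np.pi * 2 * t), fs_hz=50))  # True
    ```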

  2. DEVELOPMENT OF HOLE RECOGNITION SYSTEM FROM STEP FILE

    Directory of Open Access Journals (Sweden)

    C. F. Tan

    2017-11-01

    Full Text Available This paper describes the development of a Hole Recognition System (HRS) for Computer-Aided Process Planning (CAPP) using a neutral data format produced by a CAD system. The geometrical data of holes is retrieved from the STandard for the Exchange of Product model data (STEP). A rule-based algorithm is used during the recognition process. The current implementation of feature recognition is limited to simple hole features. Test results are presented to demonstrate the capabilities of the feature recognition algorithm.

  3. Glowworm swarm optimization theory, algorithms, and applications

    CERN Document Server

    Kaipa, Krishnanand N

    2017-01-01

    This book provides a comprehensive account of the glowworm swarm optimization (GSO) algorithm, including details of the underlying ideas, theoretical foundations, algorithm development, various applications, and MATLAB programs for the basic GSO algorithm. It also discusses several research problems at different levels of sophistication that can be attempted by interested researchers. The generality of the GSO algorithm is evident in its application to diverse problems ranging from optimization to robotics. Examples include computation of multiple optima, annual crop planning, cooperative exploration, distributed search, multiple source localization, contaminant boundary mapping, wireless sensor networks, clustering, knapsack, numerical integration, solving fixed point equations, solving systems of nonlinear equations, and engineering design optimization. The book is a valuable resource for researchers as well as graduate and undergraduate students in the area of swarm intelligence and computational intellige...
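
    To give a flavor of the basic GSO update the book covers, the sketch below implements one iteration in Python: luciferin decays and absorbs fitness, and each glowworm steps toward a randomly chosen brighter neighbour within its sensing range. The parameter values, the fixed sensing range, and the uniform neighbour choice are simplifications, not the book's full formulation (which also adapts the decision range and weights the neighbour choice by luciferin difference).

    ```python
    import numpy as np

    # Hedged sketch of one GSO iteration on a 2-D objective; illustrative only.
    def gso_step(pos, luciferin, objective, rho=0.4, gamma=0.6, step=0.03, r_s=1.0):
        luciferin = (1 - rho) * luciferin + gamma * objective(pos)
        new_pos = pos.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = np.where((d < r_s) & (luciferin > luciferin[i]))[0]
            if nbrs.size:
                j = np.random.choice(nbrs)           # a brighter neighbour
                direction = pos[j] - pos[i]
                new_pos[i] += step * direction / (np.linalg.norm(direction) + 1e-12)
        return new_pos, luciferin

    # Example: 30 glowworms climbing a single peak at the origin.
    pts = np.random.uniform(-2.0, 2.0, (30, 2))
    lum = np.full(30, 5.0)
    for _ in range(200):
        pts, lum = gso_step(pts, lum, lambda p: -np.sum(p**2, axis=1))
    print(np.round(pts.mean(axis=0), 2))   # glowworms cluster near (0, 0)
    ```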

  4. Modified Projection Algorithms for Solving the Split Equality Problems

    Directory of Open Access Journals (Sweden)

    Qiao-Li Dong

    2014-01-01

    proposed a CQ algorithm for solving it. In this paper, we propose a modification for the CQ algorithm, which computes the stepsize adaptively and performs an additional projection step onto two half-spaces in each iteration. We further propose a relaxation scheme for the self-adaptive projection algorithm by using projections onto half-spaces instead of those onto the original convex sets, which is much more practical. Weak convergence results for both algorithms are analyzed.

  5. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find the magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm that arise from the finite resolution of sampled systems. Experimental control results using the original secondary path model and a modified secondary path model are compared for both the previous implementation of EE-FXLMS and the genetic algorithm implementation.

  6. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
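
    The quadratic speedup mentioned above has a concrete form: for M marked items in an unsorted database of size N, Grover's search needs roughly (π/4)√(N/M) oracle queries. A small illustration (ours, not the review's):

    ```python
    import math

    # Illustrative only: for M marked items in an unsorted database of size N,
    # Grover's search uses about (pi/4) * sqrt(N/M) oracle queries, versus an
    # expected N/(2M) classical queries.
    def grover_iterations(n_items, n_targets=1):
        return math.floor((math.pi / 4) * math.sqrt(n_items / n_targets))

    print(grover_iterations(10**6))   # 785 queries vs ~500,000 classical
    ```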

  7. Full-waveform data for building roof step edge localization

    Science.gov (United States)

    Słota, Małgorzata

    2015-08-01

    Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.

  8. Detection of uterine MMG contractions using a multiple change point estimator and the K-means cluster algorithm.

    Science.gov (United States)

    La Rosa, Patricio S; Nehorai, Arye; Eswaran, Hari; Lowery, Curtis L; Preissl, Hubert

    2008-02-01

    We propose a single-channel two-stage time-segment discriminator of uterine magnetomyogram (MMG) contractions during pregnancy. We assume that the preprocessed signals are piecewise stationary, having distributions in a common family with a fixed number of parameters. Therefore, at the first stage, we propose a model-based segmentation procedure, which detects multiple change-points in the parameters of a piecewise constant time-varying autoregressive model using a robust formulation of the Schwarz information criterion (SIC) and a binary search approach. In particular, we propose a test statistic that depends on the SIC, derive its asymptotic distribution, and obtain closed-form optimal detection thresholds in the sense of the Neyman-Pearson criterion; we therefore control the probability of false alarm and maximize the probability of change-point detection in each stage of the binary search algorithm. We compute and evaluate the relative energy variation [root mean squares (RMS)] and the dominant frequency component [first order zero crossing (FOZC)] in discriminating between time segments with and without contractions. The former consistently detects a time segment with contractions. Thus, at the second stage, we apply a nonsupervised K-means cluster algorithm to classify the detected time segments using the RMS values. We apply our detection algorithm to real MMG records obtained from ten patients admitted to the hospital for contractions, with gestational ages between 31 and 40 weeks. We evaluate the performance of our detection algorithm by computing the detection and false-alarm rates, using the patients' feedback as a reference. We also analyze the fusion of the decision signals from all the sensors, as in the parallel distributed detection approach.
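
    A hedged sketch of the second stage only: given segment boundaries from the change-point stage, the RMS of each detected segment is clustered into two classes with K-means and the higher-energy cluster is labelled as contraction. The function names and the scikit-learn choice are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Two-class K-means on per-segment RMS; boundaries are assumed given.
    def classify_segments(signal, change_points):
        signal = np.asarray(signal, dtype=float)
        bounds = [0, *change_points, len(signal)]
        segments = [signal[a:b] for a, b in zip(bounds, bounds[1:])]
        rms = np.array([np.sqrt(np.mean(s ** 2)) for s in segments]).reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(rms)
        high = np.argmax([rms[labels == k].mean() for k in (0, 1)])
        return [int(l == high) for l in labels]       # 1 = contraction

    # Example: a quiet-active-quiet record split at samples 100 and 200.
    sig = np.concatenate([0.1 * np.ones(100), 2.0 * np.ones(100), 0.1 * np.ones(100)])
    print(classify_segments(sig, [100, 200]))   # [0, 1, 0]
    ```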

  9. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    Science.gov (United States)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors would otherwise be idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two over the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
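
    For reference, the serial Thomas algorithm that the pipelined scheme reformulates consists of a forward elimination sweep followed by a backward substitution sweep. A standard Python version is sketched below; it is the textbook serial solver, not the paper's parallel formulation.

    ```python
    import numpy as np

    # Tridiagonal solve with sub-, main-, and super-diagonals a, b, c and
    # right-hand side d: forward elimination, then backward substitution.
    def thomas(a, b, c, d):
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                          # forward step
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):                 # backward step
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Example: a small diagonally dominant system (a[0] and c[-1] are unused).
    a = np.array([0.0, -1.0, -1.0, -1.0])
    b = np.array([4.0, 4.0, 4.0, 4.0])
    c = np.array([-1.0, -1.0, -1.0, 0.0])
    print(thomas(a, b, c, np.array([5.0, 5.0, 5.0, 5.0])))
    ```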

  10. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2017-01-01

    steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without

  11. MaxBin 2.0: an automated binning algorithm to recover genomes from multiple metagenomic datasets

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yu-Wei [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Simmons, Blake A. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Steven W. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-10-29

    The recovery of genomes from metagenomic datasets is a critical step to defining the functional roles of the underlying uncultivated populations. We previously developed MaxBin, an automated binning approach for high-throughput recovery of microbial genomes from metagenomes. Here, we present an expanded binning algorithm, MaxBin 2.0, which recovers genomes from co-assembly of a collection of metagenomic datasets. Tests on simulated datasets revealed that MaxBin 2.0 is highly accurate in recovering individual genomes, and the application of MaxBin 2.0 to several metagenomes from environmental samples demonstrated that it could achieve two complementary goals: recovering more bacterial genomes compared to binning a single sample as well as comparing the microbial community composition between different sampling environments. Availability and implementation: MaxBin 2.0 is freely available at http://sourceforge.net/projects/maxbin/ under BSD license. Supplementary information: Supplementary data are available at Bioinformatics online.

  12. A Stepped Frequency CW SAR for Lightweight UAV Operation

    National Research Council Canada - National Science Library

    Morrison, Keith

    2005-01-01

    A stepped-frequency continuous wave (SF-CW) synthetic aperture radar (SAR), with frequency-agile waveforms and real-time intelligent signal processing algorithms, is proposed for operation from a lightweight UAV platform...

  13. Adaptive Active Noise Suppression Using Multiple Model Switching Strategy

    Directory of Open Access Journals (Sweden)

    Quanzhen Huang

    2017-01-01

    Full Text Available Active noise suppression for applications where the system response varies with time is a difficult problem. The computational burden of existing control algorithms with online identification is heavy, and they can easily cause control-system instability. A new active noise control algorithm is proposed in this paper by employing a multiple-model switching strategy for a varying secondary path, which significantly reduces the computation. First, a noise control system modeling method is proposed for duct-like applications. Then a multiple-model adaptive control algorithm is proposed with a new multiple-model switching strategy based on the filtered-u least mean square (FULMS) algorithm. Finally, the proposed algorithm was implemented on a Texas Instruments digital signal processor (DSP) TMS320F28335 and real-time experiments were done to test the proposed algorithm against the FULMS algorithm with online identification. Experimental verification tests show that the proposed algorithm is effective, with good noise suppression performance.

  14. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  15. Computational plasticity algorithm for particle dynamics simulations

    Science.gov (United States)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  16. HOW DOES FINGOLIMOD (GILENYA®) FIT IN THE TREATMENT ALGORITHM FOR HIGHLY ACTIVE RELAPSING-REMITTING MULTIPLE SCLEROSIS?

    Directory of Open Access Journals (Sweden)

    Franz eFazekas

    2013-05-01

    Full Text Available Multiple sclerosis (MS) is a neurological disorder characterised by inflammatory demyelination and neurodegeneration in the central nervous system (CNS). Until recently, disease modifying treatment was based on agents requiring parenteral delivery, thus limiting long-term compliance. Basic treatments such as beta-interferon provide only moderate efficacy, and although therapies for second-line treatment and highly active MS are more effective, they are associated with potentially severe side effects. Fingolimod (Gilenya®) is the first oral treatment of MS and has recently been approved as single disease-modifying therapy in highly active relapsing-remitting multiple sclerosis (RRMS) for adult patients with high disease activity despite basic treatment (beta-interferon) and for treatment-naïve patients with rapidly evolving severe RRMS. At a scientific meeting that took place in Vienna on November 18th, 2011, experts from 10 Central and Eastern European countries discussed the clinical benefits and potential risks of fingolimod for MS, suggested how the new therapy fits within the current treatment algorithm and provided expert opinion for the selection and management of patients.

  17. A Pilot-Pattern Based Algorithm for MIMO-OFDM Channel Estimation

    Directory of Open Access Journals (Sweden)

    Guomin Li

    2016-12-01

    Full Text Available An improved pilot-pattern algorithm for facilitating channel estimation in multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems is proposed in this paper. The presented algorithm reconfigures the parameters of the least-squares (LS) algorithm, which belongs to the space-time block-coded (STBC) category, for channel estimation in pilot-based MIMO-OFDM systems. Simulation results show that the algorithm performs better than the classical single-symbol scheme. Compared with the double-symbol scheme, the proposed algorithm achieves nearly the same performance with only half the complexity.

  18. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    Science.gov (United States)

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi
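
    The separable structure described above is easy to demonstrate: the evolutionary optimizer sees only the nonlinear lifetimes, while the amplitudes are recovered by linear least squares at every fitness evaluation. In this hedged sketch, SciPy's differential evolution stands in for the genetic algorithm, and the biexponential model and parameter values are illustrative, not the paper's data.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Synthetic biexponential decay I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 200)
    data = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 3.5) + rng.normal(0, 0.01, t.size)

    def residual(taus):
        basis = np.column_stack([np.exp(-t / tau) for tau in taus])
        amps, *_ = np.linalg.lstsq(basis, data, rcond=None)   # the "MLR" step
        return float(np.sum((basis @ amps - data) ** 2))

    # Differential evolution stands in for the GA over the nonlinear lifetimes.
    result = differential_evolution(residual, bounds=[(0.1, 2.0), (2.0, 10.0)])
    print(result.x)   # recovered lifetimes, close to (0.8, 3.5)
    ```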

  19. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in adaptive and adaptable interactive systems, data mining, and similar applications.

  20. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  1. An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids

    Science.gov (United States)

    Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun

    2017-07-01

    Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped, uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and an optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy, with strong global exploration and bypassing abilities, was adopted. This algorithm can be regarded as a search engine that finds multiple globally optimal regions in which potential POs are located. This was followed by applying a differential correction to locally refine the global search solutions and generate the accurate POs in the mascon model, for which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around the elongated, shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Notably, the proposed algorithm is generic and can be conveniently extended to explore periodic motions in other gravitational systems.

  2. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    Science.gov (United States)

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and constraints of the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions in multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm in the reaction transport simulator

  3. Efficiently computing exact geodesic loops within finite steps.

    Science.gov (United States)

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, and so the resultant loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm takes only O(k) space complexity and O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and can be applied directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  4. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

    The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control.   The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from a theory; (3) discuss their advantages and drawbacks, areas of applicability, give recommendations and examples.

  5. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
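
    The stack distance these algorithms compute has a compact serial definition: the distance of a reference is the current depth of its tag in the LRU stack, with a cold miss conventionally assigned infinite distance. A minimal Python sketch of that serial baseline:

    ```python
    # Serial LRU stack-distance computation; the parallel algorithms above
    # compute the same quantity. None marks a cold (first-reference) miss.
    def stack_distances(trace):
        stack, dists = [], []
        for tag in trace:
            if tag in stack:
                depth = stack.index(tag)
                dists.append(depth + 1)
                stack.pop(depth)
            else:
                dists.append(None)        # first reference: infinite distance
            stack.insert(0, tag)          # move tag to the top of the stack
        return dists

    # A reference hits in an LRU cache of size C iff its stack distance <= C.
    print(stack_distances(['a', 'b', 'a', 'c', 'b']))  # [None, None, 2, None, 3]
    ```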

  6. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map captured by an APS detector includes several noise sources that have a great impact on the accuracy of the centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for star point centroid calculation based on background forecasting is presented in this paper. The experiment proves the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not need calibration data memory. This algorithm has been applied successfully in a certain star tracker.
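
    A hedged sketch of the underlying computation: an intensity-weighted centroid over a star window after background subtraction. The paper's background-forecast model is replaced here by a simple median estimate, so this shows the classic baseline rather than the proposed algorithm; names and the example window are illustrative.

    ```python
    import numpy as np

    # Intensity-weighted centroid with a crude (median) background estimate.
    def star_centroid(window):
        img = np.asarray(window, dtype=float)
        img = np.clip(img - np.median(img), 0.0, None)   # remove background
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        total = img.sum()
        return (xs * img).sum() / total, (ys * img).sum() / total

    # Example: a faint star centered near column 2, row 1 of a small window.
    w = np.array([[10, 10, 12, 10, 10],
                  [10, 14, 30, 14, 10],
                  [10, 12, 16, 12, 10],
                  [10, 10, 11, 10, 10]])
    print(star_centroid(w))   # approximately (2.0, 1.2)
    ```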

  7. Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface

    International Nuclear Information System (INIS)

    Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho

    2005-01-01

    While the SIMPLE algorithm is most widely used for simulations of flow phenomena that take place in industrial equipment or manufacturing processes, it is less often adopted for simulations of free surface flow. Though the SIMPLE algorithm is free from the time-step limitation, the free surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme includes a numerical method to solve the simultaneous equations in its procedure. If the computation time of the SIMPLE algorithm can be reduced when it is applied to unsteady free surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm is presented for free surface flow. The broken water column problem is adopted for the validation of the modified algorithm (MoSIMPLE) and for comparison with the conventional SIMPLE algorithm

  8. Signal Timing Optimization for Corridors with Multiple Highway-Rail Grade Crossings Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yifeng Chen

    2018-01-01

    Full Text Available Safety and efficiency are two critical issues at highway-rail grade crossings (HRGCs) and their nearby intersections. Standard traffic signal optimization programs are not designed to work on roadway networks that contain multiple HRGCs, because their underlying assumption is that the roadway traffic is in a steady state. During a train event, steady-state conditions do not occur. This is particularly true for corridors that experience high train traffic (e.g., over 2 trains per hour). In this situation, the non-steady-state conditions predominate. This paper develops a simulation-based methodology for optimizing the traffic signal timing plan on corridors of this kind. The primary goal is to maximize safety, and the secondary goal is to minimize delay. A Genetic Algorithm (GA) was used as the optimization approach in the proposed methodology. A new transition preemption strategy for dual tracks (TPS_DT) and a train arrival prediction model were integrated in the proposed methodology. An urban road network with multiple HRGCs in Lincoln, NE, was used as the study network. The microsimulation model VISSIM was used for evaluation purposes and was calibrated to local traffic conditions. A sensitivity analysis with different train traffic scenarios was conducted. It was concluded that the methodology can significantly improve both the safety and efficiency of traffic corridors with HRGCs.

  9. GraDit: graph-based data repair algorithm for multiple data edits rule violations

    Science.gov (United States)

    Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Constraint-based data cleaning captures data violations of a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally, they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations. That algorithm uses an undirected hypergraph to represent rule violations. Nevertheless, it cannot be applied to data edits because of different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of the overall defined rules. This representation is used to capture the interaction between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with an undirected graph as the violation representation model gave better data quality than the algorithm with an undirected hypergraph as the representation model.

  10. Protein structure modeling for CASP10 by multiple layers of global optimization.

    Science.gov (United States)

    Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2014-02-01

    In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.

  11. Overview of fast algorithm in 3D dynamic holographic display

    Science.gov (United States)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) for the point-based method, and full analytical and one-step methods for the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based method and the polygon-based method, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method based on the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  12. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    Science.gov (United States)

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison, and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach across different instrumental platforms. This approach was further validated by application to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Computationally Efficient DOA Tracking Algorithm in Monostatic MIMO Radar with Automatic Association

    Directory of Open Access Journals (Sweden)

    Huaxin Yu

    2014-01-01

    Full Text Available We consider the problem of tracking the directions of arrival (DOA) of multiple moving targets in monostatic multiple-input multiple-output (MIMO) radar. A low-complexity DOA tracking algorithm in monostatic MIMO radar is proposed. The proposed algorithm obtains DOA estimates via the difference between the previous and current covariance matrices of the reduced-dimension transformed signal, and it reduces the computational complexity and realizes automatic association in DOA tracking. Error analysis and the Cramér-Rao lower bound (CRLB) of DOA tracking are derived in the paper. The proposed algorithm not only can be regarded as an extension of the array-signal-processing DOA tracking algorithm in Zhang et al. (2008), but is also an improved version of that algorithm. Furthermore, the proposed algorithm has better DOA tracking performance than the DOA tracking algorithm in Zhang et al. (2008). The simulation results demonstrate the effectiveness of the proposed algorithm. Our work provides technical support for the practical application of MIMO radar.

  14. Fast algorithm for computing complex number-theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix FFT algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.

  15. A Line-Based Adaptive-Weight Matching Algorithm Using Loopy Belief Propagation

    Directory of Open Access Journals (Sweden)

    Hui Li

    2015-01-01

    Full Text Available In traditional adaptive-weight stereo matching, the rectangular support region requires excessive memory consumption and time. We propose a novel line-based stereo matching algorithm for obtaining a more accurate disparity map with low computational complexity. This algorithm can be divided into two steps: disparity map initialization and disparity map refinement. In the initialization step, a new adaptive-weight model based on a linear support region is put forward for cost aggregation. In this model, a neural network is used to evaluate spatial proximity, and the mean-shift segmentation method is used to improve the accuracy of color similarity; the Birchfield pixel dissimilarity function and the census transform are adopted to establish the dissimilarity measurement function. The initial disparity map is then obtained by loopy belief propagation. In the refinement step, the disparity map is optimized by an iterative left-right consistency checking method and a segmentation voting method. The parameter values involved in this algorithm are determined through extensive simulation experiments to further improve the matching quality. Simulation results indicate that this new matching method performs well on standard stereo benchmarks, and that its running time is remarkably lower than that of algorithms with rectangular support regions.

  16. Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei; Zhang, Lei

    2015-01-01

    Highlights:
    • Four hybrid algorithms are proposed for the wind speed decomposition.
    • Adaboost algorithm is adopted to provide a hybrid training framework.
    • MLP neural networks are built to do the forecasting computation.
    • Four important network training algorithms are included in the MLP networks.
    • All the proposed hybrid algorithms are suitable for the wind speed predictions.

    Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for the high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and the MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks, including GD-ALR-BP algorithm, GDM-ALR-BP algorithm, CG-BP-FR algorithm and BFGS algorithm. The aim of the study is to investigate the promoted forecasting percentages of the MLP neural networks by the Adaboost algorithm's optimization under various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. Two experimental results show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for the wind speed predictions; (2) the Adaboost algorithm has promoted the forecasting performance of the MLP neural networks considerably; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improved percentages of the MLP neural networks by the Adaboost algorithm decrease step by step with the following sequence of training algorithms as: GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS

  17. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    Science.gov (United States)

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and perform parameter optimisation on the oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.

  18. Realization of quantum gates with multiple control qubits or multiple target qubits in a cavity

    Science.gov (United States)

    Waseem, Muhammad; Irfan, Muhammad; Qamar, Shahid

    2015-06-01

    We propose a scheme to realize a three-qubit controlled phase gate and a multi-qubit controlled NOT gate of one qubit simultaneously controlling n target qubits with a four-level quantum system in a cavity. The implementation time for the multi-qubit controlled NOT gate is independent of the number of qubits. The three-qubit phase gate is generalized to an n-qubit phase gate with multiple control qubits. The number of steps is reduced linearly compared to the conventional gate decomposition method. Our scheme can be applied to various types of physical systems, such as superconducting qubits coupled to a resonator and trapped atoms in a cavity. Our scheme does not require adjustment of level spacing during the gate implementation. We also show the implementation of the Deutsch-Jozsa algorithm. Finally, we discuss the imperfections due to cavity decay and the possibility of physical implementation of our scheme.

  19. Rotor Cascade Shape Optimization with Unsteady Passing Wakes Using Implicit Dual-Time Stepping and a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Eun Seok Lee

    2003-01-01

    Full Text Available An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain improved aerodynamic performance using an unsteady-flow, Reynolds-averaged Navier-Stokes equations solver based on explicit finite differences, Runge-Kutta multistage time marching, and the diagonalized alternating-direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as the optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously by using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.

  20. FEM simulation of multi step forming of thick sheet

    NARCIS (Netherlands)

    Wisselink, H.H.; Huetink, Han

    2004-01-01

    A case study has been performed on the forming of an industrial product. This product, a bracket, is made of 5 mm thick sheet in multiple steps. The process consists of a bending step followed by a drawing and a flanging step. FEM simulations have been used to investigate this forming process. First,

  1. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described...... as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The results are algorithms that automatically...... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al....

  2. Seismic active control by a heuristic-based algorithm

    International Nuclear Information System (INIS)

    Tang, Yu.

    1996-01-01

    A heuristic-based algorithm for seismic active control is generalized to permit consideration of the effects of control-structure interaction and actuator dynamics. The control force is computed one time step ahead of being applied to the structure. Therefore, the proposed control algorithm is free from the problem of time delay. A numerical example is presented to show the effectiveness of the proposed control algorithm. Two indices are also introduced in the paper to assess the effectiveness and efficiency of control laws

  3. Boris push with spatial stepping

    International Nuclear Information System (INIS)

    Penn, G; Stoltz, P H; Cary, J R; Wurtele, J

    2003-01-01

    The Boris push is commonly used in plasma physics simulations because of its speed and stability. It is second-order accurate, requires only one field evaluation per time step, and has good conservation properties. However, for accelerator simulations it is convenient to propagate particles in z down a changing beamline. A 'spatial Boris push' algorithm has been developed which is similar to the Boris push but uses a spatial coordinate as the independent variable, instead of time. This scheme is compared to the fourth-order Runge-Kutta algorithm, for two simplified muon beam lattices: a uniform solenoid field, and a 'FOFO' lattice where the solenoid field varies sinusoidally along the axis. Examination of the canonical angular momentum, which should be conserved in axisymmetric systems, shows that the spatial Boris push improves accuracy over long distances
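
    For context, the standard time-domain Boris push that the spatial variant adapts splits the electric impulse into two half kicks around an exact-magnitude magnetic rotation, which is why it conserves speed in a pure magnetic field. A nonrelativistic Python sketch (units and field values illustrative):

    ```python
    import numpy as np

    # Standard (time-domain) Boris push: half electric kick, magnetic
    # rotation, second half electric kick. Nonrelativistic sketch.
    def boris_push(x, v, E, B, q_over_m, dt):
        v_minus = v + 0.5 * q_over_m * E * dt        # first half electric kick
        t = 0.5 * q_over_m * B * dt                  # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)      # magnetic rotation
        v_new = v_plus + 0.5 * q_over_m * E * dt     # second half electric kick
        return x + v_new * dt, v_new

    # Example: gyration of a particle in a uniform B field along z.
    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    for _ in range(100):
        x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                          q_over_m=1.0, dt=0.1)
    print(np.linalg.norm(v))   # speed is conserved to machine precision
    ```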

  4. Walking pattern classification and walking distance estimation algorithms using gait phase information.

    Science.gov (United States)

    Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen

    2012-10-01

    This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree based on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm. The walking distance estimation algorithm consists of the processes of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithm have been validated by a series of experiments. The accuracy of the proposed walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively. The accuracy of the proposed walking distance estimation algorithm was 96.42% over a walking distance.
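
    The gait-phase pipeline itself is not reproduced in this abstract; the sketch below shows only the generic skeleton of the distance estimator, counting steps from acceleration-magnitude peaks and multiplying by a step length. The peak threshold, refractory period, and fixed step length are illustrative assumptions, not the paper's values.

    import numpy as np

    def estimate_walking_distance(acc_mag, fs, threshold=11.0, step_length=0.7):
        """Count steps as local maxima of the acceleration magnitude (m/s^2)
        above a threshold, with a refractory gap so one stride is not
        counted twice, then multiply by a fixed step length (m)."""
        min_gap = int(0.3 * fs)              # allow at most ~3 steps per second
        steps, last = 0, -min_gap
        for i in range(1, len(acc_mag) - 1):
            is_peak = (acc_mag[i] > threshold
                       and acc_mag[i] >= acc_mag[i - 1]
                       and acc_mag[i] >= acc_mag[i + 1])
            if is_peak and i - last >= min_gap:
                steps += 1
                last = i
        return steps, steps * step_length    # (step count, distance in meters)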

  5. Direction of Radio Finding via MUSIC (Multiple Signal Classification) Algorithm for Hardware Design System

    Science.gov (United States)

    Zhang, Zheng

    2017-10-01

    Radio direction finding systems are based on digital signal processing algorithms, which make them capable of locating and tracking signals. The performance of radio direction finding depends significantly on the effectiveness of these algorithms. Direction of Arrival (DOA) algorithms are used to estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates an implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm performs well for radio direction finding.
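
    As background for the record above, a compact NumPy sketch of the core MUSIC computation on a uniform linear array: form the sample covariance, take the noise subspace from its eigendecomposition, and scan steering vectors for pseudospectrum peaks. The half-wavelength spacing and scan grid are illustrative assumptions.

    import numpy as np

    def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
        """MUSIC pseudospectrum for a uniform linear array.
        X: (n_antennas, n_snapshots) complex snapshots; d: element
        spacing in wavelengths (0.5 assumed here)."""
        n = X.shape[0]
        R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
        w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
        En = V[:, : n - n_sources]                 # noise-subspace eigenvectors
        k = np.arange(n)
        P = []
        for theta in np.deg2rad(angles):
            a = np.exp(2j * np.pi * d * k * np.sin(theta))   # steering vector
            P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return angles, np.array(P)                 # peaks of P mark the DOAs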

  6. A multilevel-skin neighbor list algorithm for molecular dynamics simulation

    Science.gov (United States)

    Zhang, Chenglong; Zhao, Mingcan; Hou, Chaofeng; Ge, Wei

    2018-01-01

    Searching for interaction pairs and organizing the interaction processes are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps; a thicker skin reduces the frequency of list updates but is offset by extra distance-check computation for the particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which can reduce unnecessary computation on inter-particle distances. The performance advantages over traditional methods are then analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized for various discrete simulations using neighbor lists.
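
    For contrast with the multilevel design above (which this sketch does not reproduce), a conventional single-skin Verlet list in NumPy: pairs within cutoff + skin are stored, and the list is rebuilt once any particle has moved more than half the skin.

    import numpy as np

    def build_verlet_list(pos, cutoff, skin):
        """Brute-force O(N^2) neighbor list: store all pairs within
        cutoff + skin so the list stays valid under small displacements."""
        r_list = cutoff + skin
        pairs = []
        for i in range(len(pos) - 1):
            d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
            for j in np.nonzero(d < r_list)[0]:
                pairs.append((i, i + 1 + j))
        return pairs

    def needs_rebuild(pos, pos_at_build, skin):
        """Standard half-skin criterion for triggering a list rebuild."""
        return np.max(np.linalg.norm(pos - pos_at_build, axis=1)) > 0.5 * skin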

  7. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Q. Zhou

    2017-07-01

    Full Text Available Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when feature points in the left image at the next epoch are matched against the current left image, EDC and RANSAC are performed iteratively. Since exceptional mismatched points may still remain in some cases, RANSAC is applied a third time to eliminate the effects of those outliers in the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
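
    The abstract does not spell out the EDC formulation; below is one plausible reading in NumPy, flagging matches whose displacement vector deviates from the median displacement by more than a pixel threshold. In a full pipeline the surviving matches would then go through RANSAC, as described above. The threshold value is an illustrative assumption.

    import numpy as np

    def edc_filter(pts_a, pts_b, threshold=3.0):
        """Distance-constraint outlier filter for matched keypoints.
        pts_a, pts_b: (N, 2) pixel coordinates of the matches in the two
        images. Matches far from the median displacement are dropped."""
        disp = pts_b - pts_a                                   # per-match displacement
        residual = np.linalg.norm(disp - np.median(disp, axis=0), axis=1)
        keep = residual < threshold                            # inlier mask
        return pts_a[keep], pts_b[keep], keep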

  8. New Method of Calculating a Multiplication by using the Generalized Bernstein-Vazirani Algorithm

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed

    2018-06-01

    We present a new method of more speedily calculating a multiplication by using the generalized Bernstein-Vazirani algorithm and many parallel quantum systems. Given the set of real values a_1, a_2, a_3, ..., a_N and a function g: R → {0,1}, we determine the values g(a_1), g(a_2), g(a_3), ..., g(a_N) simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Next, we consider the result as a number in binary representation: M_1 = (g(a_1), g(a_2), g(a_3), ..., g(a_N)). By using M parallel quantum systems, we obtain M such numbers in binary representation simultaneously. The speed of obtaining the M numbers is shown to outperform the classical case by a factor of M. Finally, we calculate the product M_1 × M_2 × ... × M_M. The speed of obtaining the product is shown to outperform the classical case by a factor of N × M.

  9. ALGORITHM TO CHOOSE ENERGY GENERATION MULTIPLE ROLE STATION

    Directory of Open Access Journals (Sweden)

    Alexandru STĂNESCU

    2014-05-01

    Full Text Available This paper proposes an algorithm based on a complex analysis method for choosing the configuration of a power station. The station generates electric energy and hydrogen, and serves a "green" highway. The elements that need to be considered are: energy efficiency, location, availability of primary energy sources in the area, investment cost, workforce, environmental impact, compatibility with existing systems, and mean time between failures.

  10. Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment.

    Science.gov (United States)

    Prevost, Luanna B; Lemons, Paula P

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. © 2016 L. B. Prevost and P. P. Lemons. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  11. Towards the run and walk activity classification through step detection--an android application.

    Science.gov (United States)

    Oner, Melis; Pulcifer-Stump, Jeffry A; Seeling, Patrick; Kaya, Tolga

    2012-01-01

    Falling is one of the most common accidents with potentially irreversible consequences, especially for special groups such as the elderly or disabled. One approach to this issue is early detection of the falling event. Towards the goal of early fall detection, we have worked on distinguishing and monitoring basic human activities such as walking and running. Since we plan to implement the system mostly for seniors and the disabled, simplicity of usage is very important. We have successfully implemented an algorithm that does not require the acceleration sensor (the smartphone itself in our application) to be fixed in a specific position, whereas most previous research dictates that the sensor be fixed in a certain direction. This algorithm reviews data from the accelerometer to determine whether a user has taken a step and keeps track of the total number of steps. In testing, the algorithm was more accurate than a commercial pedometer when comparing outputs to the actual number of steps taken by the user.
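
    The Android implementation itself is not included in this record; a minimal sketch of a detector in the same spirit, using the magnitude of the 3-axis acceleration (which does not depend on how the phone is oriented) with hysteresis between two thresholds. The threshold values are illustrative assumptions.

    import math

    def detect_step(sample, state, high=12.0, low=9.0):
        """Feed one (ax, ay, az) accelerometer sample (m/s^2); a step is
        counted on the falling edge after the magnitude crossed `high`."""
        ax, ay, az = sample
        mag = math.sqrt(ax * ax + ay * ay + az * az)   # orientation-free signal
        if not state["armed"] and mag > high:
            state["armed"] = True                      # rising edge: candidate step
        elif state["armed"] and mag < low:
            state["armed"] = False                     # falling edge: count it
            state["steps"] += 1
        return state

    state = {"armed": False, "steps": 0}               # fold samples through detect_step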

  12. A new hybrid genetic algorithm for optimizing the single and multivariate objective functions

    Energy Technology Data Exchange (ETDEWEB)

    Tumuluru, Jaya Shankar [Idaho National Laboratory; McCulloch, Richard Chet James [Idaho National Laboratory

    2015-07-01

    In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent) computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit then the step size is reduced by a factor of 2, thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature) pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
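
    A minimal sketch of the adaptive steepest-ascent component described above (written for maximization), with the step halved when the best direction reverses; the genetic algorithm, bounds, and objective weighting are omitted. The initial step size and stopping tolerance are illustrative.

    import numpy as np

    def adaptive_ascent(f, x0, step=0.1, iters=200, eps=1e-6):
        """Perturb each variable in both directions, move along the
        coordinate with the largest improvement, and halve the step when
        that direction is the reverse of the previous move."""
        x = np.asarray(x0, dtype=float)
        prev_move = None
        for _ in range(iters):
            base = f(x)
            gains, moves = [], []
            for i in range(len(x)):
                for sign in (1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sign * step
                    gains.append(f(trial) - base)
                    moves.append((i, sign))
            k = int(np.argmax(gains))
            if gains[k] <= eps:                 # no direction improves: stop
                break
            i, sign = moves[k]
            if prev_move == (i, -sign):         # direction flipped: adapt step
                step *= 0.5
            x[i] += sign * step
            prev_move = (i, sign)
        return x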

  13. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    Science.gov (United States)

    Wen, Fang-Qing; Zhang, Gong; Ben, De

    2015-11-01

    This paper addresses the direction of arrival (DOA) estimation problem for co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm gives more accurate DOA estimates than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.

  14. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.

  16. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    KAUST Repository

    Quintin, Jean-Noel; Hasanov, Khalid; Lastovetsky, Alexey

    2013-01-01

    Matrix multiplication is a very important computation kernel both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm which dates back to 1969 was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm as it can be used on a nonsquare number of processors as well. Since then the number of processors in HPC platforms has increased by two orders of magnitude making the contribution of communication in the overall execution time more significant. Therefore, the state of the art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate the reduction of communication cost up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.

  17. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  18. Grade Distribution Modeling within the Bauxite Seams of the Wachangping Mine, China, Using a Multi-Step Interpolation Algorithm

    Directory of Open Access Journals (Sweden)

    Shaofeng Wang

    2017-05-01

    Full Text Available Mineral reserve estimation and mining design depend on precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including a 1D biharmonic spline estimator for interpolating floor altitudes; 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for the grade distribution on the two vertical sections at the roadways; and 3D linear interpolation for the grade distribution between sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, the multi-step interpolation using the natural neighbor method shows optimal stability and a minimal difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, root mean squared errors, and relative standard deviations of errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% for Wa, and 1.761, 1.974, and 67.37% for Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
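
    Of the 2D estimators compared above, inverse distance weighting is the most compact to sketch (the study found natural neighbor interpolation the most stable, but it requires a triangulation and is omitted here). The power parameter below is a conventional default, not the paper's setting.

    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0):
        """Inverse-distance-weighted estimate of grade at query points.
        xy_known: (N, 2) sample locations; values: (N,) grades;
        xy_query: (M, 2) points to estimate."""
        out = np.empty(len(xy_query))
        for k, q in enumerate(xy_query):
            d = np.linalg.norm(xy_known - q, axis=1)
            if np.any(d < 1e-12):               # query coincides with a sample
                out[k] = values[np.argmin(d)]
                continue
            w = 1.0 / d ** power                # closer samples weigh more
            out[k] = np.sum(w * values) / np.sum(w)
        return out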

  19. A Clustal Alignment Improver Using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Rene; Fogel, Gary B.; Krink, Thimo

    2002-01-01

    Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the wellknown Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...

  20. TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS

    Directory of Open Access Journals (Sweden)

    V. P. Lazarenko

    2015-01-01

    Full Text Available Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have a large distortion that makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and omnidirectional wide-angle fish-eye lenses with a viewing angle not less than 180°. This comparison proves that distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of the four calibration methods available in open-source toolkits for omnidirectional optoelectronic systems. The geometrical projection model used for calibration of the omnidirectional optical system is given. The algorithm consists of three basic steps. In the first step, we calculate the field of view of a virtual pinhole PTZ camera; this field of view is characterized by an array of 3D points in the object space. In the second step, the array of pixels corresponding to these three-dimensional points is calculated. Then we calculate the projection function that expresses the relation between a given 3D point in the object space and the corresponding pixel. In this paper we use a calibration procedure providing the projection function for the calibrated instance of the camera. In the last step, the final image is formed pixel by pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm makes it possible to obtain an image for a part of the field of view of an omnidirectional optoelectronic system with corrected distortion from the original omnidirectional image. The algorithm is designed for operation with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses.

  1. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM) framework, an algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.

  2. A new algorithm for coding geological terminology

    Science.gov (United States)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and to link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from this algorithm amount to less than 2%.

  3. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    Directory of Open Access Journals (Sweden)

    Huanhuan Li

    2017-08-01

    Full Text Available The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, Principal Component Analysis (PCA), a widely used dimensionality reduction method, is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% accumulative contribution rate are extracted by PCA, and the number of centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our
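
    The pairwise measure used in the first step is standard dynamic time warping; a compact Python sketch follows. The distance matrix of the second step is then this function evaluated over all trajectory pairs.

    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a) * len(b)) DTW distance between two trajectories
        given as (n, 2) arrays of positions."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local match cost
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Step two: M[i, j] = dtw_distance(traj[i], traj[j]) for all pairs,
    # then PCA on M as described above.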

  5. Computerized detection of masses on mammograms: A comparative study of two algorithms

    International Nuclear Information System (INIS)

    Tiedeu, A.; Kom, G.; Kom, M.

    2007-02-01

    In this paper, we implement and compare two methods for computer-aided detection of masses on mammograms. The two algorithms each consist of three steps (segmentation, binarization, and noise suppression) but use different techniques for each step. A database of 60 images was used to compare the performance of the two algorithms in terms of general detection efficiency and conservation of the size and shape of detected masses. (author)

  6. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    Science.gov (United States)

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, 21-74 y) performed the CSRT, sensorimotor, balance, and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26). Conclusions The CSRT test has good discriminant and predictive validity for falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.

  7. Schedule Optimization of Imaging Missions for Multiple Satellites and Ground Stations Using Genetic Algorithm

    Science.gov (United States)

    Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee

    2018-04-01

    In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, visibility conflicts of communication and mission operations using satellite resources (electric power and onboard memory) are resolved in sequence, and resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations with S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed for three operation modes: general, commercial, and tactical.

  8. A new taxonomy of sublinear keyword pattern matching algorithms

    NARCIS (Netherlands)

    Cleophas, L.G.W.A.; Watson, B.W.; Zwaan, G.

    2004-01-01

    This paper presents a new taxonomy of sublinear (multiple) keyword pattern matching algorithms. Based on an earlier taxonomy by Watson and Zwaan [WZ96, WZ95], this new taxonomy includes not only suffix-based algorithms related to the Boyer-Moore, Commentz-Walter and Fan-Su algorithms, but

  9. Development of radio frequency interference detection algorithms for passive microwave remote sensing

    Science.gov (United States)

    Misra, Sidharth

    Radio Frequency Interference (RFI) signals are man-made sources that increasingly plague passive microwave remote sensing measurements. RFI is insidious in nature, with some signals low-power enough to go undetected but large enough to impact science measurements and their results. With the launch of the European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite in November 2009 and the upcoming launches of the new NASA sea-surface salinity measuring Aquarius mission in June 2011 and the soil-moisture measuring Soil Moisture Active Passive (SMAP) mission around 2015, active steps are being taken to detect and mitigate RFI at L-band. An RFI detection algorithm was designed for the Aquarius mission. The algorithm performance was analyzed using kurtosis-based RFI ground truth. The algorithm has been developed with several adjustable location-dependent parameters to control the detection statistics (false-alarm rate and probability of detection). The kurtosis statistical detection algorithm has been compared with the Aquarius pulse detection method. The comparative study determines the feasibility of the kurtosis detector for the SMAP radiometer as a primary RFI detection algorithm, in terms of detectability and data bandwidth. The kurtosis algorithm has superior detection capabilities for low duty-cycle radar-like pulses, which are more prevalent according to analysis of field campaign data. Most RFI algorithms developed have generally been optimized for performance with individual pulsed-sinusoidal RFI sources. A new RFI detection model is developed that takes into account multiple RFI sources within an antenna footprint. The performance of the kurtosis detection algorithm under such central-limit conditions is evaluated. The SMOS mission has a unique hardware system, and conventional RFI detection techniques cannot be applied. Instead, an RFI detection algorithm for SMOS is developed and applied in the angular domain. This algorithm compares
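
    For intuition, the kurtosis check reduces to one statistic: Gaussian thermal noise has kurtosis 3, so a block of samples whose kurtosis deviates from 3 is flagged as RFI-contaminated. A minimal sketch, with the deviation tolerance as an illustrative assumption:

    import numpy as np

    def kurtosis_rfi_flag(samples, tol=0.3):
        """Return (flag, kurtosis) for one block of radiometer samples.
        Pulsed RFI typically drives the kurtosis above 3; some
        low-duty-cycle signals can drive it below 3."""
        x = samples - np.mean(samples)
        m2 = np.mean(x ** 2)                 # second central moment
        m4 = np.mean(x ** 4)                 # fourth central moment
        kurt = m4 / m2 ** 2
        return abs(kurt - 3.0) > tol, kurt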

  10. A Unified Differential Evolution Algorithm for Global Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
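
    The paper's unified mutation equation is not reproduced in this abstract; for reference, a minimal NumPy sketch of the classical DE/rand/1/bin strategy that uDE generalizes. Population size, F, and CR below are conventional defaults, not the paper's settings.

    import numpy as np

    def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200):
        """Minimize f over box bounds [(lo, hi), ...] with DE/rand/1/bin."""
        rng = np.random.default_rng(0)
        lo, hi = np.array(bounds, dtype=float).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        fit = np.array([f(p) for p in pop])
        for _ in range(gens):
            for i in range(pop_size):
                idx = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(idx, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)    # rand/1 mutation
                cross = rng.random(len(lo)) < CR             # binomial crossover
                cross[rng.integers(len(lo))] = True          # keep at least one gene
                trial = np.where(cross, mutant, pop[i])
                f_trial = f(trial)
                if f_trial <= fit[i]:                        # greedy selection
                    pop[i], fit[i] = trial, f_trial
        best = int(np.argmin(fit))
        return pop[best], fit[best]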

  11. Novel applications of multitask learning and multiple output regression to multiple genetic trait prediction.

    Science.gov (United States)

    He, Dan; Kuhn, David; Parida, Laxmi

    2016-06-15

    Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait prediction is usually represented as linear regression models. In many cases, for the same set of samples and markers, multiple traits are observed. Some of these traits might be correlated with each other. Therefore, modeling all the multiple traits together may improve the prediction accuracy. In this work, we view the multitrait prediction problem from a machine learning angle: as either a multitask learning problem or a multiple output regression problem, depending on whether different traits share the same genotype matrix or not. We then adapted multitask learning algorithms and multiple output regression algorithms to solve the multitrait prediction problem. We proposed a few strategies to improve the least square error of the prediction from these algorithms. Our experiments show that modeling multiple traits together could improve the prediction accuracy for correlated traits. The programs we used are either public or directly from the referred authors, such as MALSAR (http://www.public.asu.edu/~jye02/Software/MALSAR/) package. The Avocado data set has not been published yet and is available upon request. dhe@us.ibm.com. © The Author 2016. Published by Oxford University Press.

  12. An Improved User Selection Algorithm in Multiuser MIMO Broadcast with Channel Prediction

    Science.gov (United States)

    Min, Zhi; Ohtsuki, Tomoaki

    In multiuser MIMO-BC (Multiple-Input Multiple-Output Broadcasting) systems, user selection is important to achieve multiuser diversity. The optimal user selection algorithm is to try all the combinations of users to find the user group that can achieve the multiuser diversity. Unfortunately, the high calculation cost of the optimal algorithm prevents its implementation. Thus, instead of the optimal algorithm, some suboptimal user selection algorithms were proposed based on semiorthogonality of user channel vectors. The purpose of this paper is to achieve multiuser diversity with a small amount of calculation. For this purpose, we propose a user selection algorithm that can improve the orthogonality of a selected user group. We also apply a channel prediction technique to a MIMO-BC system to get more accurate channel information at the transmitter. Simulation results show that the channel prediction can improve the accuracy of channel information for user selections, and the proposed user selection algorithm achieves higher sum rate capacity than the SUS (Semiorthogonal User Selection) algorithm. Also we discuss the setting of the algorithm threshold. As the result of a discussion on the calculation complexity, which uses the number of complex multiplications as the parameter, the proposed algorithm is shown to have a calculation complexity almost equal to that of the SUS algorithm, and they are much lower than that of the optimal user selection algorithm.

  13. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  14. Adaptive Numerical Algorithms in Space Weather Modeling

    Science.gov (United States)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising of several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  15. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods to attack the cipher text have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of a large amount of data.

  16. A Cost-Effective Tracking Algorithm for Hypersonic Glide Vehicle Maneuver Based on Modified Aerodynamic Model

    Directory of Open Access Journals (Sweden)

    Yu Fan

    2016-10-01

    Full Text Available In order to defend against the hypersonic glide vehicle (HGV), a cost-effective single-model tracking algorithm using the Cubature Kalman filter (CKF) is proposed in this paper, based on a modified aerodynamic model (MAM) as the process equation and a radar measurement model as the measurement equation. In the existing aerodynamic model, the two control variables, attack angle and bank angle, cannot be measured by existing radar equipment, and their control laws cannot be known by defenders. To establish the process equation, the MAM for HGV tracking is proposed by using additive white noise to model the rates of change of the two control variables. For ease of comparison, several multiple model algorithms based on CKF are presented, including the interacting multiple model (IMM) algorithm, the adaptive grid interacting multiple model (AGIMM) algorithm, and the hybrid grid multiple model (HGMM) algorithm. The performances of these algorithms are compared and analyzed according to the simulation results. The simulation results indicate that the proposed tracking algorithm based on the modified aerodynamic model has the best tracking performance, with the best accuracy and least computational cost among all tracking algorithms in this paper. The proposed algorithm is cost-effective for HGV tracking.

  17. Multiple-Vehicle Longitudinal Collision Mitigation by Coordinated Brake Control

    Directory of Open Access Journals (Sweden)

    Xiao-Yun Lu

    2014-01-01

    Full Text Available Rear-end collisions often lead to serious casualties and traffic congestion, and the consequences are even worse for multiple-vehicle collisions. Many previous works focused on collision warning and avoidance strategies for two consecutive vehicles based on onboard sensor detection only. This paper proposes a centralized control strategy for multiple vehicles to minimize the impact of a multiple-vehicle collision, based on vehicle-to-vehicle communication. The system is defined as a coupled group of vehicles with wireless communication capability and short following distances. The safety relationship can be represented as a lower bound limit on the deceleration of the first vehicle and an upper bound on the maximum deceleration of the last vehicle. The objective is to determine the desired deceleration for each vehicle such that the total impact energy is minimized at each time step. The impact energy is defined as the relative kinetic energy between a consecutive pair of vehicles (approaching only). A model predictive control (MPC) framework is used to formulate the problem as constrained quadratic programming. Simulations show its effectiveness for collision mitigation. The developed algorithm has the potential to be used for progressive market penetration of connected vehicles in practice.
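
    The per-step cost being minimized is easy to state in code. A sketch follows, under the assumption that the relative kinetic energy of a pair uses the reduced mass; the paper may define the pair energy differently.

    def total_impact_energy(masses, velocities):
        """Sum relative kinetic energy over consecutive approaching pairs;
        vehicle i leads vehicle i + 1, so a pair contributes only when
        the follower is faster than its leader."""
        energy = 0.0
        for i in range(len(velocities) - 1):
            dv = velocities[i + 1] - velocities[i]   # closing speed
            if dv > 0.0:                             # approaching pairs only
                mu = masses[i] * masses[i + 1] / (masses[i] + masses[i + 1])
                energy += 0.5 * mu * dv * dv         # reduced-mass assumption
        return energy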

  18. M-GCF: Multicolor-Green Conflict Free Scheduling Algorithm for WSN

    DEFF Research Database (Denmark)

    Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.

    2012-01-01

    division multiple access (TDMA) scheduling algorithm, Multicolor-Green Conflict Free (M-GCF), for WSNs. The proposed algorithm finds multiple conflict free slots across a three-hop neighbor view. The algorithm shows better slot sharing with fewer conflicts along with good energy efficiency, throughput...... and delay as compared with state-of-the-art solutions. The results also include the performance of M-GCF with varying traffic rates, which also shows good energy efficiency, throughput and delay. The contribution of this paper and the main reason for the improved performance with varying number of nodes...

  19. A simple fall detection algorithm for Powered Two Wheelers

    OpenAIRE

    BOUBEZOUL, Abderrahmane; ESPIE, Stéphane; LARNAUDIE, Bruno; BOUAZIZ, Samir

    2013-01-01

    The aim of this study is to evaluate a low-complexity fall detection algorithm that uses both acceleration and angular velocity signals to trigger an alert system or to inflate an airbag jacket. The proposed fall detection algorithm is a threshold-based algorithm, using data from 3-axis accelerometer and 3-axis gyroscope sensors mounted on the motorcycle. During the first step, the most common fall accident configurations were selected and analyzed in order to identify the main causation factors. On the
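
    The thresholds and signal combinations of the actual algorithm are not given in this record; a minimal sketch of a detector of the same threshold-based kind, flagging a fall when the acceleration magnitude and the roll rate both exceed fixed limits. Both limits are illustrative assumptions.

    import math

    def fall_flag(acc, gyro, acc_limit=29.4, roll_rate_limit=2.0):
        """acc: (ax, ay, az) in m/s^2; gyro: (roll, pitch, yaw) rates in
        rad/s. Returns True when both the total acceleration magnitude
        and the roll rate exceed their limits."""
        acc_mag = math.sqrt(acc[0] ** 2 + acc[1] ** 2 + acc[2] ** 2)
        return acc_mag > acc_limit and abs(gyro[0]) > roll_rate_limit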

  20. Joint Angle and Frequency Estimation Using Multiple-Delay Output Based on ESPRIT

    Science.gov (United States)

    Xudong, Wang

    2010-12-01

    This paper presents a novel ESPRIT-based joint angle and frequency estimation algorithm using multiple-delay output (MDJAFE). The algorithm can estimate the joint angles and frequencies, and the use of multiple delayed outputs greatly improves the estimation accuracy compared with a conventional algorithm. The effectiveness of the proposed algorithm is verified by simulations.

  1. TESTING THE GENERALIZATION EFFICIENCY OF OIL SLICK CLASSIFICATION ALGORITHM USING MULTIPLE SAR DATA FOR DEEPWATER HORIZON OIL SPILL

    Directory of Open Access Journals (Sweden)

    C. Ozkan

    2012-07-01

    Full Text Available Marine oil spills due to releases of crude oil from tankers, offshore platforms, drilling rigs, wells, etc. seriously affect the fragile marine and coastal ecosystem and cause political and environmental concern. A catastrophic explosion and subsequent fire on the Deepwater Horizon oil platform caused the platform to burn and sink, and oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m3 of crude oil into the Gulf of Mexico. Today, space-borne SAR sensors are extensively used for the detection of oil spills in the marine environment, as they are independent of sunlight, are not affected by cloudiness, and are more cost-effective than air patrolling due to covering large areas. In this study, the generalization extent of an object-based classification algorithm was tested for oil spill detection using multiple SAR imagery data. Among many geometrical, physical, and textural features, some of the more distinctive ones were selected to distinguish oil and look-alike objects from each other. The tested classifier was constructed from a Multilayer Perceptron Artificial Neural Network trained by the ABC, LM, and BP optimization algorithms. The training data for the classifier were drawn from SAR data of the oil spill that originated off Lebanon in 2007. The classifier was then applied to the Deepwater Horizon oil spill data in the Gulf of Mexico, on RADARSAT-2 and ALOS PALSAR images, to demonstrate the generalization efficiency of the oil slick classification algorithm.

  2. A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition

    OpenAIRE

    De Sterck, Hans

    2011-01-01

    A new algorithm is presented for computing a canonical rank-R tensor approximation that has minimal distance to a given tensor in the Frobenius norm, where the canonical rank-R tensor consists of the sum of R rank-one components. Each iteration of the method consists of three steps. In the first step, a tentative new iterate is generated by a stand-alone one-step process, for which we use alternating least squares (ALS). In the second step, an accelerated iterate is generated by a nonlinear g...

  3. Algorithms for image recovery calculation in extended single-shot phase-shifting digital holography

    Science.gov (United States)

    Hasegawa, Shin-ya; Hirata, Ryo

    2018-04-01

    The single-shot phase-shifting method of image recovery using an inclined reference wave has the advantages of reducing the effects of vibration, being capable of operating in real time, and affording low-cost sensing. In this method, relatively low reference angles are required compared with the conventional method, which uses phase shifts between three or four pixels. We propose an extended single-shot phase-shifting technique which uses the multiple-step phase-shifting algorithm and a corresponding number of pixels equal to the period of an interference fringe. We have verified the theory underlying this recovery method by means of Fourier spectral analysis, and its effectiveness by evaluating the visibility of the image using a high-resolution pattern. Finally, we have demonstrated high-contrast image recovery experimentally using a resolution chart. This method can be used in a variety of applications such as color holographic interferometry.
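
    A minimal sketch of the standard N-step phase-shifting recovery that the extended method builds on: with N frames I_k = A + B*cos(phi + 2*pi*k/N), the wrapped phase follows from sine- and cosine-weighted sums over the frames.

    import numpy as np

    def n_step_phase(frames):
        """frames: (N, H, W) array of intensity images with equal phase
        steps of 2*pi/N. Returns the wrapped phase map in (-pi, pi]."""
        I = np.asarray(frames, dtype=float)
        N = I.shape[0]
        k = np.arange(N).reshape(-1, 1, 1)
        s = np.sum(I * np.sin(2.0 * np.pi * k / N), axis=0)
        c = np.sum(I * np.cos(2.0 * np.pi * k / N), axis=0)
        return -np.arctan2(s, c)     # unwrapping (as discussed above) comes next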

  4. A Novel DOA Estimation Algorithm Using Array Rotation Technique

    Directory of Open Access Journals (Sweden)

    Xiaoyu Lan

    2014-03-01

    Full Text Available The performance of traditional direction of arrival (DOA) estimation algorithms based on a uniform circular array (UCA) is constrained by the array aperture. Furthermore, the array requires more antenna elements than targets, which increases the size and weight of the device and causes higher energy loss. In order to solve these issues, a novel low-energy algorithm utilizing array baseline rotation for multiple-target estimation is proposed. By rotating two elements and setting a fixed time delay, an even number of elements is selected to form a virtual UCA. Then, the received signal data are sampled at multiple positions, which greatly improves array element utilization. 2D-DOA estimation of the rotation array is accomplished via the multiple signal classification (MUSIC) algorithm. Finally, the Cramer-Rao bound (CRB) is derived, and simulation results verify the effectiveness of the proposed algorithm, with high resolution and estimation accuracy. Besides, because of the significant reduction in the number of array elements, the antenna array system is much simpler and less complex than a traditional array.

  5. M3: Matrix Multiplication on MapReduce

    DEFF Research Database (Denmark)

    Silvestri, Francesco; Ceccarello, Matteo

    2015-01-01

    M3 is a Hadoop library for performing dense and sparse matrix multiplication in MapReduce. The library is based on multi-round algorithms exploiting the 3D decomposition of the problem.

  6. Quantum algorithms for computational nuclear physics

    Directory of Open Access Journals (Sweden)

    Višňák Jakub

    2015-01-01

    Full Text Available While quantum algorithms have been studied as an efficient tool for stationary state energy determination in the case of molecular quantum systems, no similar study for analogous problems in computational nuclear physics (computation of energy levels of nuclei from empirical nucleon-nucleon or quark-quark potentials) has been realized yet. Although the difference between the above-mentioned studies might seem negligible, it will be examined. First steps towards a particular simulation (on a classical computer) of the Iterative Phase Estimation Algorithm for deuterium and tritium nuclei energy level computation will be carried out, with the aim of proving algorithm feasibility (and extensibility to heavier nuclei) for its possible practical realization on a real quantum computer.

  7. Evaluation of expansion algorithm of measurement range suited for 3D shape measurement using two pitches of projected grating with light source-stepping method

    Science.gov (United States)

    Sakaguchi, Toshimasa; Fujigaki, Motoharu; Murata, Yorinobu

    2015-03-01

    An accurate and wide-range shape measurement method is required in the industrial field. The same technique can be used for shape measurement of a human body in the garment industry. Compact 3D shape measurement equipment is also required for embedding in inspection systems. A phase-shifting method can measure a shape with high spatial resolution because the coordinates can be obtained pixel by pixel. A key device for developing compact equipment is the grating projector. The authors developed a linear LED projector and proposed a light-source-stepping method (LSSM) using it. With this method, the shape measurement equipment can be produced compactly and at low cost, without any mechanical phase-shifting system. It also enables 3D shapes to be measured in a very short time by switching the light sources quickly. A phase-unwrapping method is necessary to widen the measurement range with constant accuracy in phase-shifting methods. A common and simple approach unwraps the phase using two different grating pitches; it is, however, difficult to apply this conventional phase-unwrapping algorithm to the LSSM. The authors therefore developed an expansion unwrapping algorithm for the LSSM. In this paper, an expansion algorithm of the measurement range suited for 3D shape measurement using two pitches of projected grating with the LSSM is evaluated.
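
    For reference, a minimal two-pitch phase-unwrapping step of the conventional kind discussed above (not the authors' expansion algorithm for the LSSM): the beat of the two wrapped phases has an equivalent pitch p1*p2/(p2 - p1), assumed to cover the whole measurement range, and fixes the fringe order of the finer phase:

        import numpy as np

        def unwrap_two_pitch(phi1, phi2, p1, p2):
            # phi1, phi2: wrapped phases (radians) measured with pitches p1 < p2.
            beat = np.mod(phi1 - phi2, 2 * np.pi)     # coarse, unambiguous phase
            p_eq = p1 * p2 / (p2 - p1)                # equivalent (beat) pitch
            coarse = beat * p_eq / p1                 # estimate of the total phi1
            order = np.round((coarse - phi1) / (2 * np.pi))
            return phi1 + 2 * np.pi * order           # unwrapped fine phase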

  8. A GPU-accelerated semi-implicit fractional step method for numerical solutions of incompressible Navier-Stokes equations

    Science.gov (United States)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2017-11-01

    Utility of the computational power of modern Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered a good candidate for evaluating the potential of GPUs for solving Navier-Stokes equations with non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink, supported by the Pascal architecture. Performance of the present method is evaluated on multiple Tesla P100 GPUs and compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).
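
    Each ADI sweep reduces the implicit step to many independent tridiagonal systems, solved with the Thomas algorithm; its serial forward/backward recurrences are what make the method bandwidth-bound and naturally suited to batching one system per GPU thread. A minimal serial version for one system:

        import numpy as np

        def thomas(a, b, c, d):
            # Tridiagonal solve: a = sub-diagonal (a[0] unused), b = diagonal,
            # c = super-diagonal (c[-1] unused), d = right-hand side.
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                      # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):             # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x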

  9. A Multiple-Label Guided Clustering Algorithm for Historical Document Dating and Localization.

    Science.gov (United States)

    He, Sheng; Samara, Petros; Burgers, Jan; Schomaker, Lambert

    2016-11-01

    It is of essential importance for historians to know the date and place of origin of the documents they study. It would be a huge advancement for historical scholars if it were possible to automatically estimate the geographical and temporal provenance of a handwritten document by inferring them from its handwriting style. We propose a multiple-label guided clustering algorithm to discover the correlations between the concrete low-level visual elements in historical documents and abstract labels, such as date and location. First, a novel descriptor, called the histogram of orientations of handwritten strokes, is proposed to extract and describe the visual elements; it is built on a scale-invariant polar-feature space. In addition, the multi-label self-organizing map (MLSOM) is proposed to discover the correlations between the low-level visual elements and their labels in a single framework. The proposed MLSOM can be used to predict the labels directly. Moreover, the MLSOM can also be considered a pre-structured clustering method to build a codebook, which contains more discriminative information on date and geography. The experimental results on the medieval paleographic scale data set demonstrate that our method achieves state-of-the-art results.
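
    For orientation, a single training step of a plain self-organizing map is sketched below; MLSOM builds on this update by attaching multiple labels (date, location) to the map units. Grid size, decay schedule, and rates are illustrative choices:

        import numpy as np

        def som_step(weights, x, t, n_iters, sigma0=3.0, lr0=0.5):
            # weights: (g, g, dim) unit prototypes; x: (dim,) feature vector.
            g = weights.shape[0]
            d = np.linalg.norm(weights - x, axis=2)            # distance map
            bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            decay = np.exp(-t / n_iters)                       # shrink over time
            sigma, lr = sigma0 * decay, lr0 * decay
            ii, jj = np.meshgrid(np.arange(g), np.arange(g), indexing="ij")
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, :, None] * (x - weights)      # neighborhood pull
            return weights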

  10. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    Directory of Open Access Journals (Sweden)

    Ambika Ramamoorthy

    2016-01-01

    Full Text Available Power grids become smarter nowadays along with technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and the PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed, and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  11. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    Science.gov (United States)

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    Power grids become smarter nowadays along with technological development. The benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and the PQ capacities of DGs (P alone, Q alone, and P and Q both) are also analyzed, and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.
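
    The first optimization step relies on standard particle swarm optimization. A generic sketch follows, where each particle encodes candidate DG capacities and loss is a hypothetical stand-in for the network real-power loss returned by a load-flow calculation:

        import numpy as np

        def pso(loss, lo, hi, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
            # Generic PSO minimizing `loss` over the box [lo, hi].
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
            g = pbest[np.argmin(pbest_f)]
            for _ in range(n_iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([loss(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()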

  12. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory]; Baker, Johnnie W. [Kent State Univ.]

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
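
    For reference, the serial Smith-Waterman recurrence that SWAMP+ builds on is shown below; the associative algorithm evaluates the anti-diagonal wavefronts of H concurrently, which is what yields the O(m+n) parallel time:

        def smith_waterman(s, t, match=2, mismatch=-1, gap=-1):
            # Basic local alignment score (no traceback).
            m, n = len(s), len(t)
            H = [[0] * (n + 1) for _ in range(m + 1)]
            best = 0
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    sub = match if s[i - 1] == t[j - 1] else mismatch
                    H[i][j] = max(0,
                                  H[i - 1][j - 1] + sub,   # match/mismatch
                                  H[i - 1][j] + gap,       # deletion
                                  H[i][j - 1] + gap)       # insertion
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))      # -> 12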

  13. Direct aperture optimization: A turnkey solution for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Shepard, D.M.; Earl, M.A.; Li, X.A.; Naqvi, S.; Yu, C.

    2002-01-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization and instead directly optimize the shapes and the weights of the apertures. We call this approach 'direct aperture optimization'. This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine-dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package as our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
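
    The optimization engine is a standard simulated annealing loop. A generic sketch follows; propose and cost are hypothetical stand-ins for a random deliverable aperture change (respecting MLC constraints) and the dose objective, and are not the authors' implementation:

        import math, random

        def anneal(cost, propose, x0, t0=1.0, t_min=1e-3, alpha=0.95, steps=50):
            # x holds the current solution (e.g. leaf positions and weights).
            x, f = x0, cost(x0)
            t = t0
            while t > t_min:
                for _ in range(steps):
                    y = propose(x)
                    fy = cost(y)
                    # Accept improvements always, uphill moves with Boltzmann prob.
                    if fy <= f or random.random() < math.exp(-(fy - f) / t):
                        x, f = y, fy
                t *= alpha                      # geometric cooling schedule
            return x, f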

  14. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31], which at each step tries to divide the set of objects as evenly as possible. Later, Garey and Graham [28] showed that such an algorithm may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest [35] proved the NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and a uniform probability distribution. Cox et al. [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is NP-hard. © Springer-Verlag Berlin Heidelberg 2011.
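
    The separation heuristic mentioned above amounts to a one-line selection rule: choose the attribute whose test splits the current object set most evenly. A minimal sketch for binary attributes:

        def most_even_attribute(objects, attributes):
            # Pick the binary attribute minimizing the size difference
            # of the two branches it induces.
            def imbalance(attr):
                ones = sum(1 for obj in objects if obj[attr] == 1)
                return abs(len(objects) - 2 * ones)   # |#zeros - #ones|
            return min(attributes, key=imbalance)

        # Example: objects are dicts mapping attribute name -> 0/1.
        objs = [{"a": 1, "b": 0}, {"a": 1, "b": 1},
                {"a": 1, "b": 0}, {"a": 0, "b": 1}]
        print(most_even_attribute(objs, ["a", "b"]))  # -> 'b' (a 2/2 split)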

  15. Frequency up-conversion in nonpolar a-plane GaN/AlGaN based multiple quantum wells optimized for applications with silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Radosavljević, S.; Radovanović, J., E-mail: radovanovic@etf.bg.ac.rs; Milanović, V. [School of Electrical Engineering, University of Belgrade, Bulevar kralja Aleksandra 73, 11200 Belgrade (Serbia)]; Tomić, S. [Joule Physics Laboratory, School of Computing, Science and Engineering, University of Salford, Manchester M5 4WT (United Kingdom)]

    2014-07-21

    We have described a method for optimizing the structural parameters of a GaN/AlGaN multiple-quantum-well based up-converter for silicon solar cells. It involves systematic tuning of individual step quantum wells by means of a genetic algorithm for global optimization. In quantum well structures, the up-conversion process can be achieved by utilizing nonlinear optical effects based on intersubband transitions. Both single and double step quantum wells have been tested in order to maximize the second-order susceptibility derived from the density-matrix formalism. The results obtained for single step wells proved slightly better and have been further pursued to obtain a more complex design, optimized for conversion of an entire range of incident photon energies.
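
    The global optimization step can be illustrated with a minimal real-coded genetic algorithm; fitness is a hypothetical stand-in for the second-order susceptibility computed from the density-matrix formalism for a vector of well/barrier parameters:

        import numpy as np

        def ga_maximize(fitness, lo, hi, pop=40, gens=200, mut=0.1):
            # Real-coded GA over the box [lo, hi], maximizing `fitness`.
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            rng = np.random.default_rng(1)
            P = rng.uniform(lo, hi, size=(pop, lo.size))
            for _ in range(gens):
                f = np.array([fitness(p) for p in P])
                elite = P[np.argsort(f)[::-1][: pop // 2]]    # truncation selection
                pairs = elite[rng.integers(0, len(elite), size=(pop, 2))]
                a = rng.random((pop, lo.size))
                P = a * pairs[:, 0] + (1 - a) * pairs[:, 1]   # blend crossover
                P += mut * (hi - lo) * rng.standard_normal(P.shape)   # mutation
                P = np.clip(P, lo, hi)
            f = np.array([fitness(p) for p in P])
            return P[np.argmax(f)]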

  16. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain)]; Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain)]; Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany)]; Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)]

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo leads to improved stability of MTS and allows larger step sizes in the simulation of complex systems.
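
    The force-splitting idea at the heart of MTS is the RESPA scheme: the expensive slow force is applied with the outer step while the cheap fast force is sub-cycled. A minimal sketch of one such step follows (the GSHMC-family methods additionally filter these trajectories with a Metropolis test on a shadow Hamiltonian):

        import numpy as np

        def respa_step(x, p, f_slow, f_fast, dt, n_inner, mass=1.0):
            # One RESPA-style multiple-time-step leapfrog step.
            p = p + 0.5 * dt * f_slow(x)              # half kick, slow force
            h = dt / n_inner
            for _ in range(n_inner):                  # inner velocity Verlet
                p = p + 0.5 * h * f_fast(x)
                x = x + h * p / mass
                p = p + 0.5 * h * f_fast(x)
            p = p + 0.5 * dt * f_slow(x)              # half kick, slow force
            return x, p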

  17. Multiple Convective Cell Identification and Tracking Algorithm for documenting time-height evolution of measured polarimetric radar and lightning properties

    Science.gov (United States)

    Rosenfeld, D.; Hu, J.; Zhang, P.; Snyder, J.; Orville, R. E.; Ryzhkov, A.; Zrnic, D.; Williams, E.; Zhang, R.

    2017-12-01

    A methodology to track the evolution of the hydrometeors and electrification of convective cells is presented and applied to various convective clouds, from warm showers to supercells. The input radar data are obtained from the polarimetric NEXRAD weather radars. The information on cloud electrification is obtained from Lightning Mapping Arrays (LMA). Documenting the development time and height of the hydrometeors and electrification requires tracking the evolution and lifecycle of convective cells. A new methodology for Multi-Cell Identification and Tracking (MCIT) is presented in this study. This new algorithm is applied to time series of radar volume scans. A cell is defined as a local maximum in the Vertically Integrated Liquid (VIL), and the echo area is divided between cells using a watershed algorithm. The tracking of cells between radar volume scans is done by identifying the two cells in consecutive radar scans that share the maximum common VIL. The vertical profiles of the polarimetric radar properties are used to construct the time-height cross section of the cell properties around the peak reflectivity as a function of height. The LMA sources that occur within the cell area are likewise integrated as a function of height for each time step, as determined by the radar volume scans. The result of the tracking can provide insights into the evolution of storms, hydrometeor types, precipitation initiation, and cloud electrification under different thermodynamic, aerosol, and geographic conditions. The details of the MCIT algorithm, its products, and their performance for different types of storms are described in this poster.
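
    The matching rule at the core of MCIT can be sketched compactly: given the labeled cell masks of two consecutive volume scans (from the watershed step) and the VIL field, each current cell is linked to the previous cell with which it shares the maximum common VIL. Function and argument names are illustrative:

        import numpy as np

        def match_cells(labels_prev, labels_curr, vil):
            # labels_*: integer cell masks (0 = background); vil: current VIL field.
            links = {}
            for c in np.unique(labels_curr):
                if c == 0:
                    continue
                mask = labels_curr == c
                common = {}
                for p in np.unique(labels_prev[mask]):
                    if p != 0:
                        common[p] = vil[mask & (labels_prev == p)].sum()
                links[c] = max(common, key=common.get) if common else None
            return links   # current cell id -> previous cell id (None = new cell)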

  18. The Algorithm of Link Prediction on Social Network

    Directory of Open Access Journals (Sweden)

    Liyan Dong

    2013-01-01

    Full Text Available At present, most link prediction algorithms are based on the similarity between two entities. Social network topology information is one of the main sources used to design the similarity function between entities. But the existing link prediction algorithms do not exploit the network topology information sufficiently. To address this shortcoming of traditional link prediction algorithms, we propose two improved algorithms: the CNGF algorithm, based on local information, and the KatzGF algorithm, based on global network information. Because a social network is not static, we also provide a link prediction algorithm based on the multiple attribute information of nodes. Finally, we verified these algorithms on the DBLP data set, and the experimental results show that the performance of the improved algorithms is superior to that of the traditional link prediction algorithms.
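
    As a hedged illustration of local-information similarity in the spirit of CNGF (the paper's exact formula may differ), the sketch below scores a candidate link by its common neighbors, down-weighting high-degree neighbors as in Adamic-Adar:

        import math

        def cn_degree_similarity(adj, u, v):
            # adj: dict mapping each node to its set of neighbors.
            score = 0.0
            for w in adj[u] & adj[v]:                 # common neighbors
                if len(adj[w]) > 1:
                    score += 1.0 / math.log(len(adj[w]))
            return score

        adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
        print(cn_degree_similarity(adj, 1, 4))        # rank candidate links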

  19. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
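
    The replica-exchange method reviewed here hinges on one line of logic: a swap between neighboring temperatures is accepted with the Metropolis probability min(1, exp[(1/T_i - 1/T_j)(E_i - E_j)]) in units where k_B = 1. A minimal sketch:

        import math, random

        def try_swap(E_i, E_j, T_i, T_j):
            # Metropolis criterion for exchanging the replicas at T_i and T_j,
            # whose current potential energies are E_i and E_j.
            delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
            return delta >= 0 or random.random() < math.exp(delta)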

  20. A Nth-order linear algorithm for extracting diffuse correlation spectroscopy blood flow indices in heterogeneous tissues.

    Science.gov (United States)

    Shang, Yu; Yu, Guoqiang

    2014-09-29

    Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in homogeneous tissues with arbitrary geometries for extraction of the BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extracting the BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract the BFIs of different tissue types simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of the adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in the deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining the αD_B values constant in the other layers. Simulation results demonstrate the accuracy of the approach; the linear model simplifies data analysis, thus allowing for online data processing and display. A future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variation and noise.
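
    Once the Monte Carlo simulation has linearized the problem, extracting one BFI per tissue type from the multi-separation DCS data reduces to ordinary least squares. The sketch below shows only that final solve; the design matrix A and data vector y are assumptions standing in for the paper's Nth-order linear model:

        import numpy as np

        def fit_bfi(A, y):
            # A: (n_measurements, n_tissue_types) matrix mapping each layer's
            #    alpha*D_B to the linearized autocorrelation decay observed at
            #    several source-detector separations (from Monte Carlo).
            # y: stacked, linearized autocorrelation data.
            x, *_ = np.linalg.lstsq(A, y, rcond=None)
            return x   # one alpha*D_B per tissue type, e.g. scalp/skull/CSF/brain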