WorldWideScience

Sample records for multiple step algorithms

  1. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
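
    A minimal Python sketch of the hybrid idea follows — not the authors' DHMTS implementation, but an illustration on a one-dimensional double well: trajectories are integrated with an inexpensive surrogate potential, and a Metropolis test on the expensive reference Hamiltonian absorbs the discretization error, so sampling stays consistent with the Boltzmann distribution. All potentials and parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        beta, dt, n_inner = 1.0, 0.1, 10           # assumed inverse temperature and steps

        def U_ref(x):                              # expensive reference potential
            return (x * x - 1.0) ** 2              # (a double well stands in for it here)

        def grad_cheap(x):                         # gradient of the cheap surrogate 0.5*x^2
            return x

        def H_ref(x, p):
            return U_ref(x) + 0.5 * p * p

        x, acc = 0.0, 0
        samples = []
        for _ in range(20000):
            p = rng.normal()                       # canonical momentum refresh
            x_new, p_new = x, p
            for _ in range(n_inner):               # velocity Verlet with cheap forces only
                p_new -= 0.5 * dt * grad_cheap(x_new)
                x_new += dt * p_new
                p_new -= 0.5 * dt * grad_cheap(x_new)
            dH = H_ref(x_new, p_new) - H_ref(x, p) # Metropolis test on the reference
            if dH <= 0.0 or rng.random() < np.exp(-beta * dH):
                x, acc = x_new, acc + 1
            samples.append(x)
        print("acceptance:", acc / len(samples), "mean x^2:", np.mean(np.square(samples)))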

  2. DOA Estimation of Low Altitude Target Based on Adaptive Step Glowworm Swarm Optimization-multiple Signal Classification Algorithm

    Zhou Hao

    2015-06-01

    The traditional MUltiple SIgnal Classification (MUSIC) algorithm requires significant computational effort and cannot be employed for the Direction Of Arrival (DOA) estimation of targets in a low-altitude multipath environment. As such, a novel MUSIC approach is proposed on the basis of the Adaptive Step Glowworm Swarm Optimization (ASGSO) algorithm. The virtual spatial smoothing of the matrix formed by each snapshot is used to decorrelate the multipath signal and to establish a full-order correlation matrix. ASGSO optimizes the function and estimates the elevation of the target. The simulation results suggest that the proposed method can overcome the low-altitude multipath effect and estimate the DOA of the target readily and precisely without loss of effective radar aperture.
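
    For context, the sketch below implements the plain MUSIC grid search for a uniform linear array — the step that ASGSO replaces with a swarm search; it omits the virtual spatial smoothing, and the array geometry, noise level, and source angles are illustrative assumptions.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(1)
        M, d = 8, 0.5                              # 8-element ULA, spacing in wavelengths
        true_deg = [-20.0, 15.0]
        snapshots = 200

        def steering(theta):
            return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

        A = np.column_stack([steering(np.deg2rad(a)) for a in true_deg])
        S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
        noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
        X = A @ S + noise

        R = X @ X.conj().T / snapshots             # sample covariance matrix
        w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
        En = V[:, :M - 2]                          # noise subspace (2 sources assumed known)

        grid = np.deg2rad(np.linspace(-90, 90, 1801))
        p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
        peaks, _ = find_peaks(p)                   # MUSIC pseudospectrum peaks
        best = peaks[np.argsort(p[peaks])[-2:]]
        print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))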

  3. Self-Adaptive Step Firefly Algorithm

    Shuhao Yu

    2013-01-01

    In the standard firefly algorithm, each firefly has the same step settings and its value decreases from iteration to iteration. Therefore, it may fall into a local optimum. Furthermore, the decrease of the step is tied to the maximum number of iterations, which influences the convergence speed and precision. In order to avoid falling into local optima and to reduce the impact of the maximum number of iterations, a self-adaptive step firefly algorithm is proposed in this paper. Its core idea is to set the step of each firefly so that it varies with the iteration, according to each firefly's historical information and current situation. Experiments are carried out to show the performance of our approach compared with the standard FA, based on sixteen standard testing benchmark functions. The results reveal that our method can prevent premature convergence and improve both the convergence speed and accuracy.
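
    A hedged sketch of a firefly iteration with a per-firefly self-adaptive step follows; the specific adaptation rule used here (shrink the step on improvement, grow it otherwise) is an assumption standing in for the authors' history-based rule.

        import numpy as np

        rng = np.random.default_rng(2)

        def sphere(x):                         # benchmark objective
            return float(np.sum(x * x))

        n, dim, iters = 20, 5, 200
        beta0, gamma = 1.0, 1.0
        X = rng.uniform(-5, 5, size=(n, dim))
        alpha = np.full(n, 0.5)                # per-firefly step size
        f = np.array([sphere(x) for x in X])

        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:            # move firefly i toward brighter j
                        r2 = float(np.sum((X[i] - X[j]) ** 2))
                        X[i] += beta0 * np.exp(-gamma * r2) * (X[j] - X[i]) \
                                + alpha[i] * rng.uniform(-0.5, 0.5, dim)
                fi_new = sphere(X[i])
                # assumed rule: shrink the step on improvement, grow otherwise
                alpha[i] = float(np.clip(alpha[i] * (0.97 if fi_new < f[i] else 1.03),
                                         1e-3, 1.0))
                f[i] = fi_new

        print("best value found:", f.min())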

  4. Multiple stage miniature stepping motor

    Niven, W.A.; Shikany, S.D.; Shira, M.L.

    1981-01-01

    A stepping motor comprising a plurality of stages which may be selectively activated to effect stepping movement of the motor, and which are mounted along a common rotor shaft to achieve considerable reduction in motor size and minimum diameter, whereby sequential activation of the stages results in successive rotor steps, with direction being determined by the particular activating sequence followed.

  5. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  7. An explicit multi-time-stepping algorithm for aerodynamic flows

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows, speedups on the order of five with respect to single time stepping are obtained.

  8. A Two-Step Resume Information Extraction Algorithm

    Jie Chen

    2018-01-01

    With the rapid growth of Internet-based recruiting, there are a great number of personal resumes in recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font sizes, font colours, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they strongly rely on hierarchical structure information and large amounts of labelled data, which are hard to collect in practice. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this, we design a novel feature, Writing Style, to model sentence syntax information. Besides a word index and a punctuation index, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

  9. Outcome of a 4-step treatment algorithm for depressed inpatients

    Birkenhäger, T.K.; Broek, W.W. van den; Moleman, P.; Bruijn, J.A.

    2006-01-01

    Objective: The aim of this study was to examine the efficacy and the feasibility of a 4-step treatment algorithm for inpatients with major depressive disorder. Method: Depressed inpatients, meeting DSM-IV criteria for major depressive disorder, were enrolled in the algorithm that consisted of…

  10. CSA: An efficient algorithm to improve circular DNA multiple alignment

    Pereira Luísa

    2009-07-01

    Background: The comparison of homologous sequences from different species is an essential approach to reconstruct the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot handle circular DNA directly. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process, before they can use multiple sequence alignment tools. Results: In this paper we propose an efficient algorithm that identifies the most interesting region at which to cut circular genomes in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms. This algorithm identifies the largest chain of non-repeated longest subsequences common to a set of circular mitochondrial DNA sequences. All the sequences are then rotated and made linear for multiple alignment purposes. To evaluate the effectiveness of this new tool, three different sets of mitochondrial DNA sequences were considered. Other tests considering randomly rotated sequences were also performed. The software package Arlequin was used to evaluate the standard genetic measures of the alignments obtained with and without the use of the CSA algorithm with two well-known multiple alignment algorithms, the CLUSTALW and MAVID tools, and also the visualization tool SinicView. Conclusion: The results show that a circularization and rotation pre-processing step significantly improves the efficiency of publicly available multiple sequence alignment…
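
    The CSA chain-finding step itself is not reproduced below; as a simpler stand-in, this sketch rotates each circular sequence to its lexicographically minimal rotation (the classic two-pointer minimal-rotation method), so that sequences differing only by rotation linearize identically before being passed to a standard aligner.

        def min_rotation(s):
            """Index of the lexicographically minimal rotation of s (O(n) duel method)."""
            s2 = s + s
            i, j, k = 0, 1, 0
            while i < len(s) and j < len(s) and k < len(s):
                a, b = s2[i + k], s2[j + k]
                if a == b:
                    k += 1
                    continue
                if a > b:
                    i = max(i + k + 1, j)
                else:
                    j = max(j + k + 1, i)
                if i == j:
                    j += 1
                k = 0
            return min(i, j)

        def linearize(circular_seq):
            cut = min_rotation(circular_seq)
            return circular_seq[cut:] + circular_seq[:cut]

        # Two "circular" sequences that differ only by rotation linearize identically:
        print(linearize("GATTACA"))   # ACAGATT
        print(linearize("ACAGATT"))   # ACAGATT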

  11. New management algorithms in multiple sclerosis

    Sorensen, Per Soelberg

    2014-01-01

    PURPOSE OF REVIEW: Our current treatment algorithms include only IFN-β and glatiramer as available first-line disease-modifying drugs and natalizumab and fingolimod as second-line therapies. Today, 10 drugs have been approved in Europe and nine in the United States, making the choice of therapy more complex. The purpose of the review has been to work out new management algorithms for treatment of relapsing-remitting multiple sclerosis, including new oral therapies and therapeutic monoclonal antibodies. RECENT FINDINGS: Recent large placebo-controlled trials in relapsing-remitting multiple sclerosis…

  12. Multiple time step integrators in ab initio molecular dynamics

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-01-01

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
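
    A generic sketch of the underlying reference-system propagator (r-RESPA) pattern follows, with placeholder fast and slow forces standing in for the fragment- or range-separated ab initio components; the force split and all constants are assumptions.

        def f_fast(x):       # cheap, rapidly varying force (e.g., fragment / short-range part)
            return -4.0 * x ** 3

        def f_slow(x):       # expensive, slowly varying correction (evaluated rarely)
            return 0.5 * x

        def respa_step(x, p, dt_outer, n_inner):
            """One multiple-time-step (r-RESPA) leapfrog step."""
            dt_inner = dt_outer / n_inner
            p += 0.5 * dt_outer * f_slow(x)          # slow half-kick
            for _ in range(n_inner):                 # inner loop with fast forces only
                p += 0.5 * dt_inner * f_fast(x)
                x += dt_inner * p
                p += 0.5 * dt_inner * f_fast(x)
            p += 0.5 * dt_outer * f_slow(x)          # slow half-kick
            return x, p

        x, p = 1.0, 0.0
        for _ in range(1000):
            x, p = respa_step(x, p, dt_outer=0.05, n_inner=5)
        print(x, p)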

  13. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    S. Radhika

    2016-04-01

    Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
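
    A sketch of an MCC-based adaptive filter with a variable step is given below for a system identification setup; the step rule shown (scaling with a smoothed, outlier-suppressed error power) is only a stand-in for the paper's MSD-minimizing derivation, and all constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        N, L = 5000, 8
        w_true = rng.normal(size=L)                     # unknown system to identify
        x = rng.normal(size=N)                          # input signal
        noise = 0.01 * rng.normal(size=N)               # background noise
        spikes = rng.random(N) < 0.01                   # sparse impulsive interference
        noise[spikes] += 10.0 * rng.normal(size=int(spikes.sum()))

        w = np.zeros(L)
        sigma = 1.0                                     # correntropy kernel width
        mu_min, mu_max = 0.001, 0.05
        e_pow = 0.0                                     # smoothed (robust) error power
        for n in range(L, N):
            u = x[n - L:n][::-1]                        # regressor vector
            d = w_true @ u + noise[n]                   # desired signal
            e = d - w @ u                               # a priori error
            g = np.exp(-e ** 2 / (2 * sigma ** 2))      # correntropy gain, ~0 for outliers
            e_pow = 0.97 * e_pow + 0.03 * (g * e) ** 2  # proxy for the MSD-based rule
            mu = np.clip(mu_max * e_pow / (e_pow + 1e-3), mu_min, mu_max)
            w += mu * g * e * u                         # MCC-LMS update with variable step
        print("weight error:", np.linalg.norm(w - w_true))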

  14. Conjugate gradient algorithms using multiple recursions

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.

  15. Genetic Algorithms for Multiple-Choice Problems

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.

  16. Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis

    Daniel Grigore ROŞCA

    2015-06-01

    The goal of this paper was to develop an educationally oriented application based on the Data Mining Apriori Algorithm which facilitates both the research and the study of data mining by graduate students. The application can be used to discover interesting patterns in a corpus of data and to measure the impact on the speed of execution as a function of problem constraints (the values of the support and confidence variables, or the size of the transactional database). The paper presents a brief overview of the Apriori Algorithm, aspects of the implementation of the algorithm using a step-by-step process, a discussion of the education-oriented user interface, and the process of data mining of a test transactional database. The impact of some constraints on the speed of the algorithm is also experimentally measured, without a systematic review of different approaches to increase execution speed. Possible applications of the implementation, as well as its limits, are briefly reviewed.

  17. Effective arithmetic in finite fields based on Chudnovsky's multiplication algorithm

    Atighehchi, Kévin; Ballet, Stéphane; Bonnecaze, Alexis; Rolland, Robert

    2016-01-01

    Thanks to a new construction of the Chudnovsky and Chudnovsky multiplication algorithm, we design efficient algorithms for both the exponentiation and the multiplication in finite fields. They are tailored to hardware implementation and they allow computations to be parallelized, while maintaining a low number of bilinear multiplications.

  18. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models generate accurate results. - Abstract: Realization of accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (e.g., Wavelet Decomposition/Wavelet Packet Decomposition/Empirical Mode Decomposition/Fast Ensemble Empirical Mode Decomposition) and Extreme Learning Machines. The originality of the study is to investigate the improvement brought to the Extreme Learning Machines by these mainstream signal decomposing algorithms in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms have better performance than the single Extreme Learning Machines; (3) in the comparisons of the decomposing algorithms in the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition in all the step predictions, respectively; and (4) the proposed algorithms are effective in accurate wind speed prediction.

  19. The bilinear complexity and practical algorithms for matrix multiplication

    Smirnov, A. V.

    2013-12-01

    A method for deriving bilinear algorithms for matrix multiplication is proposed. New estimates for the bilinear complexity of a number of problems of the exact and approximate multiplication of rectangular matrices are obtained. In particular, the estimate for the boundary rank of multiplying 3 × 3 matrices is improved and a practical algorithm for the exact multiplication of square n × n matrices is proposed. The asymptotic arithmetic complexity of this algorithm is O(n^2.7743).
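
    Smirnov's construction is not reproduced here; Strassen's classical bilinear algorithm (7 block multiplications instead of 8, giving O(n^2.807)) illustrates in Python how bilinear algorithms trade multiplications for additions.

        import numpy as np

        def strassen(A, B, cutoff=64):
            """Strassen's bilinear algorithm for square matrices (size a power of two)."""
            n = A.shape[0]
            if n <= cutoff:
                return A @ B                      # fall back to classical multiplication
            m = n // 2
            A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
            B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
            M1 = strassen(A11 + A22, B11 + B22, cutoff)
            M2 = strassen(A21 + A22, B11, cutoff)
            M3 = strassen(A11, B12 - B22, cutoff)
            M4 = strassen(A22, B21 - B11, cutoff)
            M5 = strassen(A11 + A12, B22, cutoff)
            M6 = strassen(A21 - A11, B11 + B12, cutoff)
            M7 = strassen(A12 - A22, B21 + B22, cutoff)
            C = np.empty_like(A)
            C[:m, :m] = M1 + M4 - M5 + M7
            C[:m, m:] = M3 + M5
            C[m:, :m] = M2 + M4
            C[m:, m:] = M1 - M2 + M3 + M6
            return C

        A = np.random.rand(128, 128)
        B = np.random.rand(128, 128)
        print(np.allclose(strassen(A, B), A @ B))   # True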

  20. Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5

    Wilderman, S.J.; Bielajew, A.F.

    2005-01-01

    The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes and hence shorter computation times than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making selection of step-sizes a daunting task for novice users. Further contributing to this problem, because of the decoupling of multiple scattering and continuous energy loss in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Further, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, to increase performance by decreasing the amount of effort expended simulating lower energy particles, EGS5 permits the fractional energy loss values which are used to determine both the multiple scattering and continuous energy loss step sizes to vary with energy. This results in requiring the user to specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material dependent input related to the size of problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)

  1. One-Step Leapfrog LOD-BOR-FDTD Algorithm with CPML Implementation

    Yi-Gang Wang

    2016-01-01

    An unconditionally stable one-step leapfrog locally one-dimensional finite-difference time-domain (LOD-FDTD) algorithm for bodies of revolution (BOR) is presented. The equations of the proposed algorithm are obtained by algebraic manipulation of those used in the conventional LOD-BOR-FDTD algorithm. The equations for the z-direction electric and magnetic fields in the proposed algorithm must be treated specially. The new algorithm achieves higher computational efficiency while preserving the properties of the conventional LOD-BOR-FDTD algorithm. Moreover, the convolutional perfectly matched layer (CPML) is introduced into the one-step leapfrog LOD-BOR-FDTD algorithm. The equation of the one-step leapfrog CPML is concise. Numerical results show that its reflection error is small. It can be concluded that a similar CPML scheme can also be easily applied to the one-step leapfrog LOD-FDTD algorithm in the Cartesian coordinate system.

  2. Perinatal Depression Algorithm: A Home Visitor Step-by-Step Guide for Advanced Management of Perinatal Depressive Symptoms

    Laszewski, Audrey; Wichman, Christina L.; Doering, Jennifer J.; Maletta, Kristyn; Hammel, Jennifer

    2016-01-01

    Early childhood professionals do many things to support young families. This is true now more than ever, as researchers continue to discover the long-term benefits of early, healthy, nurturing relationships. This article provides an overview of the development of an advanced practice perinatal depression algorithm created as a step-by-step guide…

  3. Local multiplicative Schwarz algorithms for convection-diffusion equations

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  4. Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems

    Zahoor Uddin

    2018-01-01

    Independent component analysis (ICA) is a technique of blind source separation (BSS) used for separation of the mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios at high computational complexity, while batch algorithms have better separation performance in quasistatic channels with low computational complexity. Amongst batch algorithms, gradient-based ICA algorithms perform well, but step size selection is critical in these algorithms. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from selection of the step size parameter, with improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Performance of the proposed algorithm is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasistatic and time-varying wireless channels. Simulations are performed over quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms for a wide range of signal-to-noise ratios (SNR) and input data block lengths.

  5. Phase-step retrieval for tunable phase-shifting algorithms

    Ayubi, Gastón A.; Duarte, Ignacio; Perciante, César D.; Flores, Jorge L.; Ferrari, José A.

    2017-12-01

    Phase-shifting (PS) is a well-known technique for phase retrieval in interferometry, with applications in deflectometry and 3D-profiling, which requires a series of intensity measurements with certain phase-steps. Usually the phase-steps are evenly spaced, and their knowledge is crucial for the phase retrieval. In this work we present a method to extract the phase-step between consecutive interferograms. We test the proposed technique with images corrupted by additive noise. The results were compared with those of other known methods. We also present experimental results showing the performance of the method when spatial filters are applied to the interferograms, and the effect that they have on their relative phase-steps.

  6. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  7. Dynamic multiple thresholding breast boundary detection algorithm for mammograms

    Wu, Yi-Ta; Zhou Chuan; Chan, Heang-Ping; Paramagul, Chintana; Hadjiiski, Lubomir M.; Daly, Caroline Plowden; Douglas, Julie A.; Zhang Yiheng; Sahiner, Berkman; Shi Jiazheng; Wei Jun

    2010-01-01

    Purpose: Automated detection of breast boundary is one of the fundamental steps for computer-aided analysis of mammograms. In this study, the authors developed a new dynamic multiple thresholding based breast boundary (MTBB) detection method for digitized mammograms. Methods: A large data set of 716 screen-film mammograms (442 CC view and 274 MLO view) obtained from consecutive cases of an Institutional Review Board approved project was used. An experienced breast radiologist manually traced the breast boundary on each digitized image using a graphical interface to provide a reference standard. The initial breast boundary (MTBB-Initial) was obtained by dynamically adapting the threshold to the gray level range in local regions of the breast periphery. The initial breast boundary was then refined by using gradient information from horizontal and vertical Sobel filtering to obtain the final breast boundary (MTBB-Final). The accuracy of the breast boundary detection algorithm was evaluated by comparison with the reference standard using three performance metrics: the Hausdorff distance (HDist), the average minimum Euclidean distance (AMinDist), and the area overlap measure (AOM). Results: In comparison with the authors' previously developed gradient-based breast boundary (GBB) algorithm, it was found that 68%, 85%, and 94% of images had HDist errors less than 6 pixels (4.8 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 89%, 90%, and 96% of images had AMinDist errors less than 1.5 pixels (1.2 mm) for GBB, MTBB-Initial, and MTBB-Final, respectively. 96%, 98%, and 99% of images had AOM values larger than 0.9 for GBB, MTBB-Initial, and MTBB-Final, respectively. The improvement by the MTBB-Final method was statistically significant for all the evaluation measures by the Wilcoxon signed rank test (p<0.0001). Conclusions: The MTBB approach that combined dynamic multiple thresholding and gradient information provided better performance than the breast boundary…

  8. A scalable parallel algorithm for multiple objective linear programs

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  9. An efficient parallel algorithm for matrix-vector multiplication

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n × n matrix on p processors, the communication cost of this algorithm is O(n/√p + log p), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.

  10. Identifying multiple influential spreaders by a heuristic clustering algorithm

    Bao, Zhong-Kui [School of Mathematical Science, Anhui University, Hefei 230601 (China)]; Liu, Jian-Guo [Data Science and Cloud Service Research Center, Shanghai University of Finance and Economics, Shanghai 200133 (China)]; Zhang, Hai-Feng, E-mail: haifengzhang1978@gmail.com [School of Mathematical Science, Anhui University, Hefei 230601 (China); Department of Communication Engineering, North University of China, Taiyuan, Shanxi 030051 (China)]

    2017-03-18

    The problem of influence maximization in social networks has attracted much attention. However, traditional centrality indices are suited to the case where a single spreader is chosen as the spreading source. Often, the spreading process is initiated by simultaneously choosing multiple nodes as the spreading sources. In this situation, choosing the top-ranked nodes as the multiple spreaders is not an optimal strategy, since the chosen nodes are not sufficiently scattered in the network. The ideal situation for the multiple-spreader case is therefore that the spreaders are not only influential but also dispersively distributed in the network, though it is difficult to meet the two conditions together. In this paper, we propose a heuristic clustering (HC) algorithm based on a similarity index to classify nodes into different clusters, and finally the center nodes of the clusters are chosen as the multiple spreaders. The HC algorithm not only ensures that the multiple spreaders are dispersively distributed in the network but also avoids selecting nodes that are too "negligible". Compared with traditional methods, our experimental results on synthetic and real networks indicate that the performance of the HC method on influence maximization is more significant. - Highlights: • A heuristic clustering algorithm is proposed to identify the multiple influential spreaders in complex networks. • The algorithm not only guarantees that the selected spreaders are sufficiently scattered but also avoids "insignificant" ones. • The performance of our algorithm is generally better than other methods, on both real and synthetic networks.

  12. Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion

    Qiuyu Wang

    2016-06-01

    In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of the classical proximal gradient algorithm, where a singular value decomposition in the proximal operator is involved. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
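
    A sketch of this kind of iteration is shown below: singular value soft-thresholding as the proximal operator, applied at an extrapolation point that combines the previous point and the current iterate. The combination weight and other constants are assumptions, not the paper's tuned values.

        import numpy as np

        def svt(Z, tau):
            """Proximal operator of tau*||.||_*: singular value soft-thresholding."""
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        rng = np.random.default_rng(4)
        m, n, r = 40, 30, 3
        M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # low-rank ground truth
        mask = rng.random((m, n)) < 0.5                         # observed entries

        lam, step, momentum = 0.5, 1.0, 0.3        # assumed constants
        X = np.zeros((m, n))
        X_prev = X.copy()
        for _ in range(300):
            Y = X + momentum * (X - X_prev)        # combine previous point and iterate
            grad = mask * (Y - M)                  # gradient of 0.5*||P_mask(Y - M)||_F^2
            X_prev, X = X, svt(Y - step * grad, step * lam)
        print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))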

  13. Convergence Analysis for the Multiplicative Schwarz Preconditioned Inexact Newton Algorithm

    Liu, Lulu; Keyes, David E.

    2016-10-26

    The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm, based on decomposition by field type rather than by subdomain, was recently introduced to improve the convergence of systems with unbalanced nonlinearities. This paper provides a convergence analysis of the MSPIN algorithm. Under reasonable assumptions, it is shown that MSPIN is locally convergent, and desired superlinear or even quadratic convergence can be obtained when the forcing terms are picked suitably.

  15. Optimization design for the stepped impedance transformer based on the genetic algorithm

    Zou Dehui; Lai Wanchang; Qiu Dong

    2007-01-01

    This paper introduces the basic principle and mathematical model of the stepped impedance transformer, then puts the emphasis on comparing two design methods for the stepped impedance transformer. The design results are simulated by EDA, which indicates that the genetic algorithm design is better than the Chebyshev integrated design in terms of the maximum modulus of the reflection coefficient. (authors)

  17. A deterministic algorithm for fitting a step function to a weighted point-set

    Fournier, Hervé; Vigneron, Antoine E.

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line.
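
    The optimal algorithm relies on Cole's parametric search; the straightforward O(n^2 k) dynamic program below (an illustration only, not the paper's method) computes the same optimum for small inputs, using a ternary search for the weighted 1-center cost of each candidate group.

        import math

        def group_cost(ys, ws):
            """min over c of max_i w_i*|y_i - c| (weighted 1-center on a line),
            located by ternary search on the convex objective."""
            lo, hi = min(ys), max(ys)
            f = lambda c: max(w * abs(y - c) for y, w in zip(ys, ws))
            for _ in range(100):
                m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
                if f(m1) < f(m2):
                    hi = m2
                else:
                    lo = m1
            return f(0.5 * (lo + hi))

        def fit_step_function(pts, k):
            """pts: (x, y, w) triples. Returns the minimal max weighted vertical
            error of a k-step function via an O(n^2 k) dynamic program."""
            pts = sorted(pts)
            n = len(pts)
            ys = [p[1] for p in pts]
            ws = [p[2] for p in pts]
            cost = [[0.0] * n for _ in range(n)]
            for i in range(n):
                for j in range(i, n):
                    cost[i][j] = group_cost(ys[i:j + 1], ws[i:j + 1])
            dp = [[math.inf] * (k + 1) for _ in range(n + 1)]
            dp[0][0] = 0.0
            for j in range(1, n + 1):           # first j points ...
                for t in range(1, k + 1):       # ... covered by t steps
                    for i in range(j):
                        dp[j][t] = min(dp[j][t], max(dp[i][t - 1], cost[i][j - 1]))
            return dp[n][k]

        pts = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 1.0), (3, 5, 1.0), (4, 5.2, 3.0)]
        print(fit_step_function(pts, k=2))      # optimal max weighted error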

  18. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering]

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location of multiple harmonic sources. A set of systematic algorithmic steps is applied until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  19. HMC algorithm with multiple time scale integration and mass preconditioning

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.

  20. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    Zhenghao Xi

    2014-01-01

    To address persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
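
    The LP relaxation itself is not shown; the sketch below implements the shortest path faster algorithm (SPFA), the queue-based Bellman-Ford variant the method builds on, for a small directed graph with a negative edge.

        from collections import deque

        def spfa(graph, source):
            """Shortest Path Faster Algorithm: queue-based Bellman-Ford.
            graph: {node: [(neighbor, weight), ...]} with no negative cycles."""
            dist = {v: float("inf") for v in graph}
            dist[source] = 0.0
            in_queue = {v: False for v in graph}
            q = deque([source])
            in_queue[source] = True
            while q:
                u = q.popleft()
                in_queue[u] = False
                for v, w in graph[u]:
                    if dist[u] + w < dist[v]:    # relax edge; re-enqueue if improved
                        dist[v] = dist[u] + w
                        if not in_queue[v]:
                            q.append(v)
                            in_queue[v] = True
            return dist

        g = {"s": [("a", 2), ("b", 5)], "a": [("b", -1), ("t", 4)],
             "b": [("t", 1)], "t": []}
        print(spfa(g, "s"))   # {'s': 0.0, 'a': 2.0, 'b': 1.0, 't': 2.0}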

  1. The Effects of Multiple-Step and Single-Step Directions on Fourth and Fifth Grade Students' Grammar Assessment Performance

    Mazerik, Matthew B.

    2006-01-01

    The mean scores of English Language Learners (ELL) and English Only (EO) students in 4th and 5th grade (N = 110), across the teacher-administered Grammar Skills Test, were examined for differences in participants' scores on assessments containing single-step directions and assessments containing multiple-step directions. The results indicated no…

  2. Use of multiple objective evolutionary algorithms in optimizing surveillance requirements

    Martorell, S.; Carlos, S.; Villanueva, J.F.; Sanchez, A.I; Galvan, B.; Salazar, D.; Cepin, M.

    2006-01-01

    This paper presents the development and application of a double-loop Multiple Objective Evolutionary Algorithm that uses a Multiple Objective Genetic Algorithm to perform the simultaneous optimization of periodic Test Intervals (TI) and Test Planning (TP). It takes into account the time-dependent effect of TP performed on stand-by safety-related equipment. TI and TP are part of the Surveillance Requirements within Technical Specifications at Nuclear Power Plants. It addresses the problem of multi-objective optimization in the space of dependable variables, i.e. TI and TP, using a novel flexible structure of the optimization algorithm. Lessons learnt from the cases of application of the methodology to optimize TI and TP for the High-Pressure Injection System are given. The results show that the double-loop Multiple Objective Evolutionary Algorithm is able to find the Pareto set of solutions that represents a surface of non-dominated solutions that satisfy all the constraints imposed on the objective functions and decision variables. Decision makers can adopt then the best solution found depending on their particular preference, e.g. minimum cost, minimum unavailability

  3. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation was proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication which is necessary to correctly implement AES [1]. This method can be applied on processors with word length 32 or above, on FPGAs and others. Correspondingly, it can be implemented in VHDL, Verilog, VB and other languages.
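
    The paper's five tables are not reproduced; the sketch below shows the table-lookup idea for the GF(2^8) multiplication that underlies AES, using log/antilog tables built over the AES reduction polynomial 0x11B with 0x03 as generator.

        def xtime(a):                          # multiply by x (0x02) in GF(2^8)
            a <<= 1
            return (a ^ 0x11B) & 0xFF if a & 0x100 else a

        EXP, LOG = [0] * 510, [0] * 256
        a = 1
        for i in range(255):
            EXP[i] = a
            LOG[a] = i
            a = xtime(a) ^ a                   # multiply by the generator 0x03
        for i in range(255, 510):
            EXP[i] = EXP[i - 255]              # unrolled modulo-255 for the lookup

        def gf_mul(a, b):
            """GF(2^8) multiplication by table lookup only (no field arithmetic)."""
            return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

        print(hex(gf_mul(0x57, 0x83)))         # 0xc1, the FIPS-197 worked example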

  4. A NEW HEURISTIC ALGORITHM FOR MULTIPLE TRAVELING SALESMAN PROBLEM

    F. NURIYEVA

    2017-06-01

    The Multiple Traveling Salesman Problem (mTSP) is a combinatorial optimization problem in the NP-hard class. The mTSP aims to acquire the minimum cost for traveling a given set of cities by assigning each of them to a different salesman in order to create m tours. This paper presents a new heuristic algorithm based on the shortest path algorithm to find a solution for the mTSP. The proposed method has been programmed in the C language and its performance analysis has been carried out on library instances. The computational results show the efficiency of this method.
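
    The proposed shortest-path-based heuristic is not reproduced here; as a baseline illustration, the sketch assigns cities to m salesmen with a rotating nearest-neighbor rule and reports the total tour cost. The instance is made up for the example.

        import math

        def mtsp_nearest_neighbor(cities, depot, m):
            """Greedy baseline: each of m salesmen in turn visits the nearest
            unvisited city; every tour starts and ends at the depot."""
            dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
            unvisited = set(range(len(cities)))
            pos = [depot] * m
            tours = [[] for _ in range(m)]
            s = 0
            while unvisited:
                nxt = min(unvisited, key=lambda c: dist(pos[s], cities[c]))
                tours[s].append(nxt)
                pos[s] = cities[nxt]
                unvisited.remove(nxt)
                s = (s + 1) % m                  # rotate among the salesmen
            total = sum(dist(depot, cities[t[0]]) + dist(cities[t[-1]], depot) +
                        sum(dist(cities[t[i]], cities[t[i + 1]]) for i in range(len(t) - 1))
                        for t in tours if t)
            return tours, total

        cities = [(2, 3), (5, 1), (1, 7), (6, 6), (8, 2), (3, 9)]
        print(mtsp_nearest_neighbor(cities, depot=(0, 0), m=2))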

  5. Modeling and Design of MPPT Controller Using Stepped P&O Algorithm in Solar Photovoltaic System

    R. Prakash; B. Meenakshipriya; R. Kumaravelan

    2014-01-01

    This paper presents modeling and simulation of a grid-connected photovoltaic (PV) system using an improved mathematical model. The model is used to study the effects of different parameter variations on the PV array, including operating temperature and solar irradiation level. In this paper a stepped P&O algorithm is proposed for MPPT control. This algorithm identifies the suitable duty ratio at which the DC-DC converter should be operated to maximize the power output. Photovoltaic array with pro…
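
    A hedged sketch of a stepped perturb-and-observe loop follows; the toy PV power curve and the thresholds for reversing or shrinking the duty-ratio perturbation are assumptions, not the paper's model.

        def pv_power(duty):
            """Toy PV-plus-converter power curve with a single maximum (assumed model)."""
            v = 40.0 * duty                                 # operating voltage from duty ratio
            i = max(0.0, 8.0 * (1.0 - (v / 35.0) ** 4))     # crude I-V roll-off
            return v * i

        duty, step = 0.3, 0.05
        p_prev = pv_power(duty)
        for _ in range(60):
            duty = min(max(duty + step, 0.0), 1.0)
            p = pv_power(duty)
            if p < p_prev:                    # power fell: reverse the perturbation
                step = -step
            if abs(p - p_prev) < 1.0 and abs(step) > 0.005:
                step *= 0.5                   # stepped rule (assumed): refine near the MPP
            p_prev = p
        print("duty near MPP:", round(duty, 3), "power:", round(p_prev, 1))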

  6. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  8. A chaos wolf optimization algorithm with self-adaptive variable step-size

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, which fully simulates the predation behavior and prey distribution of wolves. It incorporates three intelligent behaviors: migration, summons, and siege. A "winner-take-all" competition rule and a "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical and complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The investigation results indicate that CWOA possesses preferable optimization ability: it shows advantages in optimization accuracy and convergence rate, and it demonstrates high robustness and global searching ability.

  9. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Huimin Chen

    2011-01-01

    The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants and the method is applicable to other data-driven PHM problems.

  10. Genomic multiple sequence alignments: refinement using a genetic algorithm

    Lefkowitz Elliot J

    2005-08-01

    Background: Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To improve the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results: We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that initially had been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only…

  11. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.

  12. A matrix-free, implicit, incompressible fractional-step algorithm for fluid–structure interaction applications

    Oxtoby, Oliver F

    2012-05-01

    Full Text Available In this paper we detail a fast, fully-coupled, partitioned fluid–structure interaction (FSI) scheme. For the incompressible fluid, new fractional-step algorithms are proposed which make possible the fully implicit, but matrixfree, parallel solution...

  13. Invited Review Article: Measurement uncertainty of linear phase-stepping algorithms

    Hack, Erwin [EMPA, Laboratory Electronics/Metrology/Reliability, Ueberlandstrasse 129, CH-8600 Duebendorf (Switzerland); Burke, Jan [Australian Centre for Precision Optics, CSIRO (Commonwealth Scientific and Industrial Research Organisation) Materials Science and Engineering, P.O. Box 218, Lindfield, NSW 2070 (Australia)

    2011-06-15

    Phase retrieval techniques are widely used in optics, imaging and electronics. Originating in signal theory, they were introduced to interferometry around 1970. Over the years, many robust phase-stepping techniques have been developed that minimize specific experimental influence quantities such as phase step errors or higher harmonic components of the signal. However, optimizing a technique for a specific influence quantity can compromise its performance with regard to others. We present a consistent quantitative analysis of phase measurement uncertainty for the generalized linear phase stepping algorithm with nominally equal phase stepping angles, thereby reviewing and generalizing several results that have been reported in the literature. All influence quantities are treated on an equal footing, and correlations between them are described in a consistent way. For the special case of classical N-bucket algorithms, we present analytical formulae that describe the combined variance as a function of the phase angle values. For the general Arctan algorithms, we derive expressions for the measurement uncertainty averaged over the full 2π range of phase angles. We also give an upper bound for the measurement uncertainty which can be expressed as being proportional to an algorithm-specific factor. Tabular compilations help the reader to quickly assess the uncertainties that are involved with his or her technique.
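
    For reference, the classical N-bucket estimator that these uncertainty formulas apply to can be written in a few lines: for N equally spaced steps the phase follows from synchronous detection, phi = atan2(-sum_k I_k sin(2*pi*k/N), sum_k I_k cos(2*pi*k/N)) (sign conventions vary). The test data below are synthetic.

        import numpy as np

        def n_bucket_phase(frames):
            """Classical N-bucket estimator: frames[k] = A + B*cos(phi + 2*pi*k/N)."""
            frames = np.asarray(frames)
            N = frames.shape[0]
            delta = (2 * np.pi * np.arange(N) / N).reshape((N,) + (1,) * (frames.ndim - 1))
            S = np.sum(frames * np.sin(delta), axis=0)
            C = np.sum(frames * np.cos(delta), axis=0)
            return np.arctan2(-S, C)           # wrapped to (-pi, pi]

        x = np.linspace(0, 4 * np.pi, 256)
        phi_true = np.angle(np.exp(1j * x))    # wrapped ground truth
        frames = [5 + 2 * np.cos(phi_true + 2 * np.pi * k / 4) for k in range(4)]
        err = np.angle(np.exp(1j * (n_bucket_phase(frames) - phi_true)))
        print(np.max(np.abs(err)))             # numerically zero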

  14. SU-C-BRF-07: A Pattern Fusion Algorithm for Multi-Step Ahead Prediction of Surrogate Motion

    Zawisza, I; Yan, H; Yin, F

    2014-01-01

    Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component comprising the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting the subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the succeeding parts of the subsequences and combining them, with the assigned weighting factors, to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between the prediction and the known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence was the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between the predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction
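
    A minimal numpy sketch of the three-step match/weight/combine scheme is given below; the Euclidean distance criterion, the inverse-distance ("relative") weighting, and all sizes are assumptions chosen for illustration rather than the authors' exact settings:

```python
import numpy as np

def predict_ahead(signal, m=100, horizon=50, k=5):
    """Pattern-matching multi-step-ahead prediction (sketch).

    Finds the k training subsequences most similar to the last m samples,
    then returns a distance-weighted average of their continuations.
    """
    query, train = signal[-m:], signal[:-m]
    starts = np.arange(len(train) - m - horizon + 1)
    dists = np.array([np.linalg.norm(train[s:s + m] - query) for s in starts])
    idx = np.argsort(dists)[:k]                  # step 1: best-matched subsequences
    w = 1.0 / (dists[idx] + 1e-9)                # step 2: relative weighting factors
    w /= w.sum()
    continuations = np.array([train[s + m:s + m + horizon] for s in starts[idx]])
    return w @ continuations                     # step 3: weighted combination

# Toy respiratory-like surrogate signal.
t = np.arange(2000)
x = np.sin(2 * np.pi * t / 120) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(predict_ahead(x)[:5])
```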

  15. Robust and unobtrusive algorithm based on position independence for step detection

    Qiu, KeCheng; Li, MengYang; Luo, YiHan

    2018-04-01

    Running is becoming one of the most popular exercises, and monitoring steps can help users better understand their running process and improve exercise efficiency. In this paper, we design and implement a robust and unobtrusive algorithm based on position independence for step detection in real environments. It applies a Butterworth filter to suppress high-frequency interference and then employs a mathematical projection to transform the coordinate system, solving the problem of the unknown position of the smartphone. Finally, a sliding window is used to suppress false peaks. The algorithm was tested on eight participants on the Android 7.0 platform. In our experiments, the results show that the proposed algorithm achieves the desired effect regardless of device pose.
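
    The pipeline the abstract describes (low-pass filtering, position-independent transformation, peak picking with false-peak suppression) can be sketched as below; the accelerometer vector magnitude is used here as a simple orientation-free stand-in for the paper's projection step, and the cut-off, threshold, and window values are assumed, not the published ones:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps(acc_xyz, fs=50.0):
    """Position-independent step counting (sketch).

    acc_xyz: (N, 3) accelerometer samples at fs Hz.
    """
    mag = np.linalg.norm(acc_xyz, axis=1)              # orientation-free signal
    b, a = butter(4, 3.0 / (fs / 2), btype="low")      # suppress high-frequency interference
    smooth = filtfilt(b, a, mag - mag.mean())
    # A minimum peak spacing of 0.3 s and an amplitude threshold play the
    # role of the sliding window that rejects false peaks.
    peaks, _ = find_peaks(smooth, distance=int(0.3 * fs), height=0.5)
    return len(peaks)

# Toy check: a 2 Hz "step" oscillation on top of gravity for 10 s.
t = np.arange(0, 10, 1 / 50)
acc = np.stack([np.zeros_like(t), np.zeros_like(t),
                9.8 + 2 * np.sin(2 * np.pi * 2 * t)], axis=1)
print(count_steps(acc))   # ~20
```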

  16. On using the Multiple Signal Classification algorithm to study microbaroms

    Marcillo, O. E.; Blom, P. S.; Euler, G. G.

    2016-12-01

    Multiple Signal Classification (MUSIC) (Schmidt, 1986) is a well-known high-resolution algorithm used in array processing for parameter estimation. We report on the application of MUSIC to infrasonic array data in a study of the structure of microbaroms. Microbaroms can be observed globally and display energy centered around 0.2 Hz. They are an infrasonic signal generated by the non-linear interaction of ocean surface waves, which radiates into the ocean and atmosphere, as well as into the solid earth in the form of microseisms. Microbarom sources are dynamic and, in many cases, distributed in space and moving in time. We assume that the microbarom energy detected by an infrasonic array is the result of multiple sources (with different back-azimuths) in the same bandwidth and apply the MUSIC algorithm accordingly to recover the back-azimuth and trace velocity of the individual components. Preliminary results show that the multiple-component assumption in MUSIC allows one to resolve the fine structure in the microbarom band that can be related to multiple ocean surface phenomena.
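
    For reference, a textbook narrowband MUSIC implementation for a uniform linear array is sketched below; infrasound arrays have irregular two-dimensional geometries and the actual processing recovers back-azimuth and trace velocity, so this shows only the core subspace computation, with an assumed array geometry:

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 721)):
    """MUSIC pseudospectrum for a uniform linear array (sketch).

    X: (sensors, snapshots) complex data; d: element spacing in wavelengths.
    """
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                    # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)               # ascending eigenvalues
    En = eigvecs[:, :M - n_sources]                    # noise subspace
    P = []
    for theta in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(P)

# Two simultaneous sources at -20 and 35 degrees, 8 sensors, 200 snapshots.
M, T = 8, 200
rng = np.random.default_rng(0)
steer = lambda th: np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(th)))
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = np.outer(steer(-20), S[0]) + np.outer(steer(35), S[1])
X += 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
ang, P = music_spectrum(X, n_sources=2)
pk, _ = find_peaks(P)
print(sorted(ang[pk[np.argsort(P[pk])[-2:]]]))         # approximately [-20, 35]
```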

  17. Can Reduced-Step Polishers Be as Effective as Multiple-Step Polishers in Enhancing Surface Smoothness?

    Kemaloglu, Hande; Karacolak, Gamze; Turkun, L Sebnem

    2017-02-01

    The aim of this study was to evaluate the effects of various finishing and polishing systems on the final surface roughness of a resin composite. The hypotheses tested were: (1) reduced-step polishing systems are as effective as multiple-step systems in reducing the surface roughness of a resin composite and (2) the number of application steps in an F/P system has no effect on reducing surface roughness. Ninety discs of a nano-hybrid resin composite were fabricated and divided into nine groups (n = 10). Except for the control, all of the specimens were roughened prior to being polished with: Enamel Plus Shiny, Venus Supra, One-gloss, Sof-Lex Wheels, Super-Snap, Enhance/PoGo, Clearfil Twist Dia, and rubber cups. The surface roughness was measured and the surfaces were examined under a scanning electron microscope. Results were analyzed with analysis of variance and Holm-Sidak's multiple comparisons test (p < 0.05). […] One-gloss, Enamel Plus Shiny, and Venus Supra groups. (1) The number of application steps has no effect on the performance of F/P systems. (2) Reduced-step polishers used after a finisher can be preferable to multiple-step systems when used on nanohybrid resin composites. (3) The effect of F/P systems on surface roughness seems to be material-dependent rather than instrument- or system-dependent. Reduced-step systems used after a prepolisher can be an acceptable alternative to multiple-step systems for enhancing the surface smoothness of a nanohybrid composite; however, their effectiveness depends on the material's properties. (J Esthet Restor Dent 29:31-40, 2017). © 2016 Wiley Periodicals, Inc.

  18. Multiplicative algorithms for constrained non-negative matrix factorization

    Peng, Chengbin

    2012-12-01

    Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommendation, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit the problem exactly, and the tests justify that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the constrained NMF improves the recall rate by 3% compared to SVD50 and by 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.
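
    For context, the unconstrained baseline that such multiplicative frameworks generalize is the Lee–Seung update pair, in which each factor is multiplied by a non-negative ratio so non-negativity is preserved automatically. A minimal version is sketched below (Frobenius-norm objective; the paper's constrained and weighted variants are not reproduced here):

```python
import numpy as np

def nmf(V, rank, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # ratios are non-negative,
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # so factors stay non-negative
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H))   # reconstruction error
```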

  19. Principles of a new treatment algorithm in multiple sclerosis

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being considered for approval. With the arrival of these oral agents, a key question is where they may fit into the existing MS treatment algorithm. This article aims to help answer this question by analyzing the trial data for the new oral therapies, as well as for existing MS treatments, by applying practical clinical experience, and through consideration of our increased understanding of how to define treatment success in MS. This article also provides a speculative look at what the treatment algorithm may look like in 5 years, with the availability of new data, greater experience and, potentially, other novel…

  20. Alternate mutation based artificial immune algorithm for step fixed charge transportation problem

    Mahmoud Moustafa El-Sherbiny

    2012-07-01

    Full Text Available The step fixed charge transportation problem (SFCTP) is considered a special version of the fixed-charge transportation problem (FCTP). In the SFCTP, a fixed cost is incurred for every route that is used in the solution and is proportional to the amount shipped. This cost structure causes the value of the objective function to behave like a step function. Both the FCTP and the SFCTP are considered to be NP-hard problems. While a lot of research has been carried out concerning the FCTP, not much has been done concerning the SFCTP. This paper introduces an alternate Mutation based Artificial Immune (MAI) algorithm for solving SFCTPs. The proposed MAI algorithm solves both balanced and unbalanced SFCTPs without introducing a dummy supplier or a dummy customer. In the MAI algorithm a coding schema is designed, and procedures are developed for decoding the schema and shipping units. The MAI algorithm guarantees the feasibility of all the generated solutions. Due to the significant role of the mutation function in the MAI algorithm's quality, 16 mutation functions are presented and their performances are compared to select the best one. For this purpose, forty problems of different sizes were generated at random and a robust calibration was then applied using the relative percentage deviation (RPD) method. Through two illustrative problems of different sizes, the performance of the MAI algorithm has been compared with the most recent methods.

  1. A Hybrid Genetic Algorithm for the Multiple Crossdocks Problem

    Zhaowei Miao

    2012-01-01

    Full Text Available We study a multiple crossdocks problem with supplier and customer time windows, where any violation of time windows will incur a penalty cost and the flows through the crossdock are constrained by fixed transportation schedules and crossdock capacities. We prove this problem to be NP-hard in the strong sense and therefore focus on developing efficient heuristics. Based on the problem structure, we propose a hybrid genetic algorithm (HGA integrating greedy technique and variable neighborhood search method to solve the problem. Extensive experiments under different scenarios were conducted, and results show that HGA outperforms CPLEX solver, providing solutions in realistic timescales.

  2. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for removal of random noise, such as muscle artifact, from the ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size is achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses the symmetry property of the signal to represent the signal in the frequency domain with a smaller number of frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented for an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform domain LMS (TLMS) algorithm, in the presence of both white and colored observation noise. The convergence error achieved by the new algorithm with desired-signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than the traditional adaptive filter using the LMS algorithm in terms of retaining the geometrical characteristics of the ECG signal.

  3. Detecting free-living steps and walking bouts: validating an algorithm for macro gait analysis.

    Hickey, Aodhán; Del Din, Silvia; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems; a wearable accelerometer on the lower back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants, during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho ⩾ 0.941) and absolute (ICC(2,1) ⩾ 0.975) agreement with no presence of bias was identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho ⩾ 0.909) and absolute agreement (ICC(2,1) ⩾ 0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.

  4. Multiple Walkers in the Wang-Landau Algorithm

    Brown, G

    2005-12-28

    The mean cost for converging an estimated density of states using the Wang-Landau algorithm is measured for the Ising and Heisenberg models. The cost increases in a power-law fashion with the number of spins, with an exponent near 3 for one-dimensional models, and closer to 2.4 for two-dimensional models. The effect of multiple, simultaneous walkers on the cost is also measured. For the one-dimensional Ising model the cost can increase with the number of walkers for large systems. For both the Ising and Heisenberg models in two dimensions, no adverse impact on the cost is observed. Thus, the use of multiple walkers is a strategy that should scale well in a parallel computing environment for many models of magnetic materials.
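
    A single-walker Wang-Landau sketch for the one-dimensional Ising chain is given below; a multiple-walker version would run copies of this loop in parallel and share or merge the running ln g(E) estimate. The sweep length, flatness criterion, and final modification factor are assumed values, not those of the study:

```python
import numpy as np

def wang_landau_ising(L=16, flatness=0.8, ln_f_final=1e-6):
    """Wang-Landau estimate of ln g(E) for the 1D Ising chain (periodic BC)."""
    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=L)
    energies = np.arange(-L, L + 1, 4)      # E = -L + 4k for a periodic chain
    index = {E: i for i, E in enumerate(energies)}
    ln_g = np.zeros(len(energies))
    hist = np.zeros(len(energies))
    E = -np.sum(spins * np.roll(spins, 1))
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.integers(L)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % L])
            old, new = index[E], index[E + dE]
            # Accept with probability min(1, g(E_old)/g(E_new)).
            if ln_g[new] <= ln_g[old] or rng.random() < np.exp(ln_g[old] - ln_g[new]):
                spins[i] *= -1
                E += dE
            ln_g[index[E]] += ln_f
            hist[index[E]] += 1
        visited = hist[hist > 0]
        if visited.min() > flatness * visited.mean():   # histogram "flat enough"
            hist[:] = 0
            ln_f /= 2                                   # f -> sqrt(f)
    return energies, ln_g

E, ln_g = wang_landau_ising()
print(E, np.round(ln_g - ln_g.min(), 2))
```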

  5. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones.

    Kang, Xiaomin; Huang, Baoqi; Qi, Guodong

    2018-01-19

    Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than both several well-known counterparts and commercial products.
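
    The core of the detection idea can be sketched in a few lines: take an orientation-free magnitude of the gyroscope signal, move to the frequency domain with an FFT, and declare walking when the dominant peak falls in a typical step-frequency band. The band limits and power-share threshold below are assumptions for illustration, not the paper's tuned values:

```python
import numpy as np

def detect_walking(gyro, fs=50.0, band=(1.0, 3.0), ratio=0.2):
    """Frequency-domain walking detection for unconstrained placement (sketch).

    gyro: (N, 3) angular velocities; the vector magnitude is independent of
    how the smartphone is carried. Returns (is_walking, step_frequency_Hz).
    """
    mag = np.linalg.norm(gyro, axis=1)
    mag = mag - mag.mean()
    spec = np.abs(np.fft.rfft(mag)) ** 2
    freqs = np.fft.rfftfreq(mag.size, 1 / fs)
    peak = np.argmax(spec[1:]) + 1                   # skip the DC bin
    in_band = band[0] <= freqs[peak] <= band[1]
    dominant = spec[peak] >= ratio * spec[1:].sum()  # peak must carry real power
    return (True, freqs[peak]) if in_band and dominant else (False, None)

# Toy check: a 1.8 Hz rotation rhythm riding on a constant offset.
rng = np.random.default_rng(0)
t = np.arange(0, 8, 1 / 50)
gyro = np.stack([2 + np.sin(2 * np.pi * 1.8 * t),
                 0.5 * np.ones_like(t),
                 0.1 * rng.standard_normal(t.size)], axis=1)
print(detect_walking(gyro))   # (True, ~1.8)
```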

  6. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones

    Xiaomin Kang

    2018-01-01

    Full Text Available Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, better than both several well-known counterparts and commercial products.

  7. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    The performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time-stepping (MTS) and mollification of slow forces. We demonstrate that comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
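
    The force-splitting core that such MTS methods build on is the r-RESPA pattern: the slow force is applied as half-kicks at the outer step, while the fast force is integrated with several inner sub-steps. The sketch below shows only that deterministic integrator on a toy system; the momentum refreshment, shadow Hamiltonian, and Metropolis test of MTS-GHMC/MTS-GSHMC are deliberately omitted:

```python
import numpy as np

def mts_verlet(x, v, f_fast, f_slow, dt_slow, n_inner, mass=1.0, steps=100):
    """r-RESPA-style multiple-time-stepping velocity Verlet (sketch)."""
    dt_fast = dt_slow / n_inner
    for _ in range(steps):
        v += 0.5 * dt_slow * f_slow(x) / mass      # outer half-kick (slow force)
        for _ in range(n_inner):                   # inner loop (fast force)
            v += 0.5 * dt_fast * f_fast(x) / mass
            x += dt_fast * v
            v += 0.5 * dt_fast * f_fast(x) / mass
        v += 0.5 * dt_slow * f_slow(x) / mass      # outer half-kick
    return x, v

# Toy system: stiff bond (fast) inside a weak harmonic trap (slow).
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -0.1 * x
x, v = mts_verlet(np.array([1.0]), np.array([0.0]),
                  f_fast, f_slow, dt_slow=0.05, n_inner=10)
print(x, v)
```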

  8. Multiple-step fault estimation for interval type-II T-S fuzzy system of hypersonic vehicle with time-varying elevator faults

    Jin Wang

    2017-03-01

    Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted in both aspects of modeling and fault estimation; the validity and availability of the method are verified by a series of comparisons of numerical simulation results.

  9. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  10. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU

  11. Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021

  12. Electrohydraulic linear actuator with two stepping motors controlled by overshoot-free algorithm

    Milecki, Andrzej; Ortmann, Jarosław

    2017-11-01

    The paper describes electrohydraulic spool valves with stepping motors used as electromechanical transducers. A new concept of a proportional valve in which two stepping motors work differentially is introduced. Such a valve changes the fluid flow proportionally to the sum or difference of the motors' step counts. The valve design and the principle of its operation are described. Theoretical equations and simulation models are proposed for all elements of the drive, i.e., the stepping motor units, hydraulic valve and cylinder. The main features of the valve and drive operation are described; some specific problem areas concerning the nature of stepping motors and their differential work in the valve are also considered. A non-linear model of the whole servo drive is proposed and used further for simulation investigations. The initial simulation investigations of the drive with the new valve showed a significant overshoot in the drive step response, which is not acceptable in a positioning process. Therefore additional effort is spent to reduce the overshoot and, in consequence, the settling time. A special predictive algorithm is proposed to this end. The proposed control method is then tested and further improved in simulations. Subsequently, the model is implemented in a real system and the whole servo drive is tested. The investigation results presented in this paper show an overshoot-free positioning process which enables high positioning accuracy.

  13. Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment.

    Prevost, Luanna B; Lemons, Paula P

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. © 2016 L. B. Prevost and P. P. Lemons. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  14. A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems

    Chan, Tony; Szeto, Tedd

    1994-03-01

    We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which is itself a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, a situation that causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as to produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
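
    The classical (non-composite-step) CGS method is available in SciPy, which makes it easy to see the kind of nonsymmetric systems being targeted; to my knowledge the composite-step variant itself is not shipped there, so the snippet below only illustrates the baseline:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cgs

# Nonsymmetric tridiagonal system (convection-diffusion flavored).
n = 200
A = diags([-1.3, 2.5, -0.7], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cgs(A, b)    # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```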

  15. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, in order to win the initiative in fierce market competition, a key step is to improve their R&D ability to meet the various demands of customers more quickly and at lower cost. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are both demonstrated in the experiment.
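
    For readers unfamiliar with the underlying optimizer, a minimal global-best PSO is sketched below; the paper's improvements and its scheduling-specific encoding are not reproduced, and the toy objective merely stands in for a real resource-constrained schedule cost:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimization (sketch)."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + memory + social
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# Toy stand-in cost over two project start times.
print(pso(lambda s: (s[0] - 3) ** 2 + (s[1] - 7) ** 2, [(0, 10), (0, 10)]))
```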

  16. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Böhme Johann F

    2005-01-01

    Full Text Available We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients as new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes quickly or two source directions cross each other, the procedure designed for a linear polynomial model performs better than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  17. Ab initio multiple cloning algorithm for quantum nonadiabatic molecular dynamics

    Makhov, Dmitry V.; Shalashilin, Dmitrii V. [Department of Chemistry, University of Leeds, Leeds LS2 9JT (United Kingdom); Glover, William J.; Martinez, Todd J. [Department of Chemistry and The PULSE Institute, Stanford University, Stanford, California 94305, USA and SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)

    2014-08-07

    We present a new algorithm for ab initio quantum nonadiabatic molecular dynamics that combines the best features of ab initio Multiple Spawning (AIMS) and Multiconfigurational Ehrenfest (MCE) methods. In this new method, ab initio multiple cloning (AIMC), the individual trajectory basis functions (TBFs) follow Ehrenfest equations of motion (as in MCE). However, the basis set is expanded (as in AIMS) when these TBFs become sufficiently mixed, preventing prolonged evolution on an averaged potential energy surface. We refer to the expansion of the basis set as “cloning,” in analogy to the “spawning” procedure in AIMS. This synthesis of AIMS and MCE allows us to leverage the benefits of mean-field evolution during periods of strong nonadiabatic coupling while simultaneously avoiding mean-field artifacts in Ehrenfest dynamics. We explore the use of time-displaced basis sets, “trains,” as a means of expanding the basis set for little cost. We also introduce a new bra-ket averaged Taylor expansion (BAT) to approximate the necessary potential energy and nonadiabatic coupling matrix elements. The BAT approximation avoids the necessity of computing electronic structure information at intermediate points between TBFs, as is usually done in saddle-point approximations used in AIMS. The efficiency of AIMC is demonstrated on the nonradiative decay of the first excited state of ethylene. The AIMC method has been implemented within the AIMS-MOLPRO package, which was extended to include Ehrenfest basis functions.

  18. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  19. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  20. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  1. On an efficient multiple time step Monte Carlo simulation of the SABR model

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
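
    As a baseline against which such methods are usually compared, a plain small-time-step Monte Carlo discretization of SABR looks as follows (log-Euler for the volatility, Euler with absorption at zero for the forward; the parameter values are illustrative, and this is not the paper's method):

```python
import numpy as np

def sabr_paths(F0, sigma0, alpha, beta, rho, T, n_steps, n_paths, seed=0):
    """Plain Monte Carlo for SABR: dF = sigma*F^beta dW1, dsigma = alpha*sigma dW2,
    with corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F = np.full(n_paths, float(F0))
    sig = np.full(n_paths, float(sigma0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
        F = np.maximum(F + sig * F ** beta * np.sqrt(dt) * z1, 0.0)      # Euler, absorbed at 0
        sig *= np.exp(alpha * np.sqrt(dt) * z2 - 0.5 * alpha ** 2 * dt)  # exact lognormal step
    return F

F_T = sabr_paths(F0=1.0, sigma0=0.3, alpha=0.4, beta=0.7, rho=-0.5,
                 T=1.0, n_steps=100, n_paths=100_000)
print(np.mean(np.maximum(F_T - 1.0, 0.0)))   # undiscounted ATM call estimate
```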

  2. Improvement of arm solutions via step width self-tuning algorithm

    Sasaki, Shinobu

    1993-09-01

    This paper is concerned with significant numerical problems encountered in solving the manipulator inverse kinematics. Essential difficulties that occur in linearized calculations, such as dependence on the initial guess or a narrow search region, are overcome with great success by means of a step-width self-tuning algorithm. In a practical optimization model based on the reduction of dimensionality and linearized approximation, it is shown that the desired arm solutions are found at a faster rate over a wider application range. Also, the capability of finding solutions via a traditional Newton method is enhanced to a large extent by the combined application of the proposed idea and the simplex method. (author)

  3. Improvement of the temporal resolution of cardiac CT reconstruction algorithms using an optimized filtering step

    Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.

    2005-01-01

    In this paper we study a property of the filtering step of multi-cycle reconstruction algorithms used in the field of cardiac CT. We show that the common filtering-step procedure is not optimal in the case of divergent geometry and slightly decreases the temporal resolution. We propose to use the filtering procedure related to the work of Noo et al. (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo. Image reconstruction from fan-beam projections on less than a short-scan. Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative allows one to reach the optimal temporal resolution with the same computational effort. (N.C.)

  4. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {1̄, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming the illumination of data arrays, any complex logic operation of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.

  5. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provides the cell position. • Flexible structure for adding new parameters. • Applies the novel structure to INP file processing to conveniently evaluate cell locations. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated format and is error-prone when describing geometric models. Because of this, a conversion algorithm that converts a general geometric model into an MCNP model during MCNP-aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was developed after analyzing the STEP and INP file formats. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file as well as improved production efficiency of the output INP file. This research serves as a valuable reference for researchers involved in MCNP-related work.

  6. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a general framework for region-based active contours with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with successively three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  7. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    Williams, Jason P

    2007-01-01

    … Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two-dimensional dataset in order to discover potential problems with the algorithms…

  8. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  9. Algorithm of axial fuel optimization based in progressive steps of turned search

    Martin del Campo, C.; Francois, J.L.

    2003-01-01

    The development of an algorithm for the axial fuel optimization of boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization stages advance, the fineness of the evaluation of the investigated designs is increased. The algorithm comprises three stages: the first uses genetic algorithms and the two following ones use tabu search. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle, without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), with the average enrichment obtained by the best design of the first stage, together with the other limits, treated as constraints. The third stage, very similar to the previous one, begins with the design of the previous stage but performs a search of the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of the Unit 1 reactor of the Laguna Verde power plant (U1-CLV) is presented. The obtained results show an advance in the handling of optimization methods and in the construction of the objective functions that should be used for the different design stages of fuel assemblies. (Author)

  10. Protein alignment algorithms with an efficient backtracking routine on multiple GPUs

    Kierzynka Michal

    2011-05-01

    Full Text Available Abstract Background Pairwise sequence alignment methods are widely used in biological research. The increasing number of sequences is perceived as one of the upcoming challenges for sequence alignment methods in the nearest future. To overcome this challenge several GPU (Graphics Processing Unit) computing approaches have been proposed lately. These solutions show the great potential of a GPU platform but in most cases address the problem of sequence database scanning and computing only the alignment score, whereas the alignment itself is omitted. Thus, the need arose to implement the global and semiglobal Needleman-Wunsch, and Smith-Waterman algorithms with a backtracking procedure, which is needed to construct the alignment. Results In this paper we present a solution that performs the alignment of every given sequence pair, which is a required step for progressive multiple sequence alignment methods, as well as for DNA recognition at the DNA assembly stage. Performed tests show that the implementation, with performance up to 6.3 GCUPS on a single GPU for affine gap penalties, is very efficient in comparison to other CPU- and GPU-based solutions. Moreover, multiple-GPU support with load balancing makes the application very scalable. Conclusions The article shows that the backtracking procedure of the sequence alignment algorithms may be designed to fit in with the GPU architecture. Therefore, our algorithm, apart from scores, is able to compute pairwise alignments. This opens a wide range of new possibilities, allowing other methods from the area of molecular biology to take advantage of the new computational architecture. Performed tests show that the efficiency of the implementation is excellent. Moreover, the speed of our GPU-based algorithms can be almost linearly increased when using more than one graphics card.
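
    The backtracking pass in question is the standard traceback over the dynamic-programming matrix; a plain CPU Needleman-Wunsch with linear gap penalties is sketched below to show exactly what has to be mapped onto the GPU (the published code uses affine gaps and GPU-specific memory layouts, which are not reproduced here):

```python
import numpy as np

def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment with an explicit backtracking pass (CPU sketch)."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    H[:, 0] = gap * np.arange(n + 1)
    H[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s, H[i - 1, j] + gap, H[i, j - 1] + gap)
    # Backtracking: walk from (n, m) to (0, 0) along one optimal path.
    ai, bi, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i and j and a[i - 1] == b[j - 1] else mismatch
        if i and j and H[i, j] == H[i - 1, j - 1] + s:
            ai.append(a[i - 1]); bi.append(b[j - 1]); i -= 1; j -= 1
        elif i and H[i, j] == H[i - 1, j] + gap:
            ai.append(a[i - 1]); bi.append('-'); i -= 1
        else:
            ai.append('-'); bi.append(b[j - 1]); j -= 1
    return ''.join(reversed(ai)), ''.join(reversed(bi)), H[n, m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```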

  11. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    Kodali, Anuradha

    facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on single fault (class)-single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the

  12. A fully automated contour detection algorithm the preliminary step for scatter and attenuation compensation in SPECT

    Younes, R.B.; Mas, J.; Bidet, R.

    1988-01-01

    Contour detection is an important step in information extraction from nuclear medicine images. In order to perform accurate quantitative studies in single photon emission computed tomography (SPECT), a new procedure is described which can rapidly derive the best-fit contour of an attenuated medium. Some authors evaluate the influence of the detected contour on the reconstructed images with various attenuation correction techniques, and most of the methods are strongly affected by inaccurately detected contours. This approach uses the Compton window to redetermine the convex contour; it is simpler and more practical in clinical SPECT studies. The main advantages of this procedure are the high speed of computation, the accuracy of the contour found and the programme's automation. Results obtained using computer-simulated and real phantoms as well as clinical studies demonstrate the reliability of the present algorithm. (orig.)

  13. Multiple depots vehicle routing based on the ant colony with the genetic algorithm

    ChunYing Liu

    2013-09-01

    Full Text Available Purpose: The number of distribution routing plans in the multi-depot vehicle scheduling problem increases exponentially as customers are added, so solving the vehicle scheduling problem with heuristic algorithms has become an important research direction. On the basis of a model of the multi-depot vehicle scheduling problem, and in order to improve the efficiency of multiple-depot vehicle routing, the paper puts forward a fusion algorithm for multiple-depot vehicle routing that combines the ant colony algorithm with a genetic algorithm. Design/methodology/approach: To achieve this objective, the genetic algorithm optimizes the parameters of the ant colony algorithm. The fusion algorithm for multiple-depot vehicle routing based on the ant colony algorithm with a genetic algorithm is proposed. Findings: Simulation experiments indicate that the result of the fusion algorithm is better than that of the other algorithms, and the improved algorithm has better convergence and global search ability. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the model, such as the pheromone volatility factor, the heuristic factor in each period, and the selected multiple depots. These assumptions can be relaxed in future work. Originality/value: In this research, a new method for multiple-depot vehicle routing is proposed. The fusion algorithm eliminates the influence of the selected parameters by optimizing the heuristic factor, evaporation factor and initial pheromone distribution, and has strong global search ability. The ant colony algorithm applies crossover and mutation operators to the first- and second-best solutions in every iteration, and preserves the best solution. The crossover and mutation operators extend the solution space and improve convergence and global search ability. This research shows that considering both the ant colony and genetic algorithm

  14. Fast and Rigorous Assignment Algorithm for Multiple Preferences and Calculation

    Ümit Çiftçi

    2010-03-01

    Full Text Available The goal of this paper is to develop an algorithm that evaluates students and then places them according to their desired choices and dependent preferences. The developed algorithm is also used to implement software. The success and accuracy of the software, as well as of the algorithm, are tested by applying it to the ability test at Beykent University. This ability test is repeated several times in order to fill all available places in the Fine Art Faculty departments in every academic year. It has been shown that this algorithm is very fast and rigorous after application in the 2008-2009 and 2009-2010 academic years. Key Words: Assignment algorithm, student placement, ability test

  15. Generalized Grover's Algorithm for Multiple Phase Inversion States

    Byrnes, Tim; Forster, Gary; Tessler, Louis

    2018-02-01

    Grover's algorithm is a quantum search algorithm that proceeds by repeated applications of the Grover operator and the Oracle until the state evolves to one of the target states. In the standard version of the algorithm, the Grover operator inverts the sign on only one state. Here we provide an exact solution to the problem of performing Grover's search where the Grover operator inverts the sign on N states. We show the underlying structure in terms of the eigenspectrum of the generalized Hamiltonian, and derive an appropriate initial state to perform the Grover evolution. This allows us to use the quantum phase estimation algorithm to solve the search problem in this generalized case, completely bypassing the Grover algorithm altogether. We obtain a time complexity for this case of √(D/M^α), where D is the search space dimension, M is the number of target states, and α ≈ 1, which is close to the optimal scaling.
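
    The √(D/M) behaviour of the ordinary multi-target case is easy to check numerically. The snippet below simulates standard Grover iterations in which the oracle flips the sign on M marked states (the plain multi-target setting, not the paper's generalized construction); after roughly (π/4)√(D/M) iterations the success probability is close to one.

```python
import numpy as np

D, M = 1024, 4                                   # space size, number of targets
marked = np.arange(M)                            # indices of the target states
s = np.full(D, 1 / np.sqrt(D))                   # uniform superposition |s>
psi = s.copy()

n_opt = int(round(np.pi / 4 * np.sqrt(D / M)))   # ~ optimal iteration count
for _ in range(n_opt):
    psi[marked] *= -1                            # oracle: sign flip on targets
    psi = 2 * s * (s @ psi) - psi                # diffusion: 2|s><s| - I
print(f"{n_opt} iterations, P(target) = {np.sum(psi[marked] ** 2):.4f}")
```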

  16. Multi-step wind speed forecasting based on a hybrid forecasting architecture and an improved bat algorithm

    Xiao, Liye; Qian, Feng; Shao, Wei

    2017-01-01

    Highlights: • Propose a hybrid architecture based on a modified bat algorithm for multi-step wind speed forecasting. • Improve the accuracy of multi-step wind speed forecasting. • Modify the bat algorithm with CG to improve optimization performance. - Abstract: As one of the most promising sustainable energy sources, wind energy plays an important role in energy development because it is clean and non-polluting. Generally, wind speed forecasting, which has an essential influence on wind power systems, is regarded as a challenging task. Analyses based on single-step wind speed forecasting have been widely used, but their results are insufficient for ensuring the reliability and controllability of wind power systems. In this paper, a new forecasting architecture based on decomposing algorithms and modified neural networks is successfully developed for multi-step wind speed forecasting. Four different hybrid models are contained in this architecture, and to further improve the forecasting performance, a modified bat algorithm (BA) with the conjugate gradient (CG) method is developed to optimize the initial weights between layers and the thresholds of the hidden layer of the neural networks. To investigate the forecasting abilities of the four models, wind speed data collected from four different wind power stations in Penglai, China, were used as a case study. The numerical experiments showed that the hybrid model including singular spectrum analysis and a general regression neural network with CG-BA (SSA-CG-BA-GRNN) achieved the most accurate forecasting results in one-step to three-step wind speed forecasting.

  17. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

    Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

    2011-01-01

    The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. The algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear ones, exploiting the properties of trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and especially suitable for high signal-to-noise systems.
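
    For context, the standard three-point phase-stepping retrieval illustrates how three images suffice and how trigonometric identities linearize the problem. The sketch below implements that textbook scheme, not necessarily the paper's specific low-dose variant; the per-pixel model I_k = a + b·cos(φ + 2πk/3) is the usual assumption.

```python
import numpy as np

def retrieve_three_step(I0, I1, I2):
    """Exact retrieval from three phase steps, I_k = a + b*cos(phi + 2*pi*k/3),
    using the first DFT coefficient over k (F1 = (3b/2) * exp(i*phi))."""
    I = np.stack([np.asarray(I0), np.asarray(I1), np.asarray(I2)])
    k = np.arange(3).reshape(3, *([1] * (I.ndim - 1)))
    F1 = np.sum(I * np.exp(-2j * np.pi * k / 3), axis=0)
    a = I.mean(axis=0)                           # attenuation (mean signal)
    b = 2 * np.abs(F1) / 3                       # fringe amplitude
    return a, b / a, np.angle(F1)                # mean, visibility, phase

a, b, phi = 10.0, 3.0, 0.7                       # synthesize one pixel, then recover
Ik = [a + b * np.cos(phi + 2 * np.pi * k / 3) for k in range(3)]
print(retrieve_three_step(*Ik))                  # ~ (10.0, 0.3, 0.7)
```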

  18. Application of multiple tabu search algorithm to solve dynamic economic dispatch considering generator constraints

    Pothiya, Saravuth; Ngamroo, Issarachai; Kongprawechnon, Waree

    2008-01-01

    This paper presents a new optimization technique based on a multiple tabu search (MTS) algorithm to solve the dynamic economic dispatch (ED) problem with generator constraints. In the constrained dynamic ED problem, the load demand and spinning reserve capacity as well as some practical operating constraints of generators, such as ramp rate limits and prohibited operating zones, are taken into consideration. The MTS algorithm introduces additional mechanisms such as initialization, adaptive searches, multiple searches, crossover and a restarting process. To show its efficiency, the MTS algorithm is applied to solve constrained dynamic ED problems of power systems with 6 and 15 units. The results obtained from the MTS algorithm are compared with those achieved by conventional approaches such as simulated annealing (SA), genetic algorithms (GA), tabu search (TS) and particle swarm optimization (PSO). The experimental results show that the proposed MTS algorithm is able to obtain higher-quality solutions efficiently and with less computational time than the conventional approaches.

  19. A face recognition algorithm based on multiple individual discriminative models

    Fagertun, Jens; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2005-01-01

    Abstract—In this paper, a novel algorithm for facial recognition is proposed. The technique combines the color texture and geometrical configuration provided by face images. Landmarks and pixel intensities are used by Principal Component Analysis and Fisher Linear Discriminant Analysis to associate...
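
    A minimal Fisherface-style pipeline in the same spirit (PCA for compression, Fisher LDA for class separation, nearest-neighbour matching) can be sketched as follows; scikit-learn's digits set stands in for a face database here, and the component counts are arbitrary assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)              # stand-in for face images
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# PCA decorrelates/compresses pixel intensities; LDA then maximizes
# between-class separation; a 1-NN classifier performs the final match.
model = make_pipeline(PCA(n_components=50, whiten=True),
                      LinearDiscriminantAnalysis(n_components=9),
                      KNeighborsClassifier(n_neighbors=1))
model.fit(Xtr, ytr)
print(f"recognition accuracy: {model.score(Xte, yte):.3f}")
```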

  20. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently; solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
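
    Minimizing the maximum travel distance is the bottleneck variant of the assignment problem. One simple way to solve it, sketched below under our own assumptions (the paper does not spell out its matching solver), is to binary-search the answer over the sorted pairwise distances and test each threshold with a Hungarian solve on a 0/1 "edge too long" cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def bottleneck_assignment(robots, slots):
    """Assign each robot to a grid slot so the MAXIMUM distance is minimal.
    A threshold t is feasible iff a perfect matching exists using only
    edges of length <= t, i.e. the 0/1 assignment below has cost 0."""
    D = cdist(robots, slots)
    cands = np.unique(D)
    lo, hi = 0, len(cands) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        r, c = linear_sum_assignment((D > cands[mid]).astype(float))
        lo, hi = (lo, mid) if D[r, c].max() <= cands[mid] else (mid + 1, hi)
    r, c = linear_sum_assignment((D > cands[lo]).astype(float))
    return c, cands[lo]                          # slot per robot, max distance

rng = np.random.default_rng(0)
robots = rng.random((100, 2))                    # one hundred random positions
g = np.linspace(0, 1, 10)
slots = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)   # 10x10 grid of slots
assign, dmax = bottleneck_assignment(robots, slots)
print(f"minimized maximum travel distance: {dmax:.3f}")
```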

  2. ALGORITHM TO CHOOSE ENERGY GENERATION MULTIPLE ROLE STATION

    Alexandru STĂNESCU

    2014-05-01

    Full Text Available This paper proposes an algorithm based on a complex analysis method used for choosing the configuration of a power station. The station generates electric energy and hydrogen, and serves a "green" highway. The elements that need to be considered are: energy efficiency, location, availability of primary energy sources in the area, investment cost, workforce, environmental impact, compatibility with existing systems, and mean time between failures.

  3. ESPRIT-like algorithm for computational-efficient angle estimation in bistatic multiple-input multiple-output radar

    Gong, Jian; Lou, Shuntian; Guo, Yiduo

    2016-04-01

    An ESPRIT-like (estimation of signal parameters via rotational invariance techniques) algorithm is proposed to estimate the direction of arrival and direction of departure for bistatic multiple-input multiple-output (MIMO) radar. The properties of a noncircular signal and Euler's formula are first exploited to establish real-valued bistatic MIMO radar array data, composed of sine and cosine components. Then the receiving/transmitting selective matrices are constructed to obtain the receiving/transmitting rotational invariance factors. Since the rotational invariance factor is a cosine function, symmetrical mirror angle ambiguity may occur. Finally, a maximum likelihood function is used to avoid the estimation ambiguities. Compared with the existing ESPRIT, the proposed algorithm can save about 75% of the computational load owing to its real-valued formulation. Simulation results confirm the effectiveness of the ESPRIT-like algorithm.
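
    For reference, the rotational-invariance idea behind any ESPRIT variant fits in a few lines. The sketch below is the standard complex-valued ESPRIT on a uniform linear array, not the paper's real-valued bistatic MIMO formulation; array size, angles and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, snaps = 8, 2, 200                          # sensors, sources, snapshots
theta = np.deg2rad([-10.0, 25.0])                # true directions of arrival
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(theta)))  # ULA, d = lambda/2
S = rng.standard_normal((K, snaps)) + 1j * rng.standard_normal((K, snaps))
N = rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps))
X = A @ S + 0.1 * N

R = X @ X.conj().T / snaps                       # sample covariance
Es = np.linalg.eigh(R)[1][:, -K:]                # signal subspace
Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]           # invariance: Es2 ~ Es1 @ Phi
doa = np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / np.pi))
print(np.sort(doa))                              # ~ [-10, 25]
```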

  4. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each method has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF). This avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation index of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.
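
    The role of a data-driven NMF initialization can be illustrated with scikit-learn, whose NMF accepts user-supplied starting factors via init='custom'. The nonnegative SVD-like surrogate below is our stand-in for the paper's feature-degree weights; the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((100, 64)))       # stand-in for image features
k = 8

# deterministic, data-driven starting factors instead of random ones
U, s, Vt = np.linalg.svd(X, full_matrices=False)
W0 = np.abs(U[:, :k] * s[:k])                    # nonnegative surrogate init
H0 = np.abs(Vt[:k])

model = NMF(n_components=k, init="custom", max_iter=500, tol=1e-6)
W = model.fit_transform(X, W=W0, H=H0)           # custom init removes randomness
print(f"reconstruction error: {model.reconstruction_err_:.4f}")
```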

  5. Magnetoresistance in hybrid organic spin valves at the onset of multiple-step tunneling.

    Schoonus, J J H M; Lumens, P G E; Wagemans, W; Kohlhepp, J T; Bobbert, P A; Swagten, H J M; Koopmans, B

    2009-10-02

    By combining experiments with simple model calculations, we obtain new insight into spin transport through hybrid CoFeB/Al2O3(1.5 nm)/tris(8-hydroxyquinoline)aluminium (Alq3)/Co spin valves. We have measured the characteristic changes in the I-V behavior as well as the intrinsic loss of magnetoresistance at the onset of multiple-step tunneling. In the regime of multiple-step tunneling, under the condition of low hopping rates, spin precession in the presence of hyperfine coupling is conjectured to be the relevant source of spin relaxation. A quantitative analysis leads to the prediction of a magnetoresistance that is symmetric around zero magnetic field in addition to the hysteretic magnetoresistance curves, which are indeed observed in our experiments.

  6. Multiple dual mode counter-current chromatography with variable duration of alternating phase elution steps.

    Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N

    2014-06-20

    The multiple dual mode (MDM) counter-current chromatography separation process consists of a succession of two isocratic counter-current steps and is characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of the alternating phase elution steps has been developed and validated. The MDM separation processes with variable-duration phase elution steps are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of the phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of MDM separation are analyzed: (1) one-step solute elution, where the separation is conducted so that the sample is transferred forward and back with the upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; and (2) multi-step solute elution, where the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of the individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared to the predictions of the theory, and a good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Field theoretical approach to proton-nucleus reactions: II-Multiple-step excitation process

    Eiras, A.; Kodama, T.; Nemes, M.

    1989-01-01

    A field theoretical formulation of the multiple-step excitation process in proton-nucleus collisions within the context of a relativistic eikonal approach is presented. A closed-form expression for the double differential cross section can be obtained whose structure is very simple and makes the physics transparent. Glauber's formulation of the same process is obtained as a limit of ours, and the necessary approximations are studied and discussed. (author) [pt

  8. Security Analysis of a Block Encryption Algorithm Based on Dynamic Sequences of Multiple Chaotic Systems

    Du, Mao-Kang; He, Bo; Wang, Yong

    2011-01-01

    Recently, cryptosystems based on chaos have attracted much attention. Wang and Yu (Commun. Nonlin. Sci. Numer. Simulat. 14 (2009) 574) proposed a block encryption algorithm based on dynamic sequences of multiple chaotic systems. We analyze the potential flaws in the algorithm and then present a chosen-plaintext attack. Some remedial measures are suggested to avoid the flaws effectively. Furthermore, an improved encryption algorithm is proposed to resist the attacks and to keep all the merits of the original cryptosystem.

  9. Somatic embryogenesis and in-vitro regeneration of rice (Oryza sativa L.) cultivars under one-step and multiple-step salinity stresses

    Khattak, Mohammad S. K.; Abiri, Rambod; Valdiani, Alireza

    2017-01-01

    The present study aimed to examine the effect of one-step and multiple-step salinity stress on the somatic embryogenesis of rice cultivars within the solid and liquid (cell suspension) culture media conditions. Five rice cultivars, including Puteh Perak, Mahsuri, Basmati-370, Nona Bokra and Khari......, and significant morphological changes were observed. In contrast, the multiple-step NaCl treatment of the calli and cell suspensions led to higher growth of the cultures in the presence of NaCl compared to the controls. The solid MS media, containing 3 μM IAA and 40 μM Kinetin performed as the best media...

  10. An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft

    Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.

    2010-01-01

    The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. Vortex tracking requires the analysis of very large three-dimensional, time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field and may interact with the ground surface. Another level of complication can arise if vortices from multiple aircraft are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents applications of the technique for single and multiple aircraft.

  11. Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem.

    Zhao, Ying; Shi, Luoyi

    2017-01-01

    This paper introduces a new extragradient-type method to solve the multiple-sets split equality problem (MSSEP). Under some suitable conditions, the strong convergence of the algorithm is verified in infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of our algorithm.

  12. Principles of a new treatment algorithm in multiple sclerosis

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being...

  13. Reload pattern optimization by application of multiple cyclic interchange algorithms

    Geemert, R. van; Quist, A.J.; Hoogenboom, J.E. [Technische Univ. Delft (Netherlands)

    1996-09-01

    Reload pattern optimization procedures are proposed which are based on the multiple cyclic interchange approach, according to which the search for the reload pattern associated with the highest objective function value can be thought of as divided into multiple stages. The transition from the initial to the final stage is characterized by an increase in the degree of locality of the search procedure. The general idea is that, during the first stages, the 'elite' cluster containing the group of best patterns must be located, after which the solution space is sampled in a more and more local sense to find the local optimum in this cluster. The transition(s) from global search behaviour to local search behaviour can be either prompt, by defining strictly separate search regimes, or gradual, by introducing stochastic tests for the number of fuel bundles involved in a cyclic interchange. Equilibrium cycle optimization results are reported for a test PWR reactor core of modest size. (author)

  15. On the Stationarity of Multiple Autoregressive Approximants: Theory and Algorithms

    1976-08-01

    Hannan and Terrell (1972) consider problems of a similar nature, involving efficient estimates of A(1), ..., A(p). References recoverable from the fragment: "Autoregressive model fitting for control," Ann. Inst. Statist. Math., 23, 163-180; Hannan, E. J. (1970), Multiple Time Series, New York: John Wiley; Hannan, E. J. and Terrell, R. D. (1972), "Time series regression with linear constraints," International Economic Review, 13, 189-200; Masani, P.

  16. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Huang, Yu

    Solar energy has become one of the major alternative renewable energy options owing to its abundance and accessibility. Because of the intermittent nature of sunlight, there is a high demand for Maximum Power Point Tracking (MPPT) techniques when a photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at practical operating circumstances. First, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, deploying input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
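
    A compact sketch of an adaptive-step P&O loop is shown below. The toy PV curve, the duty-to-voltage mapping and the step-adaptation rule are simplified assumptions standing in for the thesis's Simulink model; only the overall logic (perturb the duty ratio, observe the power change, scale the next step by the normalized power change) reflects the described approach.

```python
import numpy as np

def pv_power(v):
    """Toy single-diode-flavoured P-V curve (series/shunt resistances of the
    thesis model are not reproduced here)."""
    i = 8.0 - 1e-8 * (np.exp(v / 1.6) - 1.0)
    return v * max(i, 0.0)

def adaptive_po_mppt(steps=60, d=0.5, dd=0.02):
    """Perturb & Observe on the boost-converter duty ratio with an adaptive
    step: large |dP| keeps the step large, small |dP| (near the MPP) shrinks it."""
    v = 36.0 * (1.0 - d)                         # hypothetical duty-to-voltage map
    p_prev = pv_power(v)
    for _ in range(steps):
        d = min(max(d + dd, 0.05), 0.95)
        v = 36.0 * (1.0 - d)
        p = pv_power(v)
        dp = p - p_prev
        direction = np.sign(dp) * np.sign(dd) if dp * dd != 0 else 1.0
        dd = direction * min(0.05, 0.5 * abs(dp) / max(p, 1e-9) + 1e-4)
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = adaptive_po_mppt()
print(f"settled near V = {v_mpp:.2f} V, P = {p_mpp:.1f} W")
```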

  17. Optimization of Multiple Traveling Salesman Problem Based on Simulated Annealing Genetic Algorithm

    Xu Mingji

    2017-01-01

    Full Text Available It is very effective to solve multivariable optimization problems by using a hierarchical genetic algorithm. This thesis analyzes both the advantages and disadvantages of the hierarchical genetic algorithm and puts forward an improved simulated annealing genetic algorithm. The new algorithm is applied to solve the multiple traveling salesman problem and improves the quality of the solution. First, it improves the design of the hierarchical chromosome structure with respect to redundancy and suggests a suffix design for chromosomes. Second, concerning the premature-convergence problems of genetic algorithms, it proposes a self-identifying crossover operator and mutation. Third, to address the weak local search ability of genetic algorithms, it stretches the fitness by combining the genetic algorithm with a simulated annealing algorithm. Fourth, it simulates problems with N traveling salesmen and M cities to verify feasibility. The simulation and calculation show that the improved algorithm converges quickly to a globally best solution, which means the algorithm is encouraging for practical use.

  18. Kalman Filtering for Discrete Stochastic Systems with Multiplicative Noises and Random Two-Step Sensor Delays

    Dongyan Chen

    2015-01-01

    Full Text Available This paper is concerned with the optimal Kalman filtering problem for a class of discrete stochastic systems with multiplicative noises and random two-step sensor delays. Three Bernoulli distributed random variables with known conditional probabilities are introduced to characterize the random two-step sensor delays that may occur during data transmission. By using the state augmentation approach and an innovation analysis technique, an optimal Kalman filter is constructed for the augmented system in the sense of the minimum mean square error (MMSE). Subsequently, the optimal Kalman filter is derived for the corresponding augmented system at the initial instants. Finally, a simulation example is provided to demonstrate the feasibility and effectiveness of the proposed filtering method.

  19. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. The particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from the studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate RNN and RMLP into a two-step structure learning procedure. Results show that the RE_RMLP using the RMLP with a suitable number of latent nodes to reduce the parameter dimension often results in more accurate edge ranks than the RE_RNN using the regularized RNN on short simulated time series. Combining by a weighted majority voting rule the networks derived by the RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step

  20. [Therapy of fatigue in multiple sclerosis : A treatment algorithm].

    Veauthier, C; Paul, F

    2016-12-01

    Fatigue is one of the most frequent symptoms of multiple sclerosis (MS) and one of the main reasons for underemployment and early retirement. The mechanisms of MS-related fatigue are unknown but comorbid disorders play a major role. Anemia, diabetes, side effects of medication and depression should be ruled out. Moreover, excessive daytime sleepiness (EDS) should be differentiated from fatigue. No approved medicinal therapy of MS fatigue is currently available. Presentation of current treatment strategies with a particular focus on secondary fatigue due to sleep disorders. A review of the literature was carried out. All MS patients suffering from fatigue should be questioned with respect to EDS and if necessary sleep medical investigations should be carried out; however, pure fatigue without accompanying EDS can also be caused by a sleep disorder. Medications, particularly freely available antihistamines, can also increase fatigue. Furthermore, anemia, iron deficits, diabetes and hypothyroidism should be excluded. Self-assessment questionnaires show an overlap between depression and fatigue. Several studies have shown that cognitive behavioral therapy and various psychotherapeutic measures, such as vertigo training, progressive exercise training and individualized physiotherapy as well as fatigue management interventions can lead to a significant improvement of MS-related fatigue. There is currently no medication which is suitable for treatment of fatigue, with the exception of fampridine for the treatment of motor functions and motor fatigue.

  1. Validation of patient determined disease steps (PDDS) scale scores in persons with multiple sclerosis.

    Learmonth, Yvonne C; Motl, Robert W; Sandroff, Brian M; Pula, John H; Cadavid, Diego

    2013-04-25

    The Patient Determined Disease Steps (PDDS) is a promising patient-reported outcome (PRO) of disability in multiple sclerosis (MS). To date, there is limited evidence regarding the validity of PDDS scores, despite its sound conceptual development and broad inclusion in MS research. This study examined the validity of the PDDS based on (1) the association with Expanded Disability Status Scale (EDSS) scores and (2) the pattern of associations between PDDS and EDSS scores with Functional System (FS) scores as well as ambulatory and other outcomes. 96 persons with MS provided demographic/clinical information, completed the PDDS and other PROs including the Multiple Sclerosis Walking Scale-12 (MSWS-12), and underwent a neurological examination for generating FS and EDSS scores. Participants completed assessments of cognition and ambulation including the 6-minute walk (6 MW), and wore an accelerometer during waking hours over seven days. There was a strong correlation between EDSS and PDDS scores (ρ = .783). PDDS and EDSS scores were strongly correlated with Pyramidal (ρ = .578 and ρ = .647, respectively) and Cerebellar (ρ = .501 and ρ = .528, respectively) FS scores as well as 6 MW distance (ρ = .704 and ρ = .805, respectively), MSWS-12 scores (ρ = .801 and ρ = .729, respectively), and accelerometer steps/day (ρ = -.740 and ρ = -.717, respectively). This study provides novel evidence supporting the PDDS as a valid PRO of disability in MS.

  2. Sequential multiple-step europium ion implantation and annealing of GaN

    Miranda, S. M C; Edwards, Paul R.; O'Donnell, Kevin Peter; Boćkowski, Michał X.; Alves, Eduardo Jorge; Roqan, Iman S.; Vantomme, André ; Lorenz, Katharina

    2014-01-01

    Sequential multiple Eu ion implantations at low fluence (1×10^13 cm^-2 at 300 keV) and subsequent rapid thermal annealing (RTA) steps (30 s at 1000 °C or 1100 °C) were performed on high-quality nominally undoped GaN films grown by metal organic chemical vapour deposition (MOCVD) and medium-quality GaN:Mg grown by hydride vapour phase epitaxy (HVPE). Compared to samples implanted in a single step, multiple implantation/annealing shows only marginal structural improvement for the MOCVD samples, but a significant improvement of crystal quality and optical activation of Eu was achieved in the HVPE films. This improvement is attributed to the lower crystalline quality of the starting material, which probably enhances the diffusion of defects and acts to facilitate the annealing of implantation damage and the effective incorporation of the Eu ions into the crystal structure. Optical activation of Eu3+ ions in the HVPE samples was further improved by high-temperature, high-pressure (HTHP) annealing up to 1400 °C. After HTHP annealing the main room-temperature cathodo- and photoluminescence line in Mg-doped samples lies at ∼619 nm, characteristic of a known Mg-related Eu3+ centre, while after RTA treatment the dominant line lies at ∼622 nm, typical for undoped GaN:Eu. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. On The Effective Construction of Asymmetric Chudnovsky Multiplication Algorithms in Finite Fields Without Derivated Evaluation

    Ballet, Stéphane; Baudru, Nicolas; Bonnecaze, Alexis; Tukumuli, Mila

    2016-01-01

    The Chudnovsky and Chudnovsky algorithm for multiplication in extensions of finite fields provides a bilinear complexity which is uniformly linear with respect to the degree of the extension. Recently, Randriambololona generalized the method, allowing asymmetry in the interpolation procedure and leading to new upper bounds on the bilinear complexity. We describe the effective algorithm of this asymmetric method, without derivated evaluation. Finally, we give examples with the finite ...

  5. Evaluation of Multiple Kernel Learning Algorithms for Crop Mapping Using Satellite Image Time-Series Data

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2017-09-01

    Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from the different images of the SITS data and then combined into a composite kernel using MKL algorithms. The composite kernel, once constructed, can be used to classify the data with kernel-based classification algorithms. We compared the computational time and classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The MKL algorithms considered are MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL. Experimental tests of the proposed strategy on two SITS datasets, acquired by SPOT satellite sensors, showed that the strategy provides better performance than the standard classification algorithm. The results also showed that the optimization method of the MKL algorithms affects both the computational time and the classification accuracy of the strategy.
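
    The composite-kernel step is easy to sketch: build one kernel per acquisition date and combine them with weights that sum to one (fixed and uniform here, in the MKL-Sum spirit; SimpleMKL, LPMKL and Group-Lasso MKL would learn the weights instead). The toy data stand in for real SITS imagery.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, T, d = 300, 6, 4                              # pixels, dates, bands per date
X = rng.standard_normal((n, T, d))
y = (X[:, 2, 0] + 0.5 * X[:, 4, 1] > 0).astype(int)   # labels driven by two dates

kernels = [rbf_kernel(X[:, t, :]) for t in range(T)]  # one RBF kernel per date
mu = np.ones(T) / T                              # fixed weights summing to one
K = sum(m * k for m, k in zip(mu, kernels))      # composite kernel

tr = rng.random(n) < 0.7                         # train/test split
clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
print(f"composite-kernel accuracy: {clf.score(K[np.ix_(~tr, tr)], y[~tr]):.3f}")
```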

  6. A constrained tracking algorithm to optimize plug patterns in multiple isocenter Gamma Knife radiosurgery planning

    Li Kaile; Ma Lijun

    2005-01-01

    We developed a source blocking optimization algorithm for Gamma Knife radiosurgery, which is based on tracking individual source contributions to arbitrarily shaped target and critical structure volumes. A scalar objective function and a direct search algorithm were used to produce near real-time calculation results. The algorithm allows the user to set and vary the total number of plugs for each shot to limit the total beam-on time. We implemented and tested the algorithm for several multiple-isocenter Gamma Knife cases. It was found that the use of a limited number of plugs significantly lowered the integral dose to critical structures such as the optic chiasm in pituitary adenoma cases. The main effect of the source blocking is the faster dose falloff in the junction area between the target and the critical structure. In summary, we demonstrated a useful source-plugging algorithm for improving complex multi-isocenter Gamma Knife treatment planning cases.

  7. A Novel Algorithm for Efficient Downlink Packet Scheduling for Multiple-Component-Carrier Cellular Systems

    Yao-Liang Chung

    2016-11-01

    Full Text Available The simultaneous aggregation of multiple component carriers (CCs) for use by a base station constitutes one of the more promising strategies for providing substantially enhanced bandwidths for packet transmissions in 4th and 5th generation cellular systems. To the best of our knowledge, however, few previous studies have thoroughly investigated the performance of a simple yet effective packet scheduling algorithm in which multiple CCs are aggregated for transmission in such systems. Consequently, the present study presents an efficient packet scheduling algorithm designed on the basis of the proportional fair criterion for downlink transmission in multiple-CC systems. The proposed algorithm provides simultaneous transmission support for both real-time (RT) and non-RT traffic. With a sufficiently efficient design, it achieves adequate utilization of spectrum resources for transmission while also improving energy efficiency to some extent. According to the simulation results, the performance of the proposed algorithm in terms of system throughput, mean delay and fairness constitutes a substantial improvement over that of an algorithm in which the CCs are used independently instead of being aggregated.
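
    The proportional fair criterion underlying such a scheduler can be sketched per component carrier: in every slot, each CC goes to the user maximizing the ratio of instantaneous rate to smoothed average throughput. The snippet below is a generic PF-over-aggregated-CCs sketch that omits the paper's RT/non-RT traffic handling; the channel model and window factor are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_ccs, slots = 6, 3, 2000
T = np.full(n_users, 1e-6)                       # smoothed average throughputs
beta = 0.01                                      # moving-average window factor
served = np.zeros(n_users)

for _ in range(slots):
    r = rng.rayleigh(1.0, (n_users, n_ccs))      # instantaneous rate per user/CC
    winners = np.argmax(r / T[:, None], axis=0)  # PF metric, per CC independently
    got = np.zeros(n_users)
    for cc, u in enumerate(winners):
        got[u] += r[u, cc]                       # a user may win several CCs at once
    served += got
    T = (1 - beta) * T + beta * got              # update throughput averages

print("long-run user throughputs:", np.round(served / slots, 3))
```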

  8. Near-optimal power allocation with PSO algorithm for MIMO cognitive networks using multiple AF two-way relays

    Alsharoa, Ahmad M.

    2014-06-01

    In this paper, the problem of power allocation for a multiple-input multiple-output two-way system is investigated in an underlay Cognitive Radio (CR) set-up. In the CR underlay mode, secondary users are allowed to exploit the spectrum allocated to primary users in an opportunistic manner by respecting a tolerated interference temperature limit. The secondary network employs an amplify-and-forward two-way relaying technique in order to maximize the sum rate under power budget and interference constraints. In this context, we formulate an optimization problem that is solved in two steps. First, we derive a closed-form expression for the optimal power allocated to the terminals. Then, we employ a strong optimization tool based on the particle swarm optimization algorithm to find the power allocated to the secondary relays. Simulation results demonstrate the efficiency of the proposed solution and analyze the impact of some system parameters on the achieved performance. © 2014 IEEE.
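
    For the second step, a minimal global-best PSO conveys the idea; the sketch below optimizes a toy concave surrogate of a sum rate inside a power-budget box, both of which are our assumptions rather than the paper's actual objective.

```python
import numpy as np

def pso(f, lb, ub, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, len(lb)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)               # stay inside the power-budget box
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# toy surrogate: maximize sum log(1 + 2p) - total power (interior optimum p = 0.5)
best, val = pso(lambda p: -(np.sum(np.log1p(2 * p)) - p.sum()),
                lb=np.zeros(4), ub=np.full(4, 2.0))
print(np.round(best, 3), -val)
```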

  9. Double evolutional artificial bee colony algorithm for multiple traveling salesman problem

    Xue Ming Hao

    2016-01-01

    Full Text Available The double evolutional artificial bee colony algorithm (DEABC) is proposed for solving the single-depot multiple traveling salesman problem (MTSP). The proposed DEABC algorithm, which takes advantage of the strength of the upgraded operators, is characterized by its guidance in exploitation search and diversity in exploration search. The double evolutional process for exploitation search is composed of two phases of half-stochastic optimal search, and the diversity-generating operator for exploration search is applied to solutions that cannot be improved after a limited number of attempts. The computational results demonstrate the superiority of our algorithm over previous state-of-the-art methods.

  10. A comparison of step-and-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects

    Kamath, Srijit; Sahni, Sartaj; Ranka, Sanjay; Li, Jonathan; Palta, Jatinder

    2004-01-01

    The performances of three recently published leaf sequencing algorithms for step-and-shoot intensity-modulated radiation therapy delivery that eliminates tongue-and-groove underdosage are evaluated. Proofs are given to show that the algorithm of Que et al (2004 Phys. Med. Biol. 49 399-405) generates leaf sequences free of tongue-and-groove underdosage and interdigitation. However, the total beam-on times could be up to n times those of the sequences generated by the algorithms of Kamath et al (2004 Phys. Med. Biol. 49 N7-N19), which are optimal in beam-on time for unidirectional leaf movement under the same constraints, where n is the total number of involved leaf pairs. Using 19 clinical fluence matrices and 100 000 randomly generated 15 x 15 matrices, the average monitor units and number of segments of the leaf sequences generated using the algorithm of Que et al are about two to four times those generated by the algorithm of Kamath et al

  12. Reduction rules-based search algorithm for opportunistic replacement strategy of multiple life-limited parts

    Xuyun FU

    2018-01-01

    Full Text Available The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem that exists widely in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective and that the time consumed by the algorithm is less than 38 s when the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions that are much better than the results of the traditional method, and that it can provide support for determining the LLPs to be replaced when determining the maintenance workscope of an aircraft engine. Therefore, the algorithm is applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.

  13. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to an increase in the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph-regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that during learning the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of the cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are used to demonstrate the efficiency of the new algorithms. Finally, the clustering accuracies of different algorithms are investigated, showing that the proposed algorithms can achieve state-of-the-art performance in image clustering applications.
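
    A classic instance of such rules is graph-regularized NMF, whose multiplicative updates involve the affinity matrix W and degree matrix D of the sample graph. The sketch below implements the standard GNMF-style updates (the paper's improved cost function modifies this scheme); the data and the Gaussian affinity are toy assumptions.

```python
import numpy as np

def gnmf(X, W, k, lam=1.0, iters=300, seed=0):
    """Graph-regularized NMF, X ~ U V^T, via multiplicative updates.
    X: (features x samples), nonnegative; W: (samples x samples) affinity;
    the regularizer is lam * tr(V^T L V) with L = D - W."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + 1e-3
    V = rng.random((n, k)) + 1e-3
    D = np.diag(W.sum(axis=1))                   # degree matrix
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + 1e-12)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + 1e-12)
    return U, V                                  # cluster: argmax over rows of V

rng = np.random.default_rng(1)
X = np.abs(np.hstack([rng.normal(3, 0.3, (20, 30)), rng.normal(1, 0.3, (20, 30))]))
dists = np.square(X.T[:, None] - X.T[None]).sum(-1)
W = np.exp(-dists / 10.0)                        # Gaussian affinity between samples
U, V = gnmf(X, W, k=2)
print(np.argmax(V, axis=1))                      # recovered sample cluster labels
```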

  14. A three-step algorithm for CANDECOMP/PARAFAC analysis of large data sets with multicollinearity

    Kiers, H.A.L.

    1998-01-01

    Fitting the CANDECOMP/PARAFAC model by the standard alternating least squares algorithm often requires very many iterations. One case in point is that of analysing data with mild to severe multicollinearity. If, in addition, the size of the data is large, the computation of one CANDECOMP/PARAFAC

  15. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available to the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparations, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only in the actual numbers of reported protein groups but also in the actual composition of the groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there is a vast number of protein inference algorithms and implementations available to the proteomics community. Protein assembly impacts the final results of the research, the quantitation values and the final claims of the research manuscript. Even though protein

  16. Computer experiments of the time-sequence of individual steps in multiple Coulomb-excitation

    Boer, J. de; Dannhaueser, G.

    1982-01-01

    The way in which the multiple E2 steps in the Coulomb excitation of a rotational band of a nucleus follow one another is elucidated for selected examples using semiclassical computer experiments. The role a given transition plays in the excitation of a given final state is measured by a quantity named the "importance function". It is found that these functions, calculated for the highest rotational state, peak at times forming a sequence for the successive E2 transitions starting from the ground state. This sequential behaviour is used to approximately account for the effects on the projectile orbit of the sequential transfer of excitation energy and angular momentum from projectile to target. These orbits lead to similar deflection functions and cross sections as those obtained from a symmetrization procedure approximately accounting for the transfer of angular momentum and energy. (Auth.)

  17. Multiple Representation Instruction First versus Traditional Algorithmic Instruction First: Impact in Middle School Mathematics Classrooms

    Flores, Raymond; Koontz, Esther; Inan, Fethi A.; Alagic, Mara

    2015-01-01

    This study examined the impact of the order of two teaching approaches on students' abilities and on-task behaviors while learning how to solve percentage problems. Two treatment groups were compared. MR first received multiple representation instruction followed by traditional algorithmic instruction and TA first received these teaching…

  18. An algorithm to compute a rule for division problems with multiple references

    Sánchez Sánchez, Francisca J.

    2012-01-01

    Full Text Available In this paper we consider an extension of the classic division problem with claims: the division problem with multiple references. Hinojosa et al. (2012) provide a solution for this type of problem. The aim of this work is to extend their results by proposing an algorithm that calculates allocations based on these results. All computational details are provided in the paper.

  19. Subroutine MLTGRD: a multigrid algorithm based on multiplicative correction and implicit non-stationary iteration

    Barry, J.M.; Pollard, J.P.

    1986-11-01

    A FORTRAN subroutine MLTGRD is provided to efficiently solve the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels.

  20. New time-saving predictor algorithm for multiple breath washout in adolescents

    Grønbæk, Jonathan; Hallas, Henrik Wegener; Arianto, Lambang

    2016-01-01

    BACKGROUND: Multiple breath washout (MBW) is an informative but time-consuming test. This study evaluates the uncertainty of a time-saving predictor algorithm in adolescents. METHODS: Adolescents were recruited from the Copenhagen Prospective Study on Asthma in Childhood (COPSAC2000) birth cohort...

  1. A branch-and-cut algorithm for the vehicle routing problem with multiple use of vehicles

    İsmail Karaoğlan

    2015-06-01

    Full Text Available This paper addresses the vehicle routing problem with multiple use of vehicles (VRPMUV), an important variant of the classic vehicle routing problem (VRP). Unlike in the classical VRP, vehicles are allowed to use more than one route in the VRPMUV. We propose a branch-and-cut algorithm for solving the VRPMUV. The proposed algorithm includes several valid inequalities from the literature for the purpose of improving its lower bounds, and a heuristic algorithm based on simulated annealing and a mixed-integer-programming-based intensification procedure for obtaining the upper bounds. The algorithm is evaluated on test problems derived from the literature. The computational results show that, with 120 customers in the simulation, the problem is solved optimally in a reasonable amount of time.

  2. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    Jiaxi Wang

    2016-01-01

    Full Text Available The shunting schedule of electric multiple units depot (SSED is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality.

  3. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998

  4. Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix

    Zhengyan Zhang

    2018-03-01

    Full Text Available In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar.

  5. Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix.

    Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo

    2018-03-07

    In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar.

  6. Simulation study of multi-step model algorithmic control of the nuclear reactor thermal power tracking system

    Shi Xiaoping; Xu Tianshu

    2001-01-01

    The classical control method is usually hard to ensure the thermal power tracking accuracy, because the nuclear reactor system is a complex nonlinear system with uncertain parameters and disturbances. A non-parametric model is constructed from the open-loop impulse response of the system. Furthermore, a thermal power tracking digital control law is presented using the multi-step model algorithmic control principle. The control method presented has good tracking performance and robustness, and it can work despite the existence of unmeasurable disturbances. The simulation experiment testifies to the correctness and effectiveness of the method. High-accuracy matching between the thermal power and the reference load is achieved.
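
    As a rough illustration of the two ingredients named above (a non-parametric impulse-response model and a multi-step prediction horizon), here is a hypothetical single-input model algorithmic control step in Python. The horizon length, the ridge regularization and the constant-disturbance feedback correction are assumptions of the sketch, not the paper's tuned design.

```python
import numpy as np

def mac_control(h, y_meas, r_future, u_past, n_pred=10, lam=0.1):
    """One model algorithmic control step: the plant is known only through its
    open-loop impulse response h; the next (held-constant) input level is
    chosen to track r_future over n_pred steps, with ridge weight lam."""
    N = len(h)
    # free response: what past inputs alone would produce at t+1 .. t+n_pred
    free = np.array([sum(h[j + k] * u_past[-1 - j] for j in range(N - k))
                     for k in range(1, n_pred + 1)])
    y_hat = sum(h[j] * u_past[-1 - j] for j in range(N))       # model output now
    d = y_meas - y_hat                    # constant-disturbance feedback correction
    G = np.array([sum(h[:k]) for k in range(1, n_pred + 1)])   # step-response column
    err = r_future[:n_pred] - free - d
    return (G @ err) / (G @ G + lam)      # one-parameter least-squares input level

h = 0.3 * 0.7 ** np.arange(20)            # toy stable impulse response
u = mac_control(h, y_meas=0.8, r_future=np.ones(10), u_past=np.ones(25))
```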

  7. Medical chart validation of an algorithm for identifying multiple sclerosis relapse in healthcare claims.

    Chastek, Benjamin J; Oleen-Burkey, Merrikay; Lopez-Bresnahan, Maria V

    2010-01-01

    Relapse is a common measure of disease activity in relapsing-remitting multiple sclerosis (MS). The objective of this study was to test the content validity of an operational algorithm for detecting relapse in claims data. A claims-based relapse detection algorithm was tested by comparing its detection rate over a 1-year period with relapses identified based on medical chart review. According to the algorithm, MS patients in a US healthcare claims database who had either (1) a primary claim for MS during hospitalization or (2) a corticosteroid claim following an MS-related outpatient visit were designated as having a relapse. Patient charts were examined for explicit indication of relapse or care suggestive of relapse. Positive and negative predictive values were calculated. Medical charts were reviewed for 300 MS patients, half of whom had a relapse according to the algorithm. The claims-based criteria correctly classified 67.3% of patients with relapses (positive predictive value) and 70.0% of patients without relapses (negative predictive value), with kappa 0.373, supporting the content validity of the operational algorithm. Limitations of the algorithm include lack of differentiation between relapsing-remitting MS and other types, and that it does not incorporate measures of function and disability. The claims-based algorithm appeared to successfully detect moderate-to-severe MS relapse. This validated definition can be applied to future claims-based MS studies.
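
    The two claims-based criteria translate almost directly into a rule; the sketch below is a hypothetical Python rendering. The record field names and the 30-day window linking a corticosteroid claim to a prior MS-related outpatient visit are illustrative assumptions, not details given in the abstract.

```python
from datetime import date

def detect_relapse(claims, window_days=30):
    """Return True if the claim history satisfies either criterion:
    (1) hospitalization with MS as the primary diagnosis, or
    (2) a corticosteroid claim following an MS-related outpatient visit."""
    if any(c.get("setting") == "inpatient" and c.get("primary_dx") == "MS"
           for c in claims):
        return True
    visit_dates = [c["date"] for c in claims
                   if c.get("setting") == "outpatient" and c.get("ms_related")]
    return any(c.get("drug_class") == "corticosteroid"
               and any(0 <= (c["date"] - d).days <= window_days for d in visit_dates)
               for c in claims)

claims = [{"setting": "outpatient", "ms_related": True, "date": date(2010, 3, 1)},
          {"drug_class": "corticosteroid", "date": date(2010, 3, 10)}]
print(detect_relapse(claims))   # True, via criterion (2)
```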

  8. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
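
    For reference, the baseline Sexton-Weingarten nested leapfrog that the paper generalises can be sketched in a few lines; here the inner scale is forced to be an exact integer subdivision of the outer step, which is precisely the restriction the proposed integrator relaxes. Unit masses and the toy harmonic forces are assumptions of the sketch.

```python
def nested_leapfrog(q, p, dt, n_steps, f_slow, f_fast, n_inner):
    """Two-scale leapfrog for H = p^2/2 + S_slow(q) + S_fast(q): the expensive
    slow force is applied with step dt, the cheap fast force with dt/n_inner."""
    for _ in range(n_steps):
        p += 0.5 * dt * f_slow(q)          # outer half kick
        h = dt / n_inner
        for _ in range(n_inner):           # inner leapfrog on the fast force
            p += 0.5 * h * f_fast(q)
            q += h * p                     # unit-mass drift
            p += 0.5 * h * f_fast(q)
        p += 0.5 * dt * f_slow(q)          # closing outer half kick
    return q, p

# toy check with a soft/stiff pair of harmonic forces; energy stays bounded
q, p = nested_leapfrog(1.0, 0.0, 0.1, 200,
                       f_slow=lambda q: -0.5 * q,
                       f_fast=lambda q: -25.0 * q,
                       n_inner=10)
```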

  9. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
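
    The three stated principles suggest a simple decision rule: pick the dominant-energy channel, treat comparable energy on the two temporalis channels as the simultaneous clench, and separate the two frontalis (eyebrow) movements spectrally. The sketch below is a toy illustration; the channel ordering, thresholds and spectral-centroid cut are assumptions, not the paper's tuned classifier.

```python
import numpy as np

def classify_movement(emg, fs=1000.0):
    """emg: array of shape (3, n) with channels [frontalis, left temporalis,
    right temporalis]; returns one of the five cursor commands."""
    energy = np.sum(emg.astype(float) ** 2, axis=1)
    frontalis, left_t, right_t = energy
    # principle 3: correlated energy on both temporalis channels -> left-click
    if (min(left_t, right_t) > 0.6 * max(left_t, right_t)
            and min(left_t, right_t) > frontalis):
        return "left-click (both jaws clenched)"
    dominant = int(np.argmax(energy))     # principle 1: dominant-energy channel
    if dominant == 1:
        return "left (left jaw clench)"
    if dominant == 2:
        return "right (right jaw clench)"
    # principle 2: eyebrows up vs down separated by spectral content
    spectrum = np.abs(np.fft.rfft(emg[0]))
    freqs = np.fft.rfftfreq(emg.shape[1], 1.0 / fs)
    centroid = (freqs * spectrum).sum() / spectrum.sum()
    return "up (eyebrows up)" if centroid > 80.0 else "down (eyebrows down)"

emg = np.random.randn(3, 512)
emg[1] *= 5.0                             # exaggerate the left temporalis channel
print(classify_movement(emg))             # -> "left (left jaw clench)"
```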

  10. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  11. Murasaki: a fast, parallelizable algorithm to find anchors from multiple genomes.

    Kris Popendorf

    Full Text Available BACKGROUND: With the number of available genome sequences increasing rapidly, the magnitude of sequence data required for multiple-genome analyses is a challenging problem. When large-scale rearrangements break the collinearity of gene orders among genomes, genome comparison algorithms must first identify sets of short well-conserved sequences present in each genome, termed anchors. Previously, anchor identification among multiple genomes has been achieved using pairwise alignment tools like BLASTZ through progressive alignment tools like TBA, but the computational requirements for sequence comparisons of multiple genomes quickly becomes a limiting factor as the number and scale of genomes grows. METHODOLOGY/PRINCIPAL FINDINGS: Our algorithm, named Murasaki, makes it possible to identify anchors within multiple large sequences on the scale of several hundred megabases in few minutes using a single CPU. Two advanced features of Murasaki are (1) adaptive hash function generation, which enables efficient use of arbitrary mismatch patterns (spaced seeds) and therefore the comparison of multiple mammalian genomes in a practical amount of computation time, and (2) parallelizable execution that decreases the required wall-clock and CPU times. Murasaki can perform a sensitive anchoring of eight mammalian genomes (human, chimp, rhesus, orangutan, mouse, rat, dog, and cow) in 21 hours CPU time (42 minutes wall time). This is the first single-pass in-core anchoring of multiple mammalian genomes. We evaluated Murasaki by comparing it with the genome alignment programs BLASTZ and TBA. We show that Murasaki can anchor multiple genomes in near linear time, compared to the quadratic time requirements of BLASTZ and TBA, while improving overall accuracy. CONCLUSIONS/SIGNIFICANCE: Murasaki provides an open source platform to take advantage of long patterns, cluster computing, and novel hash algorithms to produce accurate anchors across multiple genomes with
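
    The core anchoring step, bucketing positions from every genome by a hash of their spaced-seed-masked k-mer, can be sketched compactly. The seed pattern and the plain string keys below are placeholders for Murasaki's adaptively generated hash functions, and real genomes would of course need the memory- and parallelism-aware machinery the paper describes.

```python
from collections import defaultdict

def spaced_seed_candidates(genomes, seed="110101101"):
    """Bucket every position in every genome by its seed-masked k-mer; buckets
    hit by all genomes are candidate anchor sites (mismatches are tolerated at
    the '0' positions of the seed)."""
    care = [i for i, c in enumerate(seed) if c == "1"]
    k = len(seed)
    buckets = defaultdict(list)
    for g, seq in enumerate(genomes):
        for pos in range(len(seq) - k + 1):
            key = "".join(seq[pos + i] for i in care)
            buckets[key].append((g, pos))
    return {key: hits for key, hits in buckets.items()
            if len({g for g, _ in hits}) == len(genomes)}

hits = spaced_seed_candidates(["ACGTACGTTACGA", "TTACGTACGATTC"])
```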

  12. Algorithms

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  13. Through-Wall Multiple Targets Vital Signs Tracking Based on VMD Algorithm

    Jiaming Yan

    2016-08-01

    Full Text Available Targets located at the same distance are easily neglected in most through-wall multiple targets detecting applications which use the single-input single-output (SISO) ultra-wideband (UWB) radar system. In this paper, a novel multiple targets vital signs tracking algorithm for through-wall detection using SISO UWB radar has been proposed. Taking advantage of the high-resolution decomposition of the Variational Mode Decomposition (VMD) based algorithm, the respiration signals of different targets can be decomposed into different sub-signals, and then we can track the time-varying respiration signals accurately when human targets are located at the same distance. Intensive evaluation has been conducted to show the effectiveness of our scheme with a 0.15 m thick concrete brick wall. Constant, piecewise-constant and time-varying vital signs could be separated and tracked successfully with the proposed VMD based algorithm for two targets, even up to three targets. For multiple targets' vital signs tracking issues like urban search and rescue missions, our algorithm has superior capability in most detection applications.

  14. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.

  15. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of the events and have better accuracy of localization compared with other algorithms. PMID:22163972

  16. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    Jian Wan

    2011-06-01

    Full Text Available This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events could continually emit signals whose strength is attenuated inversely proportional to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on the event localization by decreasing their trust index, to improve the accuracy of event localization and performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and performance of fault tolerance in multiple event source localization. The experiment results show that when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of the events and have better accuracy of localization compared with other algorithms.

  17. Research on Multiple Particle Swarm Algorithm Based on Analysis of Scientific Materials

    Zhao Hongwei

    2017-01-01

    Full Text Available This paper proposed an improved particle swarm optimization algorithm based on analysis of scientific materials. The core idea of MPSO (Multiple Particle Swarm Algorithm) is to improve the single-population PSO to interactive multi-swarms, in order to settle the problem of being trapped into local minima during later iterations caused by the lack of diversity. The simulation results show that the convergence rate is fast, the search performance is good, and very good results have been achieved.

  18. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction of arrival (DOA) parameters assuming multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and noncentral moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to the Kalman filter that model the dynamic property of directional changes for the moving sources. Then, the covariance-fitting-based algorithm and the Kalman filtering theory are combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least square-estimation of signal parameters via rotational invariance technique (FAPI-TLS-ESPRIT) algorithm using the TLS-ESPRIT method and the subspace updating via FAPI-algorithm. It will be shown that the proposed algorithm offers an excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method especially at low signal-to-noise ratio (SNR) values. Moreover, the performances of the two methods increase as the SNR values increase. This increase is more prominent with the FAPI-TLS-ESPRIT method. However, their performances degrade when the number of sources increases. It will be also proved that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it will be shown that the more the sources are spaced, the more the proposed method can exactly track the DOAs.
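
    The second stage, feeding the covariance-fitting DOA estimates to a Kalman filter that models the directional dynamics, can be illustrated with a scalar constant-velocity tracker. The state model and noise levels below are assumptions for the sketch; the paper's measurement stage (the covariance-fitting estimator itself) is represented here only by an input array of angle estimates.

```python
import numpy as np

def track_central_doa(doa_estimates, dt=1.0, q=1e-4, r=0.5):
    """Constant-velocity Kalman filter over per-snapshot DOA estimates (deg)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [angle, angular rate]
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.array([doa_estimates[0], 0.0]), np.eye(2)
    track = []
    for z in doa_estimates:
        x, P = F @ x, F @ P @ F.T + Q           # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([z]) - H @ x)     # update with the new estimate
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

noisy = 10.0 + 0.2 * np.arange(50) + np.random.randn(50)   # slowly moving source
smoothed = track_central_doa(noisy)
```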

  19. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

    Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than an interpolated value from a pre-calculated look-up-table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer accounting for solid angle and elevation, and it then measures the contribution of diffused energy from previous layers based on the transmission of the current level to produce a cumulative radiance that is reflected from a surface and measured at the aperture at the observer. Then a unique set of asymmetry and backscattering phase function parameter calculations are made which account for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiple scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  20. A quality and efficiency analysis of the IMFAST™ segmentation algorithm in head and neck 'step and shoot' IMRT treatments

    Potter, Larry D.; Chang, Sha X.; Cullip, Timothy J.; Siochi, Alfredo C.

    2002-01-01

    The performance of segmentation algorithms used in IMFAST for 'step and shoot' IMRT treatment delivery is evaluated for three head and neck clinical treatments with different optimization objectives. The segmentation uses the intensity maps generated by the in-house TPS PLANUNC using the index-dose minimization algorithm. The dose optimization objectives include PTV dose uniformity and dose-volume-histogram-specified critical structure sparing. The optimized continuous intensity maps were truncated into five and ten intensity levels and exported to IMFAST for MLC segment optimization. The MLC segments were imported back to PLANUNC for dose optimization quality calculation. The five basic segmentation algorithms included in IMFAST were evaluated alone and in combination with either tongue-and-groove/match line correction, fluence correction, or both. Two criteria were used in the evaluation: treatment efficiency, represented by the total number of MLC segments, and optimization quality, represented by a clinically relevant optimization quality factor. We found that the treatment efficiency depends first on the number of intensity levels used in the intensity map and second on the segmentation technique used. The standard optimal segmentation with fluence correction is a consistently good performer for all treatment plans studied. All segmentation techniques evaluated produced treatments with similar dose optimization quality values, especially when ten-level intensity maps are used.

  1. Mediator independently orchestrates multiple steps of preinitiation complex assembly in vivo.

    Eyboulet, Fanny; Wydau-Dematteis, Sandra; Eychenne, Thomas; Alibert, Olivier; Neil, Helen; Boschiero, Claire; Nevers, Marie-Claire; Volland, Hervé; Cornu, David; Redeker, Virginie; Werner, Michel; Soutourina, Julie

    2015-10-30

    Mediator is a large multiprotein complex conserved in all eukaryotes, which has a crucial coregulator function in transcription by RNA polymerase II (Pol II). However, the molecular mechanisms of its action in vivo remain to be understood. Med17 is an essential and central component of the Mediator head module. In this work, we utilised our large collection of conditional temperature-sensitive med17 mutants to investigate Mediator's role in coordinating preinitiation complex (PIC) formation in vivo at the genome level after a transfer to a non-permissive temperature for 45 minutes. The effect of a yeast mutation proposed to be equivalent to the human Med17-L371P responsible for infantile cerebral atrophy was also analyzed. The ChIP-seq results demonstrate that med17 mutations differentially affected the global presence of several PIC components including Mediator, TBP, TFIIH modules and Pol II. Our data show that Mediator stabilizes TFIIK kinase and TFIIH core modules independently, suggesting that the recruitment or the stability of TFIIH modules is regulated independently on yeast genome. We demonstrate that Mediator selectively contributes to TBP recruitment or stabilization to chromatin. This study provides an extensive genome-wide view of Mediator's role in PIC formation, suggesting that Mediator coordinates multiple steps of a PIC assembly pathway. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Assembly and benign step-by-step post-treatment of oppositely charged reduced graphene oxides for transparent conductive thin films with multiple applications

    Zhu, Jiayi; He, Junhui

    2012-05-01

    We report a new approach for the fabrication of flexible and transparent conducting thin films via the layer-by-layer (LbL) assembly of oppositely charged reduced graphene oxide (RGO) and the benign step-by-step post-treatment on substrates with a low glass-transition temperature, such as glass and poly(ethylene terephthalate) (PET). The RGO dispersions and films were characterized by means of atomic force microscopy, UV-visible absorption spectrophotometry, Raman spectroscopy, transmission electron microscopy, contact angle/interface systems and a four-point probe. It was found that the graphene thin films exhibited a significant increase in electrical conductivity after the step-by-step post-treatments. The graphene thin film on the PET substrate had a good conductivity retainability after multiple cycles (30 cycles) of excessive bending (bending angle: 180°), while tin-doped indium oxide (ITO) thin films on PET showed a significant decrease in electrical conductivity. In addition, the graphene thin film had a smooth surface with tunable wettability.

  3. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.

  4. Three-dimensional weight-accumulation algorithm for generating multiple excitation spots in fast optical stimulation

    Takiguchi, Yu; Toyoda, Haruyoshi

    2017-11-01

    We report here an algorithm for calculating a hologram to be employed in a high-access-speed microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in an intact cerebral cortex. The system is based on holographic multi-beam generation using a two-dimensional phase-only spatial light modulator to excite multiple locations in three dimensions with a single hologram. The hologram was calculated with a three-dimensional weighted iterative Fourier transform method using the Ewald sphere restriction to increase the calculation speed. Our algorithm achieved good uniformity of the three-dimensionally generated excitation spots; the standard deviation of the spot intensities was reduced by a factor of two compared with a conventional algorithm.
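
    A 2-D weighted Gerchberg-Saxton-style loop conveys the weighted-iterative-Fourier-transform idea: after each Fourier-domain projection, per-spot weights are raised for spots that came out too dim, which drives the uniformity the abstract reports. The 3-D treatment and the Ewald-sphere restriction of the actual method are omitted; array sizes and iteration counts are illustrative.

```python
import numpy as np

def weighted_gs_hologram(spots, shape=(256, 256), iters=30, seed=0):
    """Phase-only hologram producing multiple spots of near-equal intensity."""
    rng = np.random.default_rng(seed)
    field = np.exp(2j * np.pi * rng.random(shape))        # random initial phase
    w = np.ones(len(spots))                               # per-spot weights
    for _ in range(iters):
        far = np.fft.fft2(field)                          # SLM plane -> focal plane
        amps = np.array([abs(far[r, c]) for r, c in spots])
        w *= amps.mean() / np.maximum(amps, 1e-12)        # boost dim spots
        constrained = np.zeros(shape, complex)
        for (r, c), wi in zip(spots, w):                  # impose weighted targets
            constrained[r, c] = wi * far[r, c] / max(abs(far[r, c]), 1e-12)
        field = np.exp(1j * np.angle(np.fft.ifft2(constrained)))  # phase-only constraint
    return np.angle(field)

phase_mask = weighted_gs_hologram([(40, 60), (128, 128), (200, 90)])
```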

  5. NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms.

    Joeri Ruyssinck

    Full Text Available One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene and a high feature importance is considered as putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made available.
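
    The GENIE3-style regression decomposition that NIMEFI generalises is easy to sketch with one ensemble method: each gene in turn becomes the regression target and tree-ensemble feature importances are read as link scores. NIMEFI would repeat this with several (sub-sampled) feature-importance methods and rankwise-average the results; only the single-method core is shown here, on toy data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def importance_network(expr, n_trees=100, seed=0):
    """expr: samples x genes matrix; returns scores[i, j] = evidence for i -> j."""
    n_genes = expr.shape[1]
    scores = np.zeros((n_genes, n_genes))
    for target in range(n_genes):
        predictors = [g for g in range(n_genes) if g != target]
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        rf.fit(expr[:, predictors], expr[:, target])      # regress target on the rest
        scores[predictors, target] = rf.feature_importances_
    return scores

expr = np.random.rand(50, 10)            # toy data: 50 samples, 10 genes
link_scores = importance_network(expr)   # rank candidate edges by score
```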

  6. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  7. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the scheduling model is a complex, nonlinear, multiobjective one; achieving an accurate solution to such a problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search the feasible and preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

  8. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the scheduling model is a complex, nonlinear, multiobjective one; achieving an accurate solution to such a problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search the feasible and preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.

  9. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization.

    Xiangrong Li

    Full Text Available It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence--with at most a linear convergence rate--because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.

  10. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization.

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence--with at most a linear convergence rate--because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.
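
    For orientation, the classical PRP scheme with a restart safeguard (PRP+, with beta clipped at zero) that such methods build on looks as follows. The paper's contribution, folding function-value information into the beta formula, is not reproduced here, and the Armijo line search and toy problem are assumptions of the sketch.

```python
import numpy as np

def prp_plus_cg(f, grad, x, iters=200, tol=1e-8):
    """Classical PRP conjugate gradient with Armijo backtracking and the PRP+
    restart safeguard; restarting amounts to falling back to steepest descent."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, c = 1.0, 1e-4
        while t > 1e-12 and f(x + t * d) > f(x) + c * t * (g @ d):
            t *= 0.5                                    # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+: clip beta at 0
        d = -g_new + beta * d
        if g_new @ d >= 0:                              # keep a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# toy quadratic sanity check: the minimiser is the origin
Q = np.diag([1.0, 10.0])
x_min = prp_plus_cg(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x, np.array([3.0, -2.0]))
```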

  11. Algorithms

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  12. A new multiple robot path planning algorithm: dynamic distributed particle swarm optimization.

    Ayari, Asma; Bouamama, Sadok

    2017-01-01

    Multiple robot systems have become a major study concern in the field of robotic research. Their control becomes unreliable and even infeasible if the number of robots increases. In this paper, a new dynamic distributed particle swarm optimization (D2PSO) algorithm is proposed for trajectory path planning of multiple robots in order to find a collision-free optimal path for each robot in the environment. The proposed approach consists in calculating two local optima detectors, LODpBest and LODgBest. Particles which are unable to improve their personal best and global best for a predefined number of successive iterations are replaced with restructured ones. Stagnation and local optima problems are avoided by adding diversity to the population, without losing the fast convergence characteristic of PSO. Experiments with multiple robots are provided and prove the effectiveness of such an approach compared with the distributed PSO.

  13. One-step isothermal detection of multiple KRAS mutations by forming SNP specific hairpins on a gold nanoshell.

    Chung, Chan Ho; Kim, Joong Hyun

    2018-04-24

    We developed a one-step isothermal method for typing multiple KRAS mutations using a designed set of primers to form a hairpin on a gold nanoshell upon being ligated by a SNP specific DNA ligase after binding of targets. As a result, we could detect as low as 20 attomoles of KRAS mutations within 1 h.

  14. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to extend limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in AMBER and GROMACS packages now become available in addition to CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.

  15. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

    The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide a much higher accuracy. The line of sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear system can be solved with iterative solvers or direct solvers. However, the runtime of direct methods, or that of iterative solvers without a suitable preconditioner, increases tremendously. This is the reason why we need a more sophisticated method to solve a linear system with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method which allows the normal matrix of the system to be split into several smaller overlapping submatrices. In each iteration step, the multiplicative variant of the Schwarz alternating algorithm solves the linear systems with the matrices obtained from the splitting successively. It reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations from both accuracy and runtime points of view. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low
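
    The multiplicative Schwarz idea itself fits in a few lines: split the unknowns into overlapping index blocks and sweep them in turn, each time solving the local subsystem against the freshest global residual. The sketch below uses a tiny SPD test matrix and two blocks; the gravity-recovery application would apply the same loop to the 14641-unknown normal equations.

```python
import numpy as np

def multiplicative_schwarz(A, b, subdomains, sweeps=50):
    """Multiplicative Schwarz alternating iteration over overlapping blocks."""
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for idx in subdomains:
            r = b - A @ x                     # residual with the freshest iterate
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        if np.linalg.norm(b - A @ x) < 1e-10 * np.linalg.norm(b):
            break
    return x

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # SPD test matrix
b = np.ones(n)
blocks = [np.arange(0, 25), np.arange(15, 40)]           # two overlapping subdomains
x = multiplicative_schwarz(A, b, blocks)
```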

  16. The Telomerase Inhibitor MST-312 Interferes with Multiple Steps in the Herpes Simplex Virus Life Cycle.

    Haberichter, Jarod; Roberts, Scott; Abbasi, Imran; Dedthanou, Phonphanh; Pradhan, Prajakta; Nguyen, Marie L

    2015-10-01

    Telomerase is the cellular enzyme that synthesizes DNA repeats at the ends of chromosomes during replication to prevent DNA shortening. In this study, we investigate the role of telomerase in HSV infection. The data demonstrate that the telomerase inhibitor MST-312 suppressed HSV replication at multiple steps of viral infection. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  17. A curvilinear, fully implicit, conservative electromagnetic PIC algorithm in multiple dimensions

    Chacón, L.; Chen, G.

    2016-07-01

    We extend a recently proposed fully implicit PIC algorithm for the Vlasov-Darwin model in multiple dimensions (Chen and Chacón (2015) [1]) to curvilinear geometry. As in the Cartesian case, the approach is based on a potential formulation (ϕ, A), and overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. Conservation theorems for local charge and global energy are derived in curvilinear representation, and then enforced discretely by a careful choice of the discretization of field and particle equations. Additionally, the algorithm conserves canonical momentum in any ignorable direction, and preserves the Coulomb gauge ∇ · A = 0 exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. We demonstrate the accuracy and efficiency properties of the algorithm with numerical experiments in mapped meshes in 1D-3V and 2D-3V.

  18. Application of Multiple-Population Genetic Algorithm in Optimizing the Train-Set Circulation Plan Problem

    Yu Zhou

    2017-01-01

    Full Text Available The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, TCPP involves additional complexity due to the maintenance constraint of train-sets: train-sets must conduct maintenance tasks after running for a certain time and distance. The TCPP is nondeterministic polynomial hard (NP-hard). There is no available algorithm that can obtain the optimal global solution, and many factors such as the utilization mode and the maintenance mode impact the solution of the TCPP. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and then a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.

  19. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.

  20. Improved Harmony Search Algorithm for Truck Scheduling Problem in Multiple-Door Cross-Docking Systems

    Zhanzhong Wang

    2018-01-01

    Full Text Available The key to realizing cross docking is to coordinate inbound and outbound trucks, so a proper sequence of trucks will make the cross-docking system much more efficient and require less makespan. A cross-docking system is proposed with multiple receiving and shipping dock doors. The objective is to find the best door assignments and sequences of trucks, following the principle of product distribution, that minimize the total makespan of cross docking. To solve the problem, which is modeled as a mixed integer linear programming (MILP) model, three metaheuristics, namely, harmony search (HS), improved harmony search (IHS), and genetic algorithm (GA), are proposed. Furthermore, the fixed parameters are optimized by Taguchi experiments to further improve the accuracy of the solutions. Finally, several numerical examples are presented to evaluate the performance of the proposed algorithms.
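
    The harmony search engine underlying HS/IHS is compact: each new solution component is drawn from harmony memory with probability HMCR, pitch-adjusted with probability PAR, or randomized otherwise, and the new harmony replaces the worst memory member if it improves on it. The continuous toy objective below stands in for the discrete truck-sequencing encoding, and the parameter values are illustrative rather than the Taguchi-tuned ones.

```python
import numpy as np

def harmony_search(f, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    """Minimal continuous harmony search minimising f over the box [lb, ub]."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    memory = rng.uniform(lb, ub, size=(hms, dim))
    costs = np.array([f(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                      # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                   # pitch adjustment
                    new[j] += bw * (ub[j] - lb[j]) * rng.uniform(-1, 1)
            else:                                        # random consideration
                new[j] = rng.uniform(lb[j], ub[j])
        new = np.clip(new, lb, ub)
        c = f(new)
        worst = np.argmax(costs)
        if c < costs[worst]:                             # replace the worst harmony
            memory[worst], costs[worst] = new, c
    return memory[np.argmin(costs)]

best = harmony_search(lambda x: np.sum(x**2), np.full(5, -10.0), np.full(5, 10.0))
```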

  1. Application of multiple signal classification algorithm to frequency estimation in coherent dual-frequency lidar

    Li, Ruixiao; Li, Kun; Zhao, Changming

    2018-01-01

    Coherent dual-frequency Lidar (CDFL) is a new development of Lidar which dramatically reduces the influence of atmospheric interference by using a dual-frequency laser to measure range and velocity with high precision. Based on the nature of CDFL signals, we propose to apply the multiple signal classification (MUSIC) algorithm in place of the fast Fourier transform (FFT) to estimate the phase differences in dual-frequency Lidar. In the presence of Gaussian white noise, the simulation results show that the signal peaks are more evident when using the MUSIC algorithm instead of the FFT under conditions of low signal-to-noise ratio (SNR), which helps to improve the precision of range and velocity detection, especially for long-distance measurement systems.
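
    A textbook MUSIC frequency estimator shows the replacement the paper proposes for the FFT: eigendecompose a sample covariance of lagged snapshots and scan a steering vector against the noise subspace, whose near-orthogonality to the signal directions produces sharp pseudospectrum peaks even at low SNR. The window length and grid density here are assumptions of the sketch.

```python
import numpy as np

def music_spectrum(x, n_sources, m=20, grid=None):
    """MUSIC pseudospectrum of a 1-D time series over normalised frequency."""
    N = len(x) - m + 1
    X = np.array([x[i:i + m] for i in range(N)]).T       # m x N lagged snapshots
    R = (X @ X.conj().T) / N                             # sample covariance
    w, V = np.linalg.eigh(R)                             # eigenvalues ascending
    En = V[:, : m - n_sources]                           # noise subspace
    if grid is None:
        grid = np.linspace(0.0, 0.5, 2048)
    p = []
    for f in grid:
        a = np.exp(2j * np.pi * f * np.arange(m))        # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return grid, np.array(p)

t = np.arange(512)
sig = np.cos(2 * np.pi * 0.12 * t) + 0.5 * np.cos(2 * np.pi * 0.13 * t) \
      + 0.1 * np.random.randn(512)
freqs, pseudo = music_spectrum(sig, n_sources=4)   # 2 real tones = 4 complex exponentials
f_peak = freqs[np.argmax(pseudo)]                  # close to 0.12 or 0.13
```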

  2. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.

  3. Investigating the Variation of Volatile Compound Composition in Maotai-Flavoured Liquor During Its Multiple Fermentation Steps Using Statistical Methods

    Zheng-Yun Wu

    2016-01-01

    Full Text Available The use of multiple fermentations is one of the most specific characteristics of Maotai-flavoured liquor production. In this research, the variation of the volatile composition of Maotai-flavoured liquor during its multiple fermentations is investigated using statistical approaches. Cluster analysis shows that the obtained samples are grouped mainly according to the fermentation steps rather than the distillery they originate from, and the samples from the first two fermentation steps show the greatest difference, suggesting that multiple fermentation and distillation steps result in the end in a similar volatile composition of the liquor. Back-propagation neural network (BNN) models were developed that satisfactorily predict the number of fermentation steps and the organoleptic evaluation scores of liquor samples from their volatile compositions. Mean impact value (MIV) analysis shows that ethyl lactate, furfural and some high-boiling-point acids play important roles, while pyrazine contributes much less to the improvement of the flavour and taste of Maotai-flavoured liquor during its production. This study contributes to further understanding of the mechanisms of Maotai-flavoured liquor production.

  4. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian motion Hölder regularity functions (polynomial, periodic (sine) and exponential) for 2D images, as multifractal methods, were applied to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we have proposed an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  5. Algorithmic Foundation of Spectral Rarefaction for Measuring Satellite Imagery Heterogeneity at Multiple Spatial Scales

    Rocchini, Duccio

    2009-01-01

    Measuring heterogeneity in satellite imagery is an important task to deal with. Most measures of spectral diversity have been based on Shannon Information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
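
    The rarefaction computation itself is the classical expected-richness formula applied to pixel values: treating each distinct Digital Number as a species, E[S_n] = S - sum_i C(N - N_i, n)/C(N, n) traces diversity from local (small n, alpha-like) to whole-window (n = N, gamma-like) scales. The sketch below uses a random toy window and log-space binomials for numerical safety.

```python
import numpy as np
from scipy.special import gammaln

def rarefaction_curve(dns, subsample_sizes):
    """Expected number of distinct DNs in random subsamples of n pixels."""
    _, counts = np.unique(np.asarray(dns).ravel(), return_counts=True)
    N, S = counts.sum(), len(counts)
    log_comb = lambda a, b: gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)
    curve = []
    for n in subsample_sizes:
        term = np.zeros(S)
        valid = (N - counts) >= n          # otherwise C(N - N_i, n) = 0
        term[valid] = np.exp(log_comb(N - counts[valid], n) - log_comb(N, n))
        curve.append(S - term.sum())
    return np.array(curve)

pixels = np.random.randint(0, 64, size=(100, 100))   # toy image window
print(rarefaction_curve(pixels, [10, 100, 1000, 10000]))
```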

  6. Analysis of multiplicities in e+e- interactions using 2-jet rates from different jet algorithms

    Dahiya, S.; Kaur, M.; Dhamija, S.

    2002-01-01

    The shoulder structure of the charged particle multiplicity distribution measured in full phase space in e+e- interactions at various c.m. energies from 91 to 189 GeV has been analysed in terms of a weighted superposition of two negative binomial distributions associated with 2-jet and multi-jet production. The 2-jet rates have been obtained from various jet algorithms. This phenomenological parametrization reproduces the shoulder structure quantitatively and improves the agreement with the experimental distributions compared with the conventional negative binomial distribution. The analysis at the higher energies, where the shoulder structure appears more prominently, is important for the understanding of the underlying structure. (author)
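
    The parametrization amounts to a two-component mixture of negative binomials in which the weight is the 2-jet fraction supplied by a jet algorithm. A sketch with purely illustrative parameter values:

```python
import numpy as np
from scipy.special import gammaln

def nbd(n, nbar, k):
    """Negative binomial multiplicity distribution P(n) with mean nbar and shape k."""
    n = np.asarray(n, dtype=float)
    return np.exp(gammaln(n + k) - gammaln(k) - gammaln(n + 1)
                  + n * np.log(nbar / (nbar + k)) + k * np.log(k / (nbar + k)))

def shoulder_model(n, alpha, nbar2, k2, nbarm, km):
    """Weighted superposition: alpha is the 2-jet fraction (to be taken from a
    jet algorithm's 2-jet rate), the remainder is the multi-jet component."""
    return alpha * nbd(n, nbar2, k2) + (1.0 - alpha) * nbd(n, nbarm, km)

n = np.arange(200)
p = shoulder_model(n, alpha=0.7, nbar2=18.0, k2=25.0, nbarm=30.0, km=50.0)
assert abs(p.sum() - 1.0) < 1e-9        # normalised over a wide enough range
```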

  7. A two-step ionospheric modeling algorithm considering the impact of GLONASS pseudo-range inter-channel biases

    Zhang, Rui; Yao, Yi-bin; Hu, Yue-ming; Song, Wei-wei

    2017-12-01

    The Global Navigation Satellite System presents a plausible and cost-effective way of computing the total electron content (TEC). But the estimated TEC value can be seriously affected by the differential code biases (DCB) of frequency-dependent satellites and receivers. Unlike GPS and other satellite systems, GLONASS adopts a frequency-division multiple access mode to distinguish different satellites. This strategy leads to different wavelengths and inter-frequency biases (IFBs) for both pseudo-range and carrier phase observations, whose impacts are rarely considered in ionospheric modeling. We obtained observations from four groups of co-stations to analyze the characteristics of the GLONASS receiver P1P2 pseudo-range IFB with a double-difference method. The results showed that the GLONASS P1P2 pseudo-range IFB remained stable for a period of time and could reach several meters, which cannot be absorbed by the receiver DCB during ionospheric modeling. Given the characteristics of the GLONASS P1P2 pseudo-range IFB, we proposed a two-step ionosphere modeling method with a priori IFB information. The experimental analysis showed that the new algorithm can effectively eliminate the adverse effects on ionospheric model and hardware delay parameter estimation in different space environments. During a high solar activity period, compared to the traditional GPS + GLONASS modeling algorithm, the absolute average deviation of TEC decreased from 2.17 to 2.07 TECu (TEC unit); simultaneously, the average RMS of GPS satellite DCB decreased from 0.225 to 0.219 ns, and the average deviation of GLONASS satellite DCB decreased from 0.253 to 0.113 ns, an improvement of over 55%.

  8. Development of the Multiple Gene Knockout System with One-Step PCR in Thermoacidophilic Crenarchaeon Sulfolobus acidocaldarius

    Shoji Suzuki

    2017-01-01

    Multiple gene knockout systems developed in the thermoacidophilic crenarchaeon Sulfolobus acidocaldarius are powerful genetic tools. However, plasmid construction typically requires several steps. Alternatively, PCR tailing for high-throughput gene disruption was also developed in S. acidocaldarius, but repeated gene knockout based on PCR tailing has been limited due to the lack of a genetic marker system. In this study, we demonstrated an efficient homologous recombination frequency (2.8 × 10^4 ± 6.9 × 10^3 colonies/μg DNA) by optimizing the transformation conditions. This optimized protocol allowed us to develop reliable gene knockout via double crossover using short homologous arms and to establish the multiple gene knockout system with one-step PCR (MONSTER). In the MONSTER, a multiple gene knockout cassette is simply and rapidly constructed by one-step PCR without plasmid construction, and the PCR product can be immediately used for target gene deletion. As an example of the applications of this strategy, we successfully made a DNA photolyase (phr)- and arginine decarboxylase (argD)-deficient strain of S. acidocaldarius. In addition, an agmatine selection system consisting of an agmatine-auxotrophic strain and an argD marker was also established. The MONSTER provides an alternative strategy that enables the very simple construction of multiple gene knockout cassettes for genetic studies in S. acidocaldarius.

  9. On the construction of elliptic Chudnovsky-type algorithms for multiplication in large extensions of finite fields

    Ballet, Stéphane; Bonnecaze, Alexis; Tukumuli, Mila

    2013-01-01

    We indicate a strategy in order to construct bilinear multiplication algorithms of type Chudnovsky in large extensions of any finite field. In particular, using the symmetric version of the generalization of Randriambololona specialized on the elliptic curves, we show that it is possible to construct such algorithms with low bilinear complexity. More precisely, if we only consider the Chudnovsky-type algorithms of type symmetric elliptic, we show that the symmetric bil...

  10. Fast algorithms for coordinate processors in Galois field for multiplicity t = 4, 5 and t > 5

    Nikityuk, N.M.

    1989-01-01

    Fast algorithms for solving the coordinate equations for special-purpose processors at multiplicity t = 4, 5 and t > 5 are described. Block diagrams of a coordinate processor for t ≤ 4 in the Galois field GF(2^m) are presented, in which the equations are solved by a table method. Economical algorithms for solving the coordinate equations by serial methods at t > 5 are described. The algorithms and devices proposed could be applied when creating fast processors for high energy physics spectrometers. 9 refs.; 3 figs
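
    The record's processors are not reproduced here, but the underlying field arithmetic is easy to illustrate. A hedged sketch of GF(2^m) multiplication (shown for m = 8 with the AES polynomial), from which a full multiplication table (the "table method") can be built:

    ```python
    def gf_mul(a, b, poly=0x11B, m=8):
        """Carry-less multiply in GF(2^m), reduced by the field polynomial."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & (1 << m):      # reduce as soon as the degree reaches m
                a ^= poly
        return r

    # The "table method" then amounts to precomputing gf_mul(a, b) for all
    # a, b in GF(2^8) and answering queries by lookup.
    print(hex(gf_mul(0x57, 0x83)))  # 0xc1, the standard AES test vector
    ```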

  11. FELIX: an algorithm for indexing multiple crystallites in X-ray free-electron laser snapshot diffraction images

    Beyerlein, Kenneth R.; White, Thomas A.; Yefanov, Oleksandr

    2017-01-01

    A novel algorithm for indexing multiple crystals in snapshot X-ray diffraction images, especially suited for serial crystallography data, is presented. The algorithm, FELIX, utilizes a generalized parametrization of the Rodrigues-Frank space, in which all crystal systems can be represented without...

  12. Design and selection of load control strategies using a multiple objective model and evolutionary algorithms

    Gomes, Alvaro; Antunes, Carlos Henggeler; Martins, Antonio Gomes

    2005-01-01

    This paper presents a multiple objective model to evaluate the attractiveness of the use of demand resources (through load management control actions) by different stakeholders and in diverse structure scenarios in electricity systems. For the sake of model flexibility, the multiple (and conflicting) objective functions of a technical, economic, and quality-of-service nature are able to capture distinct market scenarios and operating entities that may be interested in promoting load management activities. The computation of compromise solutions is made by resorting to evolutionary algorithms, which are well suited to tackle multiobjective problems of a combinatorial nature, here involving the identification and selection of control actions to be applied to groups of loads. (Author)

  13. Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

    Salazar, Daniel; Rocco, Claudio M.; Galvan, Blas J.

    2006-01-01

    This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature
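
    For readers unfamiliar with MOEA mechanics, the Pareto-dominance test at the core of NSGA-II-style ranking is sketched below; the two-objective (reliability to maximize, cost to minimize) encoding is invented for illustration.

    ```python
    def dominates(a, b):
        """a dominates b: no worse in both objectives, better in at least one.
        Candidates are (reliability, cost) pairs."""
        no_worse = a[0] >= b[0] and a[1] <= b[1]
        better = a[0] > b[0] or a[1] < b[1]
        return no_worse and better

    def pareto_front(population):
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    designs = [(0.95, 12.0), (0.97, 20.0), (0.90, 8.0), (0.93, 15.0)]
    print(pareto_front(designs))  # (0.93, 15.0) is dominated and drops out
    ```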

  14. A genetic algorithm for multiple relay selection in two-way relaying cognitive radio networks

    Alsharoa, Ahmad M.

    2013-09-01

    In this paper, we investigate a multiple relay selection scheme for two-way relaying cognitive radio networks where primary users and secondary users operate on the same frequency band. More specifically, cooperative relays using the Amplify-and-Forward (AF) protocol are optimally selected to maximize the sum rate of the secondary users without degrading the Quality of Service (QoS) of the primary users, by respecting a tolerated interference threshold. A strong optimization tool based on a genetic algorithm is employed to solve our formulated optimization problem, where discrete relay power levels are considered. Our simulation results show that the practical heuristic approach achieves almost the same performance as the optimal multiple relay selection scheme with either discrete or continuous power distributions. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc.
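
    The paper's exact formulation is not given in the record; the toy genetic algorithm below shows the general shape of such a search, with invented rates, interference values, and a penalty for violating the tolerated threshold.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    R = rng.uniform(0.5, 2.0, size=10)   # per-relay rate contribution (invented)
    I = rng.uniform(0.1, 0.5, size=10)   # interference caused at the primary user
    I_MAX = 1.5                          # tolerated interference threshold

    def fitness(mask):
        rate, interf = R @ mask, I @ mask
        return rate - 10.0 * max(0.0, interf - I_MAX)   # penalized sum rate

    pop = rng.integers(0, 2, size=(40, 10))
    for _ in range(100):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-20:]]          # keep the fittest half
        cut = int(rng.integers(1, 10))                   # one-point crossover
        kids_a = np.concatenate([parents[:10, :cut], parents[10:, cut:]], axis=1)
        kids_b = np.concatenate([parents[10:, :cut], parents[:10, cut:]], axis=1)
        kids = np.vstack([kids_a, kids_b])
        flip = rng.random(kids.shape) < 0.05             # bit-flip mutation
        pop = np.vstack([parents, np.where(flip, 1 - kids, kids)])
    best = max(pop, key=fitness)
    print(best, fitness(best))
    ```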

  15. A low complexity algorithm for multiple relay selection in two-way relaying Cognitive Radio networks

    Alsharoa, Ahmad M.

    2013-06-01

    In this paper, a multiple relay selection scheme for two-way relaying cognitive radio networks is investigated. We consider a cooperative Cognitive Radio (CR) system with a spectrum sharing scenario using the Amplify-and-Forward (AF) protocol, where licensed users and unlicensed users operate on the same frequency band. The main objective is to maximize the sum rate of the unlicensed users allowed to share the spectrum with the licensed users, by respecting a tolerated interference threshold. A practical low-complexity heuristic approach is proposed to solve our formulated optimization problem. Selected numerical results show that the proposed algorithm reaches performance close to that of the optimal multiple relay selection scheme with either discrete or continuous power distributions, while providing considerable savings in terms of computational complexity. In addition, these results show that our proposed scheme significantly outperforms the single relay selection scheme. © 2013 IEEE.
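
    One plausible low-complexity heuristic of this flavor (not necessarily the paper's) greedily adds relays by rate-per-interference ratio while the interference budget holds:

    ```python
    import numpy as np

    def greedy_select(R, I, I_MAX):
        chosen, interf = [], 0.0
        order = np.argsort(-R / I)            # best ratio first
        for k in order:
            if interf + I[k] <= I_MAX:
                chosen.append(int(k))
                interf += I[k]
        return chosen

    rng = np.random.default_rng(0)
    R = rng.uniform(0.5, 2.0, size=10)        # same synthetic setup as above
    I = rng.uniform(0.1, 0.5, size=10)
    print(greedy_select(R, I, I_MAX=1.5))
    ```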

  16. Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

    Salazar, Daniel [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain) and Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: danielsalazaraponte@gmail.com; Rocco, Claudio M. [Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve; Galvan, Blas J. [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain)]. E-mail: bgalvan@step.es

    2006-09-15

    This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.

  17. Grade Distribution Modeling within the Bauxite Seams of the Wachangping Mine, China, Using a Multi-Step Interpolation Algorithm

    Shaofeng Wang

    2017-05-01

    Mineral reserve estimation and mining design depend on precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including a 1D biharmonic spline estimator for interpolating floor altitudes; 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for grade distribution on the two vertical sections at roadways; and 3D linear interpolation for grade distribution between sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, this multi-step interpolation using the natural neighbor method shows optimal stability and a minimal difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, root mean squared errors, and relative standard deviations of errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% for Wa, and 1.761, 1.974, and 67.37% for Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
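
    As a hedged sketch of the section-wise interpolation comparison, scipy's griddata covers the nearest, linear, and cubic members of the family on synthetic (x, z) grade samples; natural neighbor, the method the paper found best, needs other tooling.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 100, size=(80, 2))          # sampled (x, z) positions
    wa = 55 + 10 * np.sin(pts[:, 0] / 15) + rng.normal(0, 1, 80)  # Al2O3 %

    xi = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                              np.linspace(0, 100, 50)), axis=-1).reshape(-1, 2)
    for method in ("nearest", "linear", "cubic"):
        grid = griddata(pts, wa, xi, method=method)
        print(method, np.nanmean(grid))              # NaN outside convex hull
    ```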

  18. Optimal planning approaches with multiple impulses for rendezvous based on hybrid genetic algorithm and control method

    JingRui Zhang

    2015-03-01

    In this article, we focus on the safe and effective completion of a rendezvous and docking task, looking at planning approaches and control for fuel-optimal rendezvous with a target spacecraft running on a near-circular reference orbit. A variety of practical path constraints are considered, including constraints on field of view, impulses, and passive safety. A rendezvous approach is calculated by using a hybrid genetic algorithm with those constraints. Furthermore, a trajectory tracking control method is adopted to overcome external disturbances. Based on the Clohessy-Wiltshire equations, we first construct the mathematical model of optimal planning approaches of multiple impulses with path constraints. Second, we introduce the principle of the hybrid genetic algorithm, which has both strong global searching ability and local searching ability, and explain its application to the trajectory planning problem. Then, we give three-impulse simulation examples to acquire an optimal rendezvous trajectory with the path constraints presented in this article. The effectiveness and applicability of the tracking control method are verified through numerical simulation, with the optimal trajectory above as the control objective.
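
    The relative dynamics underlying the planning model are the Clohessy-Wiltshire equations, x'' = 3n^2 x + 2n y', y'' = -2n x', z'' = -n^2 z. A minimal propagation between impulses, with an illustrative mean motion and initial state:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    N = 0.0011  # mean motion of the reference orbit [rad/s], roughly LEO

    def cw(t, s):
        x, y, z, vx, vy, vz = s
        return [vx, vy, vz,
                3 * N**2 * x + 2 * N * vy,
                -2 * N * vx,
                -N**2 * z]

    s0 = [1000.0, 0.0, 0.0, 0.0, -2 * N * 1000.0, 0.0]  # closed relative orbit
    sol = solve_ivp(cw, (0.0, 5700.0), s0, rtol=1e-9)
    print(sol.y[:3, -1])  # relative position after roughly one orbit
    ```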

  19. Multiple Harmonics Fitting Algorithms Applied to Periodic Signals Based on Hilbert-Huang Transform

    Hui Wang

    2013-01-01

    A new generation of multipurpose measurement equipment is transforming the role of computers in instrumentation. The new features involve mixed devices, such as various sensors, analog-to-digital and digital-to-analog converters, and digital signal processing techniques, that are able to substitute for typical discrete instruments like multimeters and analyzers. Signal-processing applications frequently use least-squares (LS) sine-fitting algorithms. Periodic signals may be interpreted as a sum of sine waves with multiple frequencies: the Fourier series. This paper describes a new sine-fitting algorithm that is able to fit a multiharmonic acquired periodic signal. By means of a “sinusoidal wave” whose amplitude and phase are both transient, the “triangular wave” can be reconstructed on the basis of the Hilbert-Huang transform (HHT). This method can be used to test the effective number of bits (ENOB) of an analog-to-digital converter (ADC), avoiding the trouble of selecting initial parameter values and solving nonlinear equations. The simulation results show that the algorithm is precise and efficient. With enough sampling points, even for a low-resolution signal with harmonic distortion present, the root mean square (RMS) error between the sampling data of the original “triangular wave” and the corresponding points of the fitted “sinusoidal wave” is remarkably small. This suggests that, for any periodic signal, the ENOB of a high-resolution ADC can be tested accurately.
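
    The record's HHT-based reconstruction is not reproduced here, but the multiharmonic least-squares fit it builds on is linear once the fundamental frequency is known; a minimal sketch on a synthetic triangular wave:

    ```python
    # Model y(t) = a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)] and solve for
    # the coefficients with linear least squares (frequency w assumed known).
    import numpy as np

    def fit_harmonics(t, y, w, n_harm):
        cols = [np.ones_like(t)]
        for k in range(1, n_harm + 1):
            cols += [np.cos(k * w * t), np.sin(k * w * t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef, A @ coef          # coefficients and fitted samples

    t = np.linspace(0, 1, 2000)
    y = 2 * np.abs(2 * (t * 5 % 1) - 1) - 1          # 5 Hz triangular wave
    coef, y_fit = fit_harmonics(t, y, w=2 * np.pi * 5, n_harm=9)
    print(np.sqrt(np.mean((y - y_fit) ** 2)))        # small RMS residual
    ```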

  20. Multiple Sclerosis Identification Based on Fractional Fourier Entropy and a Modified Jaya Algorithm

    Shui-Hua Wang

    2018-04-01

    Aim: Currently, identifying multiple sclerosis (MS) by human experts may come across the problem of “normal-appearing white matter”, which causes low sensitivity. Methods: In this study, we presented a computer-vision-based approach to identify MS automatically. The proposed method first extracted the fractional Fourier entropy map from a specified brain image. Afterwards, it sent the features to a multilayer perceptron trained by a proposed improved parameter-free Jaya algorithm. We used cost-sensitive learning to handle the imbalanced data problem. Results: The 10 × 10-fold cross validation showed our method yielded a sensitivity of 97.40 ± 0.60%, a specificity of 97.39 ± 0.65%, and an accuracy of 97.39 ± 0.59%. Conclusions: We validated by experiments that the proposed improved Jaya performs better than the plain Jaya algorithm and other recent bioinspired algorithms in terms of classification performance and training speed. In addition, our method is superior to four state-of-the-art MS identification approaches.
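
    The plain Jaya update that the paper improves upon is parameter-free: each candidate moves toward the current best and away from the current worst. A minimal sketch on a stand-in objective (the sphere function):

    ```python
    import numpy as np

    def jaya(f, dim, n_pop=20, iters=200, lo=-5.0, hi=5.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (n_pop, dim))
        for _ in range(iters):
            fx = np.apply_along_axis(f, 1, X)
            best, worst = X[fx.argmin()], X[fx.argmax()]
            r1, r2 = rng.random((2, n_pop, dim))
            # Jaya move: toward the best, away from the worst
            X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
            X_new = np.clip(X_new, lo, hi)
            improved = np.apply_along_axis(f, 1, X_new) < fx
            X[improved] = X_new[improved]
        return X[np.apply_along_axis(f, 1, X).argmin()]

    print(jaya(lambda x: np.sum(x ** 2), dim=5))  # converges near the origin
    ```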

  1. An energy efficient distance-aware routing algorithm with multiple mobile sinks for wireless sensor networks.

    Wang, Jin; Li, Bin; Xia, Feng; Kim, Chang-Seob; Kim, Jeong-Uk

    2014-08-18

    Traffic patterns in wireless sensor networks (WSNs) usually follow a many-to-one model. Sensor nodes close to static sinks will deplete their limited energy more rapidly than other sensors, since they will have more data to forward during multihop transmission. This will cause network partition, isolated nodes and a much shortened network lifetime. Thus, how to balance energy consumption for sensor nodes is an important research issue. In recent years, exploiting sink mobility technology in WSNs has attracted much research attention because it can not only improve energy efficiency, but also prolong network lifetime. In this paper, we propose an energy efficient distance-aware routing algorithm with multiple mobile sinks for WSNs, where sink nodes move with a certain speed along the network boundary to collect monitored data. We study the influence of multiple mobile sink nodes on energy consumption and network lifetime, and we mainly focus on the selection of the number of mobile sink nodes and the selection of parking positions, as well as their impact on the performance metrics above. Both the number of mobile sink nodes and the selection of parking positions have an important influence on network performance. Simulation results show that our proposed routing algorithm performs better than traditional routing algorithms in terms of energy consumption.

  2. Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

    K. K. L. B. Adikaram

    2014-01-01

    With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process requires considerable processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that, if implemented, it may have a serious impact on the cache performance of the computer. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and four-vector versions of the BRA with the new index mapping showed 34% and 16% performance improvements relative to similar versions of Elster's linear BRA, which uses a single one-dimensional memory structure.
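
    For reference, the operation being optimized is the bit-reversal permutation; a straightforward (unoptimized) version makes the memory-access pattern the paper restructures easy to see:

    ```python
    import numpy as np

    def bit_reverse_indices(bits):
        """Index i maps to the integer whose low 'bits' bits are reversed."""
        n = 1 << bits
        rev = np.zeros(n, dtype=np.int64)
        for i in range(n):
            r, x = 0, i
            for _ in range(bits):
                r = (r << 1) | (x & 1)
                x >>= 1
            rev[i] = r
        return rev

    idx = bit_reverse_indices(3)
    print(idx)              # [0 4 2 6 1 5 3 7]
    # Applying it reorders samples for an in-order radix-2 FFT:
    data = np.arange(8)
    print(data[idx])
    ```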

  3. An Energy Efficient Distance-Aware Routing Algorithm with Multiple Mobile Sinks for Wireless Sensor Networks

    Jin Wang

    2014-08-01

    Traffic patterns in wireless sensor networks (WSNs) usually follow a many-to-one model. Sensor nodes close to static sinks will deplete their limited energy more rapidly than other sensors, since they will have more data to forward during multihop transmission. This will cause network partition, isolated nodes and a much shortened network lifetime. Thus, how to balance energy consumption for sensor nodes is an important research issue. In recent years, exploiting sink mobility technology in WSNs has attracted much research attention because it can not only improve energy efficiency, but also prolong network lifetime. In this paper, we propose an energy efficient distance-aware routing algorithm with multiple mobile sinks for WSNs, where sink nodes move with a certain speed along the network boundary to collect monitored data. We study the influence of multiple mobile sink nodes on energy consumption and network lifetime, and we mainly focus on the selection of the number of mobile sink nodes and the selection of parking positions, as well as their impact on the performance metrics above. Both the number of mobile sink nodes and the selection of parking positions have an important influence on network performance. Simulation results show that our proposed routing algorithm performs better than traditional routing algorithms in terms of energy consumption.

  4. Rotor Cascade Shape Optimization with Unsteady Passing Wakes Using Implicit Dual-Time Stepping and a Genetic Algorithm

    Eun Seok Lee

    2003-01-01

    An axial turbine rotor cascade-shape optimization with unsteady passing wakes was performed to obtain improved aerodynamic performance, using an unsteady-flow Reynolds-averaged Navier-Stokes solver based on explicit finite differences, Runge-Kutta multistage time marching, and the diagonalized alternating direction implicit scheme. The code utilized Baldwin-Lomax algebraic and k-ε turbulence modeling. The full approximation storage multigrid method and preconditioning were implemented as iterative convergence-acceleration techniques. An implicit dual-time stepping method was incorporated in order to simulate the unsteady flow fields. The objective function was defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed during the optimization. The design variables were several geometric parameters characterizing the airfoil leading edge, camber, stagger angle, and inter-row spacing. The genetic algorithm was used as the optimizer, and the penalty method was introduced for combining the constraints with the objective function. Each individual's objective function was computed simultaneously by using a 32-processor distributed-memory computer. The optimization results indicated that only minor improvements are possible in unsteady rotor/stator aerodynamics by varying these geometric parameters.

  5. Experimental Investigation of a Base Isolation System Incorporating MR Dampers with the High-Order Single Step Control Algorithm

    Weiqing Fu

    2017-03-01

    The conventional isolation structure with rubber bearings exhibits large deformation characteristics when subjected to infrequent earthquakes, which may lead to failure of the isolation layer. Although passive dampers can be used to reduce the layer displacement, the layer deformation and superstructure acceleration responses will increase in cases of fortification earthquakes or frequently occurring earthquakes. Such excessive displacement results in damage to the facilities in the structure, in addition to secondary damage and loss of life. In order to overcome these shortcomings, this paper presents a structural vibration control system in which the base isolation system is composed of rubber bearings with magnetorheological (MR) dampers regulated by an innovative control strategy. The high-order single-step algorithm with continuity and switch control strategies is applied to the control system. Shaking table test results under various earthquake conditions indicate that the proposed isolation method, compared with the passive isolation technique, can effectively suppress earthquake responses for superstructure acceleration and deformation within the isolation layer. As a result, this structural control method exhibits excellent performance, such as fast computation, generic real-time control, acceleration reduction, and high seismic energy dissipation. The relative merits of the continuity and switch control strategies are also compared and discussed.

  6. The US Network of Pediatric Multiple Sclerosis Centers: Development, Progress, and Next Steps

    Casper, T. Charles; Rose, John W.; Roalstad, Shelly; Waubant, Emmanuelle; Aaen, Gregory; Belman, Anita; Chitnis, Tanuja; Gorman, Mark; Krupp, Lauren; Lotze, Timothy E.; Ness, Jayne; Patterson, Marc; Rodriguez, Moses; Weinstock-Guttman, Bianca; Browning, Brittan; Graves, Jennifer; Tillema, Jan-Mendelt; Benson, Leslie; Harris, Yolanda

    2014-01-01

    Multiple sclerosis and other demyelinating diseases in the pediatric population have received an increasing level of attention by clinicians and researchers. The low incidence of these diseases in children creates a need for the involvement of multiple clinical centers in research efforts. The Network of Pediatric Multiple Sclerosis Centers was created initially in 2006 to improve the diagnosis and care of children with demyelinating diseases. In 2010, the Network shifted its focus to multicenter research while continuing to advance the care of patients. The Network has obtained support from the National Multiple Sclerosis Society, the Guthy-Jackson Charitable Foundation, and the National Institutes of Health. The Network will continue to serve as a platform for conducting impactful research in pediatric demyelinating diseases of the central nervous system. This article provides a description of the history and development, organization, mission, research priorities, current studies, and future plans of the Network. PMID:25270659

  7. Contactless multiple wavelength photoplethysmographic imaging: a first step toward "spO2 camera" technology

    Wieringa, F.P.; Mastik, F.; Steen, A.F.W. van der

    2005-01-01

    We describe a route toward contactless imaging of arterial oxygen saturation (SpO2) distribution within tissue, based upon detection of a two-dimensional matrix of spatially resolved optical plethysmographic signals at different wavelengths. As a first step toward SpO2 imaging we built a monochrome

  8. Identification of alternative splice variants in Aspergillus flavus through comparison of multiple tandem MS search algorithms

    Chang Kung-Yen

    2011-07-01

    Background: Database searching is the most frequently used approach for automated peptide assignment and protein inference from tandem mass spectra. The results, however, depend on the sequences in the target databases and on the search algorithms. Recently, by using an alternative splicing database, we identified more proteins than with the annotated proteins in Aspergillus flavus. In this study, we aimed at finding a greater number of eligible splice variants based on newly available transcript sequences and the latest genome annotation. The improved database was then used to compare four search algorithms: Mascot, OMSSA, X! Tandem, and InsPecT. Results: The updated alternative splicing database predicted 15833 putative protein variants, 61% more than the previous results. There was transcript evidence for 50% of the updated genes, compared to the previous 35% coverage. Database searches were conducted using the same set of spectral data, search parameters, and protein database but with different algorithms. The false discovery rates of the peptide-spectrum matches were estimated. Conclusions: We were able to detect dozens of new peptides using the improved alternative splicing database with the recently updated annotation of the A. flavus genome. Unlike the identifications of the peptides and the RefSeq proteins, large variations existed between the putative splice variants identified by different algorithms. Twelve candidate putative isoforms were reported based on the consensus peptide-spectrum matches. This suggests that application of multiple search engines effectively reduced the possible false-positive results and validated the protein identifications from tandem mass spectra using an alternative splicing database.

  9. Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs.

    Hemming, Karla; Lilford, Richard; Girling, Alan J

    2015-01-30

    Stepped-wedge cluster randomised trials (SW-CRTs) are being used with increasing frequency in health service evaluation. Conventionally, these studies are cross-sectional in design with equally spaced steps, with an equal number of clusters randomised at each step and data collected at each and every step. Here we introduce several variations on this design and consider implications for power. One modification we consider is the incomplete cross-sectional SW-CRT, where the number of clusters varies at each step or where at some steps, for example implementation or transition periods, data are not collected. We show that the parallel CRT with staggered but balanced randomisation, as well as the parallel CRT with baseline measures, can be considered special cases of the incomplete SW-CRT. We extend these designs to allow for multiple layers of clustering, for example, wards within a hospital. Building on results for complete designs, power and detectable difference are derived using a Wald test, obtaining the variance-covariance matrix of the treatment effect under an assumed generalised linear mixed model. These variations are illustrated by several real examples. We recommend that, whilst the impact of transition periods on power is likely to be small, where they are a feature of the design they should be incorporated. We also show examples in which the power of a SW-CRT increases as the intra-cluster correlation (ICC) increases, and demonstrate that the impact of the ICC is likely to be smaller in a SW-CRT compared with a parallel CRT, especially where there are multiple levels of clustering. Finally, through this unified framework, the efficiency of the SW-CRT and the parallel CRT can be compared. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  10. A modeling and numerical algorithm for thermoporomechanics in multiple porosity media for naturally fractured reservoirs

    Kim, J.; Sonnenthal, E. L.; Rutqvist, J.

    2011-12-01

    Rigorous modeling of the coupling between fluid, heat, and geomechanics (thermo-poro-mechanics) in fractured porous media is one of the important and difficult topics in geothermal reservoir simulation, because the physics are highly nonlinear and strongly coupled. Coupled fluid/heat flow and geomechanics are investigated using the multiple interacting continua (MINC) method as applied to naturally fractured media. In this study, we generalize constitutive relations for the isothermal elastic dual porosity model proposed by Berryman (2002) to those for the non-isothermal elastic/elastoplastic multiple porosity model, and derive the coupling coefficients of coupled fluid/heat flow and geomechanics and the constraints on the coefficients. When the off-diagonal terms of the total compressibility matrix for the flow problem are zero, the upscaled drained bulk modulus for geomechanics becomes the harmonic average of the drained bulk moduli of the multiple continua. In this case, the drained elastic/elastoplastic moduli for mechanics are determined by a combination of the drained moduli and volume fractions of the multiple porosity materials. We also determine a relation between the local strains of all multiple porosity materials in a gridblock and the global strain of the gridblock, from which we can track local and global elastic/plastic variables. For elastoplasticity, the return mapping is performed for all multiple porosity materials in the gridblock. For numerical implementation, we employ and extend the fixed-stress sequential method of the single porosity model to coupled fluid/heat flow and geomechanics in multiple porosity systems, because it provides numerical stability and high accuracy. This sequential scheme can be easily implemented by using a porosity function and its corresponding porosity correction, making use of existing robust flow and geomechanics simulators. We implemented the proposed modeling and numerical algorithm in the reaction transport simulator

  11. Algorithms

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  12. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Hamidreza Mousavi

    2017-01-01

    Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable in multiple fault conditions. The algorithm uses a calibration model based on an AANN. The AANN can reconstruct the faulty sensor using the non-faulty sensors due to the correlation between the process variables, and the mean of the difference between reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with low computational time, making the algorithm an appropriate candidate for online applications.
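
    A hedged sketch of the AANN idea, with sklearn's MLPRegressor standing in for the paper's network and a synthetic correlated sensor set; a bias fault is injected and located by the per-sensor mean reconstruction residual:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 2))                   # two hidden factors
    X = latent @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(500, 5))

    # Auto-associative: a bottlenecked network trained to reproduce its input.
    aann = MLPRegressor(hidden_layer_sizes=(8, 2, 8),
                        max_iter=5000, random_state=0)
    aann.fit(X, X)

    X_faulty = X.copy()
    X_faulty[:, 3] += 2.0                                # bias fault, sensor 3
    residual = np.abs(X_faulty - aann.predict(X_faulty)).mean(axis=0)
    print(residual.argmax())                             # ideally 3
    ```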

  13. Highly transparent films from carboxymethylated microfibrillated cellulose: The effect of multiple homogenization steps on key properties

    Siró, Istvan; Plackett, David; Hedenqvist, M.

    2011-01-01

    We produced microfibrillated cellulose by passing carboxymethylated sulfite-softwood-dissolving pulp with a relatively low hemicellulose content (4.5%) through a high-shear homogenizer. The resulting gel was subjected to as many as three additional homogenization steps and then used to prepare solvent-cast films. The optical, mechanical, and oxygen-barrier properties of these films were determined. A reduction in the quantity and appearance of large fiber fragments and fiber aggregates in the films as a function of increasing homogenization was illustrated with optical microscopy, atomic force microscopy, and scanning electron microscopy. Film opacity decreased with increasing homogenization, and the use of three additional homogenization steps after initial gel production resulted in highly transparent films. The oxygen permeability of the films was not significantly influenced by the degree

  14. Optical properties and surface morphology of ZnTe thin films prepared by multiple potential steps

    Gromboni, Murilo F.; Lucas, Francisco W. S.; Mascaro, Lucia H., E-mail: lmascaro@ufscar.br [Universidade de Federal de Sao Carlos (LIEC/UFSCar), SP (Brazil). Departamento de Quimica. Lab. de Eletroquimica e Ceramica

    2014-03-15

    In this work, ZnTe thin films were electrodeposited on a Pt substrate using potentiostatic steps. The effect of the number of steps, the deposition time for each element (Zn or Te), and the layer order (Zn/Te or Te/Zn) on the morphology, composition, band gap energy, and photocurrent was evaluated. Microanalysis data showed that the Zn/Te ratio ranged from 0.12 to 0.30 and the film was not stoichiometric. However, the band-gap value obtained under all experimental conditions used in this work was 2.28 eV, indicating growth of ZnTe films. The samples with higher Zn content showed a higher photocurrent, on the order of 2.64 μA cm^-2, and a dendritic morphology. (author)

  15. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Q. Zhou

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps of robotic motion estimation, and it largely influences the precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the feature points of the next left image are matched with those of the current left image, the EDC and RANSAC are iteratively performed. After the above steps, exceptional mismatched points remain in some cases, for which RANSAC is applied a third time to eliminate the effects of those outliers in the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
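
    The EDC part is easy to sketch: under rigid motion, pairwise distances between triangulated 3D points are preserved between frames, so a point that breaks the constraint with many partners is likely a mismatch. The threshold and data below are illustrative.

    ```python
    import numpy as np

    def edc_filter(P, Q, tol=0.05):
        """P, Q: (n, 3) matched 3D points in two frames. Keep points whose
        pairwise-distance violations stay rare across partners."""
        dP = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
        dQ = np.linalg.norm(Q[:, None] - Q[None, :], axis=-1)
        violations = np.abs(dP - dQ) > tol
        return np.where(violations.mean(axis=1) < 0.5)[0]

    rng = np.random.default_rng(0)
    P = rng.uniform(-1, 1, (30, 3))
    Q = P + np.array([0.3, 0.0, 0.1])   # pure translation preserves distances
    Q[7] += 0.4                         # one corrupted match
    print(edc_filter(P, Q))             # index 7 should be missing
    ```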

  16. Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation

    Richter, Tobias; Maier, Johanna

    2017-01-01

    In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…

  17. The Non-Symmetric s-Step Lanczos Algorithm: Derivation of Efficient Recurrences and Synchronization-Reducing Variants of BiCG and QMR

    Feuerriegel Stefan

    2015-12-01

    The Lanczos algorithm is among the most frequently used iterative techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. At the same time, it serves as a building block within biconjugate gradient (BiCG) and quasi-minimal residual (QMR) methods for solving large sparse non-symmetric systems of linear equations. It is well known that, when implemented on distributed-memory computers with a huge number of processes, the synchronization time spent on computing dot products increasingly limits the parallel scalability. Therefore, we propose synchronization-reducing variants of the Lanczos, as well as BiCG and QMR methods, in an attempt to mitigate these negative performance effects. These so-called s-step algorithms are based on grouping dot products for joint execution and replacing time-consuming matrix operations by efficient vector recurrences. The purpose of this paper is to provide a rigorous derivation of the recurrences for the s-step Lanczos algorithm, introduce s-step BiCG and QMR variants, and compare the parallel performance of these new s-step versions with previous algorithms.
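
    For orientation, the classical symmetric Lanczos recurrence is shown below; the two dot products per iteration are exactly the global synchronization points that s-step variants group together. This is the symmetric special case, not the paper's non-symmetric s-step algorithm.

    ```python
    import numpy as np

    def lanczos(A, v0, m):
        n = len(v0)
        Q = np.zeros((n, m + 1))
        alpha, beta = np.zeros(m), np.zeros(m)
        Q[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ Q[:, j] - (beta[j - 1] * Q[:, j - 1] if j > 0 else 0.0)
            alpha[j] = Q[:, j] @ w          # dot product #1 (synchronization)
            w -= alpha[j] * Q[:, j]
            beta[j] = np.linalg.norm(w)     # dot product #2 (synchronization)
            Q[:, j + 1] = w / beta[j]
        return alpha, beta[:-1], Q

    rng = np.random.default_rng(0)
    M = rng.normal(size=(100, 100)); A = M + M.T   # symmetric test matrix
    a, b, Q = lanczos(A, rng.normal(size=100), 20)
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    print(np.linalg.eigvalsh(T)[-3:])  # Ritz approximations of extreme eigenvalues
    ```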

  18. Application of multislice spiral CT (MSCT) in multiple injured patients and its effect on diagnostic and therapeutic algorithms

    Boehm, T.; Alkadhi, H.; Schertler, T.; Baumert, B.; Roos, J.; Marincek, B.; Wildermuth, S.

    2004-01-01

    The initial diagnostic work-up of trauma victims with multiple injuries is currently a combination of conventional radiography (CR), ultrasound (US), and computed tomography (CT). This article reviews the diagnostic quality of the different imaging modalities regarding the detection and classification of injuries. CT performs better than US in detecting traumatic lesions of abdominal parenchymal organs. Furthermore, CT is better than CR in detecting therapeutically relevant chest and bone injuries. MSCT may replace CR and US under the condition that it is faster than, or at least as fast as, the conventional approach in diagnosing life-threatening injuries. This can be achieved only by changing the workflow for the entire trauma team, including the radiologist. Furthermore, certain prerequisites must be fulfilled, including the integration of an MSCT scanner into the emergency room. An optimized whole-body CT protocol for the assessment of trauma victims using MSCT, as well as a two-step algorithm for reporting the imaging findings depending on their clinical significance, is presented. (orig.)

  19. Identification of novel adhesins of M. tuberculosis H37Rv using integrated approach of multiple computational algorithms and experimental analysis.

    Sanjiv Kumar

    Pathogenic bacteria interacting with a eukaryotic host express adhesins on their surface. These adhesins aid in bacterial attachment to host cell receptors during colonization. A few adhesins, such as heparin-binding hemagglutinin adhesin (HBHA), Apa, and malate synthase of M. tuberculosis, have been identified using specific experimental interaction models based on the biological knowledge of the pathogen. In the present work, we carried out computational screening for adhesins of M. tuberculosis. We used an integrated computational approach using SPAAN for predicting adhesins; PSORTb, SubLoc, and LocTree for extracellular localization; and BLAST for verifying non-similarity to human proteins. These steps are among the first of reverse vaccinology. Multiple claims and attacks from different algorithms were processed through an argumentative approach. Additional filtration criteria included selection for proteins with low molecular weights and absence of literature reports. We examined the binding potential of the selected proteins using an image-based ELISA. The protein Rv2599 (membrane protein) binds to human fibronectin, laminin, and collagen. Rv3717 (N-acetylmuramoyl-L-alanine amidase) and Rv0309 (L,D-transpeptidase) bind to fibronectin and laminin. We report Rv2599 (membrane protein), Rv0309, and Rv3717 as novel adhesins of M. tuberculosis H37Rv. Our results expand the number of known adhesins of M. tuberculosis and suggest their regulated expression in different stages.

  20. An FPGA-Based Multiple-Axis Velocity Controller and Stepping Motors Drives Design

    Lai Chiu-Keng

    2016-01-01

    A Field Programmable Gate Array (FPGA)-based system is a great hardware platform to support the implementation of hardware controllers such as PID and fuzzy controllers. It can also be programmed as a hardware accelerator to speed up mathematical calculation and greatly enhance performance as applied to motor drives and motion control. Furthermore, the open structure of FPGA-based systems is suitable for designs requiring parallel processing or an embedded soft-core processor. In this paper, we apply the FPGA to a multi-axis velocity controller design. The developed system integrates three functions inside the FPGA chip: the stepping motor drive, the multi-axis motion controller, and the motion planning. Furthermore, an embedded controller with a soft-core processor compatible with the 8051 microcontroller unit (MCU) is built to handle data transfer between the FPGA board and a host PC. The MCU is also used to initialize the motion control and run the interpolator. The designed system is practically applied to an XYZ motion platform driven by stepping motors to verify its performance.

  1. A multiple objective magnet sorting algorithm for the Advanced Light Source insertion devices

    Humphries, D.; Goetz, F.; Kownacki, P.; Marks, S.; Schlueter, R.

    1995-01-01

    Insertion devices for the Advanced Light Source (ALS) incorporate large numbers of permanent magnets which have a variety of magnetization orientation errors. These orientation errors can produce field errors which affect both the spectral brightness of the insertion devices and the storage ring electron beam dynamics. A perturbation study was carried out to quantify the effects of orientation errors acting in a hybrid magnetic structure. The results of this study were used to develop a multiple stage sorting algorithm which minimizes undesirable integrated field errors and essentially eliminates pole excitation errors. When applied to a measured magnet population for an existing insertion device, an order of magnitude reduction in integrated field errors was achieved while maintaining near zero pole excitation errors

  2. A multiple objective magnet sorting algorithm for the ALS insertion devices

    Humphries, D.; Goetz, F.; Kownacki, P.; Marks, S.; Schlueter, R.

    1994-07-01

    Insertion devices for the Advanced Light Source (ALS) incorporate large numbers of permanent magnets which have a variety of magnetization orientation errors. These orientation errors can produce field errors which affect both the spectral brightness of the insertion devices and the storage ring electron beam dynamics. A perturbation study was carried out to quantify the effects of orientation errors acting in a hybrid magnetic structure. The results of this study were used to develop a multiple stage sorting algorithm which minimizes undesirable integrated field errors and essentially eliminates pole excitation errors. When applied to a measured magnet population for an existing insertion device, an order of magnitude reduction in integrated field errors was achieved while maintaining near zero pole excitation errors

  3. An energy efficient multiple mobile sinks based routing algorithm for wireless sensor networks

    Zhong, Peijun; Ruan, Feng

    2018-03-01

    With the fast development of wireless sensor networks (WSNs), more and more energy-efficient routing algorithms have been proposed. However, one of the research challenges is how to alleviate the hot spot problem, since nodes close to a static sink (or base station) tend to die earlier than other sensors. The introduction of mobile sink nodes can effectively alleviate this problem, since a sink node can move along certain trajectories, making hot spot nodes more evenly distributed. In this paper, we mainly study an energy-efficient routing method with support for multiple mobile sinks. We divide the whole network into several clusters and study the influence of the number of mobile sinks on network lifetime. Simulation results show that the best network performance appears when the number of mobile sinks is about 3 under our simulation environment.

  4. Application of genetic algorithm - multiple linear regressions to predict the activity of RSK inhibitors

    Avval Zhila Mohajeri

    2015-01-01

    This paper deals with developing a linear quantitative structure-activity relationship (QSAR) model for predicting the RSK inhibition activity of some new compounds. A dataset consisting of 62 pyrazino [1,2-α] indole, diazepino [1,2-α] indole, and imidazole derivatives with known inhibitory activities was used. The multiple linear regression (MLR) technique combined with the stepwise (SW) and genetic algorithm (GA) methods as variable selection tools was employed. To further check the stability, robustness, and predictability of the proposed models, internal and external validation techniques were used. Comparison of the results obtained indicates that the GA-MLR model is superior to the SW-MLR model and that it is applicable for designing novel RSK inhibitors.
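
    A hedged sketch of the stepwise (SW) baseline: greedily add the descriptor that most improves the cross-validated R^2 of a linear model. The descriptor matrix and activities below are synthetic stand-ins for the RSK dataset.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(62, 30))                  # 62 compounds, 30 descriptors
    y = X[:, 3] - 2 * X[:, 17] + rng.normal(0, 0.3, 62)

    selected, remaining, best = [], list(range(30)), -np.inf
    while remaining:
        scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]],
                                     y, scoring="r2", cv=5).mean()
                  for j in remaining}
        j, s = max(scores.items(), key=lambda kv: kv[1])
        if s <= best:
            break                                  # no descriptor helps anymore
        selected.append(j); remaining.remove(j); best = s
    print(selected, round(best, 3))                # expect 3 and 17 among picks
    ```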

  5. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in the non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as neutron coincidence and multiplicity counters currently used by the International Atomic Energy Agency, have proven to be effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations exist within the algorithms that are used to process and analyze the detected signals from these counters that were initially developed approximately 20 years ago based on the technology and computing power that was available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms that are based on fundamental physics principles that should lead to the development of more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, connecting the two main types of analysis techniques used to quantify the data (Shift register analysis and Feynman variance to mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed

  6. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra.

    Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin

    2015-02-09

    The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not obtain all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information. To make full use of the useful information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method is the combination of a consensus strategy and the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The results of the C-SPA-MLR method showed better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic data. Copyright © 2014 Elsevier B.V. All rights reserved.
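
    The consensus step is simple to sketch: build several member MLR models on different variable subsets and average their predictions. Random subsets stand in below for the iterative SPA selection described in the record.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 200))                     # 200 spectral variables
    y = X[:, 10] + 0.5 * X[:, 50] + rng.normal(0, 0.1, 80)
    X_train, X_test, y_train = X[:60], X[60:], y[:60]

    preds = []
    for _ in range(10):                                # 10 member models
        vars_ = rng.choice(200, size=15, replace=False)
        model = LinearRegression().fit(X_train[:, vars_], y_train)
        preds.append(model.predict(X_test[:, vars_]))
    consensus = np.mean(preds, axis=0)                 # combine member predictions
    print(np.sqrt(np.mean((consensus - y[60:]) ** 2))) # RMSEP of the consensus
    ```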

  7. Multiple Kernel Learning for Heterogeneous Anomaly Detection: Algorithm and Aviation Safety Case Study

    Das, Santanu; Srivastava, Ashok N.; Matthews, Bryan L.; Oza, Nikunj C.

    2010-01-01

    The world-wide aviation system is one of the most complex dynamical systems ever developed and is generating data at an extremely rapid rate. Most modern commercial aircraft record several hundred flight parameters, including information from the guidance, navigation, and control systems, the avionics and propulsion systems, and the pilot inputs into the aircraft. These parameters may be continuous measurements or binary or categorical measurements recorded in one-second intervals for the duration of the flight. Currently, most approaches to aviation safety are reactive, meaning that they are designed to react to an aviation safety incident or accident. In this paper, we discuss a novel approach based on the theory of multiple kernel learning to detect potential safety anomalies in very large databases of discrete and continuous data from world-wide operations of commercial fleets. We pose a general anomaly detection problem which includes both discrete and continuous data streams, where we assume that the discrete streams have a causal influence on the continuous streams. We also assume that an atypical sequence of events in the discrete streams can lead to off-nominal system performance. We discuss the application domain, novel algorithms, and results on real-world data sets. Our algorithm uncovers operationally significant events in high-dimensional data streams in the aviation industry which are not detectable using state-of-the-art methods.
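
    One way to make the multiple kernel idea concrete (the weights, kernels, and data below are illustrative, not the paper's): combine a kernel over the continuous parameters with a kernel over the discrete switches and hand the result to a one-class SVM.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    Xc = rng.normal(size=(200, 4))                        # continuous streams
    Xd = rng.integers(0, 2, size=(200, 6)).astype(float)  # binary switches

    K = 0.6 * rbf_kernel(Xc) + 0.4 * (Xd @ Xd.T / 6)      # weighted kernel sum
    clf = OneClassSVM(kernel="precomputed", nu=0.05).fit(K)

    # Scoring new flights needs the kernel between new and training points:
    Kc_new = rbf_kernel(rng.normal(size=(5, 4)), Xc)
    Kd_new = rng.integers(0, 2, size=(5, 6)).astype(float) @ Xd.T / 6
    print(clf.predict(0.6 * Kc_new + 0.4 * Kd_new))       # -1 flags anomalies
    ```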

  8. Algorithms

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  9. Algorithms

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n - 2 is the solution to the above ...

  10. "Silicon millefeuille": From a silicon wafer to multiple thin crystalline films in a single step

    Hernández, David; Trifonov, Trifon; Garín, Moisés; Alcubilla, Ramon

    2013-04-01

    During the last years, many techniques have been developed to obtain thin crystalline films from commercial silicon ingots. Large market applications are foreseen in the photovoltaic field, where important cost reductions are predicted, and also in advanced microelectronics technologies such as three-dimensional integration, system on foil, or silicon interposers [Dross et al., Prog. Photovoltaics 20, 770-784 (2012); R. Brendel, Thin Film Crystalline Silicon Solar Cells (Wiley-VCH, Weinheim, Germany 2003); J. N. Burghartz, Ultra-Thin Chip Technology and Applications (Springer Science + Business Media, NY, USA, 2010)]. Existing methods produce silicon layers "one at a time": once one thin film is obtained, the complete process is repeated to obtain the next layer. Here, we describe a technology that, from a single crystalline silicon wafer, produces a large number of crystalline films with controlled thickness in a single technological step.

  11. Stability of gas atomized reactive powders through multiple step in-situ passivation

    Anderson, Iver E.; Steinmetz, Andrew D.; Byrd, David J.

    2017-05-16

    A method for gas atomization of oxygen-reactive reactive metals and alloys wherein the atomized particles are exposed as they solidify and cool in a very short time to multiple gaseous reactive agents for the in-situ formation of a protective reaction film on the atomized particles. The present invention is especially useful for making highly pyrophoric reactive metal or alloy atomized powders, such as atomized magnesium and magnesium alloy powders. The gaseous reactive species (agents) are introduced into the atomization spray chamber at locations downstream of a gas atomizing nozzle as determined by the desired powder or particle temperature for the reactions and the desired thickness of the reaction film.

  12. Imputation and quality control steps for combining multiple genome-wide datasets

    Shefali S Verma

    2014-12-01

    The electronic MEdical Records and GEnomics (eMERGE) network brings together DNA biobanks linked to electronic health records (EHRs) from multiple institutions. Approximately 52,000 DNA samples from distinct individuals have been genotyped using genome-wide SNP arrays across the nine sites of the network. The eMERGE Coordinating Center and the Genomics Workgroup developed a pipeline to impute and merge genomic data across the different SNP arrays to maximize sample size and power to detect associations with a variety of clinical endpoints. The 1000 Genomes cosmopolitan reference panel was used for imputation. Imputation results were evaluated using the following metrics: accuracy of imputation, allelic R2 (estimated correlation between the imputed and true genotypes), and the relationship between allelic R2 and minor allele frequency. Computation time and memory resources required by two different software packages (BEAGLE and IMPUTE2) were also evaluated. A number of challenges were encountered due to the complexity of using two different imputation software packages, multiple ancestral populations, and many different genotyping platforms. We present lessons learned and describe the pipeline implemented here to impute and merge genomic data sets. The eMERGE imputed dataset will serve as a valuable resource for discovery, leveraging the clinical data that can be mined from the EHR.

  13. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of mass scans that are possibly correlated; the correlation matrix is then calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, written in C++ for the .NET 2 environment under Windows XP. In this work, we demonstrate the application of ChromAlign to the alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
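
    As an illustration of the first (prealignment) step described above, the temporal offset between two chromatographic profiles can be found with an FFT-based cross-correlation. This is a minimal sketch under simplifying assumptions (one-dimensional profiles such as total-ion-current traces; the actual program is written in C++), not ChromAlign itself:

        import numpy as np

        def temporal_offset(reference, sample):
            """Signed shift (in scans) that maximizes the overlap of `sample` with `reference`."""
            n = len(reference) + len(sample) - 1  # zero-pad to avoid circular wrap-around
            corr = np.fft.irfft(np.fft.rfft(reference, n) * np.conj(np.fft.rfft(sample, n)), n)
            k = int(np.argmax(corr))
            return k if k < len(reference) else k - n  # map circular index to a signed lag

        x = np.arange(200)
        ref = np.exp(-0.5 * ((x - 80) / 5.0) ** 2)   # peak at scan 80
        sam = np.exp(-0.5 * ((x - 95) / 5.0) ** 2)   # same peak eluting 15 scans later
        print(temporal_offset(ref, sam))             # -15: shift the sample back to align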

  14. Algorithms for Some Euler-Type Identities for Multiple Zeta Values

    Shifeng Ding

    2013-01-01

    Multiple zeta values are the numbers defined by the convergent series ζ(s1, s2, …, sk) = Σ_{n1 > n2 > ⋯ > nk > 0} 1/(n1^s1 n2^s2 ⋯ nk^sk), where s1, s2, …, sk are positive integers with s1 > 1. For k ≤ n, let E(2n, k) be the sum of all multiple zeta values with even arguments whose weight is 2n and whose depth is k. The well-known result E(2n, 2) = 3ζ(2n)/4 was extended to E(2n, 3) and E(2n, 4) by Z. Shen and T. Cai. Applying the theory of symmetric functions, Hoffman gave an explicit generating function for the numbers E(2n, k) and then gave a direct formula for E(2n, k) for arbitrary k ≤ n. In this paper we apply a technique introduced by Granville to present an algorithm to calculate E(2n, k) and prove that the direct formula can also be deduced from Eisenstein's double product.
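
    The depth-2 identity quoted above is easy to check numerically with truncated sums. A minimal sketch for the case n = 2, i.e. ζ(2,2) = 3ζ(4)/4 (the cutoff N is an illustrative accuracy/time trade-off):

        N = 200000  # truncation point of the infinite sums

        def zeta(s):
            return sum(1.0 / m ** s for m in range(1, N + 1))

        def double_zeta(s1, s2):
            # zeta(s1, s2) = sum over n1 > n2 >= 1 of 1/(n1^s1 * n2^s2)
            inner, total = 0.0, 0.0
            for n1 in range(2, N + 1):
                inner += 1.0 / (n1 - 1) ** s2  # running sum over n2 < n1
                total += inner / n1 ** s1
            return total

        n = 2
        E = sum(double_zeta(2 * a, 2 * (n - a)) for a in range(1, n))
        print(E, 3 * zeta(2 * n) / 4)  # both approximately 0.81174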

  15. Signal Timing Optimization for Corridors with Multiple Highway-Rail Grade Crossings Using Genetic Algorithm

    Yifeng Chen

    2018-01-01

    Safety and efficiency are two critical issues at highway-rail grade crossings (HRGCs) and their nearby intersections. Standard traffic signal optimization programs are not designed to work on roadway networks that contain multiple HRGCs, because their underlying assumption is that the roadway traffic is in a steady state. During a train event, steady-state conditions do not occur. This is particularly true for corridors that experience high train traffic (e.g., over 2 trains per hour). In this situation, non-steady-state conditions predominate. This paper develops a simulation-based methodology for optimizing the traffic signal timing plan on corridors of this kind. The primary goal is to maximize safety, and the secondary goal is to minimize delay. A Genetic Algorithm (GA) was used as the optimization approach in the proposed methodology. A new transition preemption strategy for dual tracks (TPS_DT) and a train arrival prediction model were integrated in the proposed methodology. An urban road network with multiple HRGCs in Lincoln, NE, was used as the study network. The microsimulation model VISSIM was used for evaluation purposes and was calibrated to local traffic conditions. A sensitivity analysis with different train traffic scenarios was conducted. It was concluded that the methodology can significantly improve both the safety and efficiency of traffic corridors with HRGCs.
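
    To make the GA machinery concrete, the sketch below evolves a vector of signal offsets with tournament selection, one-point crossover, and random mutation. Everything here is an illustrative assumption, including the cycle length, the number of signals, and especially the stand-in fitness function; the study itself scores candidates in VISSIM with safety as the primary objective:

        import random

        CYCLE, N_SIGNALS = 90, 6          # assumed cycle length (s) and corridor size
        POP, GENS, MUT = 30, 50, 0.1      # GA parameters

        def fitness(offsets):
            # Placeholder objective: reward offsets forming a smooth progression.
            return -sum(abs((offsets[i + 1] - offsets[i]) % CYCLE - 20)
                        for i in range(N_SIGNALS - 1))

        def tournament(pop):
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        pop = [[random.randrange(CYCLE) for _ in range(N_SIGNALS)] for _ in range(POP)]
        for _ in range(GENS):
            nxt = [max(pop, key=fitness)]             # elitism: keep the best plan
            while len(nxt) < POP:
                p1, p2 = tournament(pop), tournament(pop)
                cut = random.randrange(1, N_SIGNALS)  # one-point crossover
                child = p1[:cut] + p2[cut:]
                if random.random() < MUT:             # mutate one offset
                    child[random.randrange(N_SIGNALS)] = random.randrange(CYCLE)
                nxt.append(child)
            pop = nxt
        print(max(pop, key=fitness))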

  16. Multiple Signal Classification Algorithm Based Electric Dipole Source Localization Method in an Underwater Environment

    Yidong Xu

    2017-10-01

    A novel localization method based on the multiple signal classification (MUSIC) algorithm is proposed for positioning an electric dipole source in a confined underwater environment using an electric dipole receiving antenna array. In this method, the boundary element method (BEM) is introduced to analyze the boundary of the confined region by means of a matrix equation. The voltage of each dipole pair is used as the spatial-temporal localization data; unlike conventional field-based localization methods, there is no need to obtain the field component in each direction, which makes the method easy to implement in practical engineering applications. A global/multiple-region/conjugate-gradient (CG) hybrid search method is then used to reduce the computational burden and to improve the operation speed. Two localization simulation models and a physical experiment are conducted. Both the simulation results and the physical experiment show accurate positioning performance, verifying the effectiveness of the proposed localization method in underwater environments.

  17. Algorithms

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
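
    The fragment above is the classic Towers of Hanoi recursion. A minimal sketch of the algorithm it refers to, where `move_disk` simply prints the move (the function names are illustrative):

        def move_disk(src, dst):
            print(f"move top disk from {src} to {dst}")

        def hanoi(n, src="A", dst="C", aux="B"):
            if n == 0:
                return
            hanoi(n - 1, src, aux, dst)  # park the n-1 smaller disks on the auxiliary rod
            move_disk(src, dst)          # move the nth (largest remaining) disk directly
            hanoi(n - 1, aux, dst, src)  # re-stack the n-1 disks on top of it

        hanoi(3)  # prints 2**3 - 1 = 7 moves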

  18. Direction of Radio Finding via MUSIC (Multiple Signal Classification) Algorithm for Hardware Design System

    Zhang, Zheng

    2017-10-01

    Radio direction finding systems are based on digital signal processing algorithms, which make them capable of locating and tracking signals. The performance of radio direction finding depends significantly on the effectiveness of these algorithms. Direction of Arrival (DOA) algorithms estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates the implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm estimates the radio direction well.
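
    A minimal sketch of narrowband MUSIC on a uniform linear array in white noise, matching the setting described above; the element spacing, source angles, noise level, and snapshot count are illustrative assumptions:

        import numpy as np

        M, d, K, T = 8, 0.5, 2, 200              # sensors, spacing (wavelengths), sources, snapshots
        doas = np.deg2rad([-20.0, 35.0])         # true directions of arrival

        def steering(theta):
            return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

        rng = np.random.default_rng(0)
        S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
        noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
        X = steering(doas) @ S + noise           # array snapshots

        R = X @ X.conj().T / T                   # sample covariance matrix
        _, V = np.linalg.eigh(R)                 # eigenvalues ascending
        En = V[:, : M - K]                       # noise subspace (M-K smallest eigenvalues)

        grid = np.deg2rad(np.linspace(-90, 90, 721))
        P = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)  # pseudospectrum
        peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1      # local maxima
        top = peaks[np.argsort(P[peaks])[-K:]]
        print(np.sort(np.rad2deg(grid[top])))    # close to [-20, 35]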

  19. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.

  20. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables with a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated MPS simulation using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that the proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
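
    The core compression idea can be illustrated with a flat (rather than tree-structured) vector quantizer: a small codebook of representative patterns replaces exhaustive scans of the full pattern database. A toy sketch with invented pattern data; the pattern size and codebook length are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        patterns = (rng.random((2000, 25)) < 0.3).astype(float)  # 5x5 binary patterns, flattened

        K = 64                                                   # codebook size
        codebook = patterns[rng.choice(len(patterns), K, replace=False)].copy()
        for _ in range(10):                                      # a few Lloyd (k-means) iterations
            dist = ((patterns[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            assign = dist.argmin(1)
            for k in range(K):
                members = patterns[assign == k]
                if len(members):
                    codebook[k] = members.mean(0)

        # At simulation time a data event is matched against K codewords
        # instead of scanning the whole pattern database.
        query = patterns[0]
        print(((codebook - query) ** 2).sum(1).argmin())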

  1. Schedule Optimization of Imaging Missions for Multiple Satellites and Ground Stations Using Genetic Algorithm

    Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee

    2018-04-01

    In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.

  2. A Multiple-Label Guided Clustering Algorithm for Historical Document Dating and Localization.

    He, Sheng; Samara, Petros; Burgers, Jan; Schomaker, Lambert

    2016-11-01

    It is of essential importance for historians to know the date and place of origin of the documents they study. It would be a huge advancement for historical scholars if it were possible to automatically estimate the geographical and temporal provenance of a handwritten document by inferring them from the handwriting style of such a document. We propose a multiple-label guided clustering algorithm to discover the correlations between the concrete low-level visual elements in historical documents and abstract labels, such as date and location. First, a novel descriptor, called histogram of orientations of handwritten strokes, is proposed to extract and describe the visual elements, which is built on a scale-invariant polar-feature space. In addition, the multi-label self-organizing map (MLSOM) is proposed to discover the correlations between the low-level visual elements and their labels in a single framework. Our proposed MLSOM can be used to predict the labels directly. Moreover, the MLSOM can also be considered as a pre-structured clustering method to build a codebook, which contains more discriminative information on date and geography. The experimental results on the medieval paleographic scale data set demonstrate that our method achieves state-of-the-art results.

  3. New Method of Calculating a Multiplication by using the Generalized Bernstein-Vazirani Algorithm

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed

    2018-06-01

    We present a new method of more speedily calculating a multiplication by using the generalized Bernstein-Vazirani algorithm and many parallel quantum systems. Given the set of real values a1, a2, a3, …, aN and a function g: R → {0, 1}, we determine the values g(a1), g(a2), g(a3), …, g(aN) simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Next, we consider them as a number in binary representation, M1 = (g(a1), g(a2), g(a3), …, g(aN)). By using M parallel quantum systems, we obtain M such numbers in binary representation simultaneously. The speed of obtaining the M numbers is shown to outperform the classical case by a factor of M. Finally, we calculate the product M1 × M2 × ⋯ × MM. The speed of obtaining the product is shown to outperform the classical case by a factor of N × M.

  4. DOA Estimation of Multiple LFM Sources Using a STFT-based and FBSS-based MUSIC Algorithm

    K. B. Cui

    2017-12-01

    Direction of arrival (DOA) estimation is an important problem in array signal processing. An effective multiple signal classification (MUSIC) method based on the short-time Fourier transform (STFT) and forward/backward spatial smoothing (FBSS) techniques is presented for the DOA estimation of multiple time-frequency (t-f) joint LFM sources. Previous work in the area, e.g., the STFT-MUSIC algorithm, cannot resolve sources that are largely or completely joint in the t-f domain, because it can only select single-source t-f points. The proposed method constructs the spatial t-f distributions (STFDs) by selecting multiple-source t-f points and uses the FBSS technique to solve the problem of rank loss. In this way, the STFT-FBSS-MUSIC algorithm can resolve t-f largely joint or completely joint LFM sources. In addition, the proposed algorithm has low computational complexity when resolving multiple LFM sources, because it reduces the number of eigendecompositions and spectrum searches. The performance of the proposed method is compared with that of existing t-f based MUSIC algorithms through computer simulations, and the results show its good performance.
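
    The FBSS ingredient named above has a compact standard form: covariance matrices of overlapping subarrays are averaged, forward and backward, to restore the rank that coherent (e.g., multipath) sources destroy. A minimal sketch; the subarray length L is a free parameter:

        import numpy as np

        def fbss(R, L):
            """Forward/backward spatially smoothed covariance over subarrays of length L."""
            M = R.shape[0]
            J = np.eye(L)[::-1]                   # exchange (flip) matrix
            Rs = np.zeros((L, L), dtype=complex)
            for i in range(M - L + 1):            # overlapping forward subarrays
                Rsub = R[i:i + L, i:i + L]
                Rs += Rsub + J @ Rsub.conj() @ J  # add the backward counterpart
            return Rs / (2 * (M - L + 1))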

  5. Two-step phase retrieval algorithm based on the quotient of inner products of phase-shifting interferograms

    Niu, Wenhu; Zhong, Liyun; Sun, Peng; Zhang, Wangping; Lu, Xiaoxu

    2015-01-01

    Based on the quotient of inner products, a simple and rapid algorithm is proposed to retrieve the measured phase from two-frame phase-shifting interferograms with unknown phase shifts. First, we filter the background of the interferograms with a Gaussian high-pass filter. Second, we calculate the inner products of the background-filtered interferograms. Third, we extract the phase shift from the quotient of the inner products and then calculate the measured phase with an arctangent function. Finally, we test the performance of the proposed algorithm by simulation and by an experiment on a vortex phase plate. Both the simulations and the experiment show that the proposed algorithm obtains the phase shifts and the measured phase rapidly, conveniently, and with high accuracy.
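
    Under the usual two-frame model I1 = A cos(φ) and I2 = A cos(φ + δ) (after background removal), the quotient of inner products ⟨I1, I2⟩/⟨I1, I1⟩ approximates cos δ, and φ then follows from an arctangent. A minimal sketch along these lines; mean subtraction stands in for the Gaussian high-pass filter, and the test phase is invented:

        import numpy as np

        y, x = np.mgrid[0:256, 0:256]
        phi = 6 * np.pi * x / 256 + 2 * np.pi * (x**2 + y**2) / 256**2  # test phase
        delta_true = 1.2                                                # unknown phase shift
        I1 = 1 + np.cos(phi)
        I2 = 1 + np.cos(phi + delta_true)

        I1f, I2f = I1 - I1.mean(), I2 - I2.mean()                 # crude background removal
        delta = np.arccos((I1f * I2f).sum() / (I1f * I1f).sum())  # quotient of inner products
        phase = np.arctan2(I1f * np.cos(delta) - I2f, I1f * np.sin(delta))
        print(f"estimated shift = {delta:.3f} rad (true {delta_true})")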

  6. SOLA-VOF: a solution algorithm for transient fluid flow with multiple free boundaries

    Nichols, B.D.; Hirt, C.W.; Hotchkiss, R.S.

    1980-08-01

    In this report a simple, but powerful, computer program is presented for the solution of two-dimensional transient fluid flow with free boundaries. The SOLA-VOF program, which is based on the concept of a fractional volume of fluid (VOF), is more flexible and efficient than other methods for treating arbitrary free boundaries. SOLA-VOF has a variety of user options that provide capabilities for a wide range of applications. Its basic mode of operation is for single fluid calculations having multiple free surfaces. However, SOLA-VOF can also be used for calculations involving two fluids separated by a sharp interface. In either case, the fluids may be treated as incompressible or as having limited compressibility. Surface tension forces with wall adhesion are permitted in both cases. Internal obstacles may be defined by blocking out any desired combination of cells in the mesh, which is composed of rectangular cells of variable size. SOLA-VOF is an easy-to-use program. Its logical parts are isolated in separate subroutines, and numerous special features have been included to simplify its operation, such as an automatic time-step control, a flexible mesh generator, extensive output capabilities, a variety of optional boundary conditions, and instructive internal documentation

  7. SINGLE VERSUS MULTIPLE TRIAL VECTORS IN CLASSICAL DIFFERENTIAL EVOLUTION FOR OPTIMIZING THE QUANTIZATION TABLE IN JPEG BASELINE ALGORITHM

    B Vinoth Kumar

    2017-07-01

    The quantization table is responsible for the compression/quality trade-off in the baseline Joint Photographic Experts Group (JPEG) algorithm, and generating it is therefore viewed as an optimization problem. In the literature, Classical Differential Evolution (CDE) has been found to be a promising algorithm for generating the optimal quantization table. However, the searching capability of CDE can be limited by the generation of a single trial vector per iteration, which in turn reduces the convergence speed. This paper studies the performance of CDE when multiple trial vectors are employed in a single iteration. An extensive performance analysis has been made between CDE and CDE with multiple trial vectors in terms of the optimization process, accuracy, convergence speed, and reliability. The analysis reveals that CDE with multiple trial vectors improves the convergence speed of CDE, and this is confirmed using a statistical hypothesis test (t-test).
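
    The variant is easy to express: for each target vector, generate K trial vectors instead of one and keep the best before the usual greedy selection. A minimal sketch in which the sphere function stands in for the quantization-table objective; all parameter values are illustrative:

        import random

        D, NP, F, CR, K = 10, 20, 0.5, 0.9, 4   # dims, population, DE parameters, trials per target

        def f(x):                                # stand-in objective (minimize)
            return sum(v * v for v in x)

        pop = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(NP)]
        for _ in range(200):
            for i in range(NP):
                best_trial = None
                for _ in range(K):               # K trial vectors instead of one
                    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                    jrand = random.randrange(D)
                    trial = [a[d] + F * (b[d] - c[d])
                             if (random.random() < CR or d == jrand) else pop[i][d]
                             for d in range(D)]
                    if best_trial is None or f(trial) < f(best_trial):
                        best_trial = trial
                if f(best_trial) <= f(pop[i]):   # greedy selection as in classical DE
                    pop[i] = best_trial
        print(min(map(f, pop)))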

  8. Validation of the Welch Allyn SureBP (inflation) and StepBP (deflation) algorithms by AAMI standard testing and BHS data analysis.

    Alpert, Bruce S

    2011-04-01

    We evaluated two new Welch Allyn automated blood pressure (BP) algorithms. The first, SureBP, estimates BP during cuff inflation; the second, StepBP, does so during deflation. We followed the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard for testing and data analysis. The data were also analyzed using the British Hypertension Society analysis strategy. We tested children, adolescents, and adults. The requirements of the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard were fulfilled with respect to BP levels, arm sizes, and ages. Association for the Advancement of Medical Instrumentation SP10 Method 1 data analysis was used. The mean±standard deviation for the device readings compared with auscultation by paired, trained, blinded observers in the SureBP mode were -2.14±7.44 mmHg for systolic BP (SBP) and -0.55±5.98 mmHg for diastolic BP (DBP). In the StepBP mode, the differences were -3.61±6.30 mmHg for SBP and -2.03±5.30 mmHg for DBP. Both algorithms achieved an A grade for both SBP and DBP by British Hypertension Society analysis. The SureBP inflation-based algorithm will be available in many new-generation Welch Allyn monitors. Its use will reduce the time it takes to estimate BP in critical patient care circumstances. The device will not need to inflate to excessive suprasystolic BPs to obtain the SBP values. Deflation is rapid once SBP has been determined, thus reducing the total time of cuff inflation and reducing patient discomfort. If the SureBP fails to obtain a BP value, the StepBP algorithm is activated to estimate BP by traditional deflation methodology.

  9. A simple test of choice stepping reaction time for assessing fall risk in people with multiple sclerosis.

    Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R

    2017-03-01

    Purpose To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method People with MS (n = 210, 21-74 y) performed the CSRT, sensorimotor, balance, and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse in the CSRT test than non-frequent fallers (0-2 falls), with the odds of suffering frequent falls increasing 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26). Conclusion The CSRT test showed good discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.

  10. GraDit: graph-based data repair algorithm for multiple data edits rule violations

    Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Constraint-based data cleaning captures data violations with respect to a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally, they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations. That algorithm uses an undirected hypergraph to represent rule violations. Nevertheless, it cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of the overall defined rules. This representation is used to capture the interaction between violated rules and clean rules. On the other hand, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with an undirected graph as the violation representation model gave better data quality than the algorithm with an undirected hypergraph as the representation model.

  11. A multiobjective non-dominated sorting genetic algorithm (NSGA-II) for the Multiple Traveling Salesman Problem

    Rubén Iván Bolaños

    2015-06-01

    This paper considers a multi-objective version of the Multiple Traveling Salesman Problem (MOmTSP). In particular, two objectives are considered: the minimization of the total traveled distance and the balance of the working times of the traveling salesmen. The problem is formulated as an integer multi-objective optimization model. A non-dominated sorting genetic algorithm (NSGA-II) is proposed to solve the MOmTSP. The solution scheme finds a set of ordered solutions in Pareto fronts by considering the concept of dominance. Tests on real-world instances and instances adapted from the literature show the effectiveness of the proposed algorithm.
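
    The sorting step at the heart of NSGA-II can be sketched compactly: solutions are grouped into successive Pareto fronts under the dominance relation. A minimal sketch, with invented (distance, workload-imbalance) pairs to be minimized:

        def dominates(p, q):
            return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

        def non_dominated_sort(objs):
            fronts, remaining = [], list(range(len(objs)))
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
                fronts.append(front)
                remaining = [i for i in remaining if i not in front]
            return fronts

        # (total distance, workload imbalance) for five candidate mTSP solutions
        objs = [(120, 8), (100, 15), (110, 9), (130, 5), (100, 16)]
        print(non_dominated_sort(objs))  # [[0, 1, 2, 3], [4]]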

  12. Prognosis of the individual course of disease--steps in developing a decision support tool for Multiple Sclerosis.

    Daumer, M; Neuhaus, A; Lederer, C; Scholz, M; Wolinsky, J S; Heiderhoff, M

    2007-05-08

    Multiple sclerosis is a chronic disease of uncertain aetiology. Variations in its disease course make it difficult to impossible to accurately determine the prognosis of individual patients. The Sylvia Lawry Centre for Multiple Sclerosis Research (SLCMSR) developed an "online analytical processing (OLAP)" tool that takes advantage of extant clinical trials data and allows one to model the near term future course of this chronic disease for an individual patient. For a given patient the most similar patients of the SLCMSR database are intelligently selected by a model-based matching algorithm integrated into an OLAP-tool to enable real time, web-based statistical analyses. The underlying database (last update April 2005) contains 1,059 patients derived from 30 placebo arms of controlled clinical trials. Demographic information on the entire database and the portion selected for comparison are displayed. The result of the statistical comparison is provided as a display of the course of Expanded Disability Status Scale (EDSS) for individuals in the database with regions of probable progression over time, along with their mean relapse rate. Kaplan-Meier curves for time to sustained progression in the EDSS and time to requirement of constant assistance to walk (EDSS 6) are also displayed. The software-application OLAP anticipates the input MS patient's course on the basis of baseline values and the known course of disease for similar patients who have been followed in clinical trials. This simulation could be useful for physicians, researchers and other professionals who counsel patients on therapeutic options. The application can be modified for studying the natural history of other chronic diseases, if and when similar datasets on which the OLAP operates exist.

  15. A New Multi-Step Iterative Algorithm for Approximating Common Fixed Points of a Finite Family of Multi-Valued Bregman Relatively Nonexpansive Mappings

    Wiyada Kumam

    2016-05-01

    In this article, we introduce a new multi-step iteration for approximating a common fixed point of a finite family of multi-valued Bregman relatively nonexpansive mappings in the setting of reflexive Banach spaces. We prove a strong convergence theorem for the proposed iterative algorithm under certain hypotheses. Additionally, we use our results to solve variational inequality problems and to find the zero points of maximal monotone operators. The theorems furnished in this work are new and generalize many well-known recent results in this field.

  16. Performance Optimization of a Solar-Driven Multi-Step Irreversible Brayton Cycle Based on a Multi-Objective Genetic Algorithm

    Ahmadi Mohammad Hosein

    2016-01-01

    A practical approach to a multi-step regenerative irreversible Brayton cycle, based on thermodynamics and on the optimization of thermal efficiency and normalized output power, is presented in this work. In the present study, thermodynamic analysis and an NSGA-II algorithm are coupled to determine the optimum values of thermal efficiency and normalized power output for a Brayton cycle system. Moreover, three well-known decision-making methods are employed to select definite answers from the outputs of the aforementioned approach. Finally, for error analysis, the values of the average and maximum error of the results are also calculated.

  17. An Improved Image Encryption Algorithm Based on Cyclic Rotations and Multiple Chaotic Sequences: Application to Satellite Images

    MADANI Mohammed

    2017-10-01

    In this paper, a new satellite image encryption algorithm based on the combination of multiple chaotic systems and a random cyclic rotation technique is proposed. Our contribution consists in implementing three different chaotic maps (logistic, sine, and standard), combined to improve the security of satellite images. Besides enhancing the encryption, the proposed algorithm also focuses on the efficiency of the ciphered images. Compared with classical encryption schemes based on multiple chaotic maps and the Rubik's cube rotation, our approach not only has the merits of chaotic systems, such as high sensitivity to initial values, unpredictability, and pseudo-randomness, but also other advantages, such as a higher number of permutations and better performance in Peak Signal to Noise Ratio (PSNR) and Maximum Deviation (MD).
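
    One building block named above, a logistic-map keystream, is simple to sketch: the chaotic orbit is quantized to bytes and XORed with the pixel values. The full scheme also uses sine and standard maps plus cyclic rotations, and the key values below are illustrative:

        import numpy as np

        def logistic_keystream(n, x0=0.61803, mu=3.99):
            x, out = x0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = mu * x * (1.0 - x)        # chaotic logistic iteration
                out[i] = int(x * 256) % 256   # quantize the state to one byte
            return out

        img = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
        ks = logistic_keystream(img.size).reshape(img.shape)
        cipher = img ^ ks                        # encrypt by XOR diffusion
        assert np.array_equal(cipher ^ ks, img)  # the same keystream decrypts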

  18. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Yunyi Li

    2017-12-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, combined with iteratively reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but also obtains a better recovery performance and achieves faster convergence than the conventional single-dictionary sparse-transform-based Lp case. Moreover, we conduct some applications of sparse image recovery and obtain good results by comparison with related work.

  19. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  20. A two-step nearest neighbors algorithm using satellite imagery for predicting forest structure within species composition classes

    Ronald E. McRoberts

    2009-01-01

    Nearest neighbors techniques have been shown to be useful for predicting multiple forest attributes from forest inventory and Landsat satellite image data. However, in regions lacking good digital land cover information, nearest neighbors selected to predict continuous variables such as tree volume must be selected without regard to relevant categorical variables such...
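
    A minimal sketch of the two-step idea, under the assumption (suggested by the title) that a species-composition class is predicted first and a continuous attribute such as volume is then predicted from the k nearest spectral neighbors within that class; all data here are invented:

        import numpy as np

        rng = np.random.default_rng(3)
        X_ref = rng.random((300, 6))                # spectral features of reference plots
        cls_ref = rng.integers(0, 3, 300)           # species-composition class labels
        vol_ref = 50 + 100 * X_ref[:, 0] + rng.normal(0, 5, 300)  # tree volume per plot

        def predict_volume(x, k=5):
            # Step 1: predict the composition class from the nearest reference plot.
            cls = cls_ref[np.argmin(((X_ref - x) ** 2).sum(1))]
            # Step 2: k-NN regression restricted to references of that class.
            pool = np.where(cls_ref == cls)[0]
            d = ((X_ref[pool] - x) ** 2).sum(1)
            return vol_ref[pool[np.argsort(d)[:k]]].mean()

        print(predict_volume(rng.random(6)))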

  1. An Interference-Aware Traffic-Priority-Based Link Scheduling Algorithm for Interference Mitigation in Multiple Wireless Body Area Networks

    Thien T. T. Le

    2016-12-01

    Currently, wireless body area networks (WBANs) are effectively used for health monitoring services. However, where WBANs are densely deployed, interference among WBANs can cause serious degradation of network performance and reliability. Inter-WBAN interference can be reduced by scheduling the communication links of interfering WBANs. In this paper, we propose an interference-aware traffic-priority-based link scheduling (ITLS) algorithm to overcome inter-WBAN interference in densely deployed WBANs. First, we model a network with multiple WBANs as an interference graph where node-level interference and traffic priority are taken into account. Second, we formulate link scheduling for multiple WBANs as an optimization model whose objective is to maximize the throughput of the entire network while ensuring the traffic priority of sensor nodes. Finally, we propose the ITLS algorithm for multiple WBANs on the basis of the optimization model. The proposed ITLS achieves high spatial reuse while considering traffic priority, packet length, and the number of interfered sensor nodes. Our simulation results show that the proposed ITLS significantly increases spatial reuse and network throughput with lower delay by mitigating inter-WBAN interference.

  2. A Multiple Time-Step Finite State Projection Algorithm for the Solution to the Chemical Master Equation

    Munsky, Brian; Khammash, Mustafa

    2006-01-01

    At the mesoscopic scale, chemical processes have probability distributions that evolve according to an infinite set of linear ordinary differential equations known as the chemical master equation (CME...

  3. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  4. Walk Ratio (Step Length/Cadence) as a Summary Index of Neuromotor Control of Gait: Application to Multiple Sclerosis

    Rota, Viviana; Perucca, Laura; Simone, Anna; Tesio, Luigi

    2011-01-01

    In healthy adults, the step length/cadence ratio [walk ratio (WR), in mm/(steps/min) and normalized for height] is known to be constant at around 6.5 mm/(steps/min). It is a speed-independent index of the overall neuromotor control of gait, inasmuch as it reflects energy expenditure, balance, between-step variability, and attentional demand. The speed…

  5. Phase gradient algorithm based on co-axis two-step phase-shifting interferometry and its application

    Wang, Yawei; Zhu, Qiong; Xu, Yuanyuan; Xin, Zhiduo; Liu, Jingye

    2017-12-01

    A phase gradient method based on co-axis two-step phase-shifting interferometry is used to reveal the detailed information of a specimen. In this method, the phase gradient distribution can be obtained by calculating only the first-order derivative and the radial Hilbert transform of the intensity difference between the two phase-shifted interferograms. The feasibility and accuracy of this method were fully verified by simulation results for a polystyrene sphere and a red blood cell. The results demonstrate that the phase gradient is sensitive to changes in refractive index and morphology. Because phase retrieval and tedious phase unwrapping are not required, the calculation is fast. In addition, co-axis interferometry has high spatial resolution.

  6. Cryptosystem based on two-step phase-shifting interferometry and the RSA public-key encryption algorithm

    Meng, X. F.; Peng, X.; Cai, L. Z.; Li, A. M.; Gao, Z.; Wang, Y. R.

    2009-08-01

    A hybrid cryptosystem is proposed, in which one image is encrypted to two interferograms with the aid of double random-phase encoding (DRPE) and two-step phase-shifting interferometry (2-PSI), then three pairs of public-private keys are utilized to encode and decode the session keys (geometrical parameters, the second random-phase mask) and interferograms. In the stage of decryption, the ciphered image can be decrypted by wavefront reconstruction, inverse Fresnel diffraction, and real amplitude normalization. This approach can successfully solve the problem of key management and dispatch, resulting in increased security strength. The feasibility of the proposed cryptosystem and its robustness against some types of attack are verified and analyzed by computer simulations.

  7. Evaluation of expansion algorithm of measurement range suited for 3D shape measurement using two pitches of projected grating with light source-stepping method

    Sakaguchi, Toshimasa; Fujigaki, Motoharu; Murata, Yorinobu

    2015-03-01

    Accurate and wide-range shape measurement methods are required in industrial fields. The same technique can be used for shape measurement of a human body for the garment industry. Compact 3D shape measurement equipment is also required for embedding in inspection systems. A shape measurement by a phase-shifting method can measure the shape with high spatial resolution because the coordinates can be obtained pixel by pixel. A key device for developing compact equipment is the grating projector. The authors developed a linear LED projector and proposed a light source-stepping method (LSSM) using this projector. With this method, the shape measurement equipment can be produced at low cost and in compact form, without any mechanical phase-shifting system; it also enables 3D shape measurement in a very short time by switching the light sources quickly. A phase unwrapping method is necessary to widen the measurement range with constant accuracy for phase-shifting methods. A general phase unwrapping method with two different grating pitches is often used and is one of the simplest phase unwrapping approaches. It is, however, difficult to apply this conventional phase unwrapping algorithm to the LSSM. The authors therefore developed an expansion unwrapping algorithm for the LSSM. In this paper, an expansion algorithm of the measurement range suited for 3D shape measurement using two pitches of projected grating with the LSSM is evaluated.
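
    For reference, the general two-pitch unwrapping idea that the abstract contrasts with its LSSM-specific algorithm can be sketched as follows: the beat of the two wrapped phases gives a coarse phase with a long equivalent pitch, which selects the fringe order of the fine phase. The pitches and the noiseless signals are illustrative assumptions:

        import numpy as np

        p_fine, p_coarse = 8.0, 9.0                    # grating pitches (pixels)
        x = np.arange(72, dtype=float)                 # one beat period: 8*9/(9-8) = 72 px
        phi_f = 2 * np.pi * x / p_fine                 # true (unwrapped) fine phase
        w_f = np.angle(np.exp(1j * phi_f))             # wrapped fine phase
        w_c = np.angle(np.exp(1j * 2 * np.pi * x / p_coarse))  # wrapped coarse phase

        beat = np.mod(w_f - w_c, 2 * np.pi)            # coarse phase of the equivalent pitch
        p_eq = p_fine * p_coarse / (p_coarse - p_fine)
        order = np.round((beat * p_eq / p_fine - w_f) / (2 * np.pi))  # fringe order
        unwrapped = w_f + 2 * np.pi * order
        print(np.allclose(unwrapped, phi_f))           # True in this noiseless example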

  8. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple-scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant, which correspond to volumes of low dose. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm, with the worst case being a source/implant located well within a patient.

  9. Bottom-up GGM algorithm for constructing multiple layered hierarchical gene regulatory networks

    Multilayered hierarchical gene regulatory networks (ML-hGRNs) are very important for understanding the genetic regulation of biological pathways. However, there are currently no computational algorithms available for directly building ML-hGRNs that regulate biological pathways. A bottom-up graphical Gaussian...

  10. A Two-Step Strategy for System Identification of Civil Structures for Structural Health Monitoring Using Wavelet Transform and Genetic Algorithms

    Carlos Andres Perez-Ramirez

    2017-01-01

    Nowadays, the accurate identification of natural frequencies and damping ratios plays an important role in smart civil engineering, since they can be used for seismic design, vibration control, and condition assessment, among others. To achieve this in a practical way, the structure must be instrumented and techniques applied that are able to deal with noise-corrupted and nonlinear signals, as these are common features of real-life civil structures. In this article, a two-step strategy is proposed for performing accurate modal parameter identification in an automated manner. In the first step, the measured signals are obtained and decomposed using the natural excitation technique and the synchrosqueezed wavelet transform, respectively. The second step then estimates the modal parameters by solving an optimization problem with a genetic algorithm-based approach, where the micropopulation concept is used to improve the convergence speed as well as the accuracy of the estimated values. The accuracy and effectiveness of the proposal are tested using both the simulated response of a benchmark structure and the measurements of a real eight-story building. The obtained results show that the proposed strategy can estimate the modal parameters accurately, indicating that the proposal can be considered an alternative for performing this task.

  11. Algorithm of axial fuel optimization based in progressive steps of turned search; Algoritmo de optimizacion axial de combustible basado en etapas progresivas de busqueda de entorno

    Martin del Campo, C.; Francois, J.L. [Laboratorio de Analisis en Ingenieria de Reactores Nucleares, FI-UNAM, Paseo Cuauhnahuac 8532, Jiutepec, Morelos (Mexico)

    2003-07-01

    The development of an algorithm for the axial fuel optimization of boiling water reactors (BWR) is presented. The algorithm is based on a sequence of optimization stages in which the best solution of each stage is the starting point of the next. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization advances through the stages, the fineness of the evaluation of the investigated designs is increased. The algorithm comprises three stages: genetic algorithms are used in the first, and tabu search in the two that follow. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle, without violating any of the design-basis limits. In the following stages, the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), with the average enrichment obtained by the best design of the first stage, together with the other limits, imposed as constraints. The third stage, very similar to the second, begins with the design of the previous stage but evaluates the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of Unit 1 of the Laguna Verde power plant (U1-CLV) is presented. The obtained results show an advance in the handling of optimization methods and in the construction of the objective functions to be used for the different design stages of the fuel assemblies. (Author)

  12. Use of genomic recursions and algorithm for proven and young animals for single-step genomic BLUP analyses--a simulation study.

    Fragomeni, B O; Lourenco, D A L; Tsuruta, S; Masuda, Y; Aguilar, I; Misztal, I

    2015-10-01

    The purpose of this study was to examine the accuracy of genomic selection via single-step genomic BLUP (ssGBLUP) when the direct inverse of the genomic relationship matrix (G) is replaced by an approximation of G(-1) based on recursions for young genotyped animals conditioned on a subset of proven animals, termed the algorithm for proven and young animals (APY). With an efficient implementation, this algorithm has a cost that is cubic in the number of proven animals and linear in the number of young animals. Ten duplicate data sets mimicking a dairy cattle population were simulated. In the first scenario, genomic information for 20k genotyped bulls, divided into 7k proven and 13k young bulls, was generated for each replicate. In the second scenario, 5k genotyped cows with phenotypes were included in the analysis as young animals. Accuracies (averaged over the 10 replicates) of regular EBV were 0.72 and 0.34 for proven and young animals, respectively. When genomic information was included, they increased to 0.75 and 0.50. No differences were observed between genomic EBV (GEBV) obtained with the regular G(-1) and with the approximated G(-1) via the recursive method. In the second scenario, accuracies of GEBV (0.76, 0.51 and 0.59 for proven bulls, young males and young females, respectively) were also higher than those of EBV (0.72, 0.35 and 0.49). Again, no differences between GEBV with the regular G(-1) and with recursions were observed. With the recursive algorithm, the number of iterations to achieve convergence was reduced from 227 to 206 in the first scenario and from 232 to 209 in the second scenario. Cows can be treated as young animals in APY without reducing accuracy. The proposed algorithm can be implemented to reduce computing costs and to overcome current limitations on the number of genotyped animals in the ssGBLUP method. © 2015 Blackwell Verlag GmbH.

  13. A Novel Sensor Selection and Power Allocation Algorithm for Multiple-Target Tracking in an LPI Radar Network

    Ji She

    2016-12-01

    Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. This algorithm minimizes the total transmitted power of the radar network on the basis of a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate the target parameters, and it can be calculated predictively from the estimate of the target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex; it can be solved by separating the power allocation problem from the sensor selection problem. To be specific, the power allocation problem can be solved using the bisection method for each sensor selection scheme, and the sensor selection problem can then be solved by a lower-complexity algorithm based on the allocated powers. The simulation results show that the proposed algorithm can effectively reduce the total transmitted power of a radar network, which is conducive to improving LPI performance.
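
    The bisection step is straightforward when the MI is monotone in transmit power: search for the smallest power whose predicted MI meets the threshold. A minimal sketch with an invented log-growth MI model standing in for the paper's target model:

        import math

        MI_THRESHOLD = 2.0       # bits, assumed requirement
        GAIN, NOISE = 0.05, 1.0  # assumed path gain and noise power

        def mutual_information(power):
            return math.log2(1.0 + GAIN * power / NOISE)  # monotone in power

        def min_power(lo=0.0, hi=1000.0, tol=1e-6):
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if mutual_information(mid) >= MI_THRESHOLD:
                    hi = mid   # feasible: try less power
                else:
                    lo = mid   # infeasible: need more power
            return hi

        print(f"minimum power = {min_power():.3f}")  # analytic answer: (2**2 - 1)/0.05 = 60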

  14. Hierarchical multiple binary image encryption based on a chaos and phase retrieval algorithm in the Fresnel domain

    Wang, Zhipeng; Hou, Chenxia; Lv, Xiaodong; Wang, Hongjuan; Gong, Qiong; Qin, Yi

    2016-01-01

    Based on chaos and a phase retrieval algorithm, a hierarchical multiple binary image encryption scheme is proposed. In the encryption process, each plaintext is encrypted into a diffraction intensity pattern by two chaos-generated random phase masks (RPMs). Thereafter, the captured diffraction intensity patterns are partially selected by different binary masks and then combined together to form a single intensity pattern. The combined intensity pattern is saved as the ciphertext. For decryption, an iterative phase retrieval algorithm is performed, in which a support constraint in the output plane and a median filtering operation are utilized to achieve a rapid convergence rate without a stagnation problem. The proposed scheme has a simple optical setup and large encryption capacity. In particular, it is well suited for constructing a hierarchical security system. The security and robustness of the proposal are also investigated.

  15. A genetic algorithm for multiple relay selection in two-way relaying cognitive radio networks

    Alsharoa, Ahmad M.; Ghazzai, Hakim; Alouini, Mohamed-Slim

    2013-01-01

    In this paper, we investigate a multiple relay selection scheme for two-way relaying cognitive radio networks where primary users and secondary users operate on the same frequency band. More specifically, cooperative relays using Amplify-and-Forward…

  16. Clustering and Genetic Algorithm Based Hybrid Flowshop Scheduling with Multiple Operations

    Yingfeng Zhang

    2014-01-01

    This research is motivated by a flowshop scheduling problem at our collaborating manufacturing company for aeronautic products. The heat-treatment stage (HTS) and precision forging stage (PFS) of the case are selected as a two-stage hybrid flowshop system. In HTS, there are four parallel machines, and each machine can process a batch of jobs simultaneously. In PFS, there are two machines. Each machine can install any of four modules for processing workpieces of different sizes. The problem is characterized by many constraints, such as batching operations, a blocking environment, and setup-time and working-time limitations of the modules. In order to deal with these special characteristics, a clustering and genetic algorithm approach is used to compute good solutions for the two-stage hybrid flowshop problem. The clustering is used to group the jobs according to the processing ranges of the different modules of PFS. The genetic algorithm is used to schedule the optimal sequence of the grouped jobs for the HTS and PFS. Finally, a case study is used to demonstrate the efficiency and effectiveness of the designed genetic algorithm.

  17. Advanced Receiver Design for Mitigating Multiple RF Impairments in OFDM Systems: Algorithms and RF Measurements

    Adnan Kiayani

    2012-01-01

    Full Text Available Direct-conversion architecture-based orthogonal frequency division multiplexing (OFDM) systems are troubled by impairments such as in-phase and quadrature-phase (I/Q) imbalance and carrier frequency offset (CFO). These impairments are unavoidable in any practical implementation and severely degrade the obtainable link performance. In this contribution, we study the joint impact of frequency-selective I/Q imbalance at both the transmitter and the receiver, together with channel distortions and CFO error. Two estimation and compensation structures based on different pilot patterns are proposed for coping with such impairments. The first structure is based on a preamble pilot pattern while the second assumes a sparse pilot pattern. The proposed estimation/compensation structures are able to separate the individual impairments, which are then compensated in the reverse order of their appearance at the receiver. We present time-domain estimation and compensation algorithms for receiver I/Q imbalance and CFO, and propose low-complexity algorithms for the compensation of channel distortions and transmitter I/Q imbalance. The performance of the compensation algorithms is investigated with computer simulations as well as with practical radio frequency (RF) measurements. The performance results indicate that the proposed techniques provide close to ideal performance both in simulations and in measurements.

  18. Image Steganography of Multiple File Types with Encryption and Compression Algorithms

    Ernest Andreigh C. Centina

    2017-05-01

    Full Text Available The goals of this study were to develop a system for securing files through image steganography integrated with cryptography, utilizing the ZLIB algorithm for compressing and decompressing secret files, the DES algorithm for encryption and decryption, and the Least Significant Bit algorithm for file embedding and extraction, to protect highly confidential files from exploitation by unauthorized persons. The system is in accordance with ISO 9126 international quality standards. Every quality criterion of the system was evaluated by 10 Information Technology professionals, and the arithmetic mean and standard deviation of the survey were computed. The results show that most of them strongly agreed that the system is highly effective with respect to the Functionality, Reliability, Usability, Efficiency, Maintainability and Portability criteria of the ISO 9126 standard. The system was found to be a useful tool for both government agencies and private institutions, for it can conceal not only the content of a message but also the existence of that particular message or file, maintaining the privacy of highly confidential and sensitive files against unauthorized access.
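
    The embedding/extraction pipeline is easy to sketch. The toy code below mirrors the record's ZLIB compression and LSB embedding steps on a NumPy image array; the DES encryption stage is deliberately omitted to keep the sketch dependency-free, so this is not the authors' full system.

```python
import zlib
import numpy as np

def embed(cover, payload: bytes):
    """Hide a compressed payload in the least significant bits of a uint8
    image array. Compression mirrors the record's ZLIB step; the DES
    encryption step is omitted here.
    """
    data = zlib.compress(payload)
    header = len(data).to_bytes(4, "big")           # 4-byte length prefix
    bits = np.unpackbits(np.frombuffer(header + data, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(cover.shape)

def extract(stego):
    flat = stego.flatten() & 1                      # collect LSBs
    length = int.from_bytes(np.packbits(flat[:32]).tobytes(), "big")
    body = np.packbits(flat[32: 32 + 8 * length]).tobytes()
    return zlib.decompress(body)

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed(cover, b"highly confidential file contents")
assert extract(stego) == b"highly confidential file contents"
```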

  19. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution: an algorithm may work very well on one set of images, say with illumination changes, but may not work properly on another set of variations, such as expression changes. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also incorporating the question of the suitability of any given classifier for this task. The study is based on the outcome of a comprehensive comparative analysis of combinations of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that successfully handles varying facial image conditions of illumination, aging and facial expression.
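
    The weighted-sum fusion strategy admits a compact illustration. The sketch below min-max normalises per-classifier similarity scores before combining them; the scores and weights are hypothetical, and the record's subspace feature extraction is not reproduced.

```python
import numpy as np

def weighted_sum_fusion(score_lists, weights):
    """Combine per-classifier similarity scores for one probe against a
    gallery via a weighted-sum rule. Scores are min-max normalised per
    classifier so that different distance metrics become comparable.
    """
    fused = np.zeros(len(score_lists[0]), dtype=float)
    for scores, w in zip(score_lists, weights):
        s = np.asarray(scores, dtype=float)
        span = s.max() - s.min()
        fused += w * (s - s.min()) / (span if span else 1.0)
    return fused.argmax(), fused

# Hypothetical similarity scores of one probe against 4 gallery subjects
# from two subspace classifiers (e.g. PCA+cosine and LDA+Euclidean).
clf_a = [0.2, 0.9, 0.4, 0.1]
clf_b = [0.3, 0.7, 0.8, 0.2]
print(weighted_sum_fusion([clf_a, clf_b], weights=[0.6, 0.4]))  # -> subject 1
```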

  20. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution times of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and by a factor of 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the GPU's built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across the cards, with an almost linear speedup in the number of cards achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the Pyramidal Lucas-Kanade optical flow implementation in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels, a resolution commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
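
    For reference, the following NumPy sketch is a plain CPU version of full-grid-search SAD block matching on an integer grid; it is a readable counterpart of, not the authors', CUDA kernel, and the block and search-window sizes are illustrative.

```python
import numpy as np

def full_search_sad(ref, cur, block=8, search=4):
    """For each block in `cur`, scan a ±`search` pixel window in `ref` on
    an integer grid and keep the displacement with minimum SAD.
    """
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(int)
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and 0 <= x and y + block <= h and x + block <= w:
                        cand = ref[y:y + block, x:x + block].astype(int)
                        sad = np.abs(blk - cand).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_dv
    return vectors

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, (1, 2), axis=(0, 1))       # globally shifted frame
print(full_search_sad(ref, cur)[2, 2])         # interior block -> [-1 -2]
```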

  1. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, a computer-generated hologram carrying the information of three primitive images is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted given the fractional orders and the rules of the MLCA. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.

  2. Long-term prediction of chaotic time series with multi-step prediction horizons by a neural network with Levenberg-Marquardt learning algorithm

    Mirzaee, Hossein

    2009-01-01

    The Levenberg-Marquardt learning algorithm is applied to train a multilayer perceptron with three hidden layers, each with ten neurons, in order to carefully map the structure of a chaotic time series such as the Mackey-Glass series. The MLP network is first trained with 1000 data points and then tested on the next 500. The trained and tested network is then applied to the long-term prediction of the next 120 data points following the test data. Prediction proceeds recursively: the first network input is formed from the last four values of the test data; each predicted value is then shifted into the regression vector that forms the network input, so that after the first four prediction steps the input vector consists entirely of predicted values.
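
    The recursive prediction scheme can be captured in a few lines. In the sketch below, `f` stands in for the trained MLP as a generic one-step predictor over a 4-lag window; the linear AR rule used in the demo is only a placeholder, not the record's network.

```python
def predict_ahead(f, last_four, horizon=120):
    """Recursive multi-step forecasting as described in the record:
    start from the last four observed values, then repeatedly feed each
    prediction back into the 4-lag regression vector. `f` is any
    trained one-step predictor mapping a 4-element window to the next value.
    """
    window = list(last_four)
    out = []
    for _ in range(horizon):
        y = f(window)                 # one-step-ahead prediction
        out.append(y)
        window = window[1:] + [y]     # shift the prediction into the input
    return out

# Toy stand-in for the trained MLP: a fixed AR(4) rule.
f = lambda w: 0.5 * w[-1] + 0.3 * w[-2] + 0.1 * w[-3] + 0.05 * w[-4]
print(predict_ahead(f, [1.0, 0.9, 0.8, 0.7], horizon=5))
```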

  3. A rapid two-step algorithm detects and identifies clinical macrolide and beta-lactam antibiotic resistance in clinical bacterial isolates.

    Lu, Xuedong; Nie, Shuping; Xia, Chengjing; Huang, Lie; He, Ying; Wu, Runxiang; Zhang, Li

    2014-07-01

    Aiming to identify macrolide and beta-lactam resistance in clinical bacterial isolates rapidly and accurately, a two-step algorithm was developed based on the detection of eight antibiotic resistance genes. Targeting genes linked to bacterial macrolide (msrA, ermA, ermB, and ermC) and beta-lactam (blaTEM, blaSHV, blaCTX-M-1, blaCTX-M-9) resistance, the method comprises a multiplex real-time PCR, a melting temperature profile analysis, and a liquid bead microarray assay. The liquid bead microarray assay is applied only when an indistinguishable Tm profile is observed. The clinical validity of the method was assessed on clinical bacterial isolates. Of the 580 isolates analyzed, 75% were identified by multiplex real-time PCR with melting temperature analysis alone, while the remaining 25% also required the liquid bead microarray assay. Compared with the traditional phenotypic antibiotic susceptibility test, an overall agreement of 81.2% (kappa=0.614, 95% CI=0.550-0.679) was observed, with a sensitivity of 87.7% and a specificity of 73%. In addition, the average test turnaround time is 3.9 h, much shorter than the more than 24 h required for traditional phenotypic tests. With its shorter operating time and comparably high sensitivity and specificity, the two-step algorithm provides an efficient tool for the rapid determination of macrolide and beta-lactam antibiotic resistance in clinical bacterial isolates. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Changing physical activity behaviour for people with multiple sclerosis: protocol of a randomised controlled feasibility trial (iStep-MS).

    Ryan, Jennifer M; Fortune, Jennifer; Stennett, Andrea; Kilbride, Cherry; Anokye, Nana; Victor, Christina; Hendrie, Wendy; Abdul, Mohamed; DeSouza, Lorraine; Lavelle, Grace; Brewin, Debbie; David, Lee; Norris, Meriel

    2017-11-15

    Although physical activity may reduce disease burden, fatigue and disability, and improve quality of life among people with multiple sclerosis (MS), many people with MS are physically inactive and spend significant time in sedentary behaviour. Behaviour change interventions may assist people with MS to increase physical activity and reduce sedentary behaviour. However, few studies have investigated their effectiveness using objective measures of physical activity, particularly in the long term. Further, interventions that have proven effective in the short term may not be feasible in clinical practice because of the large amount of support provided. The iStep-MS trial aims to determine the safety, feasibility and acceptability of a behaviour change intervention to increase physical activity and reduce sedentary behaviour among people with MS. Sixty people with MS will be randomised (1:1 ratio) to receive a 12-week intervention or usual care only. The intervention consists of four physical activity consultations with a physiotherapist supported by a handbook and pedometer. Outcomes assessed at baseline, 12 weeks and 9 months are physical activity (ActiGraph wGT3X-BT accelerometer), sedentary behaviour (activPAL3µ), self-reported activity and sitting time, walking capability, fatigue, self-efficacy, participation, quality of life and health service use. The safety of the intervention will be determined by assessing change in pain and fatigue and the incidence of adverse events during the follow-up period. A parallel process evaluation will assess the feasibility and acceptability of the intervention through assessment of fidelity to the programme and semistructured interviews exploring participants' and therapists' experiences of the intervention. The feasibility of conducting an economic evaluation will be determined by collecting data on quality of life and resource use. Research ethics committee approval has been granted by Brunel University London. Results of

  5. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    A. Yu. Popov

    2014-01-01

    Full Text Available Algorithms for optimization in networks and directed graphs find broad application in practical tasks. However, with the large-scale introduction of information technologies into human activity, the requirements on input data volumes and solution retrieval rates keep growing. Although a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, solving key optimization problems at realistic problem sizes remains difficult. In this regard, the search for new, more efficient computing structures, as well as the updating of known algorithms, is of great current interest. This work considers an implementation of the maximum-flow search algorithm on a directed graph for the Multiple Instruction, Single Data (MISD) computer system developed at BMSTU. The key feature of this architecture is deep hardware support for operations over sets and data structures. Storage of and access to these are realized by a specialized structure processor (SP), which is capable of performing at the hardware level such operations as add, delete, search, intersect, complement, and merge. The advantage of such a system is the possibility of executing the set-access parts of computing tasks in parallel with the arithmetic and logical processing of information. Previous works presented the general principles of organizing the computational process and the features of programs implemented in the MISD system, described the structure and operating principles of the structure processor, showed the general principles of solving graph tasks in such a system, and experimentally studied the efficiency of the resulting algorithms. This work gives the command formats of the SP processor, offers a technique for updating the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
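
    As a conventional-CPU reference for the algorithm discussed above, here is a short Edmonds-Karp realisation of the Ford-Fulkerson method on an adjacency matrix; the example capacities are arbitrary, and nothing here models the SP hardware.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly find a shortest augmenting path by BFS
    and push the bottleneck residual capacity along it.
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n                  # BFS for an augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                   # no augmenting path left
        v, aug = t, float("inf")           # bottleneck along the path
        while v != s:
            u = parent[v]
            aug = min(aug, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                      # push flow along the path
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug

cap = [[0, 3, 2, 0], [0, 0, 1, 2], [0, 0, 0, 2], [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # -> 4
```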

  6. Fast Simulation of 3-D Surface Flanging and Prediction of the Flanging Lines Based On One-Step Inverse Forming Algorithm

    Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping

    2005-01-01

    A fast simulation scheme for 3D curved binder flanging and blank shape prediction of sheet metal based on the one-step inverse finite element method is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can be used to simulate 3D flanging with a complex curved binder shape and is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods, such as analytic algorithms and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines while simulating the flanging process, so the prediction time for flanging lines is markedly decreased. Two typical 3D curved binder flanging cases, exhibiting both stretch and shrink characteristics, are simulated with both the present scheme and an incremental FE non-inverse algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme

  7. MULTIGRAIN: a smoothed particle hydrodynamic algorithm for multiple small dust grains and gas

    Hutchison, Mark; Price, Daniel J.; Laibe, Guillaume

    2018-05-01

    We present a new algorithm, MULTIGRAIN, for modelling the dynamics of an entire population of small dust grains immersed in gas, typical of conditions that are found in molecular clouds and protoplanetary discs. The MULTIGRAIN method is more accurate than single-phase simulations because the gas experiences a backreaction from each dust phase and communicates this change to the other phases, thereby indirectly coupling the dust phases together. The MULTIGRAIN method is fast, explicit and low storage, requiring only an array of dust fractions and their derivatives defined for each resolution element.

  8. HyDR-MI : A hybrid algorithm to reduce dimensionality in multiple instance learning

    Zafra, A.; Pechenizkiy, M.; Ventura, S.

    2013-01-01

    Feature selection techniques have been successfully applied in many applications for making supervised learning more effective and efficient. These techniques have been widely used and studied in traditional supervised learning settings, where each instance is expected to have a label. In multiple

  9. A multiple-choice knapsack based algorithm for CDMA downlink rate differentiation under uplink coverage restrictions

    Endrayanto, A.I.; Bumb, A.F.; Boucherie, Richardus J.

    2004-01-01

    This paper presents an analytical model for downlink rate allocation in Code Division Multiple Access (CDMA) mobile networks. By discretizing the coverage area into small segments, the transmit power requirements are characterized via a matrix representation that separates user and system

  10. Bees Algorithm for Construction of Multiple Test Forms in E-Testing

    Songmuang, Pokpong; Ueno, Maomi

    2011-01-01

    The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…

  11. Optimized LTE cell planning for multiple user density subareas using meta-heuristic algorithms

    Ghazzai, Hakim

    2014-09-01

    Base station deployment in cellular networks is one of the most fundamental problems in network design. This paper proposes a novel method for the cell planning problem in fourth-generation 4G-LTE cellular networks using meta-heuristic algorithms. In this approach, we aim to satisfy both coverage and cell capacity constraints simultaneously by formulating a practical optimization problem. We start by performing a typical coverage and capacity dimensioning to identify the initial required number of base stations. Afterwards, we implement a Particle Swarm Optimization algorithm or the recently proposed Grey Wolf Optimizer to find the optimal base station locations that satisfy both problem constraints in the area of interest, which can be divided into several subareas with different user densities. Subsequently, an iterative approach is executed to eliminate any redundant base stations. We have also performed Monte Carlo simulations to study the performance of the proposed scheme and computed the average number of users in outage. Results show that our proposed approach respects the desired network quality of service in all cases, even for large-scale problems.
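
    Of the two meta-heuristics the paper evaluates, PSO is simple to sketch. The code below is a generic global-best PSO with standard inertia and acceleration constants, demonstrated on a toy stand-in objective (placing a single base station among hypothetical user locations); the paper's coverage and capacity model is not reproduced.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Plain global-best PSO: velocities blend inertia, attraction to each
    particle's best position, and attraction to the swarm's best position.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))     # positions
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy stand-in for cell planning: place one base station (x, y) to
# minimise the total squared distance to hypothetical user locations.
users = np.random.default_rng(1).uniform(-1, 1, (50, 2))
print(pso(lambda p: ((users - p) ** 2).sum(), dim=2))
```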

  12. A bio-inspired swarm robot coordination algorithm for multiple target searching

    Meng, Yan; Gan, Jing; Desai, Sachi

    2008-04-01

    The coordination of a multi-robot system searching for multiple targets is challenging in a dynamic environment, since the multi-robot system demands group coherence (agents need the incentive to work together faithfully) and group competence (agents need to know how to work together well). In our previously proposed bio-inspired coordination method, Local Interaction through Virtual Stigmergy (LIVS), one problem is the considerable randomness of robot movement during coordination, which may lead to more power consumption and longer searching times. To address these issues, an adaptive LIVS (ALIVS) method is proposed in this paper, which not only considers travel cost and target weight but also predicts the target/robot ratio and potential robot redundancy with respect to the detected targets. Furthermore, a dynamic weight adjustment is applied to improve the searching performance. This new method is truly distributed: each robot makes its own decision based on its local sensing information and the information from its neighbors. Basically, each robot communicates only with its neighbors through a virtual stigmergy mechanism and makes its local movement decision based on a Particle Swarm Optimization (PSO) algorithm. The proposed ALIVS algorithm has been implemented on the embodied robot simulator Player/Stage in a target-searching scenario. The simulation results demonstrate its efficiency and robustness, in a power-efficient manner, under real-world constraints.

  13. Analysis of Naïve Bayes Algorithm for Email Spam Filtering across Multiple Datasets

    Fitriah Rusland, Nurul; Wahid, Norfaradilla; Kasim, Shahreen; Hafit, Hanayanti

    2017-08-01

    E-mail spam continues to be a problem on the Internet. Spam e-mail may contain many copies of the same message, commercial advertisements or other irrelevant content such as pornographic material. In previous research, different filtering techniques were used to detect such e-mails, including Random Forest, Naïve Bayes, Support Vector Machine (SVM) and neural networks. In this research, we test the Naïve Bayes algorithm for e-mail spam filtering on two datasets, Spam Data and SPAMBASE [8], and evaluate its performance. Performance on the datasets is evaluated in terms of accuracy, recall, precision and F-measure. Our research uses the WEKA tool for the evaluation of the Naïve Bayes algorithm on both datasets. The results show that the type of e-mail and the number of instances in the dataset influence the performance of Naïve Bayes.
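
    Since the record relies on the WEKA tool, the following is only an equivalent-in-spirit sketch in Python: a multinomial Naïve Bayes spam filter over a tiny hypothetical corpus standing in for the Spam Data and SPAMBASE sets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus; 1 = spam, 0 = ham.
texts = ["win money now", "cheap pills offer", "meeting at noon",
         "project report attached", "free prize claim now", "lunch tomorrow?"]
labels = [1, 1, 0, 0, 1, 0]

# Bag-of-words counts feed a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["claim your free money", "see report at the meeting"]))
```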

  14. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    A. Yu. Popov

    2015-01-01

    Full Text Available Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system with a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit conclusions to be drawn regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support for the operations of discrete mathematics, which is reflected in the use of special facilities for processing sets and data structures. Within the framework of the project a special device was designed, a structure processor (SP), which improved performance without limiting the scope of applications of such a computer system. Previous works presented the basic principles of organizing the computational process in the MISD (Multiple Instructions, Single Data) system, showed the structure and features of the structure processor, and described the general principles of solving discrete optimization problems on graphs. This paper examines two minimum-spanning-tree search algorithms, namely Kruskal's and Prim's. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents results of an experimental comparison of the MISD system's performance in coprocessor mode with mainframes.
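
    For readers without access to the MISD hardware, a plain-Python Kruskal reference is shown below, with a path-compressing union-find carrying out the set operations that the SP accelerates; the edge list is illustrative.

```python
def kruskal(n, edges):
    """Kruskal's algorithm: sort edges by weight and add each edge that
    joins two different components, tracked with union-find.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree, total = [], 0
    for w, u, v in sorted(edges):           # edges as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
    return tree, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (7, 1, 3), (3, 2, 3)]
print(kruskal(4, edges))   # MST weight 6: edges (0,1), (1,2), (2,3)
```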

  15. TESTING THE GENERALIZATION EFFICIENCY OF OIL SLICK CLASSIFICATION ALGORITHM USING MULTIPLE SAR DATA FOR DEEPWATER HORIZON OIL SPILL

    C. Ozkan

    2012-07-01

    Full Text Available Marine oil spills due to releases of crude oil from tankers, offshore platforms, drilling rigs, wells, etc. seriously affect the fragile marine and coastal ecosystem and cause political and environmental concern. A catastrophic explosion and subsequent fire on the Deepwater Horizon oil platform caused the platform to burn and sink, and oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m3 of crude oil into the Gulf of Mexico. Today, space-borne SAR sensors are extensively used for the detection of oil spills in the marine environment, as they are independent of sunlight, unaffected by cloud cover, and more cost-effective than air patrols because they cover large areas. In this study, the generalization ability of an object-based classification algorithm was tested for oil spill detection using multiple SAR images. Among many geometrical, physical and textural features, the more distinctive ones were selected to distinguish oil slicks from look-alike objects. The tested classifier was constructed from a Multilayer Perceptron Artificial Neural Network trained by the ABC, LM and BP optimization algorithms. The training data were derived from SAR imagery of the oil spill that originated off Lebanon in 2007. The classifier was then applied to the Deepwater Horizon oil spill data in the Gulf of Mexico, on RADARSAT-2 and ALOS PALSAR images, to demonstrate the generalization efficiency of the oil slick classification algorithm.

  16. Hybrid Multiple Soft-Sensor Models of Grinding Granularity Based on Cuckoo Searching Algorithm and Hysteresis Switching Strategy

    Jie-Sheng Wang

    2015-01-01

    Full Text Available According to the characteristics of the grinding process and the accuracy requirements of technical indicators, a hybrid multiple soft-sensor modeling method for grinding granularity is proposed based on the cuckoo searching (CS) algorithm and a hysteresis switching (HS) strategy. Firstly, a mechanism soft-sensor model of grinding granularity is deduced based on the technique characteristics and extensive experimental data of the grinding process. Meanwhile, a BP neural network soft-sensor model and a wavelet neural network (WNN) soft-sensor model are set up. Then, the hybrid multiple soft-sensor model based on the hysteresis switching strategy is realized: the optimum model is selected as the current predictive model according to the switching performance index at each sampling instant. Finally, the cuckoo searching algorithm is adopted to optimize the performance parameters of the hysteresis switching strategy. Simulation results show that the proposed model has better generalization and prediction precision, satisfying the real-time control requirements of the grinding classification process.

  17. Detection of uterine MMG contractions using a multiple change point estimator and the K-means cluster algorithm.

    La Rosa, Patricio S; Nehorai, Arye; Eswaran, Hari; Lowery, Curtis L; Preissl, Hubert

    2008-02-01

    We propose a single-channel two-stage time-segment discriminator of uterine magnetomyogram (MMG) contractions during pregnancy. We assume that the preprocessed signals are piecewise stationary, with distributions in a common family with a fixed number of parameters. At the first stage, therefore, we propose a model-based segmentation procedure that detects multiple change-points in the parameters of a piecewise-constant time-varying autoregressive model, using a robust formulation of the Schwarz information criterion (SIC) and a binary search approach. In particular, we propose a test statistic that depends on the SIC, derive its asymptotic distribution, and obtain closed-form optimal detection thresholds in the sense of the Neyman-Pearson criterion; we thereby control the probability of false alarm and maximize the probability of change-point detection in each stage of the binary search algorithm. We compute and evaluate the relative energy variation [root mean square (RMS)] and the dominant frequency component [first-order zero crossing (FOZC)] for discriminating between time segments with and without contractions; the former consistently detects segments with contractions. At the second stage, we therefore apply an unsupervised K-means cluster algorithm to classify the detected time segments using the RMS values. We apply our detection algorithm to real MMG records obtained from ten patients admitted to the hospital for contractions, with gestational ages between 31 and 40 weeks. We evaluate the performance of the detection algorithm by computing the detection and false alarm rates, using the patients' feedback as a reference. We also analyze the fusion of the decision signals from all the sensors, as in the parallel distributed detection approach.
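
    The second, clustering stage can be illustrated directly. The sketch below computes the RMS of each (already detected) time segment and splits the segments into two K-means clusters, labelling the higher-RMS cluster as contractions; the synthetic segments are placeholders for real MMG data.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_segments(segments):
    """Second-stage discriminator: one RMS feature per segment, split
    into two clusters; the cluster with the larger RMS centre is
    labelled as containing contractions (label 1).
    """
    rms = np.array([[np.sqrt(np.mean(np.square(s)))] for s in segments])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rms)
    active = km.cluster_centers_.argmax()       # cluster with larger RMS
    return [int(lbl == active) for lbl in km.labels_]

rng = np.random.default_rng(0)
quiet = [0.1 * rng.standard_normal(200) for _ in range(3)]   # low energy
bursts = [1.0 * rng.standard_normal(200) for _ in range(2)]  # high energy
print(classify_segments(quiet + bursts))  # -> [0, 0, 0, 1, 1]
```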

  18. Location-allocation algorithm for multiple particle tracking using Birmingham MWPC positron camera

    Gundogdu, O.; Tarcan, E.

    2004-01-01

    Positron Emission Particle Tracking is a powerful, non-invasive technique that employs a single radioactive particle. It has been applied to a wide range of industrial systems. This paper presents an original application of a technique that was mainly developed in economics and resource management: it allows the tracking of multiple particles using a small number of trajectories with correct tagging, and it provides very encouraging results

  19. Location-allocation algorithm for multiple particle tracking using Birmingham MWPC positron camera

    Gundogdu, O. E-mail: o.gundogdu@surrey.ac.uk; o_gundo@yahoo.co.uk; o.gundogdu@kingston.ac.uk; Tarcan, E

    2004-05-01

    Positron Emission Particle Tracking is a powerful, non-invasive technique that employs a single radioactive particle. It has been applied to a wide range of industrial systems. This paper presents an original application of a technique that was mainly developed in economics and resource management: it allows the tracking of multiple particles using a small number of trajectories with correct tagging, and it provides very encouraging results.

  20. Computed tomography contrast media extravasation: treatment algorithm and immediate treatment by squeezing with multiple slit incisions.

    Kim, Sue Min; Cook, Kyung Hoon; Lee, Il Jae; Park, Dong Ha; Park, Myong Chul

    2017-04-01

    In our hospital, an adverse event reporting system was initiated that alerts the plastic surgery department immediately when a contrast media extravasation injury is suspected. This system is particularly important for large-volume extravasation during power injector use. Between March 2011 and May 2015, a retrospective chart review was performed on all patients experiencing contrast media extravasation while being treated at our hospital. Immediate treatment by squeezing with multiple slit incisions was conducted for a portion of these patients. Eighty cases of extravasation were reported from approximately 218 000 computed tomography scans. In 23 patients, the expected extravasation volume was larger than 50 ml or severe pressure was felt in the affected limb; these patients were treated with multiple slit incisions followed by squeezing. Oedema of the affected limb disappeared within 1-2 hours after treatment, and the skin incisions healed within a week. We propose a set of guidelines for the initial management of contrast media extravasation injuries to enable timely intervention. For large-volume extravasation, immediate management with multiple slit incisions is safe and effective in reducing the swelling quickly, preventing patient discomfort and decreasing skin and soft tissue problems. © 2016 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  1. MaxBin 2.0: an automated binning algorithm to recover genomes from multiple metagenomic datasets

    Wu, Yu-Wei [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Simmons, Blake A. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Steven W. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-10-29

    The recovery of genomes from metagenomic datasets is a critical step to defining the functional roles of the underlying uncultivated populations. We previously developed MaxBin, an automated binning approach for high-throughput recovery of microbial genomes from metagenomes. Here, we present an expanded binning algorithm, MaxBin 2.0, which recovers genomes from co-assembly of a collection of metagenomic datasets. Tests on simulated datasets revealed that MaxBin 2.0 is highly accurate in recovering individual genomes, and the application of MaxBin 2.0 to several metagenomes from environmental samples demonstrated that it could achieve two complementary goals: recovering more bacterial genomes compared to binning a single sample as well as comparing the microbial community composition between different sampling environments. Availability and implementation: MaxBin 2.0 is freely available at http://sourceforge.net/projects/maxbin/ under BSD license. Supplementary information: Supplementary data are available at Bioinformatics online.

  2. Step responses of a torsional system with multiple clearances: Study of vibro-impact phenomenon using experimental and computational methods

    Oruganti, Pradeep Sharma; Krak, Michael D.; Singh, Rajendra

    2018-01-01

    Recently, Krak and Singh (2017) proposed a scientific experiment that examined vibro-impacts in a torsional system under a step-down excitation and provided preliminary measurements and limited non-linear model studies. A major goal of this article is to extend that prior work, focusing on the vibro-impact phenomena observed in the step responses of a torsional system with one, two or three controlled clearances. First, new measurements are made at several locations with a higher sampling frequency. Measured angular accelerations are examined in both the time and time-frequency domains. Minimal-order non-linear models of the experiment are successfully constructed using piecewise-linear stiffness and Coulomb friction elements; eight cases of the generic system are examined, though only three are studied experimentally. Measured and predicted responses for single and dual clearance configurations exhibit double-sided impacts, and time-varying periods suggest softening trends under the step-down torque. The non-linear models are experimentally validated by comparing results with the new measurements and with those previously reported. Several metrics are utilized to quantify and compare the measured and predicted responses (including peak-to-peak accelerations). Eigensolutions and step responses of the corresponding linearized models are utilized to better understand the nature of the non-linear dynamic system. Finally, the effect of step amplitude on the non-linear responses is examined for several configurations, and hardening trends are observed in the torsional system with three clearances.

  3. Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low Observable Targets

    Tartakovsky, A.; Brown, A.; Brown, J.

    The paper describes the development and evaluation of a suite of advanced algorithms which provide significantly improved capabilities for finding, fixing, and tracking multiple ballistic and flying low observable objects in highly stressing cluttered environments. The algorithms have been developed for use in satellite-based staring and scanning optical surveillance suites for applications including theatre and intercontinental ballistic missile early warning, trajectory prediction, and multi-sensor track handoff for midcourse discrimination and intercept. The functions performed by the algorithms include electronic sensor motion compensation providing sub-pixel stabilization (to 1/100 of a pixel), as well as advanced temporal-spatial clutter estimation and suppression to below sensor noise levels, followed by statistical background modeling and Bayesian multiple-target track-before-detect filtering. The multiple-target tracking is performed in physical world coordinates to allow for multi-sensor fusion, trajectory prediction, and intercept. Output of detected object cues and data visualization are also provided. The algorithms are designed to handle a wide variety of real-world challenges. Imaged scenes may be highly complex and infinitely varied -- the scene background may contain significant celestial, earth limb, or terrestrial clutter. For example, when viewing combined earth limb and terrestrial scenes, a combination of stationary and non-stationary clutter may be present, including cloud formations, varying atmospheric transmittance and reflectance of sunlight and other celestial light sources, aurora, glint off sea surfaces, and varied natural and man-made terrain features. The targets of interest may also appear to be dim, relative to the scene background, rendering much of the existing deployed software useless for optical target detection and tracking. Additionally, it may be necessary to detect and track a large number of objects in the threat cloud

  4. Multiple Convective Cell Identification and Tracking Algorithm for documenting time-height evolution of measured polarimetric radar and lightning properties

    Rosenfeld, D.; Hu, J.; Zhang, P.; Snyder, J.; Orville, R. E.; Ryzhkov, A.; Zrnic, D.; Williams, E.; Zhang, R.

    2017-12-01

    A methodology to track the evolution of the hydrometeors and electrification of convective cells is presented and applied to various convective clouds, from warm showers to supercells. The input radar data are obtained from the polarimetric NEXRAD weather radars; the information on cloud electrification is obtained from Lightning Mapping Arrays (LMA). Documenting the development time and height of the hydrometeors and electrification requires tracking the evolution and lifecycle of convective cells, so a new methodology for Multi-Cell Identification and Tracking (MCIT) is presented in this study. The new algorithm is applied to time series of radar volume scans. A cell is defined as a local maximum in the Vertical Integrated Liquid (VIL), and the echo area is divided between cells using a watershed algorithm. Cells are tracked between radar volume scans by identifying the two cells in consecutive scans that share the maximum common VIL. The vertical profiles of the polarimetric radar properties are used to construct the time-height cross section of the cell properties around the peak reflectivity as a function of height. The LMA sources that occur within the cell area are likewise integrated as a function of height for each time step, as determined by the radar volume scans. The tracking results can provide insights into the evolution of storms, hydrometeor types, precipitation initiation and cloud electrification under different thermodynamic, aerosol and geographic conditions. The details of the MCIT algorithm, its products and their performance for different types of storms are described in this poster.

  5. A PSO-Optimized Reciprocal Velocity Obstacles Algorithm for Navigation of Multiple Mobile Robots

    Ziyad Allawi

    2015-03-01

    Full Text Available In this paper, a new optimization method for the Reciprocal Velocity Obstacles (RVO) approach is proposed. It uses the well-known Particle Swarm Optimization (PSO) for the navigation control of multiple mobile robots with kinematic constraints. RVO is used for collision avoidance between the robots, while PSO is used to choose the best path for each robot's maneuver, avoiding collisions with other robots and reaching its goal faster. The method was applied to 24 mobile robots facing each other. Simulation results show that this method outperforms ordinary RVO with a heuristically chosen path.

  6. Research reactor in-core fuel management optimization by application of multiple cyclic interchange algorithms

    van Geemert, R.; Hoogenboom, J.E.; Gibcus, H.P.M. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.; Quist, A.J. [Delft University of Technology, Faculty of Applied Mathematics and Informatics Mekelweg 4, 2628 JB, Delft (Netherlands)

    1998-12-01

    Fuel shuffling optimization procedures are proposed for the Hoger Onderwijs Reactor (HOR) in Delft, The Netherlands, a 2MWth swimming-pool type research reactor. These procedures are based on the multiple cyclic interchange approach, according to which the search for the reload pattern associated with the highest objective function value can be thought of as divided in multiple stages. The transition from the initial to the final stage is characterized by an increase in the degree of locality of the search procedure. The general idea is that, during the first stages, the 'elite' cluster containing the group of best patterns must be located, after which the solution space is sampled in a more and more local sense to find the local optimum in this cluster. The transition(s) from global search behaviour to local search behaviour can be either prompt, by defining strictly separate search regimes, or gradual by introducing stochastic acceptance tests. The possible objectives and the safety and operation constraints, as well as the optimization procedure, are discussed, followed by some optimization results for the HOR. (orig.) 4 refs.

  7. Research reactor in-core fuel management optimization by application of multiple cyclic interchange algorithms

    Geemert, R. van; Hoogenboom, J.E.; Gibcus, H.P.M.

    1998-01-01

    Fuel shuffling optimization procedures are proposed for the Hoger Onderwijs Reactor (HOR) in Delft, The Netherlands, a 2MWth swimming-pool type research reactor. These procedures are based on the multiple cyclic interchange approach, according to which the search for the reload pattern associated with the highest objective function value can be thought of as divided in multiple stages. The transition from the initial to the final stage is characterized by an increase in the degree of locality of the search procedure. The general idea is that, during the first stages, the 'elite' cluster containing the group of best patterns must be located, after which the solution space is sampled in a more and more local sense to find the local optimum in this cluster. The transition(s) from global search behaviour to local search behaviour can be either prompt, by defining strictly separate search regimes, or gradual by introducing stochastic acceptance tests. The possible objectives and the safety and operation constraints, as well as the optimization procedure, are discussed, followed by some optimization results for the HOR. (orig.)

  8. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each co-fitting two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is demonstrated here to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  9. Fuzzy bilevel programming with multiple non-cooperative followers: model, algorithm and application

    Ke, Hua; Huang, Hu; Ralescu, Dan A.; Wang, Lei

    2016-04-01

    In centralized decision problems, it is not complicated for decision-makers to select modelling techniques under uncertainty. When a decentralized decision problem is considered, however, choosing appropriate models is no longer easy, owing to the difficulty of estimating the other decision-makers' inconclusive decision criteria. These decision criteria may vary between decision-makers because of their particular risk tolerances and management requirements. Considering the general differences among decision-makers in decentralized systems, we propose a general framework of fuzzy bilevel programming including hybrid models (integrating different modelling methods at different levels). Specifically, we discuss two of these models which may have wide application in many fields. Furthermore, we apply the two proposed models to formulate a pricing decision problem in a decentralized supply chain with fuzzy coefficients. To solve these models, a hybrid intelligent algorithm integrating fuzzy simulation, neural networks and particle swarm optimization based on a penalty function approach is designed. Some suggestions on the application of these models are also presented.

  10. Step-up multiple regression model to compute Chlorophyll a in the coastal waters off Cochin, southwest coast of India

    Balachandran, K.K.; Jayalakshmy, K.V.; Laluraj, C.M.; Nair, M.; Joseph, T.; Sheeba, P.

    The interaction effects of abiotic processes in the production of phytoplankton in a coastal marine region off Cochin are evaluated using multiple regression models. The study shows that chlorophyll production is not limited by nutrients...

  11. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    Fisz, Jacek J

    2006-12-07

    An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions: GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique. It results from an appropriate combination of two well-known optimization methods: the MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical 'tool' involved in GA-MLR. The approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi
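
    The division of labour in GA-MLR is easy to demonstrate. In the simplified sketch below (truncation selection and Gaussian mutation only, no crossover), a GA evolves the two nonlinear lifetimes of a biexponential decay, while the linear amplitudes are recovered in closed form by linear least squares at every evaluation; the data are synthetic, and the GA details are not the paper's.

```python
import numpy as np

def residual_norm(taus, t, y):
    """Inner MLR step: for fixed nonlinear lifetimes `taus`, the linear
    amplitudes of the biexponential model follow from least squares, so
    only the lifetimes need to be evolved by the GA.
    """
    basis = np.exp(-t[:, None] / np.asarray(taus)[None, :])
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.linalg.norm(basis @ amps - y), amps

def ga_mlr(t, y, gens=60, pop=40, seed=0):
    rng = np.random.default_rng(seed)
    taus = rng.uniform(0.1, 10.0, (pop, 2))          # candidate lifetimes
    for _ in range(gens):
        fit = np.array([residual_norm(p, t, y)[0] for p in taus])
        elite = taus[fit.argsort()[: pop // 2]]       # truncation selection
        noise = rng.normal(0, 0.1, elite.shape)       # Gaussian mutation
        taus = np.vstack([elite, np.clip(elite + noise, 0.05, 20.0)])
    best = min(taus, key=lambda p: residual_norm(p, t, y)[0])
    return best, residual_norm(best, t, y)[1]

t = np.linspace(0, 10, 200)
y = 2.0 * np.exp(-t / 1.5) + 0.5 * np.exp(-t / 6.0)   # synthetic decay
print(ga_mlr(t, y))   # lifetimes near (1.5, 6.0) with amplitudes (2.0, 0.5)
```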

  12. Pareto evolution of gene networks: an algorithm to optimize multiple fitness objectives

    Warmflash, Aryeh; Siggia, Eric D; Francois, Paul

    2012-01-01

    The computational evolution of gene networks functions like a forward genetic screen to generate, without preconceptions, all networks that can be assembled from a defined list of parts to implement a given function. Frequently networks are subject to multiple design criteria that cannot all be optimized simultaneously. To explore how these tradeoffs interact with evolution, we implement Pareto optimization in the context of gene network evolution. In response to a temporal pulse of a signal, we evolve networks whose output turns on slowly after the pulse begins, and shuts down rapidly when the pulse terminates. The best performing networks under our conditions do not fall into categories such as feed forward and negative feedback that also encode the input–output relation we used for selection. Pareto evolution can more efficiently search the space of networks than optimization based on a single ad hoc combination of the design criteria.

  13. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but they suffer from poor computational performance. Although several new methods using high-performance hardware and frameworks have been proposed, the issue persists. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over implementations of UPGMA on a modern CPU and a single GPU, respectively.
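
    As a serial baseline for the GPU method above, the sketch below is a compact UPGMA implementation: it repeatedly merges the closest pair of clusters and updates distances by a size-weighted average. The distance matrix is a textbook-style toy example, not data from the paper.

```python
import numpy as np

def upgma(dist, names):
    """Serial UPGMA: merge the closest pair of clusters, then average
    their distances to every other cluster, weighted by cluster size.
    Returns the tree as nested tuples.
    """
    d = np.array(dist, dtype=float)
    np.fill_diagonal(d, np.inf)
    clusters = [(name, 1) for name in names]        # (subtree, #leaves)
    while len(clusters) > 1:
        i, j = np.unravel_index(d.argmin(), d.shape)
        if i > j:
            i, j = j, i
        (a, na), (b, nb) = clusters[i], clusters[j]
        new_row = (na * d[i] + nb * d[j]) / (na + nb)  # size-weighted average
        d[i] = new_row
        d[:, i] = new_row
        d[i, i] = np.inf
        d = np.delete(np.delete(d, j, axis=0), j, axis=1)
        clusters[i] = ((a, b), na + nb)
        del clusters[j]
    return clusters[0][0]

dist = [[0, 2, 6, 10], [2, 0, 6, 10], [6, 6, 0, 8], [10, 10, 8, 0]]
print(upgma(dist, "ABCD"))   # -> ((('A', 'B'), 'C'), 'D')
```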

  14. Pareto evolution of gene networks: an algorithm to optimize multiple fitness objectives.

    Warmflash, Aryeh; Francois, Paul; Siggia, Eric D

    2012-10-01

    The computational evolution of gene networks functions like a forward genetic screen to generate, without preconceptions, all networks that can be assembled from a defined list of parts to implement a given function. Frequently networks are subject to multiple design criteria that cannot all be optimized simultaneously. To explore how these tradeoffs interact with evolution, we implement Pareto optimization in the context of gene network evolution. In response to a temporal pulse of a signal, we evolve networks whose output turns on slowly after the pulse begins, and shuts down rapidly when the pulse terminates. The best performing networks under our conditions do not fall into categories such as feed forward and negative feedback that also encode the input-output relation we used for selection. Pareto evolution can more efficiently search the space of networks than optimization based on a single ad hoc combination of the design criteria.

  15. Suitable post processing algorithms for X-ray imaging using oversampled displaced multiple images

    Thim, J; Reza, S; Nawaz, K; Norlin, B; O'Nils, M; Oelmann, B

    2011-01-01

    X-ray imaging systems such as photon-counting pixel detectors have a limited pixel spatial resolution, set by the complexity and processing technology of the readout electronics. For X-ray imaging situations where the features of interest are smaller than the imaging system's pixel size, and the pixel size cannot be made smaller in hardware, alternative means of resolution enhancement need to be considered. Oversampling with multiple displaced images, where the pixels of all images are mapped to a final resolution-enhanced image, has proven a viable method of reaching a sub-pixel resolution exceeding the original resolution. The effectiveness of the oversampling method declines with the number of images taken: the sub-pixel resolution increases, but relative to a real reduction of imaging pixel sizes yielding a full-resolution image, the perceived resolution of the sub-pixel oversampled image is lower. This is because the oversampling method introduces blurring noise into the mapped final images, and the blurring relative to full-resolution images increases with the oversampling factor. One way to increase the performance of the oversampling method is to sharpen the images in post-processing. This paper focuses on characterizing the performance increase of the oversampling method after the use of suitable post-processing filters, specifically for digital X-ray images. The results show that spatial-domain filters and frequency-domain filters of the same type yield indistinguishable results, which is to be expected. The results also show that the effectiveness of applying sharpening filters to oversampled multiple images increases with the number of images used (the oversampling factor), leaving 60-80% of the original blurring noise after filtering a 6 x 6 mapped image (36 images taken), with the percentage depending on the type of filter. This means that the effectiveness of the oversampling itself increases by using sharpening

  16. Multiple-Threshold Event Detection and Other Enhancements to the Virtual Seismologist (VS) Earthquake Early Warning Algorithm

    Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.

    2009-12-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes’ theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association/location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event, to reduce false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks, as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW

  17. Prior selection for Gumbel distribution parameters using multiple-try metropolis algorithm for monthly maxima PM10 data

    Amin, Nor Azrita Mohd; Adam, Mohd Bakri; Ibrahim, Noor Akma

    2014-09-01

    The Multiple-try Metropolis (MTM) algorithm is a new alternative in the field of Bayesian extremes for summarizing the posterior distribution. MTM produces an efficient estimation scheme for modelling extreme data in terms of convergence and small burn-in periods. The main objective is to explore the sensitivity of the parameter estimation to the choice of priors and compare the results with a classical likelihood-based analysis. The focus is on modelling extreme data based on the block maxima approach using the Gumbel distribution. The comparative study between MTM and MLE is illustrated through numerical examples. Several goodness-of-fit tests are computed for selecting the best model. The application is to the monthly maxima PM10 data for the Johor state.
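    A compact sketch of one multiple-try Metropolis update for the Gumbel (mu, beta) posterior follows; the flat prior, the candidate count k and the Gaussian proposal scale are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gumbel_loglik(data, mu, beta):
    """Log-likelihood of the Gumbel distribution (flat prior assumed)."""
    if beta <= 0:
        return -np.inf
    z = (np.asarray(data) - mu) / beta
    return -z.size * np.log(beta) - np.sum(z + np.exp(-z))

def mtm_step(theta, data, k=5, scale=0.1, rng=np.random):
    # draw k candidates from a symmetric Gaussian proposal around the state
    cands = theta + scale * rng.standard_normal((k, theta.size))
    w = np.array([gumbel_loglik(data, *c) for c in cands])
    p = np.exp(w - w.max()); p /= p.sum()
    y = cands[rng.choice(k, p=p)]          # select one candidate by weight
    # reference set: k-1 draws around the selected point, plus the current state
    refs = y + scale * rng.standard_normal((k - 1, theta.size))
    wr = np.array([gumbel_loglik(data, *r) for r in refs] +
                  [gumbel_loglik(data, *theta)])
    # generalized Metropolis ratio over summed (log-)weights
    log_a = np.logaddexp.reduce(w) - np.logaddexp.reduce(wr)
    return y if np.log(rng.random()) < log_a else theta
```

    Iterating mtm_step from a starting point such as np.array([mu0, beta0]) yields posterior draws for (mu, beta) after burn-in.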

  18. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    Guan-Jie Hua

    2017-10-01

    A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but they suffer from computational performance issues. Although several new methods using high-performance hardware and frameworks have been proposed, the issue persists. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over implementations of the Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively.
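    For reference, a minimal serial UPGMA is sketched below; the closest-pair search and the size-weighted row update inside the loop are the per-merge steps that a multi-GPU implementation like MGUPGMA distributes across devices (this is a generic textbook version, not the authors' code).

```python
import numpy as np

def upgma(dist):
    """Serial UPGMA on a symmetric distance matrix; returns a Newick-like string."""
    d = np.asarray(dist, dtype=float).copy()
    n = d.shape[0]
    np.fill_diagonal(d, np.inf)
    size = np.ones(n)
    labels = [str(i) for i in range(n)]
    alive = np.ones(n, dtype=bool)
    for _ in range(n - 1):
        # closest active pair (the expensive search parallelized on GPUs)
        masked = np.where(np.outer(alive, alive), d, np.inf)
        i, j = np.unravel_index(np.argmin(masked), d.shape)
        # arithmetic-mean update, weighted by cluster sizes
        d[i] = (size[i] * d[i] + size[j] * d[j]) / (size[i] + size[j])
        d[:, i] = d[i]
        d[i, i] = np.inf
        alive[j] = False
        size[i] += size[j]
        labels[i] = "(%s,%s)" % (labels[i], labels[j])
    return labels[int(np.argmax(alive))]
```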

  19. An Approach for Predicting Essential Genes Using Multiple Homology Mapping and Machine Learning Algorithms.

    Hua, Hong-Li; Zhang, Fa-Zhan; Labena, Abraham Alemayehu; Dong, Chuan; Jin, Yan-Ting; Guo, Feng-Biao

    Investigation of essential genes is significant for comprehending the minimal gene set of a cell and discovering potential drug targets. In this study, a novel approach based on multiple homology mapping and machine learning methods was introduced to predict essential genes. We focused on 25 bacteria which have characterized essential genes. The predictions yielded the highest area under the receiver operating characteristic (ROC) curve (AUC) of 0.9716 in tenfold cross-validation tests. Proper features were utilized to construct models to make predictions in distantly related bacteria. The accuracy of the predictions was evaluated via the consistency of predictions with the known essential genes of target species. The highest AUC of 0.9552 and an average AUC of 0.8314 were achieved when making predictions across organisms. An independent dataset from Synechococcus elongatus, which was released recently, was obtained for further assessment of the performance of our model. The AUC score of these predictions is 0.7855, which is higher than that of other methods. This research shows that features obtained by homology mapping alone can achieve results as good as, or even better than, integrated features. Meanwhile, the work indicates that machine learning-based methods can assign more efficient weight coefficients than empirical formulas based on biological knowledge.

  20. CISN ShakeAlert: Faster Warning Information Through Multiple Threshold Event Detection in the Virtual Seismologist (VS) Early Warning Algorithm

    Cua, G. B.; Fischer, M.; Caprio, M.; Heaton, T. H.; Cisn Earthquake Early Warning Project Team

    2010-12-01

    The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of 3 EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system that could potentially be implemented in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network since July 2008, and at the Northern California Seismic Network since February 2009. We discuss recent enhancements to the VS EEW algorithm that are being integrated into CISN ShakeAlert. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to initiate an event declaration, with the goal of reducing false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and the requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) into an on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Real-time and offline analysis on Swiss and California waveform datasets indicate that the

  1. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales

    Jihoon Oh

    2017-09-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC: 1-month, 0.75; 1-year, 0.85; lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  2. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales.

    Oh, Jihoon; Yun, Kyongsik; Hwang, Ji-Hyun; Chae, Jeong-Ho

    2017-01-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC: 1-month, 0.75; 1-year, 0.85; lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  3. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
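    The outer/inner splitting can be sketched as a RESPA-like loop in which the expensive solvation force is converged only at outer steps and approximated in between. All function names below are illustrative: `expensive_force` stands in for converging 3D-RISM-KH, `extrapolate` for ASFE, and a plain velocity-Verlet inner integrator replaces the OIN thermostat for brevity.

```python
def mts_md(x, v, fast_force, expensive_force, extrapolate, dt, n_inner, n_outer):
    """RESPA-like multiple time-step loop: the expensive solvation force is
    converged only at outer steps and approximated during inner steps."""
    history = [(x.copy(), expensive_force(x))]   # force-coordinate pairs for ASFE
    f = fast_force(x) + history[-1][1]
    for _ in range(n_outer):
        for _ in range(n_inner):                 # inner steps: extrapolated slow force
            v += 0.5 * dt * f
            x += dt * v
            f = fast_force(x) + extrapolate(history, x)
            v += 0.5 * dt * f
        history.append((x.copy(), expensive_force(x)))  # converge 3D-RISM-KH here
        f = fast_force(x) + history[-1][1]
    return x, v
```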

  4. HOW DOES FINGOLIMOD (GILENYA®) FIT IN THE TREATMENT ALGORITHM FOR HIGHLY ACTIVE RELAPSING-REMITTING MULTIPLE SCLEROSIS?

    Franz Fazekas

    2013-05-01

    Multiple sclerosis (MS) is a neurological disorder characterised by inflammatory demyelination and neurodegeneration in the central nervous system (CNS). Until recently, disease-modifying treatment was based on agents requiring parenteral delivery, thus limiting long-term compliance. Basic treatments such as beta-interferon provide only moderate efficacy, and although therapies for second-line treatment and highly active MS are more effective, they are associated with potentially severe side effects. Fingolimod (Gilenya®) is the first oral treatment for MS and has recently been approved as a single disease-modifying therapy in highly active relapsing-remitting multiple sclerosis (RRMS) for adult patients with high disease activity despite basic treatment (beta-interferon) and for treatment-naïve patients with rapidly evolving severe RRMS. At a scientific meeting that took place in Vienna on November 18th, 2011, experts from 10 Central and Eastern European countries discussed the clinical benefits and potential risks of fingolimod for MS, suggested how the new therapy fits within the current treatment algorithm and provided expert opinion for the selection and management of patients.

  5. Spectral algorithms for multiple scale localized eigenfunctions in infinitely long, slightly bent quantum waveguides

    Boyd, John P.; Amore, Paolo; Fernández, Francisco M.

    2018-03-01

    A "bent waveguide" in the sense used here is a small perturbation of a two-dimensional rectangular strip which is infinitely long in the down-channel direction and has a finite, constant width in the cross-channel coordinate. The goal is to calculate the smallest ("ground state") eigenvalue of the stationary Schrödinger equation which here is a two-dimensional Helmholtz equation, ψxx +ψyy + Eψ = 0 where E is the eigenvalue and homogeneous Dirichlet boundary conditions are imposed on the walls of the waveguide. Perturbation theory gives a good description when the "bending strength" parameter ɛ is small as described in our previous article (Amore et al., 2017) and other works cited therein. However, such series are asymptotic, and it is often impractical to calculate more than a handful of terms. It is therefore useful to develop numerical methods for the perturbed strip to cover intermediate ɛ where the perturbation series may be inaccurate and also to check the pertubation expansion when ɛ is small. The perturbation-induced change-in-eigenvalue, δ ≡ E(ɛ) - E(0) , is O(ɛ2) . We show that the computation becomes very challenging as ɛ → 0 because (i) the ground state eigenfunction varies on both O(1) and O(1 / ɛ) length scales and (ii) high accuracy is needed to compute several correct digits in δ, which is itself small compared to the eigenvalue E. The multiple length scales are not geographically separate, but rather are inextricably commingled in the neighborhood of the boundary deformation. We show that coordinate mapping and immersed boundary strategies both reduce the computational domain to the uniform strip, allowing application of pseudospectral methods on tensor product grids with tensor product basis functions. We compared different basis sets; Chebyshev polynomials are best in the cross-channel direction. However, sine functions generate rather accurate analytical approximations with just a single basis function. In the down

  6. Parametric optimization of multiple quality characteristics in laser cutting of Inconel-718 by using hybrid approach of multiple regression analysis and genetic algorithm

    Shrivastava, Prashant Kumar; Pandey, Arun Kumar

    2018-06-01

    Inconel-718 is in high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting these alloys due to their low thermal potential, lower elasticity and high chemical compatibility at elevated temperature. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials are also faced by traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing by appropriate control of the different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the experimental data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on the hybrid approach of multiple regression analysis and genetic algorithm. The comparison of the optimization results to the experimental results shows an improvement of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of the different process parameters on the quality characteristics are also discussed.
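    The hybrid scheme can be sketched as a quadratic response-surface fit followed by a bare-bones genetic algorithm; the GA operators and the weighted-sum scalarization in the usage note are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_quadratic(X, y):
    """Second-order response-surface model fitted by least squares."""
    def design(A):
        A = np.atleast_2d(A)
        cols = [np.ones(len(A))]
        for i in range(A.shape[1]):
            cols.append(A[:, i])
            for j in range(i, A.shape[1]):
                cols.append(A[:, i] * A[:, j])   # squares and interactions
        return np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return lambda a: float(design(a) @ beta)

def ga_minimize(f, bounds, pop=40, gens=200, rng=np.random.default_rng(0)):
    """Bare-bones real-coded GA: truncation selection plus Gaussian mutation."""
    lo, hi = map(np.asarray, zip(*bounds))
    P = rng.uniform(lo, hi, (pop, len(bounds)))
    for _ in range(gens):
        P = P[np.argsort([f(p) for p in P])][: pop // 2]      # keep the best half
        kids = P[rng.integers(0, len(P), pop - len(P))] \
               + 0.05 * (hi - lo) * rng.standard_normal((pop - len(P), len(bounds)))
        P = np.clip(np.vstack([P, kids]), lo, hi)
    return P[0]

# e.g. minimize a weighted sum of three fitted kerf models (weights assumed):
# best = ga_minimize(lambda p: kd(p) + kw(p) + kt(p), bounds=[(lo1, hi1), ...])
```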

  7. Validation of a Step Detection Algorithm during Straight Walking and Turning in Patients with Parkinson’s Disease and Older Adults Using an Inertial Measurement Unit at the Lower Back

    Minh H. Pham

    2017-09-01

    Introduction: Inertial measurement units (IMUs) positioned on various body locations allow detailed gait analysis even under unconstrained conditions. From a medical perspective, the assessment of vulnerable populations is of particular relevance, especially in the daily-life environment. Gait analysis algorithms need thorough validation, as many chronic diseases show specific and even unique gait patterns. The aim of this study was therefore to validate an acceleration-based step detection algorithm for patients with Parkinson’s disease (PD) and older adults in both a lab-based and a home-like environment. Methods: In this prospective observational study, data were captured from a single 6-degrees-of-freedom IMU (APDM; 3DOF accelerometer and 3DOF gyroscope) worn on the lower back. Detection of heel strike (HS) and toe off (TO) on a treadmill was validated against an optoelectronic system (Vicon; 11 PD patients and 12 older adults). A second independent validation study in the home-like environment was performed against video observation (20 PD patients and 12 older adults) and included step counting during turning and non-turning, defined with a previously published algorithm. Results: A continuous wavelet transform (CWT)-based algorithm was developed for step detection with very high agreement with the optoelectronic system. HS detection in PD patients/older adults, respectively, reached 99/99% accuracy. Similar results were obtained for TO (99/100%). In HS detection, Bland–Altman plots showed a mean difference of 0.002 s [95% confidence interval (CI) −0.09 to 0.10] between the algorithm and the optoelectronic system. The Bland–Altman plot for TO detection showed a mean difference of 0.00 s (95% CI −0.12 to 0.12). In the home-like assessment, the algorithm for detection of the occurrence of steps during turning reached 90% (PD patients)/90% (older adults) sensitivity, 83/88% specificity, and 88/89% accuracy. The detection of steps during non-turning phases
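    A toy version of a CWT-based step detector on lower-back acceleration might look as follows; the wavelet choice, scale and refractory period are assumptions, not the validated parameters of the study.

```python
import numpy as np
from scipy.signal import cwt, ricker, find_peaks

def heel_strike_times(acc_ap, fs):
    """Illustrative CWT-based step detection on the anterior-posterior
    acceleration of a lower-back IMU. acc_ap: 1-D signal, fs: sampling rate."""
    width = max(1, int(0.15 * fs))               # ~150 ms event scale (assumed)
    coeff = cwt(acc_ap - np.mean(acc_ap), ricker, [width])[0]
    peaks, _ = find_peaks(coeff, distance=int(0.4 * fs))  # ~0.4 s refractory period
    return peaks / fs                            # candidate heel-strike times [s]
```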

  8. Two-Step Cycle for Producing Multiple Anodic Aluminum Oxide (AAO) Films with Increasing Long-Range Order.

    Choudhary, Eric; Szalai, Veronika

    2016-01-01

    Nanoporous anodic aluminum oxide (AAO) membranes are being used for an increasing number of applications. However, the original two-step anodization method, in which the first anodization is sacrificial to pre-pattern the second, is still widely used to produce them. This method provides relatively low throughput and material utilization, as half of the films are discarded. An alternative scheme that relies on alternating anodization and cathodic delamination is demonstrated, which allows for the fabrication of several AAO films with only one sacrificial layer, thus greatly improving the total aluminum-to-alumina yield. The thickness for which the cathodic delamination performs best to yield full, unbroken AAO sheets is around 85 μm. Additionally, an image analysis method is used to quantify the degree of long-range ordering of the unit cells in the AAO films, which was found to increase with each successive iteration of the fabrication cycle.

  9. Exercise Training in Progressive Multiple Sclerosis: A Comparison of Recumbent Stepping and Body Weight-Supported Treadmill Training.

    Pilutti, Lara A; Paulseth, John E; Dove, Carin; Jiang, Shucui; Rathbone, Michel P; Hicks, Audrey L

    2016-01-01

    Background: There is evidence of the benefits of exercise training in multiple sclerosis (MS); however, few studies have been conducted in individuals with progressive MS and severe mobility impairment. A potential exercise rehabilitation approach is total-body recumbent stepper training (TBRST). We evaluated the safety and participant-reported experience of TBRST in people with progressive MS and compared the efficacy of TBRST with that of body weight-supported treadmill training (BWSTT) on outcomes of function, fatigue, and health-related quality of life (HRQOL). Methods: Twelve participants with progressive MS (Expanded Disability Status Scale scores, 6.0-8.0) were randomized to receive TBRST or BWSTT. Participants completed three weekly sessions (30 minutes) of exercise training for 12 weeks. Primary outcomes included safety assessed as adverse events and patient-reported exercise experience assessed as postexercise response and evaluation of exercise equipment. Secondary outcomes included the Multiple Sclerosis Functional Composite, the Modified Fatigue Impact Scale, and the Multiple Sclerosis Quality of Life-54 questionnaire scores. Assessments were conducted at baseline and after 12 weeks. Results: Safety was confirmed in both exercise groups. Participants reported enjoying both exercise modalities; however, TBRST was reviewed more favorably. Both interventions reduced fatigue and improved HRQOL (P ≤ .05); there were no changes in function. Conclusions: Both TBRST and BWSTT seem to be safe, well tolerated, and enjoyable for participants with progressive MS with severe disability. Both interventions may also be efficacious for reducing fatigue and improving HRQOL. TBRST should be further explored as an exercise rehabilitation tool for patients with progressive MS.

  10. Multiple Microcomputer Control Algorithm.

    1979-09-01


  11. Expression of RNA virus proteins by RNA polymerase II dependent expression plasmids is hindered at multiple steps

    Überla, Klaus

    2007-06-01

    Background: Proteins of human and animal viruses are frequently expressed from RNA polymerase II dependent expression cassettes to study protein function and to develop gene-based vaccines. Initial attempts to express the G protein of vesicular stomatitis virus (VSV) and the F protein of respiratory syncytial virus (RSV) from eukaryotic promoters revealed restrictions at several steps of gene expression. Results: Insertion of an intron flanked by exonic sequences 5'-terminal to the open reading frames (ORFs) of VSV-G and RSV-F led to detectable cytoplasmic mRNA levels of both genes. While the exonic sequences were sufficient to stabilise the VSV-G mRNA, cytoplasmic mRNA levels of RSV-F were dependent on the presence of a functional intron. Cytoplasmic VSV-G mRNA levels led to readily detectable levels of VSV-G protein, whereas RSV-F protein expression remained undetectable. However, RSV-F expression was observed after mutating two of four consensus sites for polyadenylation present in the RSV-F ORF. Expression levels could be further enhanced by codon optimisation. Conclusion: Insufficient cytoplasmic mRNA levels and premature polyadenylation prevent expression of RSV-F by RNA polymerase II dependent expression plasmids. Since RSV replicates in the cytoplasm, the presence of premature polyadenylation sites and elements leading to nuclear instability should not interfere with RSV-F expression during virus replication. The molecular mechanisms responsible for the destabilisation of the RSV-F and VSV-G mRNAs and the different requirements for their rescue by insertion of an intron remain to be defined.

  12. QSAR study of HCV NS5B polymerase inhibitors using the genetic algorithm-multiple linear regression (GA-MLR).

    Rafiei, Hamid; Khanzadeh, Marziyeh; Mozaffari, Shahla; Bostanifar, Mohammad Hassan; Avval, Zhila Mohajeri; Aalizadeh, Reza; Pourbasheer, Eslam

    2016-01-01

    A quantitative structure-activity relationship (QSAR) study has been employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and then different types of molecular descriptors were calculated. The whole data set was split into a training set (80% of the dataset) and a test set (20% of the dataset) using principal component analysis. The stepwise (SW) and the genetic algorithm (GA) techniques were used as variable selection tools. The multiple linear regression method was then used to linearly correlate the selected descriptors with the inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and the Y-randomization method, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r(2) and concordance correlation coefficient values and the Golbraikh and Tropsha acceptable model criteria. Based on the derived results (GA-MLR), some new insights toward the molecular structural requirements for obtaining better inhibitory activity were obtained.

  13. Is it possible to rapidly and noninvasively identify different plants from Asteraceae using electronic nose with multiple mathematical algorithms?

    Hui-Qin Zou

    2015-12-01

    Many plants originating from the Asteraceae family are applied as herbal medicines and also as beverage ingredients in Asian areas, particularly in China. However, they may be confused due to their similar odor, especially when ground into powder, losing their typical macroscopic characteristics. In this paper, 11 different mathematical algorithms, which are commonly used in data processing, were utilized and compared to analyze the electronic nose (E-nose) response signals of different plants from the Asteraceae family. Results demonstrate that the three-dimensional scatter plot of principal component analysis, with fewer extracted components, could present the identification results more visually; simultaneously, all nine kinds of artificial neural networks gave classification accuracies of 100%. This paper presents a rapid, accurate, and effective method to distinguish Asteraceae plants based on their response signals in the E-nose. It also gives insights for further studies, such as finding unique sensors that are more sensitive and exclusive to volatile components in Chinese herbal medicines, to improve the identification ability of the E-nose. Screening sensors made of other novel materials would also be an interesting way to improve the identification capability of the E-nose.

  14. Improved hybridization of Fuzzy Analytic Hierarchy Process (FAHP) algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW)

    Zaiwani, B. E.; Zarlis, M.; Efendi, S.

    2018-03-01

    This research improves on the hybridization of the Fuzzy Analytic Hierarchy Process (FAHP) algorithm with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS), previously used for selecting the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve on that work, a hybridization of the FAHP algorithm with the Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) algorithm was adopted, applying FAHP to the weighting process and SAW to the ranking process, to determine the promotion of employees at a government institution. The improved average Efficiency Rate (ER) is 85.24%, which means that this research succeeded in improving on the previous research, whose ER was 77.82%. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
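    The SAW ranking step can be sketched directly; the criterion weights would come from the FAHP stage, and the normalization below follows the common benefit/cost convention rather than the exact scheme of the paper.

```python
import numpy as np

def saw_rank(scores, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column, then rank
    alternatives by their weighted sum.

    scores: (alternatives x criteria) matrix; weights: FAHP-derived weights;
    benefit[j] is True when a larger value of criterion j is better
    (cost criteria are inverted).
    """
    S = np.asarray(scores, dtype=float)
    b = np.asarray(benefit)
    norm = np.where(b, S / S.max(axis=0), S.min(axis=0) / S)
    totals = norm @ np.asarray(weights, dtype=float)
    return np.argsort(-totals), totals   # indices of alternatives, best first
```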

  15. Significant internal quantum efficiency enhancement of GaN/AlGaN multiple quantum wells emitting at ~350 nm via step quantum well structure design

    Wu, Feng; Sun, Haiding; Ajia, Idris A.; Roqan, Iman S.; Zhang, Daliang; Dai, Jiangnan; Chen, Changqing; Feng, Zhe Chuan; Li, Xiaohang

    2017-01-01

    Significant internal quantum efficiency (IQE) enhancement of GaN/AlGaN multiple quantum wells (MQWs) emitting at ~350 nm was achieved via a step quantum well (QW) structure design. The MQW structures were grown on AlGaN/AlN/sapphire templates by metal-organic chemical vapor deposition (MOCVD). High resolution x-ray diffraction (HR-XRD) and scanning transmission electron microscopy (STEM) were performed, showing sharp interfaces in the MQWs. Weak-beam dark-field imaging was conducted, indicating a similar dislocation density in the investigated MQW samples. The IQE of the GaN/AlGaN MQWs was estimated by temperature dependent photoluminescence (TDPL). An IQE enhancement of about two times was observed for the GaN/AlGaN step QW structure, compared with the conventional QW structure. Based on theoretical calculation, this IQE enhancement was attributed to the suppressed polarization-induced field, and thus the improved electron-hole wave-function overlap, in the step QW.

  16. Significant internal quantum efficiency enhancement of GaN/AlGaN multiple quantum wells emitting at ~350 nm via step quantum well structure design

    Wu, Feng

    2017-05-03

    Significant internal quantum efficiency (IQE) enhancement of GaN/AlGaN multiple quantum wells (MQWs) emitting at ~350 nm was achieved via a step quantum well (QW) structure design. The MQW structures were grown on AlGaN/AlN/sapphire templates by metal-organic chemical vapor deposition (MOCVD). High resolution x-ray diffraction (HR-XRD) and scanning transmission electron microscopy (STEM) were performed, showing sharp interfaces in the MQWs. Weak-beam dark-field imaging was conducted, indicating a similar dislocation density in the investigated MQW samples. The IQE of the GaN/AlGaN MQWs was estimated by temperature dependent photoluminescence (TDPL). An IQE enhancement of about two times was observed for the GaN/AlGaN step QW structure, compared with the conventional QW structure. Based on theoretical calculation, this IQE enhancement was attributed to the suppressed polarization-induced field, and thus the improved electron-hole wave-function overlap, in the step QW.

  17. Feasibility study design and methods for a home-based, square-stepping exercise program among older adults with multiple sclerosis: The SSE-MS project.

    Sebastião, Emerson; McAuley, Edward; Shigematsu, Ryosuke; Motl, Robert W

    2017-09-01

    We propose a randomized controlled trial (RCT) examining the feasibility of square-stepping exercise (SSE) delivered as a home-based program for older adults with multiple sclerosis (MS). We will assess feasibility in the four domains of process, resources, management and scientific outcomes. The trial will recruit older adults (aged 60 years and older) with mild-to-moderate MS-related disability who will be randomized into intervention or attention control conditions. Participants will complete assessments before and after completion of the conditions delivered over a 12-week period. Participants in the intervention group will have biweekly meetings with an exercise trainer in the Exercise Neuroscience Research Laboratory and receive verbal and visual instruction on step patterns for the SSE program. Participants will receive a mat for home-based practice of the step patterns, an instruction manual, and a logbook and pedometer for monitoring compliance. Compliance will be further monitored through weekly scheduled Skype calls. This feasibility study will inform future phase II and III RCTs that determine the actual efficacy and effectiveness of a home-based exercise program for older adults with MS.

  18. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential application of the SMART approach across different instrumental platforms. The approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components, independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
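    The core selection step, picking the CE that maximizes the extracted-ion intensity across the serial stepped scans, reduces to an argmax; a real SMART workflow would of course operate on vendor data and may interpolate around the maximum.

```python
import numpy as np

def optimal_ce(ce_values, xic_intensities):
    """Pick the collision energy giving the largest extracted-ion intensity
    for one precursor/product ion pair across the stepped MS(All) scans.

    ce_values: CE of each serial scan; xic_intensities: matching XIC peak areas.
    """
    return ce_values[int(np.argmax(xic_intensities))]

# e.g. optimal_ce([10, 20, 30, 40, 50], [1e3, 8e3, 2e4, 1.5e4, 6e3]) -> 30
```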

  19. GUESS-ing polygenic associations with multiple phenotypes using a GPU-based evolutionary stochastic search algorithm.

    Leonardo Bottolo

    Genome-wide association studies (GWAS) yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS, where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex Linkage Disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and study the genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175) when compared with the largest published meta-GWAS (n > 100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with the TG-APOB and LIPC with the TG-HDL phenotypic groups, which were overlooked in the larger meta-GWAS and not revealed by competing approaches, associations that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify

  20. Conservative two-step procedure including uterine artery embolization with embosphere and surgical myomectomy for the treatment of multiple fibroids: Preliminary experience

    Malartic, Cécile; Morel, Olivier; Fargeaudou, Yann; Le Dref, Olivier; Fazel, Afchine; Barranger, Emmanuel; Soyer, Philippe

    2012-01-01

    Objective: To evaluate the feasibility and safety of combined uterine artery embolization (UAE) using embosphere and surgical myomectomy as an alternative to radical hysterectomy in premenopausal women with multiple fibroids. Materials and methods: The mid-term clinical outcome (mean, 25 months) of 12 premenopausal women (mean age, 38 years) with multiple and large symptomatic fibroids, who desired to retain their uterus and who were treated using combined UAE and surgical myomectomy, was retrospectively analyzed. In all women, UAE alone was contraindicated because of large (>10 cm) or subserosal or submucosal fibroids, and myomectomy alone was contraindicated because of too many (>10) fibroids. Results: UAE and surgical myomectomy were successfully performed in all women. Myomectomy was performed using laparoscopy (n = 6), open laparotomy (n = 3), hysteroscopy (n = 2), or laparoscopy and hysteroscopy (n = 1). The mean serum hemoglobin drop was 0.97 g/dL and no blood transfusion was needed. No immediate complications were observed and all women reported resumption of normal menses. During a mean follow-up period of 25 months (range, 14–37 months), complete resolution of the initial symptoms along with a decrease in uterine volume (mean, 48%) was observed in all women. No further hysterectomy was required in any woman. Conclusion: In premenopausal women with multiple fibroids, the two-step procedure is a safe and effective alternative to radical hysterectomy which allows preserving the uterus. Further prospective studies, however, should be done to determine the actual benefit of this combined approach on the incidence of subsequent pregnancies.

  1. An investigation of multidisciplinary complex health care interventions - steps towards an integrative treatment model in the rehabilitation of People with Multiple Sclerosis

    Skovgaard, Lasse

    2012-04-01

    Background: The Danish Multiple Sclerosis Society initiated a large-scale bridge-building and integrative treatment project, which took place from 2004–2010 at a specialized Multiple Sclerosis (MS) hospital. In this project, a team of five conventional health care practitioners and five alternative practitioners was set up to work together in developing and offering individualized treatments to 200 people with MS. The purpose of this paper is to present results from the six-year treatment collaboration process regarding the development of an integrative treatment model. Discussion: The collaborative work towards an integrative treatment model for people with MS involved six steps: 1) working with an initial model; 2) unfolding the different treatment philosophies; 3) discussing the elements of the Intervention-Mechanism-Context-Outcome scheme (the IMCO scheme); 4) phrasing the common assumptions for an integrative MS program theory; 5) developing the integrative MS program theory; and 6) building the integrative MS treatment model. The model includes important elements of the different treatment philosophies represented in the team and thereby describes a common understanding of the complexity of the courses of treatment. Summary: An integrative team of practitioners has developed an integrative model for combined treatments of people with Multiple Sclerosis. The model unites different treatment philosophies and focuses on process-oriented factors and the strengthening of the patients’ resources and competences on a physical, an emotional and a cognitive level.

  2. Optimizing multiple sequence alignments using a genetic algorithm based on three objectives: structural information, non-gaps percentage and totally conserved columns.

    Ortuño, Francisco M; Valenzuela, Olga; Rojas, Fernando; Pomares, Hector; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio

    2013-09-01

    Multiple sequence alignments (MSAs) are widely used in bioinformatics to carry out other tasks such as structure prediction, biological function analysis and phylogenetic modeling. However, current tools usually provide partially optimal alignments, as each one is focused on specific biological features. Thus, the same set of sequences can produce different alignments, above all when the sequences are less similar. Consequently, researchers and biologists do not agree about which is the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores including further biological features. Among them, 3D structures are increasingly being used to evaluate alignments. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluate more distant relationships between sequences. The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, aims to jointly optimize three objectives: STRIKE score, non-gaps percentage and totally conserved columns. It was significantly assessed on the BAliBASE benchmark according to the Kruskal-Wallis test (P < 0.01). The algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), with the advantage of being able to use fewer structures. Structural information is included within the objective function to evaluate the obtained alignments more accurately. The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip.
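    Two of the three objectives are directly computable from an alignment (the STRIKE score additionally needs 3D structural data); a minimal sketch:

```python
def non_gaps_percentage(aln):
    """aln: list of equal-length aligned sequences with '-' gaps."""
    total = sum(len(s) for s in aln)
    gaps = sum(s.count('-') for s in aln)
    return 100.0 * (total - gaps) / total

def totally_conserved_columns(aln):
    """Number of columns where every sequence shows the same residue."""
    return sum(len(set(col)) == 1 and col[0] != '-' for col in zip(*aln))

# e.g. totally_conserved_columns(["AC-G", "ACTG", "ACAG"]) -> 3
```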

  3. Estimation of perceptible water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the perceptible water vapor of the atmosphere. Various algorithms, such as artificial neural network, support vector machine and multiple linear regression, are used to predict perceptible water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.

  4. An integrative computational framework based on a two-step random forest algorithm improves prediction of zinc-binding sites in proteins.

    Cheng Zheng

    Zinc-binding proteins are the most abundant metalloproteins in the Protein Data Bank, where the zinc ions usually have catalytic, regulatory or structural roles critical for the function of the protein. Accurate prediction of zinc-binding sites is not only useful for the inference of protein function but also important for the prediction of 3D structure. Here, we present a new integrative framework that combines multiple sequence and structural properties and graph-theoretic network features, followed by an efficient feature selection, to improve prediction of zinc-binding sites. We investigate what information can be retrieved from the sequence, structure and network levels that is relevant to zinc-binding site prediction. We perform a two-step feature selection using random forest to remove redundant features and quantify the relative importance of the retrieved features. Benchmarking on a high-quality structural dataset containing 1,103 protein chains and 484 zinc-binding residues, our method achieved >80% recall at a precision of 75% for the zinc-binding residues Cys, His, Glu and Asp on 5-fold cross-validation tests, which is a 10%-28% higher recall at the same 75% precision compared to SitePredict and zincfinder at the residue level using the same dataset. The independent test also indicates that our method achieved recalls of 0.790 and 0.759 at the residue and protein levels, respectively, which is better performance than the other two methods. Moreover, the AUC (the Area Under the Curve) and AURPC (the Area Under the Recall-Precision Curve) of our method are also better than those of the other two methods. Our method can not only be applied to large-scale identification of zinc-binding sites when structural information of the target is available, but also gives valuable insights into important features arising from different levels that collectively characterize the zinc-binding sites. The scripts and datasets are available at http://protein.cau.edu.cn/zincidentifier/.
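    A minimal sketch of a two-step random forest feature selection, reduced to its core idea (rank with a first forest, refit on the retained subset); the cut-off and hyperparameters are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def two_step_rf(X, y, feature_names, keep=30, seed=0):
    """Hypothetical two-step random forest: rank features with a first forest,
    drop the redundant/uninformative tail, then refit on the retained subset."""
    rf1 = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X, y)
    order = np.argsort(rf1.feature_importances_)[::-1][:keep]   # top-k features
    rf2 = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X[:, order], y)
    return rf2, [feature_names[i] for i in order], rf1.feature_importances_[order]
```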

  5. Effects of multiple-step thermal ageing treatment on the hardness characteristics of A356.0-type Al-Si-Mg alloy

    Abdulwahab, M.; Madugu, I.A.; Yaro, S.A.; Hassan, S.B.; Popoola, A.P.I.

    2011-01-01

    This work outlines the hardness characteristics of thermally aged, high-chromium, sodium-modified A356.0-type Al-Si-Mg alloy using the multiple-step thermal ageing treatment (MSTAT) approach. This novel approach consists of double thermal ageing (DTAT) and single thermal ageing treatment (STAT). The investigation also includes the development of a new temperature-compensated-time parameter, P, for the studied alloy at the different ageing temperatures and times considered. The results obtained with the DTAT developed for the A356.0-type Al-Si-Mg alloy showed an improvement in the precipitation hardening (PH) ability and hardness characteristics as compared to the conventional STAT temper. The observations were evidenced by the X-ray diffractometry (XRD) pattern indicating the possible strengthening phases. Equally, the hardness behavior was correlated with the microstructures using an optical microscope (OPM) and a scanning electron microscope equipped with an energy dispersive spectroscope (SEM-EDS).

  6. Fuzzy Logic-Based Perturb and Observe Algorithm with Variable Step of a Reference Voltage for Solar Permanent Magnet Synchronous Motor Drive System Fed by Direct-Connected Photovoltaic Array

    Mohamed Redha Rezoug

    2018-02-01

    Photovoltaic pumping is considered to be the most used application amongst photovoltaic energy applications in isolated sites. This technology is developing slowly to allow photovoltaic systems to operate at their maximum power. This work introduces a modified algorithm of the perturb and observe (P&O) type to overcome the limitations of the conventional P&O algorithm and increase its overall performance under abrupt weather condition changes. The most significant restriction of the conventional P&O algorithm is the difficulty of choosing the variable step of the reference voltage as a good compromise between swift dynamic response and stability in the steady state. To adjust the reference voltage step according to the location of the operating point relative to the maximum power point (MPP), a fuzzy logic controller (FLC) block adapted to the P&O algorithm is used. This improves the tracking pace and eliminates the steady-state oscillation. The suggested method was evaluated by simulation using MATLAB/SimPowerSystems blocks and compared to the classical P&O under different irradiation levels. The results obtained show the effectiveness of the proposed technique and its capacity for practical and efficient tracking of maximum power.
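    The variable-step idea can be sketched without the fuzzy machinery by letting |dP/dV| scale the perturbation, small near the MPP and large far from it; the gains below are placeholders for what the FLC would supply.

```python
def variable_step_po(p_now, p_prev, v_now, v_prev, v_ref, k_small=0.2, k_large=2.0):
    """Sketch of variable-step perturb-and-observe: perturb the reference
    voltage in the direction that increased power, with a step scaled by the
    power-voltage slope (a crude stand-in for the fuzzy controller)."""
    dp, dv = p_now - p_prev, v_now - v_prev
    slope = dp / dv if dv != 0 else 0.0
    step = min(k_large, k_small + abs(slope))    # small near the MPP, large far away
    direction = 1.0 if dp * dv > 0 else -1.0     # climb toward the power peak
    return v_ref + direction * step
```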

  7. Rapid gas chromatography with flame photometric detection of multiple organophosphorus pesticides in Salvia miltiorrhizae after ultrasonication assisted one-step extraction.

    Zhang, Shanshan; Liu, Xiaofei; Qin, Jia'an; Yang, Meihua; Zhao, Hongzheng; Wang, Yong; Guo, Weiying; Ma, Zhijie; Kong, Weijun

    2017-11-15

    A simple and rapid gas chromatography-flame photometric detection (GC-FPD) method was developed for the determination of 12 organophosphorus pesticides (OPPs) in Salvia miltiorrhizae by using ultrasonication assisted one-step extraction (USAE) without any clean-up steps. Crucial parameters such as the type of extraction solvent were optimized to improve the method performance for trace analysis. Clean-up steps could be omitted, as no interferences were detected in the GC-FPD chromatograms. Under the optimized conditions, the limits of detection (LODs) and quantitation (LOQs) for all pesticides were in the range of 0.001-0.002 mg/kg and 0.002-0.01 mg/kg, respectively, all below the suggested regulatory maximum residue limits. RSDs for method precision (intra- and inter-day variations) were lower than 6.8%, in accordance with international regulations. Average recovery rates for all pesticides at three fortification levels (0.5, 1.0 and 5.0 mg/kg) were in the range of 71.2-101.0% with acceptable relative standard deviations (RSDs). Only one pesticide (dimethoate) out of the 12 targets was simultaneously detected in four samples, at concentrations of 0.016-0.02 mg/kg. Dichlorvos and omethoate were found in the same sample from Sichuan province at 0.004 and 0.027 mg/kg, respectively. Malathion and monocrotophos were determined in two other samples at 0.014 and 0.028 mg/kg, respectively. All the positive samples were confirmed by LC-MS/MS. The simple, reliable and rapid USAE-GC-FPD method, with many advantages over traditional techniques, would be preferred for trace analysis of multiple pesticides in more complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. New formulation and branch-and-cut algorithm for the pickup and delivery traveling salesman problem with multiple stacks

    Sampaio Oliveira, A.H.; Urrutia, S.

    2017-01-01

    In this paper, we consider the pickup and delivery traveling salesman problem with multiple stacks in which a single vehicle must serve a set of customer requests defined by a pair of pickup and delivery destinations of an item. The vehicle contains a fixed number of stacks, where each item is

  9. Application of single-step genomic best linear unbiased prediction with a multiple-lactation random regression test-day model for Japanese Holsteins.

    Baba, Toshimi; Gotoh, Yusaku; Yamaguchi, Satoshi; Nakagawa, Satoshi; Abe, Hayato; Masuda, Yutaka; Kawahara, Takayoshi

    2017-08-01

    This study aimed to evaluate the validation reliability of single-step genomic best linear unbiased prediction (ssGBLUP) with a multiple-lactation random regression test-day model and investigate the effect of adding genotyped cows on the reliability. Two data sets of test-day records from the first three lactations were used: full data from February 1975 to December 2015 (60 850 534 records from 2 853 810 cows) and reduced data cut off in 2011 (53 091 066 records from 2 502 307 cows). We used marker genotypes of 4480 bulls and 608 cows. Genomic enhanced breeding values (GEBV) of 305-day milk yield in all the lactations were estimated for at least 535 young bulls using two marker data sets: bull genotypes only, and both bull and cow genotypes. The realized reliability (R2) from linear regression analysis was used as an indicator of validation reliability. Using only genotyped bulls, R2 ranged from 0.41 to 0.46 and was always higher than that of parent averages. Very similar R2 values were observed when genotyped cows were added. An application of ssGBLUP to a multiple-lactation random regression model is feasible, and adding a limited number of genotyped cows has no significant effect on the reliability of GEBV for genotyped bulls. © 2016 Japanese Society of Animal Science.

  10. Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks

    Sorgaard, Duane

    2004-01-01

    .... A time-to-location algorithm can successfully resolve a geographic location of a computer node using only latency information from known sites and mathematically calculating the Euclidean distance...

  11. A clinical decision support system algorithm for intravenous to oral antibiotic switch therapy: validity, clinical relevance and usefulness in a three-step evaluation study.

    Akhloufi, H; Hulscher, M; van der Hoeven, C P; Prins, J M; van der Sijs, H; Melles, D C; Verbon, A

    2018-04-26

    To evaluate a clinical decision support system (CDSS) based on consensus-based intravenous to oral switch criteria, which identifies intravenous to oral switch candidates. A three-step evaluation study of a stand-alone CDSS with electronic health record interoperability was performed at the Erasmus University Medical Centre in the Netherlands. During the first step, we performed a technical validation. During the second step, we determined the sensitivity, specificity, negative predictive value and positive predictive value in a retrospective cohort of all hospitalized adult patients starting at least one therapeutic antibacterial drug between 1 and 16 May 2013. ICU, paediatric and psychiatric wards were excluded. During the last step, the clinical relevance and usefulness were prospectively assessed through reports to infectious disease specialists. An alert was considered clinically relevant if antibiotics could be discontinued or switched to oral therapy at the time of the alert. During the first step, one technical error was found. The second step yielded a positive predictive value of 76.6% and a negative predictive value of 99.1%. The third step showed that alerts were clinically relevant in 53.5% of patients. For 43.4%, it had already been decided to discontinue or switch the intravenous antibiotics by the treating physician. In 10.1%, the alert resulted in advice to change antibiotic policy and was considered useful. This prospective cohort study shows that the alerts were clinically relevant in >50% (n = 449) and useful in 10% (n = 85). The CDSS needs to be evaluated in hospitals with varying activity of infectious disease consultancy services, as this probably influences usefulness.

  12. A Distributed Flow Rate Control Algorithm for Networked Agent System with Multiple Coding Rates to Optimize Multimedia Data Transmission

    Shuai Zeng

    2013-01-01

    With the development of wireless technologies, mobile communication is applied more and more extensively in various walks of life. The social network of both fixed and mobile users can be seen as a networked agent system. At present, many kinds of devices and access network technologies are widely used. Different users in this networked agent system may need multimedia data at different coding rates due to their heterogeneous demands. This paper proposes a distributed flow rate control algorithm to optimize multimedia data transmission in a networked agent system where various coding rates coexist. In the proposed algorithm, the transmission paths and upload bandwidth for data at different coding rates between the source node and the fixed and mobile nodes are appropriately arranged and controlled. On the one hand, this algorithm can provide user nodes with data at differentiated coding rates and corresponding flow rates. On the other hand, it networks the different coding rate data and user nodes, which realizes the sharing of the upload bandwidth of user nodes that require data at different coding rates. The study models the proposed algorithm mathematically and compares the system that adopts the proposed algorithm with the existing system, based on simulation experiments and mathematical analysis. The results show that the system adopting the proposed algorithm achieves higher upload bandwidth utilization of user nodes and lower upload bandwidth consumption of the source node.

  13. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)

    2014-02-18

    This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is present when all frequency components are in uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
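
    A minimal sketch of the band/sign-coherence idea described above, assuming simple rectangular FFT-mask sub-bands; the band edges, sampling rate and synthetic trace are illustrative, and the authors' actual band selection and reconstruction will differ in detail.

```python
import numpy as np

def sign_coherence_profile(ascan, fs, bands):
    """Split-spectrum-style clutter reduction: reconstruct the A-scan in several
    frequency bands and keep samples where every band agrees in sign.
    `bands` is a list of (f_lo, f_hi) tuples in Hz; all values are illustrative."""
    n = len(ascan)
    spectrum = np.fft.rfft(ascan)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    recon = []
    for f_lo, f_hi in bands:
        mask = (freqs >= f_lo) & (freqs <= f_hi)
        recon.append(np.fft.irfft(spectrum * mask, n))
    recon = np.array(recon)                       # shape: (n_bands, n_samples)
    uniform = np.all(recon > 0, axis=0) | np.all(recon < 0, axis=0)
    # Average of the reconstructions, gated by sign coherence, as a
    # probability-like profile of potential defect positions
    return np.mean(recon, axis=0) * uniform

# Example: ascending sub-bands around 5 MHz on a synthetic 100 MHz-sampled trace
fs = 100e6
t = np.arange(1024) / fs
ascan = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-6) / 1e-6) ** 2)
ascan += 0.3 * np.random.default_rng(0).standard_normal(t.size)  # grain-like clutter
bands = [(3e6 + i * 0.5e6, 5e6 + i * 0.5e6) for i in range(5)]
profile = sign_coherence_profile(ascan, fs, bands)
```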

  14. The PARAFAC-MUSIC Algorithm for DOA Estimation with Doppler Frequency in a MIMO Radar System

    Nan Wang

    2014-01-01

    Full Text Available The PARAFAC-MUSIC algorithm is proposed in this paper to estimate the direction-of-arrival (DOA) of targets with Doppler frequency in a monostatic MIMO radar system. To estimate the Doppler frequency, the PARAFAC (parallel factor) algorithm is first applied, and after compensation of the Doppler frequency, the MUSIC (multiple signal classification) algorithm is used to estimate the DOA. By these two steps, the DOA of moving targets can be estimated successfully. Simulation results show that the proposed PARAFAC-MUSIC algorithm has a higher accuracy than the PARAFAC algorithm and the MUSIC algorithm in DOA estimation.
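
    The second step, the classic narrowband MUSIC pseudospectrum for a uniform linear array, can be sketched as follows; the array geometry and angle grid are illustrative, and the PARAFAC Doppler compensation is assumed to have already been applied to the snapshots.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Classic MUSIC DOA estimator for a uniform linear array.
    X: (n_antennas, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]               # noise subspace
    p = np.empty(angles.size)
    for i, theta in enumerate(np.deg2rad(angles)):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))  # steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)       # pseudospectrum
    return angles, p

# Peaks of the returned pseudospectrum indicate the DOAs of the targets.
```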

  15. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1). This paper develops multiple sub-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications of sparse image recovery and obtain good results by comparison with related work.
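
    SAITA's exact update is not reproduced here; the sketch below shows only the generic iteratively reweighted soft-thresholding idea for an Lp penalty with a single dictionary, the simpler scheme the paper builds on. Dimensions, lam and eps are illustrative.

```python
import numpy as np

def reweighted_lp_ista(A, y, lam=0.1, p=0.5, n_iter=300, eps=1e-3):
    """Iteratively reweighted soft thresholding for min ||Ax-y||^2 + lam*||x||_p^p.
    Each sweep solves a weighted L1 step whose weights w_i = p/(|x_i|+eps)^(1-p)
    majorize the Lp penalty at the previous iterate."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])                  # first sweep is a plain L1 step
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
        w = p / (np.abs(x) + eps) ** (1.0 - p)   # reweight for the next sweep
    return x

# Toy compressed-sensing problem
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x_hat = reweighted_lp_ista(A, A @ x_true, lam=0.01, p=0.5)
```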

  16. A Cooperative Search and Coverage Algorithm with Controllable Revisit and Connectivity Maintenance for Multiple Unmanned Aerial Vehicles

    Zhong Liu

    2018-05-01

    Full Text Available In this paper, we mainly study a cooperative search and coverage algorithm for a given bounded rectangular region, which contains several unknown stationary targets, by a team of unmanned aerial vehicles (UAVs) with non-ideal sensors and limited communication ranges. Our goal is to minimize the search time while gathering more information about the environment and finding more targets. For this purpose, a novel cooperative search and coverage algorithm with a controllable revisit mechanism is presented. Firstly, as the representation of the environment, cognitive maps comprising the target probability map (TPM), the uncertain map (UM), and the digital pheromone map (DPM) are constructed. We also design a distributed update and fusion scheme for the cognitive maps. This scheme guarantees that all of the cognitive maps converge to the same map, which reflects the targets' true existence or absence in each cell of the search region. Secondly, we develop a controllable revisit mechanism based on the DPM. This mechanism concentrates the UAVs on revisiting sub-areas that have a large target probability or high uncertainty. Thirdly, in the framework of distributed receding horizon optimization, a path planning algorithm for multi-UAV cooperative search and coverage is designed. In the path planning algorithm, the movement of the UAVs is restricted by potential fields to meet the requirements of collision avoidance and connectivity maintenance. Moreover, using a minimum spanning tree (MST) topology optimization strategy, we obtain a tradeoff between search coverage enhancement and connectivity maintenance. The feasibility of the proposed algorithm is demonstrated by comparison simulations analyzing the effects of the controllable revisit mechanism and the connectivity maintenance scheme. The Monte Carlo method is employed to validate the influence of the number of UAVs, the sensing radius
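
    A minimal sketch of how a target probability map of the kind described above can be updated from non-ideal sensor readings, assuming cell-wise Bayesian updates; the detection and false-alarm probabilities and grid are illustrative, and the paper's distributed fusion across UAVs is omitted.

```python
import numpy as np

def update_tpm(tpm, observed, detection, pd=0.9, pf=0.1):
    """Bayesian update of a target probability map (TPM) for cells a UAV sensed.
    pd/pf: assumed detection and false-alarm probabilities of the sensor."""
    tpm = tpm.copy()
    cells = np.where(observed)
    prior = tpm[cells]
    like_t = np.where(detection[cells], pd, 1.0 - pd)   # P(reading | target)
    like_n = np.where(detection[cells], pf, 1.0 - pf)   # P(reading | no target)
    tpm[cells] = like_t * prior / (like_t * prior + like_n * (1.0 - prior))
    return tpm

# 20x20 search region with a uniform prior of 0.05 per cell
tpm = np.full((20, 20), 0.05)
observed = np.zeros((20, 20), bool); observed[5:8, 5:8] = True   # sensor footprint
detection = np.zeros((20, 20), bool); detection[6, 6] = True     # one "hit"
tpm = update_tpm(tpm, observed, detection)
```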

  17. Towards heterogeneous robot team path planning: acquisition of multiple routes with a modified spline-based algorithm

    Lavrenov Roman

    2017-01-01

    Full Text Available Our research focuses on the operation of a heterogeneous robotic group that carries out point-to-point navigation in a GPS-denied dynamic environment, applying a combined local and global planning approach. In this paper, we introduce a homotopy-based high-level planner, which uses a modified spline-based path-planning algorithm. The algorithm utilizes a Voronoi graph for global planning and a set of optimization criteria for local improvements of selected paths. The simulation was implemented in the Matlab environment.

  18. Paper 5: Surveillance of multiple congenital anomalies: implementation of a computer algorithm in European registers for classification of cases

    Garne, Ester; Dolk, Helen; Loane, Maria

    2011-01-01

    Surveillance of multiple congenital anomalies is considered to be more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. Current literature proposes the manual review of all cases for classification into isolated or multiple congenital anomal...

  19. A Synthetic Algorithm for Tracking a Moving Object in a Multiple-Dynamic Obstacles Environment Based on Kinematically Planar Redundant Manipulators

    Hongzhe Jin

    2017-01-01

    Full Text Available This paper presents a synthetic algorithm for tracking a moving object in a multiple-dynamic-obstacles environment based on kinematically planar manipulators. By observing the motions of the object and obstacles, a spline filter associated with polynomial fitting is utilized to predict their moving paths for a period of time in the future. Several feasible paths for the manipulator in Cartesian space can be planned according to the predicted moving paths and the defined feasibility criterion. The shortest one among these feasible paths is selected as the optimized path. Then the real-time path along the optimized path is planned for the manipulator to track the moving object in real time. To improve the convergence rate of tracking, a virtual controller based on a PD controller is designed to adaptively adjust the real-time path. In the process of tracking, the null space of the inverse kinematics and the local rotation coordinate method (LRCM) are utilized for the arms and the end-effector to avoid obstacles, respectively. Finally, the moving object in a multiple-dynamic-obstacles environment is tracked by updating the joint angles of the manipulator in real time according to the iterative method. Simulation results show that the proposed algorithm is feasible for tracking a moving object in a multiple-dynamic-obstacles environment.
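
    The prediction step can be illustrated with plain polynomial fitting; the sketch below omits the spline-filter smoothing and uses illustrative observation data and fit settings.

```python
import numpy as np

def predict_path(times, xs, ys, horizon, steps=20, degree=2):
    """Fit low-order polynomials to the recently observed positions of an object
    (or obstacle) and extrapolate its path `horizon` seconds ahead."""
    px = np.polyfit(times, xs, degree)
    py = np.polyfit(times, ys, degree)
    t_future = np.linspace(times[-1], times[-1] + horizon, steps)
    return t_future, np.polyval(px, t_future), np.polyval(py, t_future)

# Object observed for 1 s at 20 Hz, predicted 0.5 s ahead
t = np.linspace(0, 1, 20)
x, y = 0.5 * t + 0.1 * t**2, 0.3 * t
t_f, x_f, y_f = predict_path(t, x, y, horizon=0.5)
```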

  20. Reactive Collision Avoidance Algorithm

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
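
    A toy version of the offline parameter search over bang-off-bang profiles, for a single axis: the displacement formula follows directly from the accelerate/coast/decelerate profile, while the acceleration limit, target offset and fuel tie-break below are illustrative assumptions, not the RCA's actual tabulation.

```python
import numpy as np

def bang_off_bang_displacement(a, t1, t2):
    """Displacement of a profile that thrusts at +a for t1, coasts for t2,
    then thrusts at -a for t1 (starting and ending at rest)."""
    v = a * t1                                  # speed after the first burn
    return 0.5 * a * t1**2 + v * t2 + (v * t1 - 0.5 * a * t1**2)

# Grid search over the two parameters, emulating the offline look-up table:
a_max, target = 0.1, 25.0                       # m/s^2 and metres; illustrative
best_t1, best_t2 = min(
    ((t1, t2) for t1 in np.linspace(0.1, 30, 60)
              for t2 in np.linspace(0.0, 60, 61)),
    key=lambda p: (abs(bang_off_bang_displacement(a_max, *p) - target),
                   2 * p[0] * a_max))           # prefer low total delta-v
```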

  1. Algorithm for the early diagnosis and treatment of patients with cross reactive immunologic material-negative classic infantile pompe disease: a step towards improving the efficacy of ERT.

    Suhrad G Banugaria

    Full Text Available OBJECTIVE: Although enzyme replacement therapy (ERT is a highly effective therapy, CRIM-negative (CN infantile Pompe disease (IPD patients typically mount a strong immune response which abrogates the efficacy of ERT, resulting in clinical decline and death. This study was designed to demonstrate that immune tolerance induction (ITI prevents or diminishes the development of antibody titers, resulting in a better clinical outcome compared to CN IPD patients treated with ERT monotherapy. METHODS: We evaluated the safety, efficacy and feasibility of a clinical algorithm designed to accurately identify CN IPD patients and minimize delays between CRIM status determination and initiation of an ITI regimen (combination of rituximab, methotrexate and IVIG concurrent with ERT. Clinical and laboratory data including measures of efficacy analysis for response to ERT were analyzed and compared to CN IPD patients treated with ERT monotherapy. RESULTS: Seven CN IPD patients were identified and started on the ITI regimen concurrent with ERT. Median time from diagnosis of CN status to commencement of ERT and ITI was 0.5 months (range: 0.1-1.6 months. At baseline, all patients had significant cardiomyopathy and all but one required respiratory support. The ITI regimen was safely tolerated in all seven cases. Four patients never seroconverted and remained antibody-free. One patient died from respiratory failure. Two patients required another course of the ITI regimen. In addition to their clinical improvement, the antibody titers observed in these patients were much lower than those seen in ERT monotherapy treated CN patients. CONCLUSIONS: The ITI regimen appears safe and efficacious and holds promise in altering the natural history of CN IPD by increasing ERT efficacy. An algorithm such as this substantiates the benefits of accelerated diagnosis and management of CN IPD patients, thus, further supporting the importance of early identification and treatment

  2. Combinatorial algorithms

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  3. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    Ambika Ramamoorthy

    2016-01-01

    Full Text Available Power grid becomes smarter nowadays along with technological development. The benefits of smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also considered and the results analyzed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  4. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    Power grid becomes smarter nowadays along with technological development. The benefits of smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Amongst all the renewable sources, solar power takes the prominent position due to its availability in abundance. The proposed methodology presented in this paper is aimed at minimizing network power losses and at improving the voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on the system losses in a transmission system. In this paper, combined nature inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In a first step, the best size of DG is determined through PSO metaheuristics and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods and the results are compared. In a second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and P and Q both) are also considered and the results analyzed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.
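
    A minimal generic particle swarm optimizer of the kind used in the sizing step above; the inertia and acceleration coefficients are common textbook values, and the toy loss function merely stands in for a load-flow loss evaluation.

```python
import numpy as np

def pso(loss, lb, ub, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO; `loss` maps a DG-size vector to a scalar system loss.
    Bounds and coefficients are illustrative, not the paper's settings."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lb, ub)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy stand-in for the loss of three DG sizes (MW)
toy_loss = lambda s: np.sum((s - np.array([1.2, 0.8, 1.5])) ** 2)
best_sizes, best_loss = pso(toy_loss, lb=np.zeros(3), ub=np.full(3, 3.0))
```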

  5. Iterative solution of multiple radiation and scattering problems in structural acoustics using the BL-QMR algorithm

    Malhotra, M. [Stanford Univ., CA (United States)

    1996-12-31

    Finite-element discretizations of time-harmonic acoustic wave problems in exterior domains result in large sparse systems of linear equations with complex symmetric coefficient matrices. In many situations, these matrix problems need to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. For instance, multiple right-hand sides arise in radiation problems due to multiple load cases, and also in scattering problems when multiple angles of incidence of an incoming plane wave need to be considered. In this talk, we discuss the iterative solution of multiple linear systems arising in radiation and scattering problems in structural acoustics by means of a complex symmetric variant of the BL-QMR method. First, we summarize the governing partial differential equations for time-harmonic structural acoustics, the finite-element discretization of these equations, and the resulting complex symmetric matrix problem. Next, we sketch the special version of the BL-QMR method that exploits complex symmetry, and we describe the preconditioners we have used in conjunction with BL-QMR. Finally, we report some typical results of our extensive numerical tests to illustrate the typical convergence behavior of the BL-QMR method for multiple radiation and scattering problems in structural acoustics, to identify appropriate preconditioners for these problems, and to demonstrate the importance of deflation in block Krylov-subspace methods. Our numerical results show that the multiple systems arising in structural acoustics can be solved very efficiently with the preconditioned BL-QMR method. In fact, for multiple systems with up to 40 and more different right-hand sides we get consistent and significant speed-ups over solving the systems individually.

  6. Performance Analyses of IDEAL Algorithm on Highly Skewed Grid System

    Dongliang Sun

    2014-03-01

    Full Text Available IDEAL is an efficient segregated algorithm for fluid flow and heat transfer problems. This algorithm has now been extended to 3D nonorthogonal curvilinear coordinates. Highly skewed grids in nonorthogonal curvilinear coordinates can decrease the convergence rate and deteriorate calculation stability. In this study, the feasibility of the IDEAL algorithm on highly skewed grid systems is analyzed by investigating the lid-driven flow in an inclined cavity. It can be concluded that the IDEAL algorithm is more robust and more efficient than the traditional SIMPLER algorithm, especially for highly skewed and fine grid systems. For example, at θ = 5° and a grid number of 70 × 70 × 70, the convergence rate of the IDEAL algorithm is 6.3 times faster than that of the SIMPLER algorithm, and the IDEAL algorithm can converge at almost any time-step multiple.

  7. Cost-effectiveness of a modified two-step algorithm using a combined glutamate dehydrogenase/toxin enzyme immunoassay and real-time PCR for the diagnosis of Clostridium difficile infection.

    Vasoo, Shawn; Stevens, Jane; Portillo, Lena; Barza, Ruby; Schejbal, Debra; Wu, May May; Chancey, Christina; Singh, Kamaljit

    2014-02-01

    The analytical performance and cost-effectiveness of the Wampole Toxin A/B EIA, the C. Diff. Quik Chek Complete (CdQCC) (a combined glutamate dehydrogenase antigen/toxin enzyme immunoassay), two RT-PCR assays (Progastro Cd and BD GeneOhm) and a modified two-step algorithm using the CdQCC reflexed to RT-PCR for indeterminate results were compared. The sensitivities of the Wampole Toxin A/B EIA, CdQCC (GDH antigen), BD GeneOhm and Progastro Cd RT-PCR were 85.4%, 95.8%, 100% and 93.8%, respectively. The algorithm provided rapid results for 86% of specimens, and the remaining indeterminate results were resolved by RT-PCR, offering the best balance of sensitivity and cost savings per test (algorithm ∼US$13.50/test versus upfront RT-PCR ∼US$26.00/test).
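
    The decision logic of the two-step algorithm reduces to a few lines; the sketch below is a schematic reading of the abstract (concordant EIA results reported directly, discordant results reflexed to RT-PCR), with `run_pcr` a hypothetical callable. The cost savings follow because only the reflexed minority of specimens (about 14% here) incurs the more expensive PCR.

```python
def cdi_two_step(gdh_positive, toxin_positive, run_pcr):
    """Modified two-step algorithm: concordant GDH/toxin EIA results are
    reported directly; discordant (indeterminate) results reflex to RT-PCR.
    `run_pcr` is a callable returning True/False; names are illustrative."""
    if gdh_positive and toxin_positive:
        return "positive"            # both antigen and toxin detected
    if not gdh_positive and not toxin_positive:
        return "negative"            # concordant negative screen
    return "positive" if run_pcr() else "negative"   # reflex the ~14% discordant
```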

  8. Research on Innovating, Applying Multiple Paths Routing Technique Based on Fuzzy Logic and Genetic Algorithm for Routing Messages in Service - Oriented Routing

    Nguyen Thanh Long

    2015-02-01

    Full Text Available A MANET (short for Mobile Ad-Hoc Network) consists of a set of mobile network nodes whose configuration changes very quickly. In content-based routing, data is transferred from a source node to requesting nodes without relying on destination addresses; it is therefore very flexible and reliable, because the source node does not need to know the destination nodes. If multiple paths satisfying the bandwidth requirement can be found, the original message is split into several smaller messages that are transmitted concurrently on these paths and recombined into the original message at the destination nodes. This makes better use of network resources and yields a higher data transfer rate, load balancing, and failover. Service-oriented routing is inherited from the model of content-based routing (CBR), combined with several advanced techniques such as multicast, multiple-path routing, and genetic algorithms to increase the data rate, together with data encryption to ensure information security. Fuzzy logic is a form of logic that evaluates the accuracy of results based on approximate values of the components involved, making decisions that weigh many factors of relative accuracy supported by experimental or mathematical evidence. This article presents some techniques to support multiple-path routing from one network node to a set of nodes with guaranteed quality of service. These techniques can decrease the network load and congestion and use network resources efficiently.

  9. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
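
    The implicit-explicit splitting that such IMEX Runge-Kutta schemes generalize can be shown with a first-order IMEX Euler step; this illustrates only the splitting idea (stiff term implicit, rest explicit), not the paper's low-storage higher-order schemes, and the test problem is illustrative.

```python
import numpy as np

def imex_euler(u0, lam, f_ex, dt, n_steps):
    """First-order IMEX Euler for u' = lam*u + f_ex(u): the stiff linear term
    is treated implicitly, the remainder explicitly."""
    u = u0
    out = [u]
    for _ in range(n_steps):
        u = (u + dt * f_ex(u)) / (1.0 - dt * lam)  # implicit solve is trivial here
        out.append(u)
    return np.array(out)

# Stiff decay plus a mild nonlinear forcing; dt far above the explicit limit
traj = imex_euler(u0=1.0, lam=-1000.0, f_ex=lambda u: np.sin(u), dt=0.05, n_steps=100)
```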

  10. Algorithmic alternatives

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updates promise to reduce this growth to V^(4/3).
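
    The combination of a Langevin update with a global accept/reject stage survives today as the Metropolis-adjusted Langevin algorithm; a minimal sketch for a generic target density, with an illustrative step size (not a lattice gauge implementation).

```python
import numpy as np

def mala(log_p, grad_log_p, x0, eps=0.1, n_steps=5000, seed=0):
    """Metropolis-adjusted Langevin: a discretized Langevin proposal whose
    step-size error is removed by a global Metropolis accept/reject stage."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    samples = []
    for _ in range(n_steps):
        prop = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)
        # log of the Gaussian proposal densities of the Langevin step
        lq_fwd = -np.sum((prop - x - eps * grad_log_p(x)) ** 2) / (4 * eps)
        lq_rev = -np.sum((x - prop - eps * grad_log_p(prop)) ** 2) / (4 * eps)
        if np.log(rng.random()) < log_p(prop) - log_p(x) + lq_rev - lq_fwd:
            x = prop                          # accept; otherwise keep x
        samples.append(x.copy())
    return np.array(samples)

# Sample a 1D standard Gaussian as a toy "field"
chain = mala(lambda x: -0.5 * x @ x, lambda x: -x, x0=np.zeros(1))
```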

  11. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed with severe occlusions happening frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. To automatically track the skaters and precisely output their trajectories becomes a challenging task in object tracking. We employ the global rink information to compensate camera motion and obtain the global spatial information of skaters, utilize random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to labelling pixels to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  12. A Sequential Circuit-Based IP Watermarking Algorithm for Multiple Scan Chains in Design-for-Test

    C. Wu

    2011-06-01

    Full Text Available In Very Large Scale Integrated circuit (VLSI) design, existing Design-for-Test (DFT) based watermarking techniques usually insert the watermark by reordering scan cells, which causes large resource overhead, low security, and a low coverage rate of watermark detection. A novel scheme is proposed to watermark multiple scan chains in DFT to solve these problems. The proposed scheme adopts the DFT scan test model of VLSI design and uses a Linear Feedback Shift Register (LFSR) for pseudo-random test vector generation. All of the test vectors are shifted into the scan input for the construction of multiple scan chains with minimum correlation. Specific registers in the multiple scan chains are changed by the watermark circuit to watermark the design. The watermark can be effectively detected without interfering with the normal function of the circuit, even after the chip is packaged. The experimental results on several ISCAS benchmarks show that the proposed scheme has lower resource overhead and probability of coincidence and a higher coverage rate of watermark detection compared with the existing methods.
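
    A Fibonacci LFSR of the kind used for pseudo-random test vector generation can be sketched in a few lines; the width, taps and seed below correspond to a standard maximal-length 16-bit polynomial and are illustrative, not the paper's configuration.

```python
def lfsr_vectors(seed, taps, width, count):
    """Fibonacci LFSR yielding pseudo-random test vectors to shift into the scan
    chains. `taps` are feedback bit positions (0 = LSB)."""
    state = seed & ((1 << width) - 1)
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:                 # XOR of the tapped bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

# 16-bit maximal-length polynomial x^16 + x^14 + x^13 + x^11 + 1
for v in lfsr_vectors(seed=0xACE1, taps=(15, 13, 12, 10), width=16, count=4):
    print(f"{v:016b}")
```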

  13. Steps toward a CONUS-wide reanalysis with archived NEXRAD data using National Mosaic and Multisensor Quantitative Precipitation Estimation (NMQ/Q2) algorithms

    Stevens, S. E.; Nelson, B. R.; Langston, C.; Qi, Y.

    2012-12-01

    The National Mosaic and Multisensor QPE (NMQ/Q2) software suite, developed at NOAA's National Severe Storms Laboratory (NSSL) in Norman, OK, addresses a large deficiency in the resolution of currently archived precipitation datasets. Current standards, both radar- and satellite-based, provide for nationwide precipitation data with a spatial resolution of up to 4-5 km, with a temporal resolution as fine as one hour. Efforts are ongoing to process archived NEXRAD data for the period of record (1996 - present), producing a continuous dataset providing precipitation data at a spatial resolution of 1 km, on a timescale of only five minutes. In addition, radar-derived precipitation data are adjusted hourly using a wide variety of automated gauge networks spanning the United States. Applications for such a product range widely, from emergency management and flash flood guidance, to hydrological studies and drought monitoring. Results are presented from a subset of the NEXRAD dataset, providing basic statistics on the distribution of rainrates, relative frequency of precipitation types, and several other variables which demonstrate the variety of output provided by the software. Precipitation data from select case studies are also presented to highlight the increased resolution provided by this reanalysis and the possibilities that arise from the availability of data on such fine scales. A previously completed pilot project and steps toward a nationwide implementation are presented along with proposed strategies for managing and processing such a large dataset. Reprocessing efforts span several institutions in both North Carolina and Oklahoma, and data/software coordination are key in producing a homogeneous record of precipitation to be archived alongside NOAA's other Climate Data Records. Methods are presented for utilizing supercomputing capability in expediting processing, to allow for the iterative nature of a reanalysis effort.

  14. GUESS-ing Polygenic Associations with Multiple Phenotypes Using a GPU-Based Evolutionary Stochastic Search Algorithm

    Bottolo, Leonardo; Chadeau-Hyam, Marc; Hastie, David I

    2013-01-01

    Genome-wide association studies (GWAS) yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phe...... of SORT1 with TG-APOB and LIPC with TG-HDL phenotypic groups, which were overlooked in the larger meta-GWAS and not revealed by competing approaches, associations that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi...

  15. Calibration of the ATLAS $b$-tagging algorithm in $t\bar{t}$ events with high multiplicity of jets

    La Ruffa, Francesco; The ATLAS collaboration

    2017-01-01

    The calibration of the ATLAS $b$-tagging in environments characterised by a high multiplicity of jets is presented. The calibration uses reconstructed $t\bar{t}$ candidate events collected by the ATLAS detector in proton-proton collisions at the LHC with a centre-of-mass energy $\sqrt{s}$ of 13 TeV, with a final state containing one charged lepton, missing transverse momentum and at least four jets. The $b$-tagging efficiencies are measured not only as a function of the most relevant kinematic quantities, such as the transverse momentum or the pseudo-rapidity of the jets, but also as a function of quantities that are sensitive to close-by jet activity. The results extend the regions where data-to-simulation $b$-tagging scale factors are derived when using dilepton $t\bar{t}$ events.

  16. A Multiple-Model Particle Filter Fusion Algorithm for GNSS/DR Slide Error Detection and Compensation

    Rafael Toledo-Moreo

    2018-03-01

    Full Text Available Continuous accurate positioning is a key element for the deployment of many advanced driver assistance systems (ADAS) and autonomous vehicle navigation. To achieve the necessary performance, global navigation satellite systems (GNSS) must be combined with other technologies. A common onboard sensor set that keeps the cost low features a GNSS unit, odometry, and inertial sensors such as a gyro. Odometry and inertial sensors compensate for GNSS flaws in many situations and, in normal conditions, their errors can be easily characterized, thus making the whole solution not only more accurate but also of higher integrity. However, odometers do not behave properly when friction conditions make the tires slide. If this is not properly considered, the position estimate will not be sound. This article introduces a hybridization approach that takes sliding situations into consideration by means of a multiple-model particle filter (MMPF). Tests with real datasets show the goodness of the proposal.
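
    A minimal bootstrap particle filter with a per-particle motion mode, sketching how slide and no-slide hypotheses can coexist in a multiple-model filter; all noise levels and transition probabilities are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
pos = rng.normal(0.0, 1.0, N)          # 1-D position particles
mode = np.zeros(N, dtype=int)          # 0 = normal rolling, 1 = wheel slide

def pf_step(pos, mode, odo, gnss, dt=1.0, p_switch=0.05, sigma_gnss=2.0):
    # 1) Markov mode switching lets particles hypothesize the onset of sliding
    flip = rng.random(N) < p_switch
    mode = np.where(flip, 1 - mode, mode)
    # 2) Mode-conditioned propagation: sliding particles distrust the odometer
    odo_sigma = np.where(mode == 0, 0.1, 1.0)
    pos = pos + (odo + rng.normal(0.0, 1.0, N) * odo_sigma) * dt
    # 3) Weight by the GNSS likelihood, then resample (bootstrap filter)
    w = np.exp(-0.5 * ((gnss - pos) / sigma_gnss) ** 2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    return pos[idx], mode[idx]

pos, mode = pf_step(pos, mode, odo=1.0, gnss=1.2)
print(pos.mean(), mode.mean())         # fused position and slide probability
```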

  17. Spatial downscaling algorithm of TRMM precipitation based on multiple high-resolution satellite data for Inner Mongolia, China

    Duan, Limin; Fan, Keke; Li, Wei; Liu, Tingxi

    2017-12-01

    Daily precipitation data from 42 stations in Inner Mongolia, China for the 10-year period from 1 January 2001 to 31 December 2010 were utilized along with downscaled data from the Tropical Rainfall Measuring Mission (TRMM) with a spatial resolution of 0.25° × 0.25° for the same period, based on the statistical relationships between the normalized difference vegetation index (NDVI), meteorological variables, and digital elevation models (DEM), using the leave-one-out (LOO) cross-validation method and multivariate stepwise regression. The results indicate that (1) TRMM data can indeed be used to estimate annual precipitation in Inner Mongolia and there is a linear relationship between annual TRMM and observed precipitation; (2) there is a significant relationship between TRMM-based precipitation and predicted precipitation, with a spatial resolution of 0.50° × 0.50°; (3) NDVI and temperature are important factors influencing the downscaling of TRMM precipitation data, while for the DEM the slope is not the most significant factor affecting the downscaled TRMM data; and (4) the downscaled TRMM data reflect spatial patterns in annual precipitation reasonably well, showing less precipitation falling in west Inner Mongolia and more in the south and southeast. The new approach proposed here provides a useful alternative for evaluating spatial patterns in precipitation and can thus be applied to generate a more accurate precipitation dataset to support both irrigation management and the conservation of this fragile grassland ecosystem.
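
    The leave-one-out validation of a multivariate regression can be sketched directly; the predictors below are random stand-ins for NDVI, temperature and DEM-derived variables, with 42 rows echoing the station count only for flavor.

```python
import numpy as np

def loo_cv_r2(X, y):
    """Leave-one-out cross-validation for a multivariate linear regression:
    each station is predicted by a model fitted to all the others."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        Xd = np.column_stack([np.ones(keep.sum()), X[keep]])
        beta, *_ = np.linalg.lstsq(Xd, y[keep], rcond=None)
        preds[i] = np.concatenate([[1.0], X[i]]) @ beta
    ss_res = np.sum((y - preds) ** 2)
    return 1.0 - ss_res / np.sum((y - y.mean()) ** 2), preds

# Toy data standing in for the real predictors
rng = np.random.default_rng(1)
X = rng.standard_normal((42, 3))
y = X @ np.array([0.8, -0.3, 0.1]) + 0.2 * rng.standard_normal(42)
r2cv, _ = loo_cv_r2(X, y)
```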

  18. Genetic algorithm as a variable selection procedure for the simulation of 13C nuclear magnetic resonance spectra of flavonoid derivatives using multiple linear regression.

    Ghavami, Raoof; Najafi, Amir; Sajadi, Mohammad; Djannaty, Farhad

    2008-09-01

    In order to accurately simulate 13C NMR spectra of hydroxy, polyhydroxy and methoxy substituted flavonoids, a quantitative structure-property relationship (QSPR) model relating atom-based calculated descriptors to 13C NMR chemical shifts (ppm, TMS = 0) is developed. A dataset consisting of 50 flavonoid derivatives was employed for the present analysis. A set of 417 topological, geometrical, and electronic descriptors representing various structural characteristics was calculated, and separate multilinear QSPR models were developed between each carbon atom of the flavonoids and the calculated descriptors. Genetic algorithm (GA) and multiple linear regression analysis (MLRA) were used to select the descriptors and to generate the correlation models. Analysis of the results revealed a correlation coefficient and root mean square error (RMSE) of 0.994 and 2.53 ppm, respectively, for the prediction set.
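
    A minimal GA-over-bitmasks descriptor selector for multiple linear regression, in the spirit described above; the population size, rates and RMSE fitness are illustrative choices, not the paper's settings.

```python
import numpy as np

def mlr_rmse(X, y, mask):
    """Fitness: RMSE of an MLR model restricted to the descriptors in `mask`."""
    if mask.sum() == 0:
        return np.inf
    Xs = np.column_stack([np.ones(len(y)), X[:, mask]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return np.sqrt(np.mean((y - Xs @ beta) ** 2))

def ga_select(X, y, n_keep=5, pop=40, gens=60, p_mut=0.02, seed=0):
    """Minimal GA over descriptor subsets: tournament selection, uniform
    crossover, bit-flip mutation, with elitism."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    popu = rng.random((pop, n)) < n_keep / n        # sparse random bitmasks
    for _ in range(gens):
        fit = np.array([mlr_rmse(X, y, m) for m in popu])
        new = [popu[np.argmin(fit)]]                # keep the best as-is
        while len(new) < pop:
            i, j = rng.integers(pop, size=2); a = popu[i] if fit[i] < fit[j] else popu[j]
            i, j = rng.integers(pop, size=2); b = popu[i] if fit[i] < fit[j] else popu[j]
            child = np.where(rng.random(n) < 0.5, a, b)   # uniform crossover
            child ^= rng.random(n) < p_mut                # bit-flip mutation
            new.append(child)
        popu = np.array(new)
    fit = np.array([mlr_rmse(X, y, m) for m in popu])
    return popu[np.argmin(fit)]

# Toy data: 60 molecules x 40 descriptors, 4 of which are truly informative
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 40))
y = X[:, :4] @ np.array([1.0, -0.5, 0.8, 0.3]) + 0.1 * rng.standard_normal(60)
mask = ga_select(X, y)
```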

  19. QSRR modeling for the chromatographic retention behavior of some β-lactam antibiotics using forward and firefly variable selection algorithms coupled with multiple linear regression.

    Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M

    2018-05-11

    The justified, continual emergence of new β-lactam antibiotics provokes the need for developing suitable analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted, using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program, to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models utilizing the conventional forward selection and the advanced nature-inspired firefly algorithms for descriptor selection, coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. A Williams-Hotelling test and Student's t-test showed that there is no statistically significant difference between the models' results. Y-randomization validation showed that the obtained models are due to significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models are of comparable quality on both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We can conclude that in some cases a simple conventional feature selection algorithm can be used to generate robust and predictive models comparable to those generated using advanced ones.

  20. A Refined Teaching-Learning Based Optimization Algorithm for Dynamic Economic Dispatch of Integrated Multiple Fuel and Wind Power Plants

    Umamaheswari Krishnasamy

    2014-01-01

    Full Text Available The dynamic economic dispatch problem (DEDP) for a multiple-fuel power plant is a nonlinear and nonsmooth optimization problem when valve-point effects, multi-fuel effects, and ramp-rate limits are considered. Additionally, wind energy is integrated with the DEDP to supply the load for effective utilization of renewable energy. Since wind power cannot be predicted exactly, a radial basis function network (RBFN) is presented to forecast one-hour-ahead wind power so as to plan and ensure a reliable power supply. In this paper, a refined teaching-learning based optimization (TLBO) is applied to minimize the overall cost of operation of the wind-thermal power system. The TLBO is refined by integrating the sequential quadratic programming (SQP) method to fine-tune the better solutions whenever discovered by the former method. To demonstrate the effectiveness of the proposed hybrid TLBO-SQP method, a standard DEDP and one practical DEDP with forecasted wind power are tested based on practical wind speed information. Simulation results validate the proposed methodology, which is reasonable in ensuring a quality solution throughout the scheduling horizon for secure operation of the system.
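
    The core TLBO update (teacher and learner phases) is compact enough to sketch; the bounds, class size and toy cost below are illustrative, and the paper's SQP refinement of good solutions is omitted.

```python
import numpy as np

def tlbo(cost, lb, ub, n_learners=30, n_iter=100, seed=0):
    """Minimal teaching-learning-based optimization."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_learners, dim))
    f = np.array([cost(x) for x in X])
    for _ in range(n_iter):
        teacher = X[np.argmin(f)]
        TF = rng.integers(1, 3)                   # teaching factor in {1, 2}
        # Teacher phase: move the class mean toward the teacher
        Xn = np.clip(X + rng.random((n_learners, dim)) * (teacher - TF * X.mean(0)),
                     lb, ub)
        fn = np.array([cost(x) for x in Xn])
        imp = fn < f
        X[imp], f[imp] = Xn[imp], fn[imp]
        # Learner phase: each learner moves toward a better random peer
        for i in range(n_learners):
            j = rng.integers(n_learners)
            d = (X[j] - X[i]) if f[j] < f[i] else (X[i] - X[j])
            xn = np.clip(X[i] + rng.random(dim) * d, lb, ub)
            fc = cost(xn)
            if fc < f[i]:
                X[i], f[i] = xn, fc
    return X[np.argmin(f)], f.min()

# Toy quadratic stand-in for a one-hour dispatch cost
best, _ = tlbo(lambda x: np.sum((x - 2.0) ** 2), lb=np.zeros(4), ub=np.full(4, 5.0))
```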

  1. Modeling soil organic matter (SOM) from satellite data using VISNIR-SWIR spectroscopy and PLS regression with step-down variable selection algorithm: case study of Campos Amazonicos National Park savanna enclave, Brazil

    Rosero-Vlasova, O.; Borini Alves, D.; Vlassova, L.; Perez-Cabello, F.; Montorio Lloveria, R.

    2017-10-01

    Deforestation in the Amazon basin, due among other factors to frequent wildfires, demands continuous post-fire monitoring of soil and vegetation. Thus, the study posed two objectives: (1) evaluate the capacity of Visible - Near InfraRed - ShortWave InfraRed (VIS-NIR-SWIR) spectroscopy to estimate soil organic matter (SOM) in fire-affected soils, and (2) assess the feasibility of SOM mapping from satellite images. For this purpose, 30 soil samples (surface layer) were collected in 2016 in areas of grass and riparian vegetation of Campos Amazonicos National Park, Brazil, repeatedly affected by wildfires. Standard laboratory procedures were applied to determine SOM. Reflectance spectra of the soils were obtained under controlled laboratory conditions using a Fieldspec4 spectroradiometer (spectral range 350-2500 nm). Measured spectra were resampled to simulate reflectances for Landsat-8, Sentinel-2 and EnMap spectral bands, used as predictors in SOM models developed using Partial Least Squares regression and a step-down variable selection algorithm (PLSR-SD). The best fit was achieved with models based on reflectances simulated for EnMap bands (R2=0.93; R2cv=0.82 and NMSE=0.07; NMSEcv=0.19). The model uses only 8 out of 244 predictors (bands) chosen by the step-down variable selection algorithm. The least reliable estimates (R2=0.55 and R2cv=0.40 and NMSE=0.43; NMSEcv=0.60) resulted from the Landsat model, while the Sentinel-2 model showed R2=0.68 and R2cv=0.63; NMSE=0.31 and NMSEcv=0.38. The results confirm the high potential of VIS-NIR-SWIR spectroscopy for SOM estimation. Application of the step-down algorithm produces sparser and better-fit models. Finally, SOM can be estimated with an acceptable accuracy (NMSE 0.35) from EnMap and Sentinel-2 data, enabling mapping and analysis of the impacts of repeated wildfires on soils in the study area.
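
    A rough sketch of PLS regression with leave-one-out validation and a greedy step-down band elimination, using random stand-in spectra; the authors' PLSR-SD implementation and data will differ, and all dimensions here are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Toy stand-in: 30 soil samples x 8 simulated sensor bands
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 8))                 # band reflectances
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(30)  # SOM content

def cv_mse(cols):
    """LOO cross-validated error of a PLSR model on a band subset."""
    p = cross_val_predict(PLSRegression(n_components=min(3, len(cols))),
                          X[:, cols], y, cv=LeaveOneOut()).ravel()
    return np.mean((y - p) ** 2)

# Step-down selection: greedily drop the band whose removal improves CV error
bands = list(range(X.shape[1]))
while len(bands) > 2:
    drops = [[b for b in bands if b != d] for d in bands]
    errs = [cv_mse(c) for c in drops]
    if min(errs) >= cv_mse(bands):
        break                                    # no drop helps any more
    bands = drops[int(np.argmin(errs))]
```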

  2. The BL-QMR algorithm for non-Hermitian linear systems with multiple right-hand sides

    Freund, R.W. [AT& T Bell Labs., Murray Hill, NJ (United States)

    1996-12-31

    Many applications require the solution of multiple linear systems that have the same coefficient matrix, but differ in their right-hand sides. Instead of applying an iterative method to each of these systems individually, it is potentially much more efficient to employ a block version of the method that generates iterates for all the systems simultaneously. However, it is quite intricate to develop robust and efficient block iterative methods. In particular, a key issue in the design of block iterative methods is the need for deflation. The iterates for the different systems that are produced by a block method will, in general, converge at different stages of the block iteration. An efficient and robust block method needs to be able to detect and then deflate converged systems. Each such deflation reduces the block size, and thus the block method needs to be able to handle varying block sizes. For block Krylov-subspace methods, deflation is also crucial in order to delete linearly and almost linearly dependent vectors in the underlying block Krylov sequences. An added difficulty arises for Lanczos-type block methods for non-Hermitian systems, since they involve two different block Krylov sequences. In these methods, deflation can now occur independently in both sequences, and consequently, the block sizes in the two sequences may become different in the course of the iteration, even though they were identical at the beginning. We present a block version of Freund and Nachtigal's quasi-minimal residual method for the solution of non-Hermitian linear systems with single right-hand sides.
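
    The economic motivation is easy to demonstrate: work that depends only on the coefficient matrix should be amortized over all right-hand sides. The sketch below uses a sparse LU factorization as a simple stand-in; BL-QMR itself additionally shares Krylov subspaces across the systems and deflates converged or dependent ones, which this sketch does not attempt.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 2000, 40                                  # k right-hand sides (load cases)
A = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format="csc")
B = np.random.default_rng(0).standard_normal((n, k))

lu = spla.splu(A)         # expensive, matrix-dependent work done once...
X = lu.solve(B)           # ...then reused for all k right-hand sides at once
```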

  3. On the Convexity of Step out - Step in Sequencing Games

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2016-01-01

    The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an

  4. Cross-Sectional Associations between Multiple Lifestyle Behaviors and Health-Related Quality of Life in the 10,000 Steps Cohort

    Duncan, Mitch J.; Kline, Christopher E.; Vandelanotte, Corneel; Sargent, Charli; Rogers, Naomi L.; Di Milia, Lee

    2014-01-01

    BACKGROUND: The independent and combined influence of smoking, alcohol consumption, physical activity, diet, sitting time, and sleep duration and quality on health status is not routinely examined. This study investigates the relationships between these lifestyle behaviors, independently and in combination, and health-related quality of life (HRQOL). METHODS: Adult members of the 10,000 Steps project (n = 159,699) were invited to participate in an online survey in November-December 2011. Part...

  5. Recognizing the multiple reasons for Bushmeat consumption in urban areas: a necessary step towards the sustainable use of wildlife for food in Central Africa

    van Vliet, Nathalie; Mbazza, P.

    2011-01-01

    conclusions based on available case studies. First, as the most highly valued bushmeat species are among the most common, there is a non-negligible potential to reduce the trade to the most resilient species without having to ban all bushmeat trade. Second, because bushmeat serves multiple functions above...

  6. Pep3p/Pep5p complex: a putative docking factor at multiple steps of vesicular transport to the vacuole of Saccharomyces cerevisiae.

    Srivastava, A; Woolford, C A; Jones, E W

    2000-01-01

    Pep3p and Pep5p are known to be necessary for trafficking of hydrolase precursors to the vacuole and for vacuolar biogenesis. These proteins are present in a hetero-oligomeric complex that mediates transport at the vacuolar membrane. PEP5 interacts genetically with VPS8, implicating Pep5p in the earlier Golgi to endosome step and/or in recycling from the endosome to the Golgi. To understand further the cellular roles of Pep3p and Pep5p, we isolated and characterized a set of pep3 conditional ...

  7. Evaluation of the expected moments algorithm and a multiple low-outlier test for flood frequency analysis at streamgaging stations in Arizona

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.

    2014-01-01

    Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers, changing the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors the low outliers. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. EMA with MGB (EMA-MGB) test was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B
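
    A simplified single Grubbs-Beck screen can be sketched as below, using a commonly cited Bulletin 17B approximation for the 10% one-sided critical value K_N; both that approximation and the toy peak-flow series are assumptions for illustration, and the generalized MGB test extends this idea to multiple potential low outliers.

```python
import numpy as np

def grubbs_beck_low_outliers(peaks):
    """Simplified Grubbs-Beck low-outlier screen on log-transformed peaks.
    K_N uses an approximation often cited for Bulletin 17B's 10% one-sided
    critical value; treat it as an assumption, not an exact tabulated value."""
    q = np.log10(np.asarray(peaks, float))
    n = q.size
    kn = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    threshold = 10 ** (q.mean() - kn * q.std(ddof=1))
    return peaks[peaks < threshold], threshold

# Toy annual peak-flow series (cfs); the small first value mimics a low outlier
peaks = np.array([12, 85, 140, 210, 260, 330, 400, 520, 700, 950, 1800])
low, thresh = grubbs_beck_low_outliers(peaks)
```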

  8. Directional phonon-assisted cascading of photoexcited carriers in stepped Inx(Al0.17Ga0.83)1-xAs/Al0.17Ga0.83As multiple quantum wells

    Machida, S.; Matsuo, M.; Fujiwara, K.

    2002-01-01

    Perpendicular motion of photoexcited electron and hole pairs assisted by phonon scattering is investigated in a novel step-graded staircase heterostructure consisting of strained Inx(Al0.17Ga0.83)1-xAs multiple quantum wells (QWs) with similar widths but five different x values by cw and time... amplitude in the intermediate temperature range. These variations reveal that the photoexcited carriers directionally move from shallower to deeper QWs via phonon-assisted activation above the barrier band-edge state. The PL dynamics directly indicate the perpendicular flowing of photoexcited carriers...

  9. Algorithmic Self

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  10. Variable depth recursion algorithm for leaf sequencing

    Siochi, R. Alfredo C.

    2007-01-01

    The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve a performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases, there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of the tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms for 1400 random 15×15 test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution

  11. On the Organization of Parallel Operation of Some Algorithms for Finding the Shortest Path on a Graph on a Computer System with Multiple Instruction Stream and Single Data Stream

    V. E. Podol'skii

    2015-01-01

    Full Text Available The paper considers implementing the Bellman-Ford and Lee algorithms to find the shortest graph path on a computer system with multiple instruction streams and a single data stream (MISD). The MISD computer is a computer that executes commands of arithmetic-logical processing (on the CPU) and commands of structure processing (on the structures processor) in parallel on a single data stream. Transformation of sequential programs into MISD programs is a labor-intensive process because it requires the stream of arithmetic-logical processing to be manually separated from that of structure processing. Algorithms based on the processing of data structures (e.g., algorithms on graphs) show high performance on a MISD computer. The Bellman-Ford and Lee algorithms for finding the shortest path on a graph are representatives of these algorithms. They are applied in robotics for automatic planning of robot movement in situ. Modifications of the Bellman-Ford and Lee algorithms for finding the shortest graph path in coprocessor MISD mode and the parallel MISD modifications of these algorithms were first obtained in this article. Thus, this article continues a series of studies on the transformation of sequential algorithms into MISD ones (Dijkstra's and Ford-Fulkerson's algorithms) and has a pronouncedly applied nature. The article also presents the results of analyzing the Bellman-Ford and Lee algorithms in MISD mode. The paper formulates the basic elements of a technique for parallelizing algorithms into an arithmetic-logical processing stream and a structure processing stream. Among the key areas for future research, development of a mathematical approach to provide a subsequently formalized and automated process of parallelizing sequential algorithms between the CPU and the structures processor is highlighted. Among the mathematical models that can be used in future studies are graph models of algorithms (e.g., the dependency graph of a program). Due to the high
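
    For reference, the textbook Bellman-Ford relaxation loop, whose edge-list inner sweep is the structure-processing-heavy kernel the abstract discusses (this is the sequential version, not the MISD-parallelized one).

```python
def bellman_ford(n, edges, source):
    """Textbook Bellman-Ford: relax every edge up to n-1 times.
    edges: iterable of (u, v, weight) tuples over nodes 0..n-1."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:          # early exit once distances stabilize
            break
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(4, edges, 0))   # [0, 3, 1, 4]
```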

  12. Parallel sorting algorithms

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computers in which the whole sequence to be sorted can fit in the

  13. Cross-sectional associations between multiple lifestyle behaviors and health-related quality of life in the 10,000 Steps cohort.

    Duncan, Mitch J; Kline, Christopher E; Vandelanotte, Corneel; Sargent, Charli; Rogers, Naomi L; Di Milia, Lee

    2014-01-01

    The independent and combined influence of smoking, alcohol consumption, physical activity, diet, sitting time, and sleep duration and quality on health status is not routinely examined. This study investigates the relationships between these lifestyle behaviors, independently and in combination, and health-related quality of life (HRQOL). Adult members of the 10,000 Steps project (n = 159,699) were invited to participate in an online survey in November-December 2011. Participant socio-demographics, lifestyle behaviors, and HRQOL (poor self-rated health; frequent unhealthy days) were assessed by self-report. The combined influence of poor lifestyle behaviors were examined, independently and also as part of two lifestyle behavior indices, one excluding sleep quality (Index 1) and one including sleep quality (Index 2). Adjusted Cox proportional hazard models were used to examine relationships between lifestyle behaviors and HRQOL. A total of 10,478 participants provided complete data for the current study. For Index 1, the Prevalence Ratio (p value) of poor self-rated health was 1.54 (p = 0.001), 2.07 (p≤0.001), 3.00 (p≤0.001), 3.61 (p≤0.001) and 3.89 (p≤0.001) for people reporting two, three, four, five and six poor lifestyle behaviors, compared to people with 0-1 poor lifestyle behaviors. For Index 2, the Prevalence Ratio (p value) of poor self-rated health was 2.26 (p = 0.007), 3.29 (p≤0.001), 4.68 (p≤0.001), 6.48 (p≤0.001), 7.91 (p≤0.001) and 8.55 (p≤0.001) for people reporting two, three, four, five, six and seven poor lifestyle behaviors, compared to people with 0-1 poor lifestyle behaviors. Associations between the combined lifestyle behavior index and frequent unhealthy days were statistically significant and similar to those observed for poor self-rated health. Engaging in a greater number of poor lifestyle behaviors was associated with a higher prevalence of poor HRQOL. This association was exacerbated when sleep quality was

  14. Cross-sectional associations between multiple lifestyle behaviors and health-related quality of life in the 10,000 Steps cohort.

    Mitch J Duncan

    Full Text Available BACKGROUND: The independent and combined influence of smoking, alcohol consumption, physical activity, diet, sitting time, and sleep duration and quality on health status is not routinely examined. This study investigates the relationships between these lifestyle behaviors, independently and in combination, and health-related quality of life (HRQOL). METHODS: Adult members of the 10,000 Steps project (n = 159,699) were invited to participate in an online survey in November-December 2011. Participant socio-demographics, lifestyle behaviors, and HRQOL (poor self-rated health; frequent unhealthy days) were assessed by self-report. The combined influence of poor lifestyle behaviors was examined, independently and also as part of two lifestyle behavior indices, one excluding sleep quality (Index 1) and one including sleep quality (Index 2). Adjusted Cox proportional hazard models were used to examine relationships between lifestyle behaviors and HRQOL. RESULTS: A total of 10,478 participants provided complete data for the current study. For Index 1, the Prevalence Ratio (p value) of poor self-rated health was 1.54 (p = 0.001), 2.07 (p≤0.001), 3.00 (p≤0.001), 3.61 (p≤0.001) and 3.89 (p≤0.001) for people reporting two, three, four, five and six poor lifestyle behaviors, compared to people with 0-1 poor lifestyle behaviors. For Index 2, the Prevalence Ratio (p value) of poor self-rated health was 2.26 (p = 0.007), 3.29 (p≤0.001), 4.68 (p≤0.001), 6.48 (p≤0.001), 7.91 (p≤0.001) and 8.55 (p≤0.001) for people reporting two, three, four, five, six and seven poor lifestyle behaviors, compared to people with 0-1 poor lifestyle behaviors. Associations between the combined lifestyle behavior index and frequent unhealthy days were statistically significant and similar to those observed for poor self-rated health. CONCLUSIONS: Engaging in a greater number of poor lifestyle behaviors was associated with a higher prevalence of poor HRQOL. This

  15. Animation of planning algorithms

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found it difficult for students to understand this planning algorithm just by reading its pseudo code and doing some written exercises. Students cannot see clearly how each actual step works and might miss some steps because of their confusion. ...

  16. Parallel Algorithms and Patterns

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
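
    As a concrete illustration of the prefix-scan pattern named above, the following is a minimal sequential sketch of the Blelloch exclusive scan; on parallel hardware the body of each level's loop runs concurrently, and the power-of-two input length is an assumption made for brevity.

    def exclusive_scan(values):
        data = list(values)
        n = len(data)

        # Up-sweep (reduce) phase: build partial sums in a binary tree.
        stride = 1
        while stride < n:
            for i in range(2 * stride - 1, n, 2 * stride):
                data[i] += data[i - stride]
            stride *= 2

        # Down-sweep phase: propagate prefix sums back down the tree.
        data[n - 1] = 0
        stride = n // 2
        while stride >= 1:
            for i in range(2 * stride - 1, n, 2 * stride):
                left = data[i - stride]
                data[i - stride] = data[i]
                data[i] += left
            stride //= 2
        return data

    print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]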

  17. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.
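
    The multiple-shooting structure itself can be sketched on a toy boundary value problem. The code below is an illustrative reconstruction, not the authors' implementation: scipy's explicit RK45 stands in for the implicit ESDIRK integrator (which scipy does not provide), and the problem, segment count and tolerances are all hypothetical choices.

    # Multiple shooting for y'' = -y, y(0) = 0, y(pi/2) = 1 (exact: y = sin t).
    # Unknowns are the segment initial states; the residual enforces continuity
    # across segments plus the boundary conditions.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    def rhs(t, z):                            # z = [y, y']
        return [z[1], -z[0]]

    nodes = np.linspace(0.0, np.pi / 2, 5)    # 4 shooting segments

    def residual(p):
        # p = [y'(0), y_1, y'_1, y_2, y'_2, y_3, y'_3]
        states = [np.array([0.0, p[0]])] + [p[1 + 2*k : 3 + 2*k] for k in range(3)]
        res, end = [], None
        for k in range(4):
            sol = solve_ivp(rhs, (nodes[k], nodes[k + 1]), states[k], rtol=1e-10)
            end = sol.y[:, -1]
            if k < 3:                         # continuity with next segment
                res.extend(end - states[k + 1])
        res.append(end[0] - 1.0)              # boundary condition y(pi/2) = 1
        return res

    p = fsolve(residual, np.ones(7))
    print("y'(0) =", p[0])                    # ~1.0, matching y = sin t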

  18. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    Zhongliang Deng

    2018-01-01

    Full Text Available The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input-multiple-output (MIMO) communication systems and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition of the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finitely many steps and always produce better results than the original LLL reduction algorithm for n > 2. The simulations clearly show that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm.
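
    For orientation, here is a compact sketch of the classical LLL reduction (with the usual delta = 0.75 Lovász condition) that n-LLL generalizes; the paper's n-dimensional condition and pivoted Householder reflections are not reproduced, and floating-point Gram-Schmidt is used purely for brevity.

    import numpy as np

    def gram_schmidt(B):
        """Return the orthogonalized rows B* and the mu coefficients."""
        n = B.shape[0]
        Bs = np.zeros_like(B, dtype=float)
        mu = np.zeros((n, n))
        for i in range(n):
            Bs[i] = B[i].astype(float)
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    def lll(B, delta=0.75):
        B = B.astype(float).copy()
        n, k = B.shape[0], 1
        while k < n:
            Bs, mu = gram_schmidt(B)
            for j in range(k - 1, -1, -1):    # size-reduce b_k
                q = round(mu[k, j])
                if q:
                    B[k] -= q * B[j]
            Bs, mu = gram_schmidt(B)
            # Lovász condition; swap and step back on failure.
            if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
                k += 1
            else:
                B[[k, k - 1]] = B[[k - 1, k]]
                k = max(k - 1, 1)
        return B

    # A reduced basis with shorter, nearly orthogonal rows:
    print(lll(np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])))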

  19. Fast matrix multiplication and its algebraic neighbourhood

    Pan, V. Ya.

    2017-11-01

    Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent has still not passed through the barrier of 2.373, but most disturbing was the curse of recursion — even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
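
    The 1969 breakthrough mentioned above rests on a simple identity. The sketch below shows one level of Strassen's scheme, seven half-size products instead of eight, which yields the exponent log2(7) ≈ 2.807 when applied recursively; even-sized square matrices are assumed and numpy performs the half-size products.

    import numpy as np

    def strassen_step(A, B):
        n = A.shape[0] // 2
        A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
        B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

        # Seven half-size multiplications (recursing here gives full Strassen).
        M1 = (A11 + A22) @ (B11 + B22)
        M2 = (A21 + A22) @ B11
        M3 = A11 @ (B12 - B22)
        M4 = A22 @ (B21 - B11)
        M5 = (A11 + A12) @ B22
        M6 = (A21 - A11) @ (B11 + B12)
        M7 = (A12 - A22) @ (B21 + B22)

        C11 = M1 + M4 - M5 + M7
        C12 = M3 + M5
        C21 = M2 + M4
        C22 = M1 - M2 + M3 + M6
        return np.block([[C11, C12], [C21, C22]])

    A, B = np.random.rand(4, 4), np.random.rand(4, 4)
    assert np.allclose(strassen_step(A, B), A @ B)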

  20. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
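
    The standard multiple time-step idea referenced here can be summarized in a few lines. The following is a minimal sketch of a conventional (resonance-limited) RESPA step, not the isokinetic resonance-free integrators of the paper; the force splitting, masses and step sizes are toy placeholders.

    import numpy as np

    def respa_step(x, v, m, fast_force, slow_force, dt_outer, n_inner):
        """Fast, cheap forces get a small inner step; slow, expensive forces
        get the large outer step (classic r-RESPA splitting)."""
        dt_inner = dt_outer / n_inner
        v = v + 0.5 * dt_outer * slow_force(x) / m      # slow half-kick
        for _ in range(n_inner):                        # inner velocity Verlet
            v = v + 0.5 * dt_inner * fast_force(x) / m
            x = x + dt_inner * v
            v = v + 0.5 * dt_inner * fast_force(x) / m
        v = v + 0.5 * dt_outer * slow_force(x) / m      # slow half-kick
        return x, v

    # Toy usage: a stiff spring (fast) plus a weak spring (slow).
    x, v, m = np.array([1.0]), np.array([0.0]), 1.0
    for _ in range(1000):
        x, v = respa_step(x, v, m,
                          fast_force=lambda x: -100.0 * x,
                          slow_force=lambda x: -1.0 * x,
                          dt_outer=0.05, n_inner=10)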

  1. Algorithming the Algorithm

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  2. Acuros XB: dosimetric impact of low-density heterogeneous geometric artifacts. AXB represents a step forward in calculation algorithms for treatment planning; Acuros XB: impacto dosimetrico de artefactos geometricos heterogeneos de baja densidad

    Jurado Bruggeman, D.; Munoz Montplet, C.; Agramunt Chaler, S.; Bueno Vizcarra, M.; Duch Guillen, M. A.

    2013-07-01

    In some situations, the dose distributions it provides are significantly different from those obtained with other marketed algorithms, which entails new challenges for the evaluation and optimization of treatments. (Author)

  3. Effects of walking speed on the step-by-step control of step width.

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as R^2 magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.

  4. Improvement on LiFePO4 Cell Balancing Algorithm

    Vencislav C. Valchev; Plamen V. Yankov; Dimo D. Stefanov

    2018-01-01

    The paper presents an improvement in the operation time of a cell balancing algorithm compared to the conventional multiple-cell LiFePO4 charging methodology. A flowchart is synthesised to explain the main steps of the software design, which is then implemented in a microcontroller. Experimental results are provided to clarify the transition between the charge and balance processes. Graphical data for the voltage equalization of eight cells are presented to verify the proposed improvement.

  6. Application of stepping motor

    1980-10-01

    This book, which covers the practical use of stepping motors, is divided into three parts. The first part has six chapters, introducing the stepping motor, the classification of stepping motors, the basic theory of stepping motors, characteristics and basic terminology, the types and characteristics of hybrid stepping motors, and the basic control of stepping motors. The second part deals with applications of stepping motors, covering the hardware of stepping motor control, stepping motor control by microcomputer, and the software of stepping motor control. The last part discusses the choice of a stepping motor system, examples of stepping motor use, the measurement of stepping motors, and practical cases of stepping motor applications.

  7. Diagnosis of Clostridium difficile-associated disease: examination of multiple algorithms using toxin EIA, glutamate dehydrogenase EIA and loop-mediated isothermal amplification.

    Bamber, A I; Fitzsimmons, K; Cunniffe, J G; Beasor, C C; Mackintosh, C A; Hobbs, G

    2012-01-01

    The laboratory diagnosis of Clostridium difficile infection (CDI) needs to be accurate and timely to ensure optimal patient management, infection control and reliable surveillance. Three methods are evaluated using 810 consecutive stool samples against toxigenic culture: CDT TOX A/B Premier enzyme immunoassay (EIA) kit (Meridian Bioscience, Europe), Premier EIA for C. difficile glutamate dehydrogenase (GDH) (Meridian Bioscience, Europe) and the Illumigene kit (Meridian Bioscience, Europe), both individually and within combined testing algorithms. The study revealed that the CDT TOX A/B Premier EIA gave rise to false-positive and false-negative results and demonstrated poor sensitivity (56.47%), compared to Premier EIA for C. difficile GDH (97.65%), suggesting this GDH EIA can be a useful negative screening method. Results for the Illumigene assay alone showed sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of 91.57%, 98.07%, 99.03% and 84.44%, respectively. A two-stage algorithm using Premier EIA for C. difficile GDH/Illumigene assay yielded superior results compared with other testing algorithms (91.57%, 98.07%, 99.03% and 84.44%, respectively), mirroring the Illumigene performance. However, Illumigene is approximately half the cost of current polymerase chain reaction (PCR) methods, has a rapid turnaround time and requires no specialised skill base, making it an attractive alternative to assays such as the Xpert C. difficile assay (Cepheid, Sunnyvale, CA). A three-stage algorithm offered no improvement and would hamper workflow.
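
    In code, the two-stage algorithm the study recommends reduces to a short screen-then-confirm branch. The sketch below is an illustration only; gdh_eia and illumigene are hypothetical stand-ins for the laboratory assays.

    def two_stage_cdi_test(sample, gdh_eia, illumigene):
        """GDH EIA as a high-sensitivity negative screen; only GDH-positive
        samples proceed to the costlier confirmatory LAMP assay."""
        if not gdh_eia(sample):          # stage 1: rule out
            return "CDI negative"
        if illumigene(sample):           # stage 2: confirm
            return "CDI positive"
        return "CDI negative"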

  8. Noise effect in an improved conjugate gradient algorithm to invert particle size distribution and the algorithm amendment.

    Wei, Yongjie; Ge, Baozhen; Wei, Yaolin

    2009-03-20

    In general, model-independent algorithms are sensitive to noise during laser particle size measurement. An improved conjugate gradient algorithm (ICGA) that can be used to invert particle size distribution (PSD) from diffraction data is presented. By use of the ICGA to invert simulated data with multiplicative or additive noise, we determined that additive noise is the main factor that induces distorted results. Thus the ICGA is amended by introduction of an iteration step-adjusting parameter and is used experimentally on simulated data and some samples. The experimental results show that the sensitivity of the ICGA to noise is reduced and the inverted results are in accord with the real PSD.

  9. Sound algorithms

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  10. Genetic algorithms

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, applications are described, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
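
    A minimal generational genetic algorithm of the kind this report introduces might look as follows; tournament selection, one-point crossover and bit-flip mutation on the classic OneMax toy problem are illustrative choices, not details from the report.

    import random

    def genetic_algorithm(n_bits=32, pop_size=50, generations=100,
                          p_crossover=0.9, p_mutation=0.02):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        fitness = lambda ind: sum(ind)          # OneMax: count the ones
        for _ in range(generations):
            nxt = []
            while len(nxt) < pop_size:
                # Tournament selection of two parents.
                p1 = max(random.sample(pop, 3), key=fitness)
                p2 = max(random.sample(pop, 3), key=fitness)
                c1, c2 = p1[:], p2[:]
                if random.random() < p_crossover:   # one-point crossover
                    cut = random.randrange(1, n_bits)
                    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                for child in (c1, c2):              # bit-flip mutation
                    for i in range(n_bits):
                        if random.random() < p_mutation:
                            child[i] ^= 1
                    nxt.append(child)
            pop = nxt[:pop_size]
        return max(pop, key=fitness)

    print(sum(genetic_algorithm()))   # close to 32 after 100 generations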

  11. Step out - Step in Sequencing Games

    Musegaas, M.; Borm, P.E.M.; Quant, M.

    2014-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.

  12. Step out-step in sequencing games

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2015-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,

  13. Robust multiple frequency multiple power localization schemes in the presence of multiple jamming attacks.

    Ahmed Abdulqader Hussein

    Full Text Available Localization of wireless sensor networks is a vital area attracting impressive research interest, and it is expected to expand further as its applications grow. As localization gains prominence in wireless sensor networks, it is vulnerable to jamming attacks. Jamming attacks disrupt the communication opportunity between the sender and receiver and deeply impact the localization process, leading to a large error in the estimated sensor node position. Therefore, detection and elimination of the jamming influence are absolutely indispensable. Range-based techniques, especially those using Received Signal Strength (RSS), face severe impact from these attacks. This paper proposes algorithms based on Combination Multiple Frequency Multiple Power Localization (C-MFMPL) and Step Function Multiple Frequency Multiple Power Localization (SF-MFMPL). The algorithms have been tested in the presence of multiple types of jamming attacks, including capture-and-replay, random and constant jammers, over a log-normal shadow fading propagation model. In order to overcome the impact of random and constant jammers, the proposed method uses two sets of frequencies shared by the implemented anchor nodes to obtain averaged RSS readings over all the transmitted frequencies successfully. In addition, three stages of filters have been used to cope with the replayed beacons caused by the capture-and-replay jammers. In this paper the localization performance of the proposed algorithms for the ideal case, defined as the absence of jamming attacks, is compared with the case of jamming attacks. The main contribution of this paper is to achieve robust localization performance in the presence of multiple jamming attacks under a log-normal shadow fading environment with different simulation conditions and scenarios.

  14. de Casteljau's Algorithm Revisited

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
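
    Viewing one de Casteljau step as an operator, as the paper suggests, leads to a very short implementation; the sketch below evaluates a Bezier curve by iterating that operator, with the quadratic example chosen purely for illustration.

    def de_casteljau_step(points, t):
        """One application of the operator: linear interpolation between
        consecutive control points shortens the polygon by one point."""
        return [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                for p, q in zip(points, points[1:])]

    def bezier_point(points, t):
        while len(points) > 1:
            points = de_casteljau_step(points, t)
        return points[0]

    # Quadratic Bezier through three control points, evaluated at t = 0.5.
    print(bezier_point([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # (1.0, 1.0)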

  15. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

    The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.

  16. Microprocessor controller for stepping motors

    Strait, B.G.; Thuot, M.E.

    1977-01-01

    A new concept for digital computer control of multiple stepping motors which operate in a severe electromagnetic pulse environment is presented. The motors position mirrors in the beam-alignment system of a 100-kJ CO2 laser. An asynchronous communications channel of a computer is used to send coded messages, containing the motor address and stepping-command information, to the stepping-motor controller in a bit serial format over a fiber-optics communications link. The addressed controller responds by transmitting to the computer its address and other motor information, thus confirming the received message. Each controller is capable of controlling three stepping motors. The controller contains the fiber-optics interface, a microprocessor, and the stepping-motor driven circuits. The microprocessor program, which resides in an EPROM, decodes the received messages, transmits responses, performs the stepping-motor sequence logic, maintains motor-position information, and monitors the motor's reference switch. For multiple stepping-motor application, the controllers are connected in a daisy chain providing control of many motors from one asynchronous communications channel of the computer

  17. A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes

    Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester

    2010-01-01

    A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides, Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.

  18. Algorithmic cryptanalysis

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.

  19. Quick fuzzy backpropagation algorithm.

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in adaptive and adaptable interactive systems, data mining, and other applications.

  20. Majorization arrow in quantum-algorithm design

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow

  1. Algorithms for Decision Tree Construction

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28

  2. A discrete particle swarm optimization algorithm with local search for a production-based two-echelon single-vendor multiple-buyer supply chain

    Seifbarghy, Mehdi; Kalani, Masoud Mirzaei; Hemmati, Mojtaba

    2016-03-01

    This paper formulates a two-echelon single-producer multi-buyer supply chain model in which a single product is produced and transported to the buyers by the producer. The producer and the buyers apply a vendor-managed inventory mode of operation. It is assumed that the producer applies an economic production quantity policy, which implies a constant production rate at the producer. The operational parameters of each buyer are sales quantity, sales price and production rate. The channel profit of the supply chain and the contract price between the producer and each buyer are determined based on the values of the operational parameters. Since the model is a nonlinear integer program, we use a discrete particle swarm optimization algorithm (DPSO) to solve the addressed problem; the performance of the DPSO is compared with two well-known heuristics, namely a genetic algorithm and simulated annealing. A number of examples are provided to verify the model and assess the performance of the proposed heuristics. Experimental results indicate that DPSO outperforms the rival heuristics with respect to some comparison metrics.

  3. Algorithmic mathematics

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  4. Total algorithms

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distributed algorithms.

  5. Cloud-Coffee: implementation of a parallel consistency-based multiple alignment algorithm in the T-Coffee package and its benchmarking on the Amazon Elastic-Cloud.

    Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernado; Espinosa, Toni; Notredame, Cedric

    2010-08-01

    We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html

  6. Effect of exercising at minimum recommendations of the multiple sclerosis exercise guideline combined with structured education or attention control education - secondary results of the step it up randomised controlled trial.

    Coote, Susan; Uszynski, Marcin; Herring, Matthew P; Hayes, Sara; Scarrott, Carl; Newell, John; Gallagher, Stephen; Larkin, Aidan; Motl, Robert W

    2017-06-24

    Recent exercise guidelines for people with multiple sclerosis (MS) recommend a minimum of 30 min moderate intensity aerobic exercise and resistance exercise twice per week. This trial compared the secondary outcomes of a combined 10-week guideline based intervention and a Social Cognitive Theory (SCT) education programme with the same exercise intervention involving an attention control education. Physically inactive people with MS, scoring 0-3 on the Patient Determined Disease Steps Scale, with no MS relapse or change in MS medication, were randomised to 10-week exercise plus SCT education or exercise plus attention control education conditions. Outcomes included fatigue, depression, anxiety, strength, physical activity, SCT constructs and impact of MS, and were measured by a blinded assessor pre- and post-intervention and at 3- and 6-month follow-up. One hundred and seventy-four expressed interest, 92 were eligible and 65 enrolled. Using linear mixed effects models, the differences between groups on all secondary measures post-intervention and at follow-up were not significant. Post-hoc, exploratory, within-group analysis identified improvements in both groups post intervention in fatigue (mean ∆ (95% CI): SCT -4.99 (-9.87, -0.21), p = 0.04; Control -7.68 (-12.13, -3.23), p = 0.00), strength (SCT -1.51 (-2.41, -0.60)) and exercise planning (SCT 5.88 (3.37, 8.39)). Exercising at the minimum guideline amount has a positive effect on fatigue, strength and PA that is sustained at 3 and 6 months following the cessation of the program. ClinicalTrials.gov, NCT02301442, retrospectively registered on November 13th 2014.

  7. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  8. Testing for direct genetic effects using a screening step in family-based association studies

    Sharon M Lutz

    2013-11-01

    Full Text Available In genome-wide association studies (GWAS), family-based studies tend to have less power to detect genetic associations than population-based studies, such as case-control studies. This can be an issue when testing whether genes in a family-based GWAS have a direct effect on the phenotype of interest or whether the genes act indirectly through a secondary phenotype. When multiple SNPs are tested for a direct effect in the family-based study, a screening step can be used to minimize the burden of multiple comparisons in the causal analysis. We propose a 2-stage screening step that can be incorporated into the family-based association test (FBAT) approach, similar to the conditional mean model approach in the VanSteen algorithm [1]. Simulations demonstrate that the type 1 error is preserved and that this method is advantageous when multiple markers are tested. The method is illustrated by an application to the Framingham Heart Study.

  9. Privacy Protection on Multiple Sensitive Attributes

    Li, Zhen; Ye, Xiaojun

    In recent years, a privacy model called k-anonymity has gained popularity in microdata releasing. As the microdata may contain multiple sensitive attributes about an individual, the protection of multiple sensitive attributes has become an important problem. Unlike the existing models for a single sensitive attribute, extra associations among multiple sensitive attributes should be investigated. Two kinds of disclosure scenarios may happen because of logical associations. The Q&S Diversity is checked to prevent the foregoing disclosure risks, with an α-Requirement definition used to ensure the diversity requirement. Finally, a two-step greedy generalization algorithm is used to carry out the processing of multiple sensitive attributes, dealing with quasi-identifiers and sensitive attributes respectively. We reduce the overall distortion by the measure of Masking SA.

  10. Optimization of distribution networks with multiple feeders through the application of genetic algorithms; Optimizacion de redes de distribucion con multiples alimentaciones a traves de la aplicacion de algoritmos geneticos

    Anaut, D.; Mauro, G. di; Suarez, J.A.; Dimenna, C.; Aguero, C. [Universidad Nacional de Mar del Plata (Argentina)

    2009-07-01

    Among the aspects to be improved in power distribution service, reducing Joule losses (technical losses) is vital. This paper proposes a method for the reconfiguration of primary distribution networks for minimal losses through the use of genetic algorithms. It is applied to a multi-node system with several feeders, aiming to minimize these losses and maintain the voltage in line with current legislation. As a result of applying it to this system, we found that the optimization method used is able to find the best solution (the optimal solution) among all possible combinations of switch operations. It also showed flexibility in adapting to the radiality and voltage-level constraints, in less time than that required for an exhaustive search.

  11. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.

  12. An improved affine projection algorithm for active noise cancellation

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal reuse algorithm, and it has a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step-size factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it can achieve a smaller steady-state error and faster convergence speed. Simulation results prove that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm achieves very good results.

  13. Handbook of Memetic Algorithms

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, are analysed separately and, for each problem,  memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  14. 3-D Image Encryption Based on Rubik's Cube and RC6 Algorithm

    Helmy, Mai; El-Rabaie, El-Sayed M.; Eldokany, Ibrahim M.; El-Samie, Fathi E. Abd

    2017-12-01

    A novel encryption algorithm based on the 3-D Rubik's cube is proposed in this paper to achieve 3D encryption of a group of images. This proposed encryption algorithm begins with RC6 as a first step for encrypting multiple images separately. After that, the obtained encrypted images are further encrypted with the 3-D Rubik's cube. The RC6 encrypted images are used as the faces of the Rubik's cube. From the concepts of image encryption, the RC6 algorithm adds a degree of diffusion, while the Rubik's cube algorithm adds a degree of permutation. The simulation results demonstrate that the proposed encryption algorithm is efficient, and it exhibits strong robustness and security. The encrypted images are further transmitted over a wireless Orthogonal Frequency Division Multiplexing (OFDM) system and decrypted at the receiver side. Evaluation of the quality of the decrypted images at the receiver side reveals good results.

  15. Algorithmic phase diagrams

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  16. Finding counterparts for all-sky X-ray surveys with NWAY: a Bayesian algorithm for cross-matching multiple catalogues

    Salvato, M.; Buchner, J.; Budavári, T.; Dwelly, T.; Merloni, A.; Brusa, M.; Rau, A.; Fotopoulou, S.; Nandra, K.

    2018-02-01

    We release the AllWISE counterparts and Gaia matches to 106 573 and 17 665 X-ray sources detected in the ROSAT 2RXS and XMMSL2 surveys with |b| > 15°. These are the brightest X-ray sources in the sky, but their position uncertainties and the sparse multi-wavelength coverage until now rendered the identification of their counterparts a demanding task with uncertain results. New all-sky multi-wavelength surveys of sufficient depth, like AllWISE and Gaia, and a new Bayesian statistics based algorithm, NWAY, allow us, for the first time, to provide reliable counterpart associations. NWAY extends previous distance and sky density based association methods and, using one or more priors (e.g. colours, magnitudes), weights the probability that sources from two or more catalogues are simultaneously associated on the basis of their observable characteristics. Here, counterparts have been determined using a Wide-field Infrared Survey Explorer (WISE) colour-magnitude prior. A reference sample of 4524 XMM/Chandra and Swift X-ray sources demonstrates a reliability of ∼94.7 per cent (2RXS) and 97.4 per cent (XMMSL2). Combining our results with Chandra-COSMOS data, we propose a new separation between stars and AGN in the X-ray/WISE flux-magnitude plane, valid over six orders of magnitude. We also release the NWAY code and its user manual. NWAY was extensively tested with XMM-COSMOS data. Using two different sets of priors, we find an agreement of 96 per cent and 99 per cent with published Likelihood Ratio methods. Our results were achieved faster and without any follow-up visual inspection. With the advent of deep and wide area surveys in X-rays (e.g. SRG/eROSITA, Athena/WFI) and radio (ASKAP/EMU, LOFAR, APERTIF, etc.) NWAY will provide a powerful and reliable counterpart identification tool.

  17. Learning-based meta-algorithm for MRI brain extraction.

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    Multiple-segmentation-and-fusion methods have been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through exemplars. We further develop a level-set based fusion method to combine multiple candidate extractions together with a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects used for training, the proposed method is able to produce more accurate and robust brain extraction results, at a Jaccard Index of 0.956 ± 0.010 on a total of 340 subjects under 6-fold cross-validation, compared to those by the BET and BSE even using their best parameter combinations.

  18. Alignment of Short Reads: A Crucial Step for Application of Next-Generation Sequencing Data in Precision Medicine

    Hao Ye

    2015-11-01

    Full Text Available Precision medicine or personalized medicine has been proposed as a modernized and promising medical strategy. Genetic variants of patients are the key information for the implementation of precision medicine. Next-generation sequencing (NGS) is an emerging technology for deciphering genetic variants. Alignment of raw reads to a reference genome is one of the key steps in NGS data analysis. Many algorithms have been developed for the alignment of short read sequences since 2008. Users have to make a decision on which alignment algorithm to use in their studies. Selecting the right alignment approach involves choosing not only the algorithm itself but also the set of suitable parameters to be used by the algorithm. Understanding these algorithms helps in selecting the appropriate alignment algorithm for different applications in precision medicine. Here, we review currently available algorithms and their major strategies such as seed-and-extend and the q-gram filter. We also discuss the challenges in current alignment algorithms, including alignment in multiple repeated regions, long-read alignment, and alignment facilitated with known genetic variants.
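
    The seed-and-extend strategy discussed in the review can be illustrated with a toy aligner: exact k-mer seeds are looked up in a reference index, and each hit is extended by simple mismatch counting. Real aligners add gapped extension, quality scores and many heuristics; the k, mismatch threshold and sequences below are arbitrary.

    def build_index(reference, k=4):
        """Index every k-mer of the reference by its positions (the seeds)."""
        index = {}
        for i in range(len(reference) - k + 1):
            index.setdefault(reference[i:i + k], []).append(i)
        return index

    def align_read(read, reference, index, k=4, max_mismatches=2):
        hits = []
        for offset in range(len(read) - k + 1):          # try every seed
            for pos in index.get(read[offset:offset + k], []):
                start = pos - offset                      # candidate placement
                if start < 0 or start + len(read) > len(reference):
                    continue
                mismatches = sum(a != b for a, b in
                                 zip(read, reference[start:start + len(read)]))
                if mismatches <= max_mismatches:          # extend and score
                    hits.append((start, mismatches))
        return sorted(set(hits), key=lambda h: h[1])

    ref = "ACGTACGTTAGCCGATACGGA"
    idx = build_index(ref)
    print(align_read("TAGCCGAT", ref, idx))   # [(8, 0)]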

  19. Internship guide : Work placements step by step

    Haag, Esther

    2013-01-01

    Internship Guide: Work Placements Step by Step has been written from the practical perspective of a placement coordinator. This book addresses the following questions: what problems do students encounter when they start thinking about the jobs their degree programme prepares them for? How do you

  20. The way to collisions, step by step

    2009-01-01

    While the LHC sectors cool down and reach the cryogenic operating temperature, spirits are warming up as we all eagerly await the first collisions. No reason to hurry, though. Making particles collide involves the complex manoeuvring of thousands of delicate components. The experts will make it happen using a step-by-step approach.

  1. A Fast General-Purpose Clustering Algorithm Based on FPGAs for High-Throughput Data Processing

    Annovi, A; The ATLAS collaboration; Castegnaro, A; Gatta, M

    2012-01-01

    We present a fast general-purpose algorithm for high-throughput clustering of data "with a two-dimensional organization". The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes this algorithm especially well suited for problems where the data have high density, e.g., in the case of tracking devices working under high-luminosity conditions such as those of the LHC or Super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of applications. In ...

  2. Boris push with spatial stepping

    Penn, G; Stoltz, P H; Cary, J R; Wurtele, J

    2003-01-01

    The Boris push is commonly used in plasma physics simulations because of its speed and stability. It is second-order accurate, requires only one field evaluation per time step, and has good conservation properties. However, for accelerator simulations it is convenient to propagate particles in z down a changing beamline. A 'spatial Boris push' algorithm has been developed which is similar to the Boris push but uses a spatial coordinate as the independent variable, instead of time. This scheme is compared to the fourth-order Runge-Kutta algorithm, for two simplified muon beam lattices: a uniform solenoid field, and a 'FOFO' lattice where the solenoid field varies sinusoidally along the axis. Examination of the canonical angular momentum, which should be conserved in axisymmetric systems, shows that the spatial Boris push improves accuracy over long distances
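
    For reference, the standard time-stepping Boris push that the spatial variant adapts can be sketched as a half electric kick, a magnetic rotation, and a second half kick. The non-relativistic version below is a generic textbook form, not the authors' spatial-stepping code.

    import numpy as np

    def boris_push(v, E, B, dt, q_over_m):
        """Advance velocity v one step through fields E, B (non-relativistic)."""
        v_minus = v + 0.5 * dt * q_over_m * E        # first electric half-kick
        t = 0.5 * dt * q_over_m * B                  # rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)     # magnetic rotation
        v_plus = v_minus + np.cross(v_prime, s)
        return v_plus + 0.5 * dt * q_over_m * E      # second electric half-kick

    # Gyration in a uniform B field: the speed is conserved to round-off,
    # illustrating the scheme's good conservation properties.
    v = np.array([1.0, 0.0, 0.0])
    for _ in range(1000):
        v = boris_push(v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                       dt=0.1, q_over_m=1.0)
    print(np.linalg.norm(v))   # 1.0 up to round-off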

  3. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.

  4. An improved ASIFT algorithm for indoor panorama image matching

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for implementing these functions. Compared with the SIFT algorithm, more feature points can be generated and the matching accuracy of the ASIFT algorithm is higher, even for panoramic images with obvious distortions. However, the algorithm is very time-consuming because of its complex operations, and it does not perform very well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to only the tilt affine transformation. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.

  5. Microsoft Office professional 2010 step by step

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010, one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: create attractive documents, publications, and spreadsheets; manage your e-mail, calendar, meetings, and communications; put your business data to work; develop and deliver great presentations; organize your ideas and notes in one place; connect, share, and accom

  6. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially if a seizure occurs, in which case it is important to identify it. The studies conducted in order to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to know patterns of brain activity. The inverse problem, that is, determining the underlying sources given the field sampled at different electrodes, is more difficult because the problem may not have a unique solution, or the search for the solution is made difficult by a low spatial resolution which may not allow one to distinguish between activities involving sources close to each other. Thus, sources of interest may be obscured or not detected, and known methods for the source localization problem such as MUSIC (MUltiple SIgnal Classification) could fail. Many advanced source localization techniques achieve a better resolution by exploiting sparsity: if the number of sources is small, the neural power vs. location is, as a result, sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information about the sparsity in the signal. The problem is formulated and solved using a regularization method such as Tikhonov, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fitting of the data and another concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by the solution of the forward problem. Relative to the model considered for the head and brain sources, the result obtained allows to
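
    The Tikhonov step described above has a simple closed form for the classic L2 penalty; the sketch below illustrates it on a toy lead-field matrix with two active sources. The dimensions, noise level and regularization weight are arbitrary, and the sparsity-promoting term of the actual method is not reproduced here.

    import numpy as np

    def tikhonov(A, b, lam):
        """Minimizer of ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((32, 100))        # 32 electrodes, 100 candidate sources
    x_true = np.zeros(100)
    x_true[[10, 60]] = 1.0                    # two active sources
    b = A @ x_true + 0.01 * rng.standard_normal(32)
    x_hat = tikhonov(A, b, lam=1e-2)
    print(np.argsort(np.abs(x_hat))[-2:])     # two largest entries; typically 10, 60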

  7. A transport-based condensed history algorithm

    Tolar, D. R. Jr.

    1999-01-01

    Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s0. For example, the PENELOPE method splits each step into two substeps: one with length ξs0 and one with length (1 − ξ)s0, where ξ is a random number from (0, 1). Because s0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power

  8. A review on quantum search algorithms

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database in a time quadratically faster than any classical computer. We review Grover's quantum search algorithms for a single target element and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
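
    To make the quadratic speedup concrete, here is a minimal classical state-vector simulation of Grover iterations (oracle sign flip plus inversion about the mean), with a hypothetical database size and marked set; it is a sketch of the textbook algorithm, not of the GRK partial search.

```python
import numpy as np

N, marked = 64, {5, 17}             # database size, hypothetical target indices
state = np.full(N, 1 / np.sqrt(N))  # uniform superposition |s>

# Near-optimal iteration count: ~ (pi/4) * sqrt(N / k) for k marked items.
iters = round(np.pi / 4 * np.sqrt(N / len(marked)))
for _ in range(iters):
    for m in marked:                  # oracle: flip the sign of marked amplitudes
        state[m] *= -1
    state = 2 * state.mean() - state  # diffusion: reflect amplitudes about the mean

print("success probability:", sum(state[m] ** 2 for m in marked))
```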

  9. An improved VSS NLMS algorithm for active noise cancellation

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits the performance of the NLMS algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily tuned parameters, and effectively resolves the trade-off in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate and low steady-state error.
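
    The paper's exact step-size rule is not given in this record, so the sketch below pairs a standard NLMS update with a generic error-driven variable step size (an assumed schedule for illustration) in a toy system-identification setting.

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1])          # unknown system to identify
x = rng.standard_normal(5000)                 # input signal
d = np.convolve(x, h, mode="full")[:len(x)]   # desired signal (noise-free here)

L, eps = len(h), 1e-8
w = np.zeros(L)
mu_min, mu_max = 0.05, 1.0
for n in range(L, len(x)):
    u = x[n - L + 1:n + 1][::-1]      # most recent L input samples
    e = d[n] - w @ u                  # a-priori error
    # Error-driven variable step size: large error -> large mu (fast
    # convergence), small error -> small mu (low steady-state error).
    mu = np.clip(abs(e), mu_min, mu_max)
    w += mu * e * u / (eps + u @ u)   # normalized LMS update

print("estimated taps:", np.round(w, 3))
```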

  10. STEP and fundamental physics

    Overduin, James; Everitt, Francis; Worden, Paul; Mester, John

    2012-09-01

    The Satellite Test of the Equivalence Principle (STEP) will advance experimental limits on violations of Einstein's equivalence principle from their present sensitivity of two parts in 10^13 to one part in 10^18 through multiple comparison of the motions of four pairs of test masses of different compositions in a drag-free earth-orbiting satellite. We describe the experiment, its current status and its potential implications for fundamental physics. Equivalence is at the heart of general relativity, our governing theory of gravity, and violations are expected in most attempts to unify this theory with the other fundamental interactions of physics, as well as in many theoretical explanations for the phenomenon of dark energy in cosmology. Detection of such a violation would be equivalent to the discovery of a new force of nature. A null result would be almost as profound, pushing upper limits on any coupling between standard-model fields and the new light degrees of freedom generically predicted by these theories down to unnaturally small levels.

  11. STEP and fundamental physics

    Overduin, James; Everitt, Francis; Worden, Paul; Mester, John

    2012-01-01

    The Satellite Test of the Equivalence Principle (STEP) will advance experimental limits on violations of Einstein's equivalence principle from their present sensitivity of two parts in 10^13 to one part in 10^18 through multiple comparison of the motions of four pairs of test masses of different compositions in a drag-free earth-orbiting satellite. We describe the experiment, its current status and its potential implications for fundamental physics. Equivalence is at the heart of general relativity, our governing theory of gravity, and violations are expected in most attempts to unify this theory with the other fundamental interactions of physics, as well as in many theoretical explanations for the phenomenon of dark energy in cosmology. Detection of such a violation would be equivalent to the discovery of a new force of nature. A null result would be almost as profound, pushing upper limits on any coupling between standard-model fields and the new light degrees of freedom generically predicted by these theories down to unnaturally small levels. (paper)

  12. Higher-order force gradient symplectic algorithms

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are better by approximately a factor of 10^3, 10^4, 10^4, and 10^5, respectively.
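
    The force-gradient integrators themselves are not reproduced in this record; as a point of reference, this is a minimal sketch of the standard fourth-order Forest-Ruth composition the paper compares against, applied to an eccentric Kepler orbit.

```python
import numpy as np

def accel(q):
    # Kepler problem: a = -q / |q|^3 (units with GM = 1).
    return -q / np.linalg.norm(q) ** 3

def forest_ruth_step(q, p, dt):
    # Standard Forest-Ruth coefficients from theta = 1 / (2 - 2^(1/3)).
    theta = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    c = [theta / 2, (1 - theta) / 2, (1 - theta) / 2, theta / 2]
    d = [theta, 1 - 2 * theta, theta]
    for i in range(3):
        q = q + c[i] * dt * p         # drift
        p = p + d[i] * dt * accel(q)  # kick
    return q + c[3] * dt * p, p

# Eccentric orbit: start at aphelion with a small tangential velocity.
q, p = np.array([1.0, 0.0]), np.array([0.0, 0.3])
E0 = 0.5 * p @ p - 1 / np.linalg.norm(q)
for _ in range(20000):
    q, p = forest_ruth_step(q, p, 1e-3)
E = 0.5 * p @ p - 1 / np.linalg.norm(q)
print("relative energy error:", abs((E - E0) / E0))
```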

  13. A trust region interior point algorithm for optimal power flow problems

    Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation

    2005-05-01

    This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through the modification of the trust region sub-problem. Numerical results for standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. Comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)

  14. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards the detection of stars and galaxies, while completely ignoring the detection of linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
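
    As a reminder of how the final Hough detection stage works, here is a minimal accumulator-based Hough transform in NumPy (the paper's pre-processing and rectangle-verification steps are omitted; the synthetic test image is an assumption for illustration).

```python
import numpy as np

def hough_lines(binary, n_theta=180, n_rho=200):
    """Return the (rho, theta) cell with the most votes in a binary image."""
    h, w = binary.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # Each bright pixel votes for every line rho = x cos(t) + y sin(t).
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[i], thetas[j]

# Synthetic image containing the diagonal line y = x.
img = np.zeros((100, 100), dtype=bool)
for k in range(100):
    img[k, k] = True
rho, theta = hough_lines(img)
print(f"detected line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```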

  15. M4GB : Efficient Groebner Basis algorithm

    R.H. Makarim (Rusydi); M.M.J. Stevens (Marc)

    2017-01-01

    We introduce a new efficient algorithm for computing Groebner bases named M4GB. Like Faugere's algorithm F4, it is an extension of Buchberger's algorithm that describes how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction

  16. Step by Step Microsoft Office Visio 2003

    Lemke, Judy

    2004-01-01

    Experience learning made easy-and quickly teach yourself how to use Visio 2003, the Microsoft Office business and technical diagramming program. With STEP BY STEP, you can take just the lessons you need, or work from cover to cover. Either way, you drive the instruction-building and practicing the skills you need, just when you need them!
    Produce computer network diagrams, organization charts, floor plans, and more
    Use templates to create new diagrams and drawings quickly
    Add text, color, and 1-D and 2-D shapes
    Insert graphics and pictures, such as company logos
    Connect shapes to create a basic f

  17. Efficient sequential and parallel algorithms for planted motif search.

    Nicolae, Marius; Rajasekaran, Sanguthevar

    2014-01-31

    Motif searching is an important step in the detection of rare events occurring in a set of DNA or protein sequences. One formulation of the problem is known as (l,d)-motif search or Planted Motif Search (PMS). In PMS we are given two integers l and d and n biological sequences. We want to find all sequences of length l that appear in each of the input sequences with at most d mismatches. The PMS problem is NP-complete. PMS algorithms are typically evaluated on certain instances considered challenging. Despite ample research in the area, a considerable performance gap exists because many state-of-the-art algorithms have large runtimes even for moderately challenging instances. This paper presents a fast exact parallel PMS algorithm called PMS8. PMS8 is the first algorithm to solve the challenging (l,d) instances (25,10) and (26,11). PMS8 is also efficient on instances with larger l and d such as (50,21). We include a comparison of PMS8 with several state-of-the-art algorithms on multiple problem instances. This paper also presents necessary and sufficient conditions for three l-mers to have a common d-neighbor. The program is freely available at http://engr.uconn.edu/~man09004/PMS8/. We present PMS8, an efficient exact algorithm for Planted Motif Search. PMS8 introduces novel ideas for generating common neighborhoods. We have also implemented a parallel version of this algorithm. PMS8 can solve instances not solved by any previous algorithm.
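
    PMS8's neighborhood-generation machinery is beyond a short sketch, but the (l,d) problem itself is compact. The brute-force baseline below enumerates the d-neighborhood of every l-mer of the first sequence and keeps candidates that occur in all remaining sequences with at most d mismatches; it is exact but exponential in d, so only tiny instances are feasible.

```python
from itertools import combinations, product

ALPHABET = "ACGT"

def neighbors(kmer, d):
    """All strings within Hamming distance d of kmer."""
    out = {kmer}
    for positions in combinations(range(len(kmer)), d):
        for letters in product(ALPHABET, repeat=d):
            s = list(kmer)
            for pos, ch in zip(positions, letters):
                s[pos] = ch
            out.add("".join(s))
    return out

def occurs(seq, motif, d):
    # True if some window of seq matches motif with at most d mismatches.
    l = len(motif)
    return any(sum(a != b for a, b in zip(seq[i:i + l], motif)) <= d
               for i in range(len(seq) - l + 1))

def planted_motif_search(seqs, l, d):
    candidates = set()
    first = seqs[0]
    for i in range(len(first) - l + 1):
        candidates |= neighbors(first[i:i + l], d)
    return {m for m in candidates if all(occurs(s, m, d) for s in seqs[1:])}

seqs = ["ACGTTGCA", "CCGTAGCA", "ACGTAGGG"]
print(sorted(planted_motif_search(seqs, l=4, d=1)))
```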

  18. Free Modal Algebras Revisited: The Step-by-Step Method

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

    We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond

  19. Diabetes PSA (:30) Step By Step

    2009-10-24

    First steps to preventing diabetes. For Hispanic and Latino American audiences.  Created: 10/24/2009 by National Diabetes Education Program (NDEP), a joint program of the Centers for Disease Control and Prevention and the National Institutes of Health.   Date Released: 10/24/2009.

  20. Diabetes PSA (:60) Step By Step

    2009-10-24

    First steps to preventing diabetes. For Hispanic and Latino American audiences.  Created: 10/24/2009 by National Diabetes Education Program (NDEP), a joint program of the Centers for Disease Control and Prevention and the National Institutes of Health.   Date Released: 10/24/2009.

  1. Applications of Fast Truncated Multiplication in Cryptography

    Laszlo Hars

    2006-12-01

    Truncated multiplications compute truncated products, contiguous subsequences of the digits of integer products. For an n-digit multiplication algorithm of time complexity O(n^α), with 1 < α ≤ 2, there is a truncated multiplication algorithm which is a constant factor faster when computing a short enough truncated product. Applying these fast truncated multiplications, several cryptographic long integer arithmetic algorithms are improved, including integer reciprocals, divisions, Barrett and Montgomery multiplications, and 2n-digit modular multiplication on hardware for n-digit half products. For example, Montgomery multiplication is performed in 2.6 times the Karatsuba multiplication time.

  2. Improved quantum backtracking algorithms using effective resistance estimates

    Jarret, Michael; Wan, Kianna

    2018-02-01

    We investigate quantum backtracking algorithms of the type introduced by Montanaro (Montanaro, arXiv:1509.02374). These algorithms explore trees of unknown structure and in certain settings exponentially outperform their classical counterparts. Some of the previous work focused on obtaining a quantum advantage for trees in which a unique marked vertex is promised to exist. We remove this restriction by recharacterizing the problem in terms of the effective resistance of the search space. In this paper, we present a generalization of one of Montanaro's algorithms to trees containing k marked vertices, where k is not necessarily known a priori. Our approach involves using amplitude estimation to determine a near-optimal weighting of a diffusion operator, which can then be applied to prepare a superposition state with support only on marked vertices and ancestors thereof. By repeatedly sampling this state and updating the input vertex, a marked vertex is reached in a logarithmic number of steps. The algorithm thereby achieves the conjectured bound of Õ(√(T·R_max)) for finding a single marked vertex and Õ(k·√(T·R_max)) for finding all k marked vertices, where T is an upper bound on the tree size and R_max is the maximum effective resistance encountered by the algorithm. This constitutes a speedup over Montanaro's original procedure both in the case of finding one marked vertex and in the case of finding multiple marked vertices in an arbitrary tree.

  3. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    Desmal, Abdulla; Bagci, Hakan

    2017-01-01

    steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without

  4. Effectiveness of Partition and Graph Theoretic Clustering Algorithms for Multiple Source Partial Discharge Pattern Classification Using Probabilistic Neural Network and Its Adaptive Version: A Critique Based on Experimental Studies

    S. Venkatesh

    2012-01-01

    Partial discharge (PD) is a major cause of failure of power apparatus and hence its measurement and analysis have emerged as a vital field in assessing the condition of the insulation system. Several efforts have been undertaken by researchers to classify PD pulses utilizing artificial intelligence techniques. Recently, the focus has shifted to the identification of multiple sources of PD since it is often encountered in real-time measurements. Studies have indicated that classification of multi-source PD becomes difficult with the degree of overlap and that several techniques such as mixed Weibull functions, neural networks, and wavelet transformation have been attempted with limited success. Since digital PD acquisition systems record data for a substantial period, the database becomes large, posing considerable difficulties during classification. This research work aims firstly at analyzing aspects concerning classification capability during the discrimination of multi-source PD patterns. Secondly, it attempts to extend the previous work of the authors in utilizing the novel approach of probabilistic neural network versions from classifying moderate sets of PD sources to large sets. The third focus is on comparing the ability of partition-based algorithms, namely the labelled (learning vector quantization) and unlabelled (K-means) versions, with that of a novel hypergraph-based clustering method in providing parsimonious sets of centers during classification.

  5. Autodriver algorithm

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  6. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing Keff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms and computational intelligence methods have been applied to optimize the reactor core loading pattern. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions. It uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, the GSA method is first compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, to find the best core, the GSA algorithm has been applied to three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimization with the following goals is considered: increasing the multiplication factor (Keff), decreasing the power peaking factor (PPF) and flattening the power density. Note that for the neutronic calculations, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
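
    As a rough sketch of the GSA mechanics summarized above (fitness-derived masses, pairwise attractive forces, a decaying gravitational constant), here is a simplified version minimizing a toy sphere function rather than a core-loading objective; the constants and schedules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):
    # Toy objective to minimize (sphere function), one row per agent.
    return np.sum(x ** 2, axis=1)

n, dim, iters, eps = 20, 5, 200, 1e-10
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
for t in range(iters):
    f = fitness(pos)
    best, worst = f.min(), f.max()
    # Masses: better (lower-fitness) agents are heavier, normalized to sum to 1.
    m = (worst - f + eps) / (worst - best + eps)
    M = m / m.sum()
    G = 100 * np.exp(-20 * t / iters)   # decaying gravitational constant
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = pos[j] - pos[i]
                r = np.linalg.norm(diff) + eps
                # Attraction toward agent j; random weight keeps it stochastic.
                acc[i] += rng.random() * G * M[j] * diff / r
    vel = rng.random((n, 1)) * vel + acc
    pos = pos + vel

print("best fitness found:", fitness(pos).min())
```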

  7. Microsoft Office Word 2007 step by step

    Cox, Joyce

    2007-01-01

    Experience learning made easy-and quickly teach yourself how to create impressive documents with Word 2007. With Step By Step, you set the pace-building and practicing the skills you need, just when you need them!
    Apply styles and themes to your document for a polished look
    Add graphics and text effects-and see a live preview
    Organize information with new SmartArt diagrams and charts
    Insert references, footnotes, indexes, a table of contents
    Send documents for review and manage revisions
    Turn your ideas into blogs, Web pages, and more
    Your all-in-one learning experience includes: Files for building sk

  8. Benchmarking monthly homogenization algorithms

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  9. A Stepped Frequency CW SAR for Lightweight UAV Operation

    Morrison, Keith

    2005-01-01

    A stepped-frequency continuous wave (SF-CW) synthetic aperture radar (SAR), with frequency-agile waveforms and real-time intelligent signal processing algorithms, is proposed for operation from a lightweight UAV platform...

  10. An iterative two-step algorithm for American option pricing

    Siddiqi, A. H.; Manchanda, P.; Kočvara, Michal

    2000-01-01

    Vol. 11, No. 2 (2000), p. 71-84. ISSN 0953-0061. R&D Projects: GA AV ČR IAA1075707. Institutional research plan: AV0Z1075907. Keywords: American option pricing * linear complementarity * iterative methods. Subject RIV: AH - Economics

  11. Progressive multiple sequence alignments from triplets

    Stadler Peter F

    2007-07-01

    Background: The quality of progressive sequence alignments strongly depends on the accuracy of the individual pairwise alignment steps, since gaps that are introduced at one step cannot be removed at later aggregation steps. Adjacent insertions and deletions necessarily appear in arbitrary order in pairwise alignments and hence form an unavoidable source of errors. Results: Here we present a modified variant of progressive sequence alignment that addresses both issues. Instead of pairwise alignments we use exact dynamic programming to align sequence or profile triples. This avoids a large fraction of the ambiguities arising in pairwise alignments. In the subsequent aggregation steps we follow the logic of the Neighbor-Net algorithm, which constructs a phylogenetic network by stepwise replacing triples by pairs instead of combining pairs to singletons. To this end the three-way alignments are subdivided into two partial alignments, at which stage all-gap columns are naturally removed. This alleviates the "once a gap, always a gap" problem of progressive alignment procedures. Conclusion: The three-way Neighbor-Net based alignment program aln3nn is shown to compare favorably on both protein sequences and nucleic acid sequences to other progressive alignment tools. In the latter case one can easily include scoring terms that consider secondary structure features. Overall, the quality of the resulting alignments in general exceeds that of clustalw or other multiple alignment tools, even though our software does not include heuristics for context dependent (mis)match scores.

  12. Joint optimization of algorithmic suites for EEG analysis.

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable.

  13. Honing process optimization algorithms

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed, and such important concepts as the formulation of the optimization task for honing operations, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide region and can be used to operate the CNC machine CC743.

  14. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPU(s). Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distribution performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generation of GPUs, show that the hierarchical time step achieves a speedup by a factor of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.

  15. Fostering Autonomy through Syllabus Design: A Step-by-Step Guide for Success

    Ramírez Espinosa, Alexánder

    2016-01-01

    Promoting learner autonomy is relevant in the field of applied linguistics due to the multiple benefits it brings to the process of learning a new language. However, despite the vast array of research on how to foster autonomy in the language classroom, it is difficult to find step-by-step processes to design syllabi and curricula focused on the…

  16. The structure of stepped surfaces

    Algra, A.J.

    1981-01-01

    The state-of-the-art of Low Energy Ion Scattering (LEIS) as far as multiple scattering effects are concerned, is discussed. The ion fractions of lithium, sodium and potassium scattered from a copper (100) surface have been measured as a function of several experimental parameters. The ratio of the intensities of the single and double scattering peaks observed in ion scattering spectroscopy has been determined and ion scattering spectroscopy applied in the multiple scattering mode is used to determine the structure of a stepped Cu(410) surface. The average relaxation of the (100) terraces of this surface appears to be very small. The adsorption of oxygen on this surface has been studied with LEIS and it is indicated that oxygen adsorbs dissociatively. (C.F.)

  17. Elementary functions algorithms and implementation

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  18. Efficient RNA structure comparison algorithms.

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures. This problem has a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.

  19. ON A NUMERICAL ALGORITHM FOR UNCERTAIN SYSTEM ∫ Φ ...

    Administrator

    Science World Journal Vol 7 (No 1) 2012. www.scienceworldjournal.org. ISSN 1597-6343. On a Numerical Algorithm for Uncertain System. Newton's Algorithm. Step 1: Calculate F(x_k), g(x_k), A(x_k). Step 2: Check if ||g(x_k)|| < ε for a predetermined ε; if so stop, else go to Step 3. Step 3: Solve A(x_k) P_k = -g(x_k). Step 4: Set x_{k+1} = x_k + P_k.
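
    Read this way, the record describes a standard Newton iteration. A minimal sketch with a hypothetical two-equation system g and its Jacobian A:

```python
import numpy as np

def g(x):   # residual whose root is sought (hypothetical example system)
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] ** 2 + 1.0])

def A(x):   # Jacobian of g
    return np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])

x, eps = np.array([1.0, 1.0]), 1e-10
for k in range(50):
    gk = g(x)                        # Step 1: evaluate g(x_k) and A(x_k)
    if np.linalg.norm(gk) < eps:     # Step 2: stop when ||g(x_k)|| < eps
        break
    P = np.linalg.solve(A(x), -gk)   # Step 3: solve A(x_k) P_k = -g(x_k)
    x = x + P                        # Step 4: x_{k+1} = x_k + P_k
print(k, x, np.linalg.norm(g(x)))
```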

  20. Array architectures for iterative algorithms

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  1. Multi-time-step domain coupling method with energy control

    Mahjoubi, N.; Krenk, Steen

    2010-01-01

    the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancellation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can ... by a numerical example using a refined mesh around concentrated forces.

  2. a Voxel-Based Filtering Algorithm for Mobile LIDAR Data

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are first partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in terms of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.

  3. Enhanced sampling algorithms.

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
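
    Of the three methods, the replica-exchange step is the easiest to show compactly. The toy sketch below runs Metropolis walkers on a double-well potential at several temperatures and periodically attempts neighbor swaps with the standard acceptance probability min(1, exp(Δβ·ΔE)); it is a generic illustration, not any particular biomolecular code.

```python
import numpy as np

rng = np.random.default_rng(3)

def U(x):
    # Double-well potential with minima at x = -1 and x = +1.
    return (x ** 2 - 1.0) ** 2

betas = np.array([8.0, 4.0, 2.0, 1.0])   # inverse temperatures, cold to hot
x = rng.standard_normal(len(betas))       # one walker per replica

for step in range(20000):
    # Metropolis move within each replica.
    prop = x + 0.3 * rng.standard_normal(len(x))
    dU = U(prop) - U(x)
    accept = rng.random(len(x)) < np.exp(np.minimum(0.0, -betas * dU))
    x = np.where(accept, prop, x)
    # Every 100 steps, attempt to swap a random neighboring pair of replicas.
    if step % 100 == 0:
        i = rng.integers(len(betas) - 1)
        delta = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
        if rng.random() < np.exp(min(0.0, delta)):
            x[i], x[i + 1] = x[i + 1], x[i]

print("final replica positions:", np.round(x, 2))
```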

  4. The theory of hybrid stochastic algorithms

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)

  5. Multi-Feature Based Multiple Landmine Detection Using Ground Penetration Radar

    S. Park

    2014-06-01

    This paper presents a novel method for the detection of multiple landmines using a ground penetrating radar (GPR). Conventional algorithms mainly focus on the detection of a single landmine and cannot be extended linearly to the multiple-landmine case. The proposed algorithm is composed of four steps: estimation of the number of objects buried in the ground, isolation of each object, feature extraction, and detection of landmines. The number of objects in the GPR signal is estimated by using the energy projection method. Then signals for the objects are extracted by using the symmetry filtering method. Each signal is then processed for features, which are given as input to the support vector machine (SVM) for landmine detection. Three landmines buried in various ground conditions are considered for the test of the proposed method. The results demonstrate that the proposed method can successfully detect multiple landmines.

  6. Faster algorithms for RNA-folding using the Four-Russians method.

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n^3/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n^2/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas used to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds, and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
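
    For reference, the underlying O(n^3) Nussinov recurrence that both the Four-Russians and two-vector variants accelerate is short enough to state directly; the version below maximizes base pairs with no minimum loop length and counts G-U wobble pairs, choices made here for simplicity.

```python
def nussinov(seq):
    """Maximum number of non-crossing base pairs in an RNA sequence."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}           # includes wobble pairs
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                   # interval length minus one
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                # case: position i unpaired
            for k in range(i + 1, j + 1):      # case: i pairs with some k
                if (seq[i], seq[k]) in pairs:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # small example, expected output: 3
```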

  7. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
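
    The serial kernel being parallelized is plain inverse-distance weighting; a compact NumPy version with an assumed power parameter p = 2 is shown below. The row-wise partitioning described in the steps above amounts to each slave node evaluating a slice of the query grid.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    """Inverse distance weighted interpolation at the query points."""
    # Pairwise distances, shape (n_query, n_known).
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** p            # weights fall off with distance^p
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(4)
pts = rng.uniform(0, 10, (50, 2))       # scattered sample locations
z = np.sin(pts[:, 0]) + np.cos(pts[:, 1])
grid = np.array([[x, y] for x in range(10) for y in range(10)], dtype=float)
print(idw(pts, z, grid)[:5])            # interpolated values on a grid slice
```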

  8. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ3n(ω+1)/2log⁡(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.......We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ3n(ω+1)/2log⁡(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent....

  9. "Accelerated Perceptron": A Self-Learning Linear Decision Algorithm

    Zuev, Yu. A.

    2003-01-01

    The class of linear decision rules is studied. A new algorithm for weight correction, called an "accelerated perceptron", is proposed. In contrast to Rosenblatt's classical perceptron, this algorithm modifies the weight vector at each step. The algorithm may be employed both in learning and in self-learning modes. The theoretical aspects of the behaviour of the algorithm are studied when the algorithm is used for the purpose of increasing decision reliability by means of weighted voting. I...
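
    The record is terse, but the classical per-error weight correction it contrasts against is easy to state; below is a plain Rosenblatt-style perceptron that updates only on misclassified samples (the accelerated variant's every-step update rule is not specified in this abstract, so it is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(5)
# Linearly separable toy data: labels from a fixed linear functional.
X = rng.standard_normal((200, 3))
X[:, -1] = 1.0                          # bias term folded into the inputs
y = np.sign(X @ np.array([1.5, -2.0, 0.3]))

w = np.zeros(3)
for epoch in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:          # misclassified (or on the boundary)
            w += yi * xi                # classical perceptron correction
            errors += 1
    if errors == 0:                     # converged: all points classified
        break
print(epoch, w)
```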

  10. Parallel algorithms for online trackfinding at PANDA

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently in construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphic Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  11. Parallel algorithms

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  12. Algorithm 865

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel

  13. Focal cryotherapy: step by step technique description

    Cristina Redondo

    Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68-year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in the lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and a urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors under ultrasound guidance. A cystoscopy confirms the right positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated one time. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1-5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment.

  14. Large scale tracking algorithms

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  15. FEM simulation of multi step forming of thick sheet

    Wisselink, H.H.; Huetink, Han

    2004-01-01

    A case study has been performed on the forming of an industrial product. This product, a bracket, is made of 5 mm thick sheet in multiple steps. The process consists of a bending step followed by a drawing and a flanging step. FEM simulations have been used to investigate this forming process. First,

  16. Mosaic crystal algorithm for Monte Carlo simulations

    Seeger, P A

    2002-01-01

    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  17. An algorithm for gluinos on the lattice

    Montvay, I.

    1995-10-01

    Luescher's local bosonic algorithm for Monte Carlo simulations of quantum field theories with fermions is applied to the simulation of a possibly supersymmetric Yang-Mills theory with a Majorana fermion in the adjoint representation. Combined with a correction step in a two-step polynomial approximation scheme, the obtained algorithm seems to be promising and could be competitive with more conventional algorithms based on discretized classical (''molecular dynamics'') equations of motion. The application of the considered polynomial approximation scheme to optimized hopping parameter expansions is also discussed. (orig.)

  18. High-resolution seismic wave propagation using local time stepping

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
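
    The bookkeeping can be illustrated on a toy 1-D diffusion problem: the sketch below advances a fine-mesh subregion with m substeps per global step so that each region satisfies its own stability limit, holding the interface value frozen during substeps (a schematic two-level LTS only; production seismic codes treat the interface far more carefully).

```python
import numpy as np

# 1-D heat equation u_t = u_xx, explicit Euler, two mesh regions:
# coarse spacing hc on the left, fine spacing hf = hc / 4 on the right.
hc = 0.1
hf = hc / 4
m = 16                      # fine substeps per global step (dt scales with h^2)
nc, nf = 20, 40             # number of coarse and fine interior points
dt = 0.4 * hc ** 2          # global step obeys the coarse stability limit

def laplacian(u, h, left, right):
    # Second difference with fixed (Dirichlet) values at both ends.
    up = np.concatenate(([left], u, [right]))
    return (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h ** 2

u_c = np.zeros(nc)
u_f = np.zeros(nf)
u_c[nc // 2] = 1.0          # initial heat spike in the coarse region

for step in range(200):
    bc = u_c[-1]            # coarse solution value at the interface
    # Fine region: m substeps of size dt/m within one global step, with the
    # interface value held frozen (first-order coupling; schematic only).
    for _ in range(m):
        u_f = u_f + (dt / m) * laplacian(u_f, hf, bc, 0.0)
    # Coarse region: one full step, seeing the updated fine boundary value.
    u_c = u_c + dt * laplacian(u_c, hc, 0.0, u_f[0])

print("approx. total heat:", hc * u_c.sum() + hf * u_f.sum())
```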

  19. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    Le Zuo

    2018-04-01

    This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superposition of phase measurements from multiple sources, into separate groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also achieves optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived to characterize the estimation accuracy and for performance comparison. The proposed method is verified through simulations.
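
    The DOA model itself is involved, but the E/M alternation the paper relies on can be shown on a much simpler model; the sketch below runs EM on a two-component 1-D Gaussian mixture, where the E-step assigns responsibilities (the analogue of matching signals to sources) and the M-step re-estimates parameters by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy data: superposition of two sources (here 1-D Gaussians, not DOAs).
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])

mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for it in range(100):
    # E-step: posterior responsibility of each component for each sample
    # (the 1/sqrt(2*pi) constant cancels in the normalization).
    like = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    r = like / like.sum(axis=1, keepdims=True)
    # M-step: maximum-likelihood re-estimate given the responsibilities.
    nk = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(data)

print("means:", mu, "sigmas:", sigma, "weights:", pi)
```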

  20. Incorporating a multi-criteria decision procedure into the combined dynamic programming/production simulation algorithm for generation expansion planning

    Yang, H.T.; Chen, S.L.

    1989-01-01

    A multi-objective optimization approach to generation expansion planning is presented. The approach is designed by adding a new multi-criteria decision (MCD) procedure to the conventional algorithm, which combines dynamic programming with a production simulation method. The MCD procedure can help decision makers weight the relative importance of multiple attributes associated with the decision alternatives and find the near-best compromise solution efficiently at each optimization step of the conventional algorithm. A practical application of the proposed approach to the feasibility evaluation of the fourth nuclear power plant of Taiwan is also presented, demonstrating the effectiveness and limitations of the approach