Efficient On-the-fly Algorithms for the Analysis of Timed Games
DEFF Research Database (Denmark)
Cassez, Franck; David, Alexandre; Fleury, Emmanuel
2005-01-01
In this paper, we propose the first efficient on-the-fly algorithm for solving games based on timed game automata with respect to reachability and safety properties. The algorithm we propose is a symbolic extension of the on-the-fly algorithm suggested by Liu & Smolka [15] for linear-time model-checking of finite-state systems. Being on-the-fly, the symbolic algorithm may terminate long before having explored the entire state-space. Also, the individual steps of the algorithm are carried out efficiently by the use of so-called zones as the underlying data structure. Various optimizations of the basic symbolic algorithm are proposed, as well as methods for obtaining time-optimal winning strategies (for reachability games). Extensive evaluation of an experimental implementation of the algorithm yields very encouraging performance results.
Efficient on-the-fly Algorithm for Checking Alternating Timed Simulation
DEFF Research Database (Denmark)
David, Alexandre; Larsen, Kim Guldstrand; Chatain, Thomas
2009-01-01
of building a symbolic turn-based two-player game such that the existence of a winning strategy is equivalent to the simulation being satisfied. We also propose an on-the-fly algorithm for solving this game. This simulation checking method can be applied to the case of non-alternating or strong simulations...
An Efficient Algorithm for On-the-Fly Data Race Detection Using an Epoch-Based Technique
Directory of Open Access Journals (Sweden)
Ok-Kyoon Ha
2015-01-01
Data races represent the most notorious class of concurrency bugs in multithreaded programs. To detect data races precisely and efficiently during the execution of multithreaded programs, the epoch-based FastTrack technique has been employed. However, FastTrack has time and space complexities that depend on the maximum parallelism of the program to partially maintain expensive data structures, such as vector clocks. This paper presents an efficient algorithm, called iFT, that uses only the epochs of the access histories. Unlike FastTrack, our algorithm requires O(1) operations to maintain an access history and locate data races, without any switching between epochs and vector clocks. We implement this algorithm on top of the Pin binary instrumentation framework and compare it with other on-the-fly detection algorithms, including FastTrack, which uses a state-of-the-art happens-before analysis algorithm. Empirical results using the PARSEC benchmark show that iFT reduces the average runtime and memory overhead to 84% and 37%, respectively, of those of FastTrack.
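The epoch idea described above can be illustrated with a minimal sketch (hypothetical code, not the paper's iFT implementation): an epoch is a single (clock, thread-id) pair, so checking a new write against the stored last-access epochs costs O(1), unlike comparing full vector clocks.

```python
class Epoch:
    """An epoch is a (clock, thread-id) pair -- O(1) space per access history."""
    def __init__(self, clock, tid):
        self.clock = clock
        self.tid = tid

def happens_before(epoch, vclock):
    # Epoch e = (c, t) happens-before the current thread iff c <= vclock[t].
    return epoch.clock <= vclock.get(epoch.tid, 0)

def check_write(var_state, tid, vclock):
    """Return True if this write races with the stored last-access epochs."""
    race = False
    w = var_state.get('write')
    if w is not None and w.tid != tid and not happens_before(w, vclock):
        race = True
    r = var_state.get('read')
    if r is not None and r.tid != tid and not happens_before(r, vclock):
        race = True
    var_state['write'] = Epoch(vclock.get(tid, 0), tid)  # O(1) update
    return race
```

For example, a write by thread 2 that has no happens-before knowledge of an earlier write by thread 1 is flagged as a race in a single comparison.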
Computationally efficient simulation of unsteady aerodynamics using POD on the fly
Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando
2016-12-01
Modern industrial aircraft design requires a large amount of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large number of simulations is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a less expensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines ‘on the fly’ a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited amount of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
Indian Academy of Sciences (India)
Outline of the talk: introduction; computing connectivities between all pairs of vertices; all pairs shortest paths/distances; optimal bipartite matching. ... Efficient algorithm: the time taken for this computation on any input should be bounded by a small polynomial in the input size. ...
An Efficient Reachability Analysis Algorithm
Vatan, Farrokh; Fijany, Amir
2008-01-01
A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a powerful tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.
Domestic energy efficiency improving algorithms
Molderink, Albert; Bakker, Vincent; Bosman, M.G.C.; Hurink, Johann L.; Smit, Gerardus Johannes Maria
Due to increasing energy prices and the greenhouse effect, more efficient electricity production is desirable, preferably based on renewable sources. In the last years, a lot of technologies have been developed to improve the efficiency of electricity usage and supply. Next to large scale
An Efficient Algorithm for Unconstrained Optimization
Directory of Open Access Journals (Sweden)
Sergio Gerardo de-los-Cobos-Silva
2015-01-01
This paper presents an original and efficient PSO algorithm, which is divided into three phases: (1) stabilization, (2) breadth-first search, and (3) depth-first search. The proposed algorithm, called PSO-3P, was tested on 47 benchmark continuous unconstrained optimization problems, for a total of 82 instances. The numerical results show that the proposed algorithm is able to reach the global optimum. This work mainly focuses on unconstrained optimization problems with 2 to 1,000 variables.
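For context, the baseline that PSO-3P builds on can be sketched as a plain global-best PSO loop. This is a generic textbook sketch, not the paper's three-phase PSO-3P; the inertia and acceleration constants are illustrative assumptions.

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Plain global-best PSO; PSO-3P layers its three search phases on top."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # illustrative constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (P[i][d] - X[i][d])
                           + c2 * rnd.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest
```

On a simple sphere function the swarm contracts onto the global optimum within a few hundred iterations.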
Efficient GPS Position Determination Algorithms
2007-06-01
Let the reference receiver, m, have a known position represented as (xm, ym, zm) and the reported ith satellite position (via ephemeris data) be... The relative velocity vr is given as the velocity difference vr = v − u̇ (1-22), where v is the (known) velocity of the satellite and u̇ is the velocity of the user to be... In this research and in our previous work reported in [15] and [16], an over-determined system is treated, making use of all-in-view (n ≥ 5) satellites as
ILIGRA : An Efficient Inverse Line Graph Algorithm
Liu, D.; Trajanovski, S.; Van Mieghem, P.
2014-01-01
This paper presents a new and efficient algorithm, ILIGRA, for inverse line graph construction. Given a line graph H, ILIGRA constructs its root graph G with the time complexity being linear in the number of nodes in H. If ILIGRA does not know whether the given graph H is a line graph, it firstly
Efficient Parallel Algorithms for Unsteady Incompressible Flows
Guermond, Jean-Luc
2013-01-01
The objective of this paper is to give an overview of recent developments on splitting schemes for solving the time-dependent incompressible Navier–Stokes equations and to discuss possible extensions to the variable density/viscosity case. Particular attention is given to algorithms that can be implemented efficiently on large parallel clusters.
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant forms of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection was presented, which was applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm can achieve better performance than the existing tag SNP selection algorithms; in most cases, this proposed algorithm is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as computation kernels.
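Tag SNP selection is commonly cast as a set-cover-style problem: pick a small set of tags such that every SNP is represented by at least one chosen tag. A greedy sketch of that framing follows (hypothetical illustration; the paper's algorithm and its speed-ups are not shown here, and the `coverage` mapping is an assumed input, e.g. derived from linkage disequilibrium).

```python
def greedy_tag_snps(coverage):
    """coverage: dict mapping each candidate tag SNP to the set of SNPs it
    represents. Greedy set cover: repeatedly pick the tag that covers the
    most still-uncovered SNPs."""
    universe = set().union(*coverage.values())
    uncovered, tags = set(universe), []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:          # nothing left coverable
            break
        tags.append(best)
        uncovered -= gained
    return tags
```

The greedy choice gives the classical ln(n) approximation guarantee for set cover, which is why it is a common baseline for tag selection.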
Efficient protein alignment algorithm for protein search.
Lu, Zaixin; Zhao, Zhiyu; Fu, Bin
2010-01-18
Proteins show a great variety of 3D conformations, which can be used to infer their evolutionary relationships and to classify them into more general groups; therefore protein structure alignment algorithms are very helpful for protein biologists. However, an accurate alignment algorithm by itself may be insufficient for effective discovery of structural relationships among tens of thousands of proteins. Due to the exponentially increasing amount of protein structural data, a fast and accurate structure alignment tool is necessary for protein classification and protein similarity search; however, the complexity of current alignment algorithms is usually too high to make a fully alignment-based classification and search practical. We have developed an efficient protein pairwise alignment algorithm and applied it to our protein search tool, which aligns a query protein structure in a pairwise manner with all protein structures in the Protein Data Bank (PDB) to output similar protein structures. The algorithm can align hundreds of pairs of protein structures in one second. Given a protein structure, the tool efficiently discovers similar structures from the tens of thousands of structures stored in the PDB, within 2 minutes on a single machine and within 20 seconds on our cluster of 6 machines. The algorithm has been fully implemented and is accessible online at our webserver, which is supported by a cluster of computers. Because our algorithm can work out hundreds of pairs of protein alignments in one second, it is very suitable for protein search. Our experimental results show that it is more accurate than other well-known protein search systems in finding proteins which are structurally similar at the SCOP family and superfamily levels, and its speed is also competitive with those systems. In terms of pairwise alignment performance, it is as good as some well-known alignment algorithms.
Implementation of the On-the-fly Encryption for the Linux OS Based on Certified CPS
Directory of Open Access Journals (Sweden)
Alexander Mikhailovich Korotin
2013-02-01
The article is devoted to tools for on-the-fly encryption and a method to implement such a tool for the Linux OS based on a certified CPS. The idea is to modify the existing tool named eCryptfs. Russian cryptographic algorithms will be used in the user and kernel modes.
AN EFFICIENT SEGMENTATION ALGORITHM FOR ENTITY INTERACTION
Directory of Open Access Journals (Sweden)
Eugene Ch'ng
2009-04-01
The inventorying of biological diversity and studies in biocomplexity require the management of large electronic datasets of organisms. While species inventory has adopted structured electronic databases for some time, the computer modelling of the functional interactions between biological entities at all levels of life is still in the stage of development. One of the challenges for this type of modelling is the biotic interactions that occur between large datasets of entities represented as computer algorithms. In real-time simulation that models the biotic interactions of large population datasets, the computational processing time can be extensive. One way of increasing the efficiency of such simulation is to partition the landscape so that each entity need only traverse its local space for entities that fall within the interaction proximity. This article presents an efficient segmentation algorithm for biotic interactions for research related to the modelling and simulation of biological systems.
Efficient Algorithms for the Maximum Sum Problems
Directory of Open Access Journals (Sweden)
Sung Eun Bae
2017-01-01
We present efficient sequential and parallel algorithms for the maximum sum (MS) problem, which is to maximize the sum of some shape in the data array. We deal with two MS problems: the maximum subarray (MSA) problem and the maximum convex sum (MCS) problem. In the MSA problem, we find a rectangular part within the given data array that maximizes the sum in it. The MCS problem is to find a convex shape, rather than a rectangular shape, that maximizes the sum. Thus, MCS is a generalization of MSA. For the MSA problem, O(n)-time parallel algorithms are already known on an (n, n) 2D array of processors. We improve the communication steps from 2n − 1 to n, which is optimal. For the MCS problem, we achieve the asymptotic time bound of O(n) on an (n, n) 2D array of processors. We provide rigorous proofs for the correctness of our parallel algorithm based on Hoare logic, and also provide some experimental results of our algorithm gathered from the Blue Gene/P supercomputer. Furthermore, we briefly describe how to compute the actual shape of the maximum convex sum.
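The sequential MSA problem referred to above has a classic solution: Kadane's O(n) scan in 1D, lifted to 2D by collapsing every band of rows into a single row. A sketch of that standard sequential baseline (not the paper's parallel algorithm):

```python
def max_subarray_1d(a):
    """Kadane's algorithm: O(n) maximum-sum contiguous subarray."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)       # extend the run or restart at x
        best = max(best, cur)
    return best

def max_subarray_2d(m):
    """O(rows^2 * cols) MSA: collapse each row band to 1D, then run Kadane."""
    rows, cols = len(m), len(m[0])
    best = m[0][0]
    for top in range(rows):
        col_sums = [0] * cols
        for bottom in range(top, rows):
            for c in range(cols):
                col_sums[c] += m[bottom][c]
            best = max(best, max_subarray_1d(col_sums))
    return best
```

The parallel versions discussed in the abstract distribute exactly this band-collapsing work across the processor array.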
An efficient algorithm for function optimization: modified stem cells algorithm
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
Memory-efficient algorithm for stored projection and backprojection matrix in helical CT.
Guo, Minghao; Gao, Hao
2017-04-01
Iterative image reconstruction is often time-consuming, especially for helical CT. The calculation of X-ray projections and backprojections is computationally expensive. Although they can be significantly accelerated by parallel computing (e.g., via graphics processing unit (GPU)), they have to be calculated numerous times on-the-fly (OTF) during iterative image reconstruction due to insufficient memory storage. In this work, a memory-efficient algorithm for the stored system matrix (SSM) is developed for both projections and backprojections to avoid repeated OTF computations of system matrices. The SSM algorithm is based on the shift-invariance of projection and backprojection under a rotating coordinate. As a result, the size of the projection and backprojection matrices can be significantly reduced, and they can be fully stored in memory. The proposed method can be readily incorporated into an iterative reconstruction algorithm with minor modification, i.e., by replacing OTF with SSM. Rigorous mathematical analysis is carried out to establish the shift-invariance for ray-driven projection and pixel-driven backprojection. Numerical results via GPU suggest that the proposed SSM method improves computational efficiency over the OTF method, with three- to sixfold acceleration for the projection and three- to 16-fold acceleration for the backprojection for helical CT. We propose a memory-efficient SSM algorithm for projections and backprojections so that system matrices can be fully stored on a state-of-the-art GPU to facilitate rapid iterative helical CT image reconstruction. © 2017 American Association of Physicists in Medicine.
Efficient predictive algorithms for image compression
Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla
2017-01-01
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...
Cerqueira, Tiago F T; Sarmiento-Pérez, Rafael; Amsler, Maximilian; Nogueira, F; Botti, Silvana; Marques, Miguel A L
2015-08-11
The dream of any solid-state theorist is to be able to predict new materials with tailored properties from scratch, i.e., without any input from experiment. Over the past decades, we have steadily approached this goal. Recent developments in the field of high-throughput calculations focused on finding the best material for specific applications. However, a key input for these techniques still had to be obtained experimentally, namely, the crystal structure of the materials. Here, we go a step further and show that one can indeed optimize material properties using as a single starting point the knowledge of the periodic table and the fundamental laws of quantum mechanics. This is done by combining state-of-the-art methods of global structure prediction, which allow us to obtain the ground-state crystal structure of arbitrary materials, with an evolutionary algorithm that optimizes the chemical composition for the desired property. As a first showcase demonstration of our method, we perform an unbiased search for superhard materials and for transparent conductors. We stress that our method is completely general and can be used to optimize any property (or combination of properties) that can be calculated in a computer.
A simple and efficient algorithm for modeling modular complex networks
Kowalczyk, Mateusz; Fronczak, Piotr; Fronczak, Agata
2017-09-01
In this paper we introduce a new algorithm to generate networks in which node degrees and community sizes can follow any arbitrary distribution. We compare the quality and efficiency of the proposed algorithm and the well-known algorithm by Lancichinetti et al. In contrast to the latter, the new algorithm, at the cost of accuracy, can generate networks two orders of magnitude larger in a reasonable time, and it can be easily described analytically.
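The "arbitrary degree distribution" part of such generators is often built on the configuration model: give each node a number of edge stubs equal to its target degree and pair stubs at random. A minimal sketch of that standard building block (the paper's algorithm additionally controls community sizes, which this sketch omits):

```python
import random

def configuration_model(degrees, seed=0):
    """Configuration-model sketch: pair edge stubs at random so node i ends
    up with at most degrees[i] connections (self-loops and multi-edges are
    simply discarded, which slightly lowers realized degrees)."""
    rnd = random.Random(seed)
    stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    rnd.shuffle(stubs)
    edges = set()
    for u, v in zip(stubs[::2], stubs[1::2]):
        if u != v:                              # drop self-loops
            edges.add((min(u, v), max(u, v)))   # canonical order drops multi-edges
    return edges
```

Because stubs are paired uniformly at random, the expected degree sequence matches the prescribed one up to the discarded loops and multi-edges.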
An efficient algorithm for color image segmentation
Directory of Open Access Journals (Sweden)
Shikha Yadav
2016-09-01
In the field of image processing, image segmentation plays an important role: it focuses on splitting the whole image into segments. An important goal of segmentation is to represent an image so that it can be analysed more easily and conveys more information. The partitioning of an image is usually realized by region-based, boundary-based or edge-based methods. In this work a hybrid approach is followed that combines improved bee colony optimization and Tabu search for color image segmentation. The results produced by this hybrid approach are compared with non-sorted particle swarm optimization, the non-sorted genetic algorithm and improved bee colony optimization. Results show that the hybrid algorithm has better or similar performance compared to other population-based algorithms. The algorithm is implemented in MATLAB.
On-the-fly Overlapping of Sparse Generations
DEFF Research Database (Denmark)
Sørensen, Chres Wiant; Roetter, Daniel Enrique Lucani; Fitzek, Frank
2014-01-01
generations can still be quite high compared to other sparse coding approaches. This paper focuses on an inherently different approach that combines (i) sparsely coded generations configured on-the- fly based on (ii) controllable and infrequent feedback that allows the system to remove some original packets...
On-the-fly conformance testing using Spin
de Vries, R.G.; Tretmans, G.J.
2000-01-01
In this paper we report on the construction of a tool for conformance testing based on Spin. The Spin tool has been adapted such that it can derive the building blocks for constructing test cases, called test primitives, from systems described in Promela. The test primitives support the on-the-fly
Computationally efficient optimisation algorithms for WECs arrays
DEFF Research Database (Denmark)
Ferri, Francesco
2017-01-01
In this paper two derivative-free global optimization algorithms are applied for the maximisation of the energy absorbed by wave energy converter (WEC) arrays. Wave energy is a large and mostly untapped source of energy that could have a key role in the future energy mix. The collection of this r... output. Although started in the late seventies, the topic of WEC array optimisation has gathered a renovated interest mostly in the last decade, and a number of different approaches has already been used, from traditional algorithms to heuristic ones. The objective of this paper is to present a comparison between derivative-free global optimisation algorithms. In particular, evolutionary strategies (CMA-ES) and metamodel-based optimisation algorithms are compared in terms of accuracy and computational time.
An efficient time algorithm for makespan objectives
Directory of Open Access Journals (Sweden)
Yucel Ozturkoglu
2015-07-01
This paper focuses on single machine scheduling subject to machine deterioration with rate-modifying activities (RMAs). The motivation for this study stems from the automatic-production-line problem with one machine. The main question is to find the sequence in which jobs should be scheduled, how many maintenance activities (RMAs) to use, if any, and where to insert them in the schedule during the time interval, with an optimal makespan objective. This problem is known to be NP-hard; we give concise analyses of the problem and provide polynomial time algorithms to solve the makespan problem. We also propose an algorithm which can be applied to some scheduling problems in which the actual processing time of a job depends nonlinearly on its position.
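The structure of the problem above can be sketched with a toy model (hypothetical and much simpler than the paper's: the actual time of the job in position r is assumed to be p * r**a to model deterioration, and a single RMA of fixed length resets the machine "as new"; every insertion point for the RMA is tried).

```python
def makespan_with_rma(proc_times, a=0.1, rma_len=5.0):
    """Toy sketch: SPT job order, position-dependent deterioration p * r**a,
    and an exhaustive search over where (if anywhere) to insert one RMA."""
    jobs = sorted(proc_times)                   # SPT order as a simple heuristic
    n = len(jobs)

    def span(split):
        total, pos = 0.0, 1
        for i, p in enumerate(jobs):
            if i == split:                      # run the RMA: machine is "as new"
                total += rma_len
                pos = 1
            total += p * pos ** a
            pos += 1
        return total

    return min(span(s) for s in range(n + 1))   # s == n means "no RMA"
```

With n jobs and one candidate RMA slot per position, this brute force stays polynomial, which mirrors why polynomial algorithms are possible for such makespan variants.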
Efficient waste reduction algorithms based on alternative ...
African Journals Online (AJOL)
This paper is concerned with wastage reduction in constrained two-dimensional guillotine-cut cutting stock problems, often called trim loss problems. A number of researchers report in the literature on algorithmic approaches to find exact solutions for the trim loss problem. Alternative heuristic functions are investigated and ...
Efficient Algorithms and Data Structures for Massive Data Sets
Alka
2010-05-01
For many algorithmic problems, traditional algorithms that optimise on the number of instructions executed prove expensive on I/Os. Novel and very different design techniques, when applied to these problems, can produce algorithms that are I/O efficient. This thesis adds to the growing chorus of such results. The computational models we use are the external memory model and the W-Stream model. On the external memory model, we obtain the following results. (1) An I/O efficient algorithm for computing minimum spanning trees of graphs that improves on the performance of the best known algorithm. (2) The first external memory version of soft heap, an approximate meldable priority queue. (3) Hard heap, the first meldable external memory priority queue that matches the amortised I/O performance of the known external memory priority queues, while allowing a meld operation at the same amortised cost. (4) I/O efficient exact, approximate and randomised algorithms for the minimum cut problem, which has not been explored before on the external memory model. (5) Some lower and upper bounds on I/Os for interval graphs. On the W-Stream model, we obtain the following results. (1) Algorithms for various tree problems and list ranking that match the performance of the best known algorithms and are easier to implement than them. (2) Pass efficient algorithms for sorting, and the maximal independent set problems, that improve on the best known algorithms. (3) Pass efficient algorithms for the graphs problems of finding vertex-colouring, approximate single source shortest paths, maximal matching, and approximate weighted vertex cover. (4) Lower bounds on passes for list ranking and maximal matching. We propose two variants of the W-Stream model, and design algorithms for the maximal independent set, vertex-colouring, and planar graph single source shortest paths problems on those models.
On-the-Fly Learning in a Perpetual Learning Machine
Simpson, Andrew J. R.
2015-01-01
Despite the promise of brain-inspired machine learning, deep neural networks (DNN) have frustratingly failed to bridge the deceptively large gap between learning and memory. Here, we introduce a Perpetual Learning Machine; a new type of DNN that is capable of brain-like dynamic 'on the fly' learning because it exists in a self-supervised state of Perpetual Stochastic Gradient Descent. Thus, we provide the means to unify learning and memory within a machine learning framework. We also explore ...
Efficient incremental density-based algorithm for clustering large datasets
Directory of Open Access Journals (Sweden)
Ahmad M. Bakr
2015-12-01
In dynamic information environments such as the web, the amount of information is rapidly increasing. Thus, the need to organize such information in an efficient manner is more important than ever. Given such a dynamic nature, incremental clustering algorithms are always preferred to traditional static algorithms. In this paper, an enhanced version of the incremental DBSCAN algorithm is introduced for incrementally building and updating arbitrarily shaped clusters in large datasets. The proposed algorithm enhances the incremental clustering process by limiting the search space to partitions rather than the whole dataset, which results in significant performance improvements compared to relevant incremental clustering algorithms. Experimental results with datasets of different sizes and dimensions show that the proposed algorithm speeds up the incremental clustering process by a factor of up to 3.2 compared to existing incremental algorithms.
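The partition idea above can be illustrated with a minimal sketch (an assumed uniform-grid partition, not the paper's exact scheme): bucketing points into eps-sized cells means an eps-neighbourhood query only inspects the 3x3 surrounding cells instead of scanning the whole dataset.

```python
from collections import defaultdict

def build_grid(points, eps):
    """Partition 2D points into eps-sized cells."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // eps), int(p[1] // eps))].append(p)
    return grid

def region_query(grid, q, eps):
    """eps-neighbourhood of q, inspecting only the 3x3 surrounding cells."""
    cx, cy = int(q[0] // eps), int(q[1] // eps)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for p in grid.get((cx + dx, cy + dy), []):
                if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps * eps:
                    out.append(p)
    return out
```

Inserting a new point touches one cell, so the incremental update cost is proportional to local density rather than dataset size.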
Efficient scheduling request algorithm for opportunistic wireless access
Nam, Haewoon
2011-08-01
An efficient scheduling request algorithm for opportunistic wireless access based on user grouping is proposed in this paper. Similar to the well-known opportunistic splitting algorithm, the proposed algorithm initially adjusts (or lowers) the threshold during a guard period if no user sends a scheduling request. However, if multiple users make requests simultaneously and therefore a collision occurs, the proposed algorithm no longer updates the threshold but narrows down the user search space by splitting the users into multiple groups iteratively, whereas the opportunistic splitting algorithm keeps adjusting the threshold until a single user is found. Since the threshold is only updated when no user sends a request, it is shown that the proposed algorithm significantly alleviates the burden of signaling for the threshold distribution to the users by the scheduler. More importantly, the proposed algorithm requires fewer mini-slots to make a user selection given a certain scheduling outage probability. © 2011 IEEE.
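The selection loop described above can be sketched as follows (a hypothetical simplification: deterministic halving stands in for the paper's grouping rule, and the threshold step is an assumed constant). The key behaviour is that the threshold only moves when nobody requests; a collision shrinks the candidate group instead.

```python
def select_user(gains, thr=0.9, step=0.1):
    """Return (selected user, mini-slots used). gains are channel qualities
    in [0, 1]; only users above thr send a scheduling request."""
    mini_slots = 0
    candidates = list(range(len(gains)))
    while True:
        mini_slots += 1
        above = [u for u in candidates if gains[u] >= thr]
        if len(above) == 1:
            return above[0], mini_slots
        if not above:
            thr -= step                                   # no request: lower threshold
        else:
            candidates = above[: (len(above) + 1) // 2]   # collision: split the group
```

Because the threshold never changes after a collision, the scheduler does not need to signal a new threshold to the users in those mini-slots.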
An algorithm for efficient constrained mate selection
Directory of Open Access Journals (Sweden)
Kinghorn Brian P
2011-01-01
Abstract Background Mate selection can be used as a framework to balance key technical, cost and logistical issues while implementing a breeding program at a tactical level. The resulting mating lists accommodate optimal contributions of parents to future generations, in conjunction with other factors such as progeny inbreeding, connection between herds, use of reproductive technologies, management of the genetic distribution of nominated traits, and management of allele/genotype frequencies for nominated QTL/markers. Methods This paper describes a mate selection algorithm that is widely used and presents an extension that makes it possible to apply constraints on certain matings, as dictated through a group mating permission matrix. Results The full algorithm leads to simpler applications and, for the scenario tested, to computing speeds several hundred times faster than the previous strategy of penalising solutions that break constraints. Conclusions The much higher speed of the method presented here extends the use of mate selection and enables implementation in relatively large programs across breeding units.
IFDR: An Efficient Iterative Optimization Algorithm for Standard Cell Placement
Feng Cheng; Junfa Mao
2004-01-01
In the automatic placement of integrated circuits, the force directed relaxation (FDR) method [Goto, S. (1981). An efficient algorithm for the two-dimensional placement problem in electrical circuit layout. IEEE Trans. on Circuits and Systems, CAS-28(1), 12-18] is a good iterative optimization algorithm. In this article, an improved force directed relaxation (IFDR) method for standard cell placement is presented, which provides a more flexible and efficient cell location adjustment scheme and...
Energy-Efficient Probabilistic Routing Algorithm for Internet of Things
Directory of Open Access Journals (Sweden)
Sang-Hyun Park
2014-01-01
Full Text Available In the future network with the Internet of Things (IoT), each of the things communicates with the others and acquires information by itself. In distributed networks for IoT, the energy efficiency of the nodes is a key factor in the network performance. In this paper, we propose the energy-efficient probabilistic routing (EEPR) algorithm, which controls the transmission of routing request packets stochastically in order to increase the network lifetime and decrease the packet loss under the flooding algorithm. The proposed EEPR algorithm adopts energy-efficient probabilistic control by simultaneously using the residual energy of each node and the ETX metric in the context of the typical AODV protocol. In the simulations, we verify that the proposed algorithm has a longer network lifetime and consumes the residual energy of each node more evenly when compared with the typical AODV protocol.
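The probabilistic control idea can be illustrated with a simple forwarding rule. The weighting between the energy and ETX terms and the ETX normalization below are assumptions for illustration, not the EEPR paper's exact formula:

```python
import random

def forward_probability(residual_energy, initial_energy, etx,
                        etx_max=10.0, w_energy=0.5):
    """Combine normalized residual energy with a normalized link-quality
    (ETX) term; nodes with more energy and better links forward more often."""
    energy_term = residual_energy / initial_energy       # in [0, 1]
    etx_term = max(0.0, 1.0 - etx / etx_max)             # better link -> higher
    return w_energy * energy_term + (1.0 - w_energy) * etx_term

def maybe_forward(residual_energy, initial_energy, etx, rng=random):
    """Decide stochastically whether to rebroadcast a route-request packet."""
    return rng.random() < forward_probability(residual_energy, initial_energy, etx)
```

A depleted node on a poor link then rebroadcasts route requests only rarely, spreading the energy drain across the network.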
Efficient motif finding algorithms for large-alphabet inputs
Directory of Open Access Journals (Sweden)
Pavlovic Vladimir
2010-10-01
Full Text Available Abstract Background We consider the problem of identifying motifs, recurring or conserved patterns, in biological sequence data sets. To solve this task, we present a new deterministic algorithm for finding patterns that are embedded as exact or inexact instances in all or most of the input strings. Results The proposed algorithm (1) improves search efficiency compared to existing algorithms, and (2) scales well with the size of the alphabet. On a synthetic planted DNA motif finding problem, our algorithm is over 10× more efficient than MITRA, PMSPrune, and RISOTTO for long motifs. Improvements are orders of magnitude higher in the same setting with large alphabets. On benchmark TF-binding site problems (FNP, CRP, LexA), we observed reductions in running time of over 12×, with high detection accuracy. The algorithm was also successful in rapidly identifying protein motifs in the Lipocalin and Zinc metallopeptidase families and supersecondary structure motifs for the Cadherin and Immunoglobin families. Conclusions Our algorithm reduces the computational complexity of current motif finding algorithms and demonstrates strong running time improvements over existing exact algorithms, especially in the important and difficult case of large-alphabet sequences.
Artifact mitigation of ptychography integrated with on-the-fly scanning probe microscopy
Huang, Xiaojing; Yan, Hanfei; Ge, Mingyuan; Öztürk, Hande; Nazaretski, Evgeny; Robinson, Ian K.; Chu, Yong S.
2017-07-01
We report our experiences with conducting ptychography simultaneously with the X-ray fluorescence measurement using the on-the-fly mode for efficient multi-modality imaging. We demonstrate that the periodic artifact inherent to the raster scan pattern can be mitigated using a sufficiently fine scan step size to provide an overlap ratio of >70%. This allows us to obtain transmitted phase contrast images with enhanced spatial resolution from ptychography while maintaining the fluorescence imaging with continuous-motion scans on pixelated grids. This capability will greatly improve the competence and throughput of scanning probe X-ray microscopy.
Class enzyme-based motors for "on the fly" enantiomer analysis of amino acids.
García-Carmona, Laura; Moreno-Guzmán, María; González, María Cristina; Escarpa, Alberto
2017-10-15
Here, two class-enzyme motors are designed to allow the rapid dispersion of the class enzymes D-amino acid oxidase (DAO) and L-amino acid oxidase (LAO) for selective "on the fly" biodetection of D- and L-amino acids (AAs), respectively. The efficient movement, together with the continuous release of fresh class enzyme, leads to a greatly accelerated enzymatic reaction process without the need for external stirring or chemical and physical attachment of the enzyme. Ultra-fast detection opens the way to the design of future point-of-care devices. Copyright © 2017 Elsevier B.V. All rights reserved.
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms for global optimization. Moreover, the GGS approach has been applied to computational chemistry problems of finding the minimal potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at efficient computational cost. © 2015 Wiley Periodicals, Inc.
The Efficiency Analysis of the Augmented Reality Algorithm
Directory of Open Access Journals (Sweden)
Dovilė Kurpytė
2013-05-01
Full Text Available The article presents an investigation of the efficiency of an augmented reality algorithm as it depends on rotation angles and lighting conditions. The variable subject parameters were three rotational degrees of freedom and side lighting that forms a shadow. The static parameters, which could be changed between experiments, were the distance between the marker and the camera, the camera, the processor, and the distance from the light source. The study is based on an open-source Java algorithm, which was tested with 10 markers. It was found that the rotation error did not exceed 2%. Article in Lithuanian
Efficient iterative image reconstruction algorithm for dedicated breast CT
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
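The two-parameter combination step can be written directly. The names `alpha` and `beta` stand in for the paper's two parameters, and the nested-list image representation is purely illustrative:

```python
def combine_images(gray, high_freq, alpha=1.0, beta=0.3):
    """Final image as a pixel-wise linear combination of a gray-level
    reconstruction and a high-frequency-enhanced reconstruction.
    alpha weights gray-level fidelity, beta the edge/detail enhancement."""
    return [[alpha * g + beta * h for g, h in zip(gray_row, hf_row)]
            for gray_row, hf_row in zip(gray, high_freq)]
```

Because each parameter scales one of the component images, its effect on the output (overall gray level vs. edge enhancement) is directly interpretable, which is the control property the abstract emphasizes.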
An Efficient Simulated Annealing Algorithm for Economic Load Dispatch Problems
Directory of Open Access Journals (Sweden)
Junaidi Junaidi
2013-03-01
Full Text Available This paper presents an efficient simulated annealing (SA) algorithm for solving economic load dispatch (ELD) problems in electrical power systems. The objective of ELD in electric power generation is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. The global optimization approach is inspired by the annealing process of thermodynamics. The SA algorithm presented here is applied to two case studies, which analyze power systems having three and six generating units. The results determined by the SA algorithm are compared to those found by conventional quadratic programming (QP) and a genetic algorithm (GA).
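A minimal SA sketch for a three-unit dispatch illustrates the setup. The quadratic cost coefficients, unit limits, demand, cooling schedule, and the scale-and-clamp constraint handler are all made-up assumptions, not the paper's case-study data or its exact SA variant:

```python
import math
import random

# Illustrative three-unit data (cost_i(P) = a + b*P + c*P^2) and demand.
UNITS = [  # (a, b, c, Pmin, Pmax)
    (500, 5.3, 0.004, 100, 450),
    (400, 5.5, 0.006, 100, 350),
    (200, 5.8, 0.009,  50, 225),
]
DEMAND = 800.0

def cost(p):
    return sum(a + b * x + c * x * x
               for (a, b, c, _, _), x in zip(UNITS, p))

def repair(p):
    """Scale outputs toward the demand, then clamp to unit limits
    (a crude handler for the equality/inequality constraints)."""
    scale = DEMAND / sum(p)
    return [min(max(x * scale, lo), hi)
            for (_, _, _, lo, hi), x in zip(UNITS, p)]

def anneal(iters=20000, t0=100.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    p = repair([rng.uniform(lo, hi) for (_, _, _, lo, hi) in UNITS])
    best, best_cost = p, cost(p)
    t = t0
    for _ in range(iters):
        q = repair([x + rng.gauss(0, 5) for x in p])   # neighbour solution
        d = cost(q) - cost(p)
        if d < 0 or rng.random() < math.exp(-d / t):   # Metropolis criterion
            p = q
            if cost(p) < best_cost:
                best, best_cost = p, cost(p)
        t *= cooling
    return best, best_cost
```

Early on, the high temperature lets the search accept cost-increasing moves and escape local minima; as `t` cools, the walk settles into a low-cost dispatch.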
Improving the efficiency of deconvolution algorithms for sound source localization
DEFF Research Database (Denmark)
Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.
2015-01-01
of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...
Efficient architecture for global elimination algorithm for H. 264 ...
Indian Academy of Sciences (India)
Sadhana, Volume 41, Issue 1. Keywords: fast block matching motion estimation; global elimination; matching complexity reduction; power reduction. The proposed architecture is based on the Global Elimination (GE) algorithm, which uses pixel averaging to reduce the complexity of motion search while keeping ...
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm exploits the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multilayer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.
Cache and energy efficient algorithms for Nussinov's RNA Folding.
Zhao, Chunchun; Sahni, Sartaj
2017-12-06
An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
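For reference, the Classical Nussinov recursion that all of these variants reorganize is a straightforward O(n^3) dynamic program over subsequence spans (no minimum loop length is enforced in this sketch):

```python
def nussinov(seq):
    """Classical Nussinov DP: maximize the number of complementary base
    pairs in a pseudoknot-free secondary structure of an RNA sequence."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    if n == 0:
        return 0
    N = [[0] * n for _ in range(n)]          # N[i][j]: best for seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])       # i or j unpaired
            if (seq[i], seq[j]) in pairs:              # i pairs with j
                best = max(best, N[i + 1][j - 1] + 1)
            for k in range(i + 1, j):                  # bifurcation
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]
```

The triangular table is filled diagonal by diagonal; the cache-efficient variants in the paper change the order in which these cells are visited, not the recurrence itself.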
Investigating the Multi-memetic Mind Evolutionary Computation Algorithm Efficiency
Directory of Open Access Journals (Sweden)
M. K. Sakharov
2017-01-01
Full Text Available In solving practically significant problems of global optimization, the objective function is often of high dimensionality and computational complexity, with a nontrivial landscape as well. Studies show that a single optimization method is often not enough to solve such problems efficiently; hybridization of several optimization methods is necessary. One of the most promising contemporary trends in this field is memetic algorithms (MA), which can be viewed as a synergistic combination of population-based search for a global optimum and procedures for local refinement of solutions (memes). Since there are relatively few theoretical studies on which MA configuration is advisable for black-box optimization problems, many researchers turn to adaptive algorithms, which select the most efficient local optimization methods for particular domains of the search space. The article proposes a multi-memetic modification of the simple SMEC algorithm using random hyper-heuristics, and presents the algorithm and the memes used (the Nelder-Mead method, random hyper-sphere surface search, and the Hooke-Jeeves method). A comparative study of the efficiency of the proposed algorithm, depending on the set and number of memes, was carried out using the multidimensional Rastrigin, Rosenbrock, and Zakharov test functions. Computational experiments were carried out for all possible combinations of memes and for each meme individually. According to the results of the study, conducted by the multi-start method, the combinations of memes comprising the Hooke-Jeeves method were successful. These results reflect the rapid convergence of that method to a local optimum in comparison with the other memes, since all methods perform at most a fixed number of iterations. The analysis of the average number of iterations shows that using the most efficient sets of memes allows us to find the optimal
An Efficient Chemical Reaction Optimization Algorithm for Multiobjective Optimization.
Bechikh, Slim; Chaabani, Abir; Ben Said, Lamjed
2015-10-01
Recently, a new metaheuristic called chemical reaction optimization was proposed. This search algorithm, inspired by chemical reactions launched during collisions, inherits several features from other metaheuristics such as simulated annealing and particle swarm optimization. This has made it one of the most powerful search algorithms for solving mono-objective optimization problems. In this paper, we propose a multiobjective variant of chemical reaction optimization, called nondominated sorting chemical reaction optimization, in an attempt to exploit chemical reaction optimization features in tackling problems involving multiple conflicting criteria. Since our approach is based on nondominated sorting, one of the main contributions of this paper is the proposal of a new quick nondominated sorting algorithm with quasi-linear average time complexity, thereby making our multiobjective algorithm efficient from a computational cost viewpoint. The experimental comparisons against several other multiobjective algorithms on a variety of benchmark problems involving various difficulties show the effectiveness and the efficiency of this multiobjective version in providing a well-converged and well-diversified approximation of the Pareto front.
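The quasi-linear sort is the paper's own contribution; for orientation, the standard fast nondominated sorting procedure it improves upon (quadratic in the population size) can be sketched as follows, assuming minimization of every objective:

```python
def fast_nondominated_sort(points):
    """Deb-style fast nondominated sort (minimization). Returns fronts as
    lists of indices: fronts[0] is the nondominated set, fronts[1] the set
    dominated only by fronts[0], and so on."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    f = 0
    while fronts[f]:
        nxt = []
        for i in fronts[f]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:          # all dominators already ranked
                    nxt.append(j)
        fronts.append(nxt)
        f += 1
    return fronts[:-1]
```

Peeling off successive fronts this way is what "nondominated sorting" refers to in the algorithm's name.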
Query-Driven Strategy for On-the-Fly Term Spotting in Spontaneous Speech
Directory of Open Access Journals (Sweden)
Mickael Rouvier
2010-01-01
Full Text Available Spoken utterance retrieval has been widely studied in recent decades, with the purpose of indexing large audio databases or of detecting keywords in continuous speech streams. While the indexing of closed corpora can be performed via a batch process, on-line spotting systems have to detect the targeted spoken utterances synchronously. We propose a two-level architecture for on-the-fly term spotting. The first level performs a fast detection of the speech segments that probably contain the targeted utterance. The second level refines the detection on the selected segments, using a speech recognizer based on a query-driven decoding algorithm. Experiments are conducted on both broadcast and spontaneous speech corpora. We investigate the impact of the level of spontaneity on system performance. Results show that our method remains effective even when the recognition rates are significantly degraded by disfluencies.
A Traffic Prediction Algorithm for Street Lighting Control Efficiency
Directory of Open Access Journals (Sweden)
POPA Valentin
2013-01-01
Full Text Available This paper presents the development of a traffic prediction algorithm that can be integrated in a street lighting monitoring and control system. The prediction algorithm must enable the reduction of energy costs and improve energy efficiency by decreasing the light intensity depending on the traffic level. The algorithm analyses and processes the information received at the command center based on the traffic level at different moments. The data is collected by means of the Doppler vehicle detection sensors integrated within the system. Two methods are used for the implementation of the algorithm: a neural network and a k-NN (k-Nearest Neighbor) prediction algorithm. For 500 training cycles, the mean square error of the neural network is 9.766, and for 500,000 training cycles the error amounts to 0.877. In the case of the k-NN algorithm, the error increases from 8.24 for k=5 to 12.27 for 50 neighbors. In terms of the root mean square error, the neural network ensures the highest performance level and can be integrated in a street lighting control system.
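The k-NN predictor idea can be sketched in a few lines. The feature used here (hour of day), the averaging rule, and the data shape are illustrative assumptions; the actual system works on Doppler sensor measurements:

```python
def knn_predict(history, query_hour, k=5):
    """Predict the traffic level at query_hour from (hour, level) history
    by averaging the levels of the k nearest recorded hours."""
    nearest = sorted(history, key=lambda hl: abs(hl[0] - query_hour))[:k]
    return sum(level for _, level in nearest) / len(nearest)
```

Increasing `k` smooths the prediction but mixes in less similar conditions, which is consistent with the reported error growing from k=5 to 50 neighbors.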
Efficient AM Algorithms for Stochastic ML Estimation of DOA
Directory of Open Access Journals (Sweden)
Haihua Chen
2016-01-01
Full Text Available The estimation of the direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. To solve this problem, many algorithms have been proposed, among which Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make the following two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm that divides the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM the best. Another advantage of IAM is that it avoids the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
A Fast and Efficient Topological Coding Algorithm for Compound Images
Directory of Open Access Journals (Sweden)
Xin Li
2003-11-01
Full Text Available We present a fast and efficient coding algorithm for compound images. Unlike popular mixture raster content (MRC)-based approaches, we propose to attack the compound image coding problem from the perspective of modeling the location uncertainty of image singularities. We suggest that a computationally simple two-class segmentation strategy is sufficient for the coding of compound images. We argue that jointly exploiting topological properties of the image source in the classification and coding stages is beneficial to the robustness of compound image coding systems. Experimental results justify the effectiveness and robustness of the proposed topological coding algorithm.
Efficient greedy algorithms for economic manpower shift planning
Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.
2015-01-01
Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.
Efficient state initialization by a quantum spectral filtering algorithm
Fillion-Gourdeau, François; MacLean, Steve; Laflamme, Raymond
2017-04-01
An algorithm that initializes a quantum register to a state with a specified energy range is given, corresponding to a quantum implementation of the celebrated Feit-Fleck method. This is performed by introducing a nondeterministic quantum implementation of a standard spectral filtering procedure combined with an apodization technique, allowing for accurate state initialization. It is shown that the implementation requires only two ancilla qubits. A lower bound for the total probability of success of this algorithm is derived, showing that this scheme can be realized using a finite, relatively low number of trials. Assuming the time evolution can be performed efficiently and using a trial state polynomially close to the desired states, it is demonstrated that the number of operations required scales polynomially with the number of qubits. Tradeoffs between accuracy and performance are demonstrated in a simple example: the harmonic oscillator. This algorithm would be useful for the initialization phase of the simulation of quantum systems on digital quantum computers.
Efficient Big Integer Multiplication and Squaring Algorithms for Cryptographic Applications
Directory of Open Access Journals (Sweden)
Shahram Jahani
2014-01-01
Full Text Available Public-key cryptosystems are broadly employed to provide security for digital information. Improving the efficiency of public-key cryptosystems by speeding up calculation and using fewer resources is among the main goals of cryptography research. In this paper, we introduce new symbols extracted from the binary representation of integers, called Big-ones. We present a modified version of the classical multiplication and squaring algorithms based on the Big-ones to improve the efficiency of big integer multiplication and squaring in number theory based cryptosystems. Compared to the adopted classical and Karatsuba multiplication algorithms for squaring, the proposed squaring algorithm is 2 to 3.7 and 7.9 to 2.5 times faster for squaring 32-bit and 8-Kbit numbers, respectively. The proposed multiplication algorithm is also 2.3 to 3.9 and 7 to 2.4 times faster for multiplying 32-bit and 8-Kbit numbers, respectively. Number theory based cryptosystems, which operate in the range of 1-Kbit to 4-Kbit integers, benefit directly from the proposed method, since multiplication and squaring are the main operations in most of these systems.
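The Big-ones representation is the paper's own contribution; as a point of reference, the Karatsuba scheme it is compared against replaces one of the four half-size sub-multiplications with additions (sketch for nonnegative integers; the base-case cutoff is arbitrary):

```python
def karatsuba(x, y):
    """Textbook Karatsuba multiplication on nonnegative Python ints.
    Splits each operand into high/low halves and uses three recursive
    multiplies instead of four: xy = a*2^(2m) + (c)*2^m + b."""
    if x < 16 or y < 16:                      # small operands: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)       # x = xh*2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)       # y = yh*2^m + yl
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b   # cross term from one multiply
    return (a << (2 * m)) + (c << m) + b
```

This is the O(n^1.585) baseline; the abstract's speedups are reported relative to this and to the classical schoolbook method.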
Evolving Resilient Back-Propagation Algorithm for Energy Efficiency Problem
Directory of Open Access Journals (Sweden)
Yang Fei
2016-01-01
Full Text Available Energy efficiency is one of our most economical sources of new energy. When it comes to efficient building design, the computation of the heating load (HL) and cooling load (CL) is required to determine the specifications of the heating and cooling equipment. The objective of this paper is to model the heating and cooling loads of buildings using neural networks in order to predict the HL and CL. Rprop with a genetic algorithm (GA) is proposed to increase the global convergence capability of Rprop by modifying a corresponding weight. Comparison results show that Rprop with GA can successfully improve the global convergence capability of Rprop and achieve a lower MSE than other perceptron training algorithms, such as Back-Propagation or the original Rprop. In addition, the trained network has better generalization ability and stabilization performance.
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
A Comparison of Power-Efficient Broadcast Routing Algorithms
National Research Council Canada - National Science Library
Kang, Intae; Poovendran, Radha
2003-01-01
.... In this paper, they compare the performance of four known power-efficient algorithms (and their variants), not only in terms of the total transmit power, but also in terms of other performance measures such as static network lifetime, total receive and interference power, and maximum and average hop count, which have direct impacts on physical, link, and MAC layers, and on end-to-end network delay.
Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm
Energy Technology Data Exchange (ETDEWEB)
Yao, Y
2008-02-08
Semantic graphs have become key components in analyzing complex systems such as the Internet, or biological and social networks. These types of graphs generally consist of sparsely connected clusters or 'communities' whose nodes are more densely connected to each other than to other nodes in the graph. The identification of these communities is invaluable in facilitating the visualization, understanding, and analysis of large graphs by producing subgraphs of related data whose interrelationships can be readily characterized. Unfortunately, the ability of LLNL to effectively analyze the terabytes of multisource data at its disposal has remained elusive, since existing decomposition algorithms become computationally prohibitive for graphs of this size. We have addressed this limitation by developing more efficient algorithms for discerning community structure that can effectively process massive graphs. Current algorithms for detecting community structure, such as the high quality algorithm developed by Girvan and Newman [1], are only capable of processing relatively small graphs. The cubic complexity of Girvan and Newman, for example, makes it impractical for graphs with more than approximately 10^4 nodes. Our goal for this project was to develop methodologies and corresponding algorithms capable of effectively processing graphs with up to 10^9 nodes. From a practical standpoint, we expect the developed scalable algorithms to help resolve a variety of operational issues associated with the productive use of semantic graphs at LLNL. During FY07, we completed a graph clustering implementation that leverages a dynamic graph transformation to more efficiently decompose large graphs. In essence, our approach dynamically transforms the graph (or subgraphs) into a tree structure consisting of biconnected components interconnected by bridge links. This isomorphism allows us to compute edge betweenness, the chief source of inefficiency in Girvan and
An Efficient Algorithm for the Maximum Distance Problem
Directory of Open Access Journals (Sweden)
Gabrielle Assunta Grün
2001-12-01
Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real-world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
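On a single chain, preprocessing of this kind reduces to prefix counts of the strict edges, after which any query is a subtraction. The function and variable names below are illustrative, not the paper's:

```python
def preprocess(chain_labels):
    """chain_labels[i] is '<' or '<=' for the edge from vertex i to i+1.
    Returns prefix counts of strict (<) edges: O(n) time, O(n) space."""
    strict = [0]
    for label in chain_labels:
        strict.append(strict[-1] + (1 if label == "<" else 0))
    return strict

def max_strict_edges(strict, i, j):
    """Maximum number of < edges on the chain path from vertex i to
    vertex j (i <= j), answered in O(1) time from the prefix counts."""
    return strict[j] - strict[i]
```

For example, a count of zero between two vertices means only ≤ constraints separate them, so the strongest derivable relation is ≤ rather than <.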
Efficient Algorithms for Electrostatic Interactions Including Dielectric Contrasts
Directory of Open Access Journals (Sweden)
Christian Holm
2013-10-01
Coarse-grained models of soft matter are usually combined with implicit solvent models that take the electrostatic polarizability into account via a dielectric background. In biophysical or nanoscale simulations that include water, this constant can vary greatly within the system. Performing molecular dynamics or other simulations that need to compute exact electrostatic interactions between charges in such systems is computationally demanding. We review here several algorithms developed by us that perform exactly this task. For planar dielectric surfaces in partially periodic boundary conditions, the arising image charges can be treated either with the MMM2D algorithm in a very efficient and accurate way or with the electrostatic layer correction term, which lets the user employ their favorite 3D periodic Coulomb solver. Arbitrarily shaped interfaces can be dealt with using induced surface charges with the induced charge calculation (ICC*) algorithm. Finally, the local electrostatics algorithm MEMD (Maxwell Equations Molecular Dynamics) even allows one to employ a smoothly varying dielectric constant in the system. We introduce the concepts of these three algorithms and an extension for the inclusion of boundaries that are to be held at a fixed, constant potential (metal conditions). For each method, we present a showcase application to highlight the importance of dielectric interfaces.
Secure Computation, I/O-Efficient Algorithms and Distributed Signatures
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Kölker, Jonas; Toft, Tomas
2012-01-01
adversary corrupting a constant fraction of the players and servers. Using packed secret sharing, the data can be stored in a compact way but will only be accessible in a block-wise fashion. We explore the possibility of using I/O-efficient algorithms to nevertheless compute on the data as efficiently...... as if random access was possible. We show that for sorting, priority queues and data mining, this can indeed be done. We show actively secure protocols of complexity within a constant factor of the passively secure solution. As a technical contribution towards this goal, we develop techniques for generating...
EFFICIENT ADAPTIVE STEGANOGRAPHY FOR COLOR IMAGES BASED ON LSBMR ALGORITHM
Directory of Open Access Journals (Sweden)
B. Sharmila
2012-02-01
Steganography is the art of hiding the fact that communication is taking place by hiding information in another medium. Many different carrier file formats can be used, but digital images are the most popular because of their frequent use on the Internet. For hiding secret information in images, a large variety of steganographic techniques exists. The Least Significant Bit (LSB) based approach is the simplest type of steganographic algorithm. In the existing approaches, the decision of choosing the region within a cover image is made without considering the relationship between the image content and the size of the secret message. Thus, the plain regions of the cover will be ruined after data hiding, even at a low data rate. Choosing edge regions for data hiding is therefore a solution, and many algorithms exploit image edges for this purpose. The paper 'Edge adaptive image steganography based on LSBMR algorithm' presented results for its LSB steganography on gray-scale images only. This paper presents the results of analyzing the performance of edge adaptive steganography for color (JPEG) images. The algorithms have been slightly modified for color-image implementation and are compared on the basis of evaluation parameters such as peak signal-to-noise ratio (PSNR) and mean square error (MSE). The method selects the edge region depending on the length of the secret message and the difference between two consecutive bits in the cover image. When the message is short, only small edge regions are utilized, leaving the other regions untouched. When the data rate increases, more regions can be used adaptively for data hiding by adjusting the parameters. In addition, the message is encrypted using an efficient cryptographic algorithm, which further increases security.
Algorithms for energy efficiency in wireless sensor networks
Energy Technology Data Exchange (ETDEWEB)
Busse, M.
2007-01-21
The recent advances in microsensor and semiconductor technology have opened a new field within computer science: the networking of small-sized sensors which are capable of sensing, processing, and communicating. Such wireless sensor networks offer new applications in the areas of habitat and environment monitoring, disaster control and operation, military and intelligence control, object tracking, video surveillance, traffic control, as well as in health care and home automation. It is likely that the deployed sensors will be battery-powered, which will limit the energy capacity significantly. Thus, energy efficiency becomes one of the main challenges that need to be taken into account, and the design of energy-efficient algorithms is a major contribution of this thesis. As the wireless communication in the network is one of the main energy consumers, we first consider in detail the characteristics of wireless communication. By using the embedded sensor board (ESB) platform recently developed by the Free University of Berlin, we analyze the means of forward error correction and propose an appropriate resync mechanism, which improves the communication between two ESB nodes substantially. Afterwards, we focus on the forwarding of data packets through the network. We present the algorithms energy-efficient forwarding (EEF), lifetime-efficient forwarding (LEF), and energy-efficient aggregation forwarding (EEAF). While EEF is designed to maximize the number of data bytes delivered per energy unit, LEF additionally takes into account the residual energy of forwarding nodes. In so doing, LEF further prolongs the lifetime of the network. Energy savings due to data aggregation and in-network processing are exploited by EEAF. Besides single-link forwarding, in which data packets are sent to only one forwarding node, we also study the impact of multi-link forwarding, which exploits the broadcast characteristics of the wireless medium by sending packets to several (potential
Efficient geometric rectification techniques for spectral analysis algorithm
Chang, C. Y.; Pang, S. S.; Curlander, J. C.
1992-01-01
The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented along iso-range and iso-Doppler lines, a curved grid format. This phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as the high squint angle and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.
Efficient design of hybrid renewable energy systems using evolutionary algorithms
Energy Technology Data Exchange (ETDEWEB)
Bernal-Agustin, Jose L.; Dufo-Lopez, Rodolfo [Department of Electrical Engineering, University of Zaragoza, Calle Maria de Luna, 3. 50018 Zaragoza (Spain)
2009-03-15
This paper presents an exhaustive study that obtained the best values for the control parameters of an evolutionary algorithm developed by the authors, which permits the efficient design and control of hybrid electrical energy generation systems, obtaining good solutions at low computational effort. In particular, a complex photovoltaic (PV)-wind-diesel-batteries-hydrogen system has been considered. In order to properly evaluate the behaviour of the evolutionary algorithm, the global optimal solution (the one with the lowest total net present cost) was first obtained by an enumerative method. Next, a large number of designs were created using the evolutionary algorithm while modifying the values of the parameters that control its functioning. Finally, from the obtained results, it was possible to determine the population size, the number of generations, the crossover and mutation rates, and the type of mutation most suitable to ensure a probability near 100% of obtaining the global optimal design using the evolutionary algorithm.
Efficient Partitioning of Algorithms for Long Convolutions and their Mapping onto Architectures
Bierens, L.; Deprettere, E.
1998-01-01
We present an efficient approach for the partitioning of algorithms implementing long convolutions. The dependence graph (DG) of a convolution algorithm is locally sequential globally parallel (LSGP) partitioned into smaller, less complex convolution algorithms. The LSGP partitioned DG is mapped
An efficient memetic algorithm for 3D shape matching problems
Sharif Khan, Mohammad; Mohamad Ayob, Ahmad F.; Ray, Tapabrata
2014-05-01
Shape representation plays a vital role in any shape optimization exercise. The ability to identify a shape with good functional properties depends on the underlying shape representation scheme, the morphing mechanism and the efficiency of the optimization algorithm. This article presents a novel and efficient methodology for morphing 3D shapes via smart repair of control points. The repaired sequence of control points is subsequently used to define the 3D object using a B-spline surface representation. The control points are evolved within the framework of a memetic algorithm for greater efficiency. While the authors have already proposed an approach for 2D shape matching, this article extends it to deal with 3D shape matching problems. Three 3D examples and a real customized 3D earplug design have been used to illustrate the performance of the proposed approach and the effectiveness of the repair scheme. Complete details of the problems are presented for future work in this direction.
Efficient Hardware Implementation of the Lightweight Block Encryption Algorithm LEA
Directory of Open Access Journals (Sweden)
Donggeon Lee
2014-01-01
Recently, due to the advent of resource-constrained trends, such as smartphones and smart devices, the computing environment is changing. Because our daily life is deeply intertwined with ubiquitous networks, the importance of security is growing. A lightweight encryption algorithm is essential for secure communication between these kinds of resource-constrained devices, and many researchers have been investigating this field. Recently, a lightweight block cipher called LEA was proposed. LEA was originally targeted at efficient implementation on microprocessors: it is fast when implemented in software and, furthermore, it has a small memory footprint. To reflect recent processor technology, all required calculations use 32-bit wide operations. In addition, the algorithm is composed not of complex S-box-like structures but of simple addition, rotation, and XOR operations. To the best of our knowledge, this paper is the first report on a comprehensive hardware implementation of LEA. We present various hardware structures and their implementation results according to key sizes. Even though LEA was originally targeted at software efficiency, it also shows high efficiency when implemented as hardware.
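LEA's design rests on addition, rotation, and XOR (ARX) over 32-bit words. The toy round below only illustrates that style; it is not LEA's actual round function, rotation amounts, or key schedule:

```python
# Illustrative ARX (Addition-Rotation-XOR) round on 32-bit words,
# in the spirit of LEA's design. This is NOT LEA's round function,
# constants, or key schedule.
MASK = 0xFFFFFFFF

def rol(x, r):
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & MASK

def arx_round(state, rk):
    """One toy ARX round on four 32-bit words with round key rk."""
    x0, x1, x2, x3 = state
    x0 = rol(((x0 ^ rk) + x1) & MASK, 9)   # XOR, modular add, rotate
    x1 = rol(((x1 ^ rk) + x2) & MASK, 5)
    x2 = rol(((x2 ^ rk) + x3) & MASK, 3)
    return (x1, x2, x3, x0)                # rotate word order for diffusion

state = (0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0xCAFEBABE)
for rk in (0x9E3779B9, 0x3C6EF372, 0x78DDE6E4):  # illustrative round keys
    state = arx_round(state, rk)
```

The absence of S-boxes is what makes such ciphers attractive for both software and compact hardware: every operation maps directly to an adder, a wire permutation, or an XOR gate array.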
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms within the framework of linear response theory.
An Efficient Sleepy Algorithm for Particle-Based Fluids
Directory of Open Access Journals (Sweden)
Xiao Nie
2014-01-01
We present a novel Smoothed Particle Hydrodynamics (SPH) based algorithm for efficiently simulating compressible and weakly compressible particle fluids. Prior particle-based methods simulate all fluid particles; however, in many cases some particles appearing to be at rest can be safely ignored without notably affecting the fluid flow behavior. To identify these particles, a novel sleepy strategy is introduced. By utilizing this strategy, only a portion of the fluid particles requires computational resources; thus a clear performance gain can be achieved. In addition, in order to resolve the unphysical clumping issue due to tensile instability in SPH-based methods, a new artificial repulsive force is provided. We demonstrate that our approach can be easily integrated with existing SPH-based methods to improve efficiency without sacrificing visual quality.
Efficient algorithms for collaborative decision making for large scale settings
DEFF Research Database (Denmark)
Assent, Ira
2011-01-01
Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses......, and focuses on improving runtimes regardless of where the queries are issued from. In this work, we claim that progress can be made by taking a novel, more holistic view of the problem. We discuss a new approach that combines the two strands of research on the user experience and query engine parts in order...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....
Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder
Directory of Open Access Journals (Sweden)
Chan-seob Park
2014-01-01
The joint collaborative team on video coding (JCT-VC) is developing the next-generation video coding standard, called high efficiency video coding (HEVC). In HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four equally sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For a 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.
Building Integrated Ontological Knowledge Structures with Efficient Approximation Algorithms
Directory of Open Access Journals (Sweden)
Yang Xiang
2015-01-01
The integration of ontologies builds knowledge structures which bring new understanding of existing terminologies and their associations. With the steady increase in the number of ontologies, automatic integration of ontologies is preferable to manual solutions in many applications. However, available works on ontology integration are largely heuristic, without guarantees on the quality of the integration results. In this work, we focus on the integration of ontologies with hierarchical structures. We identify optimal structures in this problem and propose optimal and efficient approximation algorithms for integrating a pair of ontologies. Furthermore, we extend the basic problem to address the integration of a large number of ontologies, and correspondingly we propose an efficient approximation algorithm for integrating multiple ontologies. The empirical study on both real ontologies and synthetic data demonstrates the effectiveness of our proposed approaches. In addition, the results of integration between the Gene Ontology and the National Drug File Reference Terminology suggest that our method provides a novel way to perform association studies between biomedical terms.
An Efficient Algorithm for Solving Single Variable Optimization ...
African Journals Online (AJOL)
Many methods are available for finding x* ∈ R^n which minimizes the real-valued function f(x), among them the Fibonacci search algorithm, quadratic search algorithm, convergence algorithm and cubic search algorithm. In this research work, existing algorithms used in single-variable optimization problems are critically ...
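As an example of the class of methods surveyed, golden-section search, a close relative of the Fibonacci search mentioned above, can be sketched as follows (a generic textbook method, not the article's own algorithm):

```python
import math

# Golden-section search: minimize a unimodal function on [a, b] by
# shrinking the bracket by a factor of 1/phi per iteration, reusing
# one interior evaluation point each time.
def golden_section_min(f, a, b, tol=1e-8):
    """Return x that minimizes unimodal f on [a, b] to within tol."""
    inv_phi = (math.sqrt(5) - 1) / 2            # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)                   # left interior point
    d = a + inv_phi * (b - a)                   # right interior point
    while abs(b - a) > tol:
        if f(c) < f(d):                         # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                   # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_star = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
assert abs(x_star - 2.0) < 1e-6
```

Fibonacci search follows the same bracketing idea but chooses the interior points from Fibonacci ratios, which is optimal for a fixed, known number of function evaluations.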
Making friends on the fly advances in ad hoc teamwork
Barrett, Samuel
2015-01-01
This book is devoted to the encounter and interaction of agents such as robots with other agents and describes how they cooperate with their previously unknown teammates, forming an Ad Hoc team. It presents a new algorithm, PLASTIC, that allows agents to quickly adapt to new teammates by reusing knowledge learned from previous teammates. PLASTIC is instantiated in both a model-based approach, PLASTIC-Model, and a policy-based approach, PLASTIC-Policy. In addition to reusing knowledge learned from previous teammates, PLASTIC also allows users to provide expert-knowledge and can use transfer learning (such as the new TwoStageTransfer algorithm) to quickly create models of new teammates when it has some information about its new teammates. The effectiveness of the algorithm is demonstrated on three domains, ranging from multi-armed bandits to simulated robot soccer games.
Mohammed, Adnan Saher; Amrahov, Şahin Emrah; Çelebi, Fatih V.
2016-01-01
In this paper, we propose a new efficient sorting algorithm based on the insertion sort concept, called Bidirectional Conditional Insertion Sort (BCIS). It is an in-place sorting algorithm with a remarkably efficient average-case time complexity compared with classical insertion sort (IS). Compared with the Quicksort algorithm, BCIS showed faster average-case times for relatively small arrays of up to 1500 elements. Furthermore, BCI...
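For reference, the classical insertion sort (IS) that BCIS is benchmarked against is the textbook O(n^2) algorithm below; BCIS itself is not reproduced here:

```python
# Classical insertion sort: the O(n^2) in-place baseline that BCIS
# improves on (this is the textbook algorithm, not BCIS itself).
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # drop key into its slot
    return a

assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```

Its strength on small or nearly sorted arrays is exactly why insertion-sort variants remain competitive with Quicksort at small input sizes.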
An Efficient Bypassing Void Routing Algorithm for Wireless Sensor Network
Directory of Open Access Journals (Sweden)
Xunli Fan
2015-01-01
Since the sensor node's distribution in a wireless sensor network (WSN) is irregular, geographic routing protocols using the greedy algorithm can run into the local minimum problem: routing voids may cause greedy forwarding to fail and lead to failure of data transmission. Based on virtual coordinate mapping, this paper proposes an efficient bypassing void routing protocol, called EBVRPVCM, to reduce the control packet overhead and transmission delay caused by routing voids in WSNs. The basic idea is to transform the random structure of a void edge into a regular one by mapping the coordinates onto a virtual circle. In EBVRPVCM, strategies executed in different regions are selected through virtual coordinates to bypass routing voids efficiently. The regular edge established by coordinate mapping shortens the average routing path length and decreases the transmission delay. The virtual coordinate mapping is not affected by the real geographic node positions, and the control packet overhead can be reduced accordingly. Compared with RGP and GPSR, simulation results demonstrate that EBVRPVCM can successfully find the shortest routing path with higher delivery ratio and less control packet overhead and energy consumption.
An Efficiency Analysis of Augmented Reality Marker Recognition Algorithm
Directory of Open Access Journals (Sweden)
Kurpytė Dovilė
2014-05-01
The article reports on the investigation of an augmented reality system designed for the identification and augmentation of 100 different square markers. Marker recognition efficiency was investigated by rotating markers along the x and y axis directions in the range from −90° to 90°. Virtual simulations of four environments were developed: (a) an intense source of light, (b) an intense source of light falling from the left side, (c) a non-intensive light source falling from the left side, and (d) equally falling shadows. The graphics were created using the OpenGL graphics hardware interface; image processing was programmed in C++ using OpenCV, while the augmented reality was developed in Java using NyARToolKit. The obtained results demonstrate that the augmented reality marker recognition algorithm is accurate and reliable under changing lighting conditions and rotation angles: only 4% of markers were unidentified. The assessment of marker recognition efficiency led us to propose a marker classification strategy for grouping various markers into distinct groups possessing similar recognition properties.
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Efficient generation of image chips for training deep learning algorithms
Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd
2017-05-01
Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center with different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with
An Efficient Local Algorithm for Distributed Multivariate Regression
National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm is designed for distributed...
Efficient Improvement of Silage Additives by Using Genetic Algorithms
Davies, Zoe S.; Gilbert, Richard J.; Merry, Roger J.; Kell, Douglas B.; Theodorou, Michael K.; Griffith, Gareth W.
2000-01-01
The enormous variety of substances which may be added to forage in order to manipulate and improve the ensilage process presents an empirical, combinatorial optimization problem of great complexity. To investigate the utility of genetic algorithms for designing effective silage additive combinations, a series of small-scale proof of principle silage experiments were performed with fresh ryegrass. Having established that significant biochemical changes occur over an ensilage period as short as 2 days, we performed a series of experiments in which we used 50 silage additive combinations (prepared by using eight bacterial and other additives, each of which was added at six different levels, including zero [i.e., no additive]). The decrease in pH, the increase in lactate concentration, and the free amino acid concentration were measured after 2 days and used to calculate a “fitness” value that indicated the quality of the silage (compared to a control silage made without additives). This analysis also included a “cost” element to account for different total additive levels. In the initial experiment additive levels were selected randomly, but subsequently a genetic algorithm program was used to suggest new additive combinations based on the fitness values determined in the preceding experiments. The result was very efficient selection for silages in which large decreases in pH and high levels of lactate occurred along with low levels of free amino acids. During the series of five experiments, each of which comprised 50 treatments, there was a steady increase in the amount of lactate that accumulated; the best treatment combination was that used in the last experiment, which produced 4.6 times more lactate than the untreated silage. The additive combinations that were found to yield the highest fitness values in the final (fifth) experiment were assessed to determine a range of biochemical and microbiological quality parameters during full-term silage
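The experiment-in-the-loop selection described above follows the standard genetic-algorithm cycle of selection, crossover, and mutation over additive-level vectors. A minimal sketch with an illustrative stand-in fitness function (in the study itself, fitness came from wet-lab measurements of pH drop, lactate, free amino acids, and additive cost):

```python
import random

# Minimal GA loop of the kind used to propose new additive combinations:
# each individual is a vector of 8 additive levels (0-5). The fitness
# below is a stand-in for illustration, not the paper's fitness measure.
random.seed(1)
N_ADDITIVES, LEVELS, POP = 8, 6, 50

def stand_in_fitness(ind):
    # placeholder objective: prefer total additive level near 12
    return -abs(sum(ind) - 12)

def next_generation(pop, fitnesses, mut_rate=0.1):
    ranked = [ind for _, ind in sorted(zip(fitnesses, pop), reverse=True)]
    parents = ranked[:POP // 2]                  # truncation selection
    children = []
    while len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_ADDITIVES)   # one-point crossover
        child = a[:cut] + b[cut:]
        child = [random.randrange(LEVELS) if random.random() < mut_rate else g
                 for g in child]                 # per-gene mutation
        children.append(child)
    return children

pop = [[random.randrange(LEVELS) for _ in range(N_ADDITIVES)]
       for _ in range(POP)]
for _ in range(20):                              # 20 "experiments"
    pop = next_generation(pop, [stand_in_fitness(i) for i in pop])
best = max(pop, key=stand_in_fitness)
```

In the silage study each generation corresponded to one 50-treatment experiment, so the loop body ran once per round of wet-lab results rather than in silico.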
Implementing a land cover stratification on-the-fly
Ronald E. McRoberts; Daniel G. Wendt
2002-01-01
Stratified estimation is used by the Forest Inventory and Analysis program of the USDA Forest Service to increase the precision of county-level inventory estimates. Stratified estimation requires that plots be assigned to strata and that proportions of land area in each strata be determined. Classified satellite imagery has been found to be an efficient and effective...
An efficient and fast detection algorithm for multimode FBG sensing
DEFF Research Database (Denmark)
Ganziy, Denis; Jespersen, O.; Rose, B.
2015-01-01
We propose a novel dynamic gate algorithm (DGA) for fast and accurate peak detection. The algorithm uses a threshold-determined detection window and a center-of-gravity algorithm with bias compensation. We analyze the wavelength fit resolution of the DGA for different values of signal-to-noise ratio ...
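The center-of-gravity step underlying such peak detectors can be sketched generically as a thresholded, intensity-weighted mean; the DGA's dynamic window selection and bias compensation are not reproduced here:

```python
# Generic thresholded center-of-gravity peak estimate, the building
# block such peak detectors are based on (the DGA's dynamic window
# and bias compensation from the paper are not reproduced).
def cog_peak(wavelengths, intensities, threshold_frac=0.5):
    """Estimate the peak position as the intensity-weighted mean of
    all samples above a fraction of the maximum intensity."""
    peak = max(intensities)
    pairs = [(w, p) for w, p in zip(wavelengths, intensities)
             if p >= threshold_frac * peak]
    total = sum(p for _, p in pairs)
    return sum(w * p for w, p in pairs) / total

# symmetric peak around 1550.0 nm
wl = [1549.8, 1549.9, 1550.0, 1550.1, 1550.2]
iv = [0.1, 0.6, 1.0, 0.6, 0.1]
assert abs(cog_peak(wl, iv) - 1550.0) < 1e-6
```

The threshold keeps baseline noise out of the weighted sum, which is why sub-sample wavelength resolution is achievable from coarsely sampled spectra.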
An Energy Efficient Multipath Routing Algorithm for Wireless Sensor Networks
Dulman, S.O.; Wu Jian, W.J.; Havinga, Paul J.M.
In this paper we introduce a new routing algorithm for wireless sensor networks. The aim of this algorithm is to provide on-demand multiple disjoint paths between a data source and a destination. Our Multipath On-Demand Routing Algorithm (MDR) improves the reliability of data routing in a wireless
On-the Fly Merging of Attitude Solutions
DEFF Research Database (Denmark)
Jørgensen, Peter Siegbjørn; Jørgensen, John Leif; Denver, Troelz
2008-01-01
of the available information, i.e. optimal accuracy, methods for merging such data should be investigated. The need for and desirability of attitude merging depends on the mission objective and available resources. To enable real-time attitude control and reduce requirements on download budget, on-board merging...... of attitude data will often be advantageous. This should be weighted against the need for post observation reconstruction of attitudes, especially needed when end products are sensitive to optimal attitude reconstruction. Instrument integrated merging algorithms will reduce the complexity of on-board AOCS....... Methods for attitude merging are many. Two examples of merging methods taking into consideration anisotropic noise distributions are presented and discussed....
An efficient genetic algorithm for structure prediction at the nanoscale.
Lazauskas, Tomas; Sokol, Alexey A; Woodley, Scott M
2017-03-17
We have developed and implemented a new global optimization technique based on a Lamarckian genetic algorithm with the focus on structure diversity. The key process in the efficient search on a given complex energy landscape proves to be the removal of duplicates that is achieved using a topological analysis of candidate structures. The careful geometrical prescreening of newly formed structures and the introduction of new mutation move classes improve the rate of success further. The power of the developed technique, implemented in the Knowledge Led Master Code, or KLMC, is demonstrated by its ability to locate and explore a challenging double funnel landscape of a Lennard-Jones 38 atom system (LJ38). We apply the redeveloped KLMC to investigate three chemically different systems: ionic semiconductor (ZnO)1-32, metallic Ni13 and covalently bonded C60. All four systems have been systematically explored on the energy landscape defined using interatomic potentials. The new developments allowed us to successfully locate the double funnels of LJ38, find new local and global minima for ZnO clusters, extensively explore the Ni13 and C60 (the buckminsterfullerene, or buckyball) potential energy surfaces.
Directory of Open Access Journals (Sweden)
Neeraj Kumar
2015-08-01
Full Text Available Internet of Vehicles (IoV) is a leading technology of the present era. It has gained huge attention with respect to its implementation in a wide variety of domains, ranging from traffic safety to infotainment applications. However, IoV can also be extended to the healthcare domain, where patients can be provided healthcare services on-the-fly. We extend this novel concept in this paper and refer to it as “Healthcare services on-the-fly”. The concept of game theory is used among the vehicles to access the healthcare services while traveling. The vehicles act as players in the game and tend to form and split coalitions to access these services. Learning automata (LA) act as the players for interaction with the environment and take appropriate actions based on reward and penalty. Apart from this, a Virtual Machine (VM) scheduling algorithm for efficient utilization of resources at the cloud level has also been formulated. A stochastic reward net (SRN)-based model is used to represent coalition formation and splitting with respect to the availability of resources at the cloud level. The performance of the proposed scheme is evaluated using various performance evaluation metrics. The results obtained prove the effectiveness of the proposed scheme in comparison to the best, first, and random fit schemes.
Efficient Approximation Algorithms for Weighted $b$-Matching
Energy Technology Data Exchange (ETDEWEB)
Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.; Satish, Nadathur Rajagopalan; Sundaram, Narayanan; Manne, Fredrik; Halappanavar, Mahantesh; Dubey, Pradeep
2016-01-01
We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction, we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared-memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
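The greedy baseline that b-Suitor is proven to match can be sketched directly: scan edges in non-increasing weight order and keep an edge whenever both endpoints still have spare capacity b(v). This is an illustrative Python sketch of that half-approximation baseline, not the parallel b-Suitor implementation; the function and parameter names are assumptions.

```python
def greedy_b_matching(edges, b):
    """Greedy half-approximation for weighted b-Matching.
    edges: list of (weight, u, v) tuples; b: dict vertex -> capacity b(v).
    Returns the chosen edge set M and its total weight."""
    remaining = dict(b)          # spare capacity left at each vertex
    matching, total = [], 0.0
    for w, u, v in sorted(edges, reverse=True):   # heaviest edge first
        if remaining[u] > 0 and remaining[v] > 0:
            matching.append((u, v))
            total += w
            remaining[u] -= 1
            remaining[v] -= 1
    return matching, total
```

With b(v) = 1 for all vertices this reduces to the classic greedy half-approximation for maximum-weight matching.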
An Efficient Hierarchy Algorithm for Community Detection in Complex Networks
Directory of Open Access Journals (Sweden)
Lili Zhang
2014-01-01
Community structure is one of the most fundamental and important topological characteristics of complex networks. Research on community structure has wide applications and is very important for analyzing the topological structure, understanding the functions, finding the hidden properties, and forecasting the time evolution of networks. This paper analyzes some related algorithms and proposes a new algorithm, the CN agglomerative algorithm, based on graph theory and the local connectedness of the network, to find communities in a network. We show this algorithm is distributed and runs in polynomial time; meanwhile, the simulations show it is accurate and fine-grained. Furthermore, we modify this algorithm to obtain a modified CN algorithm and apply it to dynamic complex networks, and the simulations also verify that the modified CN algorithm has high accuracy.
Comparative efficiencies of three parallel algorithms for nonlinear ...
Indian Academy of Sciences (India)
The work reported in this paper is motivated by the need to develop portable parallel processing algorithms and codes which can run on a variety of hardware platforms without any modifications. The prime aim of the research work reported here is to test the portability of the parallel algorithms and also to study and ...
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-07-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
An efficient parallel algorithm for matrix-vector multiplication
Energy Technology Data Exchange (ETDEWEB)
Hendrickson, B.; Leland, R.; Plimpton, S.
1993-03-01
The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high-performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
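The kernel this paper parallelizes is sparse matrix-vector multiplication, y = A·x. A minimal serial sketch using the common compressed sparse row (CSR) convention makes the per-row work explicit; in the paper's setting, rows (and the corresponding partial sums) are distributed across processors. The names indptr/indices/data follow the usual CSR convention and are assumptions, not the paper's notation.

```python
def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a matrix A in CSR form.
    indptr[r]..indptr[r+1] delimits the nonzeros of row r;
    indices[k] is the column and data[k] the value of nonzero k."""
    y = []
    for row in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y.append(s)
    return y
```

For the 2x2 matrix [[1, 0], [2, 3]], the CSR arrays are indptr = [0, 1, 3], indices = [0, 0, 1], data = [1, 2, 3].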
M4GB: An efficient Gröbner-basis algorithm
R.H. Makarim (Rusydi); M.M.J. Stevens (Marc)
2017-01-01
This paper introduces a new efficient algorithm for computing Gröbner bases, named M4GB. Like Faugère's algorithm F4, it is an extension of Buchberger's algorithm: it describes how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the
Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms
Directory of Open Access Journals (Sweden)
Ahmed Azouaoui
2012-01-01
A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself. Hence, this new approach reduces the complexity of decoding codes of high rate. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competitor decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics
Fijany, Amir; Scheid, Robert E.
1989-01-01
The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
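The diagonal preconditioner proposed above can be illustrated with a minimal preconditioned conjugate gradient sketch. This is a generic pure-Python PCG with a Jacobi (diagonal) preconditioner, shown as a sketch of the idea only; it is not the paper's implementation, and the manipulator-dynamics-specific synergies the abstract mentions are omitted.

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient with the Jacobi preconditioner M = diag(A).
    A: dense symmetric positive-definite matrix (list of lists); b: RHS."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual b - A x, with x = 0
    minv = [1.0 / A[i][i] for i in range(n)]    # inverse of the diagonal preconditioner
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) < tol:      # converged
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

For a well-conditioned inertia-like matrix, the diagonal preconditioner costs almost nothing per iteration yet can noticeably reduce the iteration count.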
Miyashita, Naoyuki; Yonezawa, Yasushige
2017-09-01
Robust and reliable analyses of long trajectories from molecular dynamics simulations are important for investigating the functions and mechanisms of proteins. Structural fitting is necessary for various analyses of protein dynamics, as it removes time-dependent translational and rotational movements. However, the fitting is often difficult for highly flexible molecules. To address these issues, we proposed a fitting algorithm that uses the Bayesian inference method in combination with rotational fitting-weight improvements; the well-studied globular protein systems Trp-cage and lysozyme were used for the investigations. The present method clearly identified rigid core regions that fluctuate less than other regions and also separated core regions from highly fluctuating regions with greater accuracy than conventional methods. Our method also simultaneously provides the variance-covariance matrix elements composed of atomic coordinates, allowing us to perform principal component analysis and prepare domain cross-correlation maps during molecular dynamics simulations in an on-the-fly manner.
Efficient Algorithms for gcd and Cubic Residuosity in the Ring of Eisenstein Integers
DEFF Research Database (Denmark)
Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg
2003-01-01
We present simple and efficient algorithms for computing gcd and cubic residuosity in the ring of Eisenstein integers, Z[ω], i.e. the integers extended with ω, a complex primitive third root of unity. The algorithms are similar and may be seen as generalisations of the binary integer gcd and derived Jacobi symbol algorithms. Our algorithms take time O(n²) for n-bit input. This is an improvement on the known results, which are based on the Euclidean algorithm and take time O(n·M(n)), where M(n) denotes the complexity of multiplying n-bit integers. The new algorithms have applications in practical...
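The integer algorithm these results generalise is the binary (Stein) gcd, which replaces divisions with shifts and subtractions. A short sketch of that base case, for orientation; the Eisenstein-integer generalisation in the paper works in Z[ω] and is not shown here.

```python
def binary_gcd(a, b):
    """Binary gcd of nonnegative integers a and b, using only parity
    tests, shifts, and subtraction (Stein's algorithm)."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:     # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:           # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:       # strip factors of two from b
            b >>= 1
        if a > b:
            a, b = b, a         # keep a <= b
        b -= a                  # b - a is even, so the loop makes progress
    return a << shift           # restore the common power of two
```

Each full pass halves at least one operand, giving the O(n²) bit-operation behaviour for n-bit input that the abstract cites for the generalised algorithms.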
CSIR Research Space (South Africa)
Abu-Mahfouz, Adnan M
2013-05-01
of the available references to enhance their performance. However, to implement an efficient localisation algorithm for WSNs one should reconsider this assumption. This paper introduces an efficient localisation algorithm that is based on a novel smart reference-selection...
An efficient reduction algorithm for computation of interconnect delay ...
Indian Academy of Sciences (India)
An efficient reduction algorithm for computation of interconnect delay variability for statistical timing analysis in clock tree planning. Sivakumar Bondada Soumyendu ... Keywords. Statistical timing analysis; VLSI clock interconnects; delay variability; PDF; process variation; Gaussian random variation; computational cost.
On-the-fly Locata/inertial navigation system integration for precise maritime application
Jiang, Wei; Li, Yong; Rizos, Chris
2013-10-01
The application of Global Navigation Satellite System (GNSS) technology has meant that marine navigators have greater access to a more consistent and accurate positioning capability than ever before. However, GNSS may not be able to meet all emerging navigation performance requirements for maritime applications with respect to service robustness, accuracy, integrity and availability. In particular, applications in port areas (for example automated docking) and in constricted waterways, have very stringent performance requirements. Even when an integrated inertial navigation system (INS)/GNSS device is used there may still be performance gaps. GNSS signals are easily blocked or interfered with, and sometimes the satellite geometry may not be good enough for high accuracy and high reliability applications. Furthermore, the INS accuracy degrades rapidly during GNSS outages. This paper investigates the use of a portable ground-based positioning system, known as ‘Locata’, which was integrated with an INS, to provide accurate navigation in a marine environment without reliance on GNSS signals. An ‘on-the-fly’ Locata resolution algorithm that takes advantage of geometry change via an extended Kalman filter is proposed in this paper. Single-differenced Locata carrier phase measurements are utilized to achieve accurate and reliable solutions. A ‘loosely coupled’ decentralized Locata/INS integration architecture based on the Kalman filter is used for data processing. In order to evaluate the system performance, a field trial was conducted on Sydney Harbour. A Locata network consisting of eight Locata transmitters was set up near the Sydney Harbour Bridge. The experiment demonstrated that the Locata on-the-fly (OTF) algorithm is effective and can improve the system accuracy in comparison with the conventional ‘known point initialization’ (KPI) method. After the OTF and KPI comparison, the OTF Locata/INS integration is then assessed further and its performance
Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.
Al-Mulhem, M; Al-Maghrabi, T
1998-01-01
This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and the elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two instances of the E-TSP: randomly generated tours and tours for well-known problems in the literature. Experimental results are given to show that the proposed algorithm can find nearly optimal solutions for the E-TSP, outperforming many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.
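The tour-improvement phase described above can be illustrated with a generic 2-opt local search: repeatedly reverse a segment of the tour whenever doing so shortens it. The abstract does not specify the paper's two rearrangement operators, so 2-opt here is a stand-in for the NII step, not the authors' method.

```python
import math

def tour_length(pts, tour):
    """Total length of a closed tour over the points in pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    """Iterative 2-opt improvement: reverse tour[i+1..j] whenever the
    reversal strictly shortens the tour; stop at a local optimum."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(pts, new) < tour_length(pts, tour) - 1e-12:
                    tour, improved = new, True
    return tour
```

Starting from a convex-hull-based initial tour, as the CEN phase does, typically leaves far fewer improving moves for the local search to find.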
Energy efficient data sorting using standard sorting algorithms
Bunse, Christian
2011-01-01
Protecting the environment by saving energy, and thus reducing carbon dioxide emissions, is one of today's hottest and most challenging topics. Although the case for reducing energy consumption is clear from ecological and business perspectives, from a technological point of view the realization, especially for mobile systems, still falls behind expectations. Novel strategies that allow (software) systems to dynamically adapt themselves at runtime can be effectively used to reduce energy consumption. This paper presents a case study that examines the impact of using an energy management component that dynamically selects and applies the "optimal" sorting algorithm, from an energy perspective, during multi-party mobile communication. Interestingly, the results indicate that algorithmic performance is not the key factor and that dynamically switching algorithms at runtime does have a significant impact on energy consumption. © Springer-Verlag Berlin Heidelberg 2011.
Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Meenakshi Panda
2014-01-01
A distributed fault identification algorithm is proposed here to find both hard and soft faulty sensor nodes present in wireless sensor networks. The algorithm is distributed and self-detectable, and it can detect the most common Byzantine faults such as stuck-at-zero, stuck-at-one, and random data. In the proposed approach, each sensor node gathers the observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node finds the presence of a faulty sensor node, it compares its observed data with the data of the neighbors and predicts a probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensors' data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency, and message complexity.
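The first phase of the scheme sketched in this abstract, comparing a node's reading with the mean of its neighbours' readings, can be written in a few lines. This is an illustrative sketch only: the threshold choice and the later fault-diffusion phase are left abstract, and the names are assumptions.

```python
def suspect_faulty(own, neighbour_readings, threshold):
    """Flag a node as possibly faulty when its reading deviates from the
    mean of its neighbours' readings by more than a threshold. This models
    only the local-comparison phase; the final status would be decided by
    diffusing fault information among neighbours."""
    mean = sum(neighbour_readings) / len(neighbour_readings)
    return abs(own - mean) > threshold
```

A stuck-at-zero sensor surrounded by healthy neighbours reporting similar values is immediately flagged, while a healthy node agreeing with its neighbourhood is not.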
New efficient algorithm for recognizing handwritten Hindi digits
El-Sonbaty, Yasser; Ismail, Mohammed A.; Karoui, Kamal
2001-12-01
In this paper a new algorithm for recognizing handwritten Hindi digits is proposed. The proposed algorithm is based on using the topological characteristics combined with statistical properties of the given digits in order to extract a set of features that can be used in the process of digit classification. 10,000 handwritten digits are used in the experimental results. 1100 digits are used for training and another 5500 unseen digits are used for testing. The recognition rate has reached 97.56%, a substitution rate of 1.822%, and a rejection rate of 0.618%.
Efficient algorithms for approximate time separation of events
Indian Academy of Sciences (India)
Asynchronous systems; timing analysis and verification; approximate algorithms; convex approximation; time separation of events; bounded delay timing analysis. ... A complete asynchronous chip has been modelled and analysed using the proposed technique, revealing potential timing problems (already known to ...
Efficient differential evolution algorithms for multimodal optimal control problems
Lopez Cruz, I.L.; Willigenburg, van L.G.; Straten, van G.
2003-01-01
Many methods for solving optimal control problems, whether direct or indirect, rely upon gradient information and therefore may converge to a local optimum. Global optimisation methods, such as evolutionary algorithms, overcome this problem. In this work it is investigated how well novel and easy to
Efficient algorithms for approximate time separation of events
Indian Academy of Sciences (India)
...compounded in practice by statistical variations in manufacturing and operating conditions that introduce ... and use this to design a polynomial-time algorithm for computing bounds on time separation of events in systems ...... Proceeding of the ACM/IEEE Design Automation Conference (Los Alamitos, CA: IEEE Comput. Soc.
An efficient algorithm for computing the H-infinity norm
Belur, Madhu N.; Praagman, C.
This technical note addresses the computation of the H-infinity norm by directly computing the isolated common zeros of two bivariate polynomials, unlike the iteration algorithm that is currently used to find the H-infinity norm. The proposed method for H-infinity norm calculation is compared with
An efficient modified Elliptic Curve Digital Signature Algorithm | Kiros ...
African Journals Online (AJOL)
Many digital signatures which are based on Elliptic Curves Cryptography (ECC) have been proposed. Among these digital signatures, the Elliptic Curve Digital Signature Algorithm (ECDSA) is the widely standardized one. However, the verification process of ECDSA is slower than the signature generation process. Hence ...
Comparative efficiencies of three parallel algorithms for nonlinear ...
Indian Academy of Sciences (India)
1977) and element-by-element methods (Ortiz et al 1983), and virtual pulse techniques (Chen et al. 1995) etc. However, in recent years the most exciting possibility in the algorithm development area for nonlinear dynamic analysis has been ...
Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs
Directory of Open Access Journals (Sweden)
Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal
2010-11-15
Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures they employ: the first class uses an overlap/string graph and the second uses a de Bruijn graph. With the recent advances in short read sequencing technology, de Bruijn graph based algorithms play a vital role in practice, and efficient algorithms for building these massive de Bruijn graphs are essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm was given for this problem, where n is the size of the input and p is the number of processors. That algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend even to the out-of-core model, where it has an optimal I/O complexity of Θ((n log(n/B))/(B log(M/B))) (M being the main memory size and B the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for
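The basic construction underlying this work is easy to sketch: every k-mer of every read induces an edge between its (k-1)-mer prefix and (k-1)-mer suffix. The sketch below builds the plain (uni-directed) edge set; the bi-directed variant in the paper additionally folds each k-mer together with its reverse complement, which is omitted here.

```python
def de_bruijn_edges(reads, k):
    """Collect the (k-1)-mer -> (k-1)-mer edges induced by all k-mers
    of a read set. Each k-mer contributes one edge from its prefix of
    length k-1 to its suffix of length k-1."""
    edges = set()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges.add((kmer[:-1], kmer[1:]))
    return edges
```

In an assembler, a subsequent chain-compaction pass (as the abstract mentions) would merge non-branching paths of this graph into contigs.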
Protein structure determination via an efficient geometric build-up algorithm
Davis, Robert T; Ernst, Claus; Wu, Di
2010-05-17
Background: A protein structure can be determined by solving a so-called distance geometry problem whenever a set of inter-atomic distances is available and sufficient. However, the problem is intractable in general and has proved to be an NP-hard problem. An updated geometric build-up algorithm (UGB) has been developed recently that controls numerical errors and is efficient in protein structure determination for cases where only sparse exact distance data is available. In this paper, the UGB method has been improved and revised with the aim of solving distance geometry problems more efficiently and effectively. Methods: An efficient algorithm, called the revised updated geometric build-up algorithm (RUGB), builds up a protein structure from atomic distance data and provides an effective way of determining a protein structure with sparse exact distance data. In the algorithm, the condition to determine an unpositioned atom iteratively is relaxed (compared with the UGB algorithm) and data structure techniques are used to make the algorithm more efficient and effective. Results: We test the algorithm on a set of proteins selected randomly from the Protein Structure Database (PDB). We show that the numerical errors produced by the new RUGB algorithm are smaller than those of the UGB algorithm and that the RUGB algorithm has a significantly smaller runtime. Conclusions: The RUGB algorithm relaxes the condition for updating and incorporates a data structure for accessing the neighbours of an atom. The revisions result in an improvement over the UGB algorithm in two important areas: a reduction in the overall runtime and a decrease in the numerical error. PMID:20487514
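The core step of a geometric build-up algorithm is locating one new atom from exact distances to four already-positioned, non-coplanar atoms. Subtracting the sphere equation of the first anchor from the other three linearizes the problem into a 3x3 system. The sketch below solves that system with Cramer's rule; it is an illustrative single step under exact-distance assumptions, not the UGB/RUGB implementation, and all names are hypothetical.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def place_atom(anchors, dists):
    """Locate a 3D point from exact distances to four non-coplanar anchors.
    Subtracting ||p - a0||^2 = d0^2 from ||p - ai||^2 = di^2 gives
    2 p . (ai - a0) = |ai|^2 - |a0|^2 - di^2 + d0^2, a linear system."""
    (x0, y0, z0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (x, y, z), d in zip(anchors[1:], dists[1:]):
        A.append([2 * (x - x0), 2 * (y - y0), 2 * (z - z0)])
        b.append(d0 * d0 - d * d
                 + (x * x - x0 * x0) + (y * y - y0 * y0) + (z * z - z0 * z0))
    D = det3(A)                       # nonzero when anchors are non-coplanar
    coords = []
    for j in range(3):                # Cramer's rule, column by column
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        coords.append(det3(M) / D)
    return tuple(coords)
```

A full build-up run repeats this step atom by atom, which is where the numerical-error control discussed in the abstract becomes critical.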
Efficient algorithms for large-scale quantum transport calculations
Brück, Sascha; Calderara, Mauro; Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost; Luisier, Mathieu
2017-08-01
Massively parallel algorithms are presented in this paper to reduce the computational burden associated with quantum transport simulations from first principles. The power of modern hybrid computer architectures is harvested in order to determine the open boundary conditions that connect the simulation domain with its environment and to solve the resulting Schrödinger equation. While the former operation takes the form of an eigenvalue problem that is solved by a contour integration technique on the available central processing units (CPUs), the latter can be cast into a linear system of equations that is simultaneously processed by SplitSolve, a two-step algorithm, on general-purpose graphics processing units (GPUs). A significant decrease of the computational time by up to two orders of magnitude is obtained as compared to standard solution methods.
Efficient Feedforward Linearization Technique Using Genetic Algorithms for OFDM Systems
Directory of Open Access Journals (Sweden)
García Paloma
2010-01-01
Feedforward is a linearization method that simultaneously offers wide bandwidth and good intermodulation distortion suppression, so it is a good choice for Orthogonal Frequency Division Multiplexing (OFDM) systems. The feedforward structure consists of two loops, and an accurate adjustment between them is necessary over time and whenever temperature, environmental, or operating changes occur. Amplitude and phase imbalances of the circuit elements in both loops produce mismatch effects that degrade performance. A method is proposed to compensate these mismatches by introducing two complex coefficients calculated by means of a genetic algorithm. A full study is carried out to choose the optimal parameters of the genetic algorithm applied to wideband systems based on OFDM technologies, which are very sensitive to nonlinear distortions. The functionality of the method has been verified by means of simulation.
An efficient video dehazing algorithm based on spectral clustering
Zhao, Fan; Yao, Zao; Song, XiaoFang; Yao, Yi
2017-07-01
Image and video dehazing is a popular topic in the fields of computer vision and digital image processing. A fast, optimized dehazing algorithm was recently proposed that enhances contrast and reduces flickering artifacts in a dehazed video sequence by minimizing a cost function that makes transmission values spatially and temporally coherent. However, its fixed-size block partitioning leads to block artifacts, and the weak edges in a hazy image are not addressed. Hence, a video dehazing algorithm based on customized spectral clustering is proposed. To avoid block artifacts, the spectral clustering is customized to segment static scenes so that the same target has the same transmission value. Assuming that dehazed edge images have richer detail than before restoration, an edge cost function is added to the transmission model. The experimental results demonstrate that the proposed method provides higher dehazing quality and lower time complexity than the previous technique.
Energy-Efficient Train Operation Using Nature-Inspired Algorithms
Directory of Open Access Journals (Sweden)
Kemal Keskin
2017-01-01
Full Text Available A train operation optimization that minimizes traction energy subject to various constraints is carried out using nature-inspired evolutionary algorithms. The optimization process yields switching points that initiate the cruising and coasting phases of driving. Because the problem has a nonlinear optimization formulation, the nature-inspired evolutionary search methods Genetic Simulated Annealing, Firefly, and Big Bang-Big Crunch were employed in this study. As a case study, a realistic train and a test track based on part of the Eskisehir light rail network were modeled. Speed limits, varying track alignments, maximum allowable trip time, and changes in train mass were considered, and punctuality was included in the objective function as a penalty factor. Results show that all three evolutionary methods generated effective and consistent solutions; however, each has different accuracy and convergence characteristics.
Novotny, M.A.
2010-02-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
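The rejection-free scheme the abstract refers to can be illustrated generically: instead of proposing moves and rejecting most of them, each step selects one move with probability proportional to its rate and advances the clock by an exponential waiting time. The rate catalogue below is a hypothetical toy, not the paper's off-lattice model:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free (n-fold way) Monte Carlo step.

    A move is selected with probability proportional to its rate, and the
    simulated clock advances by an exponential waiting time governed by
    the total rate: no move is ever proposed and then rejected.
    """
    total = sum(rates)
    pick = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if pick < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return chosen, dt

rng = random.Random(1)
rates = [0.1, 1.0, 10.0]                 # toy rate catalogue for three moves
counts = [0, 0, 0]
t = 0.0
for _ in range(20000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
# Each move is executed in proportion to its rate; slow moves cost no
# wasted proposals, which is the source of the method's efficiency.
```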
Efficient Routing Algorithms for Multiple Vehicles with No Explicit Communications
2009-01-01
Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data
2015-07-01
GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.
Energy Technology Data Exchange (ETDEWEB)
D'Helon, CD
2004-08-18
The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be set as a global optimization problem (GOP) whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm to find the Global Minimum with a Guarantee. The new algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing to find the position of the global minimum in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.
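The general idea of turning a continuous global optimization problem into a discrete search with a guarantee can be illustrated with a Lipschitz bound: if the objective cannot change faster than a known constant, sampling on a sufficiently fine grid provably brackets the global minimum. This is only a one-dimensional illustration of the principle; GMG's actual construction for the MPR problem is not reproduced here, and the test function is hypothetical:

```python
import numpy as np

def lipschitz_grid_min(f, lo, hi, lipschitz, tol):
    """Locate the global minimum of f on [lo, hi] to within `tol`.

    If |f(x) - f(y)| <= lipschitz * |x - y|, then evaluating f on a grid
    of spacing h = 2 * tol / lipschitz guarantees the best sampled value
    is within tol of the true global minimum: a discrete search with a
    guarantee.
    """
    h = 2.0 * tol / lipschitz
    xs = np.arange(lo, hi + h, h)
    vals = f(xs)
    i = np.argmin(vals)
    return xs[i], vals[i]

# Multimodal test function on [0, 10]; |f'| <= 1 + 2.5 = 3.5.
f = lambda x: np.sin(x) + 0.5 * np.cos(5.0 * x)
x_best, f_best = lipschitz_grid_min(f, 0.0, 10.0, lipschitz=3.5, tol=1e-3)
```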
An efficient error correction algorithm using FM-index.
Huang, Yao-Ting; Huang, Yu-Wen
2017-11-28
High-throughput sequencing offers high throughput and low cost for sequencing a genome. However, sequencing errors, including mismatches and indels, may be produced during sequencing. Because these errors may reduce the accuracy of subsequent de novo assembly, error correction is necessary prior to assembly; yet existing correction methods still face trade-offs among correction power, accuracy, and speed. We develop a novel overlap-based error correction algorithm using the FM-index (called FMOE). FMOE first identifies overlapping reads by aligning a query read simultaneously against multiple reads compressed by the FM-index. Subsequently, sequencing errors are corrected by k-mer voting from overlapping reads only. The experimental results indicate that FMOE has the highest correction power with comparable accuracy and speed. Our algorithm performs better on long-read than on short-read datasets when compared with others. The assembly results indicate that each algorithm has its own strengths and weaknesses, and that FMOE is well suited to long or good-quality reads. FMOE is freely available at https://github.com/ythuang0522/FMOC.
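The k-mer voting idea can be sketched in miniature: count k-mers across the reads, then at each position vote each candidate base by how many of its covering k-mers are frequent. The toy sequence and thresholds below are hypothetical, and FMOE's FM-index machinery for finding overlaps is deliberately not reproduced:

```python
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, min_support=3):
    """Replace each base by the alternative whose covering k-mers collect
    the most votes (occurrences above `min_support`): a simplified k-mer
    voting pass, not the full FMOE pipeline."""
    read = list(read)
    for i in range(len(read)):
        votes = {}
        for base in "ACGT":
            read[i] = base
            v = 0
            # All k-mer windows that cover position i.
            for j in range(max(0, i - k + 1), min(i, len(read) - k) + 1):
                if counts["".join(read[j:j + k])] >= min_support:
                    v += 1
            votes[base] = v
        read[i] = max("ACGT", key=lambda b: votes[b])
    return "".join(read)

true_seq = "ACGTACGGTTAGCCATAGGCTA"
reads = [true_seq] * 6               # error-free coverage of the region
bad = "ACGTACGGTTAGACATAGGCTA"       # one substitution at position 12
fixed = correct_read(bad, kmer_counts(reads, k=5), k=5)
```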
Efficiency and Equity Performance of a Coordinated Ramp Metering Algorithm
Directory of Open Access Journals (Sweden)
Duo Li
2016-10-01
The proposed coordinated ramp metering strategy was evaluated using the traffic simulation software AIMSUN. Simulation results revealed that the equity of the motorway system can be improved significantly by using the proposed strategy without compromising much of the system's efficiency.
Santillana, Mauricio; Le Sager, Philippe; Jacob, Daniel J.; Brenner, Michael P.
2010-11-01
We present a computationally efficient adaptive method for calculating the time evolution of the concentrations of chemical species in global 3-D models of atmospheric chemistry. Our strategy consists of partitioning the computational domain into fast and slow regions for each chemical species at every time step. In each grid box, we group the fast species and solve for their concentrations in a coupled fashion. Concentrations of the slow species are calculated using a simple semi-implicit formula. Separation of species into fast and slow is done on the fly based on their local production and loss rates. This allows, for example, the exclusion of short-lived volatile organic compounds (VOCs) and their oxidation products from chemical calculations in the remote troposphere, where their concentrations are negligible, letting the simulation determine the exclusion domain and allowing species to drop out individually from the coupled chemical calculation as their production/loss rates decline. We applied our method to a 1-year simulation of global tropospheric ozone-NOx-VOC-aerosol chemistry using the GEOS-Chem model. Results show a 50% improvement in computational performance for the chemical solver, with no significant added error.
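The partitioning idea can be sketched for first-order kinetics. A species with production P and loss rate k gets the cheap semi-implicit update c_new = (c + P*dt) / (1 + k*dt) when its loss timescale 1/k is long compared with the step; otherwise it would go to the coupled stiff solver. The rates and the sub-stepped stand-in for the stiff solver below are hypothetical, not GEOS-Chem's chemistry:

```python
import numpy as np

def step(c, prod, k_loss, dt, stiff_ratio=10.0):
    """Advance concentrations one step, partitioning species on the fly.

    'Slow' species (k * dt small) get the semi-implicit update
        c_new = (c + P * dt) / (1 + k * dt).
    'Fast' species would go to a coupled stiff solver; here they are
    merely sub-stepped with the same formula, purely for illustration.
    """
    c = c.copy()
    slow = k_loss * dt < 1.0 / stiff_ratio        # on-the-fly partition
    c[slow] = (c[slow] + prod[slow] * dt) / (1.0 + k_loss[slow] * dt)
    fast = ~slow
    nsub = 20                                     # stand-in for a stiff solver
    for _ in range(nsub):
        c[fast] = (c[fast] + prod[fast] * dt / nsub) / (1.0 + k_loss[fast] * dt / nsub)
    return c

prod = np.array([1.0, 5.0])
k_loss = np.array([1e-3, 50.0])                   # one slow, one fast species
c = np.array([0.0, 0.0])
for _ in range(2000):
    c = step(c, prod, k_loss, dt=0.1)
# The fast species reaches its steady state P / k = 0.1 almost exactly;
# the slow species relaxes toward P / k on its own long timescale.
```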
IJA: An Efficient Algorithm for Query Processing in Sensor Networks
Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa
2011-01-01
One of the main features of sensor networks is processing real-time state information after gathering the needed data from many domains. The component technologies of each node, called a sensor node (physical sensors, processors, actuators, and power supplies), have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes are considerably constrained in energy and memory, with a very limited ability to process information compared to conventional computer systems, so query processing over the nodes must respect these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been studied. While simple queries, such as select and aggregate queries, have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node and to minimize the communication cost, which is the main consumer of battery power when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375
Efficient Algorithms with Asymmetric Read and Write Costs
Blelloch, Guy E.; Fineman, Jeremy T.; Gibbons, Phillip B.; Gu, Yan; Shun, Julian
2016-01-01
In several emerging technologies for computer memory (main memory), reading is significantly cheaper than writing. Such asymmetry in memory costs poses a model fundamentally different from the RAM for algorithm design. In this paper we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both ...
An efficient Lagrangean relaxation-based object tracking algorithm in wireless sensor networks.
Lin, Frank Yeong-Sung; Lee, Cheng-Ta
2010-01-01
In this paper we propose an energy-efficient object tracking algorithm for wireless sensor networks (WSNs). Such sensor networks have to be designed to achieve energy-efficient object tracking for any given arbitrary topology. We consider, in particular, bi-directionally moving objects with given movement frequencies for each pair of sensor nodes and given link transmission costs. The problem is formulated as a 0/1 integer-programming problem, and a Lagrangean relaxation-based (LR-based) heuristic algorithm is proposed for solving it. Experimental results show that the proposed algorithm achieves near-optimal energy-efficient object tracking. Furthermore, the algorithm is very efficient and scalable in terms of solution time.
Efficient distribution of toy products using ant colony optimization algorithm
Hidayat, S.; Nurpraja, C. A.
2017-12-01
CV Atham Toys (CVAT), which comprises 13 small and medium industries, produces wooden toys and furniture. CVAT always attempts to deliver customer orders on time, but delivery costs are high. This is due to inadequate infrastructure: delivery routes are long, vehicle maintenance costs are high, and the fuel subsidy from the government is only temporary. This study seeks to minimize the cost of product distribution based on the shortest route, using one of five Ant Colony Optimization (ACO) algorithms to solve the Vehicle Routing Problem (VRP). The study concludes that the best of the five is the Ant Colony System (ACS) algorithm. The best route in the 1st week gave a total distance of 124.11 km at a cost of Rp 66,703.75; the 2nd week route gave a total distance of 132.27 km at a cost of Rp 71,095.13; the 3rd week best route gave a total distance of 122.70 km at a cost of Rp 65,951.25; and the 4th week gave a total distance of 132.27 km at a cost of Rp 74,083.63. Prior to this study there was no effort to calculate these figures.
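The ant-colony mechanism behind such routing studies can be sketched on a toy symmetric TSP (the study's actual VRP constraints and ACS parameters are not reproduced; everything below is a hypothetical illustration):

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, seed=0):
    """Toy ant-colony optimizer for a symmetric TSP.

    Ants build tours city by city, choosing the next city with probability
    proportional to pheromone**alpha * (1/distance)**beta; pheromone then
    evaporates (rho) and the best tour found so far deposits reinforcement.
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            tour = [rng.randrange(n)]
            todo = set(range(n)) - {tour[0]}
            while todo:
                i = tour[-1]
                ws = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta) for j in todo]
                pick = rng.random() * sum(w for _, w in ws)
                for j, w in ws:          # roulette-wheel selection
                    pick -= w
                    if pick <= 0:
                        break
                tour.append(j)
                todo.remove(j)
            L = tour_length(tour, dist)
            if L < best_len:
                best, best_len = tour, L
        # Evaporation plus best-tour reinforcement.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for i in range(n):
            a, b = best[i], best[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best, best_len

# Eight cities on a circle: the optimal tour follows the circle.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]
dist = [[math.hypot(ax - bx, ay - by) or 1e-9 for bx, by in pts] for ax, ay in pts]
tour, length = aco_tsp(dist)
```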
Efficient scan mask techniques for connected components labeling algorithm
Directory of Open Access Journals (Sweden)
Sutheebanjard Phaisarn
2011-01-01
Full Text Available Block-based connected components labeling is by far the fastest algorithm for labeling the connected components in 2D binary images, especially when the image is large. This algorithm produces a decision tree that contains 211 leaf nodes, with a tree depth of 14 levels and an average depth of 1.5923. This article attempts to provide a faster method for connected components labeling. We propose two new scan masks, namely, the pixel-based scan mask and the block-based scan mask. In the final stage, the block-based scan mask is transformed into a near-optimal decision tree. We conducted comparative experiments using different sources of images to examine the performance of the proposed method against existing methods, and performed average tree depth and tree balance analyses to consolidate the performance improvement. Most significantly, the proposed method produces a decision tree containing 86 leaf nodes, with a tree depth of 12 levels and an average depth of 1.4593, resulting in faster execution time, especially when the foreground density is equal to or greater than the background density of the images.
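The scan-mask idea refines the classic two-pass scheme, which can be sketched as follows (plain pixel-based 4-connectivity with a union-find, not the paper's block-based masks or decision tree):

```python
def label_components(img):
    """Two-pass connected components labeling (4-connectivity).

    The first pass scans with a mask of the already-visited neighbors
    (left, up), recording label equivalences in a union-find; the second
    pass flattens the equivalences into final labels.
    """
    h, w = len(img), len(img[0])
    parent = [0]                           # union-find; index 0 = background
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    labels = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up == 0 and left == 0:
                parent.append(len(parent))         # new provisional label
                labels[y][x] = len(parent) - 1
            elif up and left:
                a, b = find(up), find(left)
                labels[y][x] = min(a, b)
                parent[max(a, b)] = min(a, b)      # record equivalence
            else:
                labels[y][x] = up or left
    # Second pass: resolve equivalences into consecutive final labels.
    remap = {}
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = remap.setdefault(find(labels[y][x]), len(remap) + 1)
    return labels, len(remap)

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [1, 0, 0, 1]]
labels, n = label_components(img)
```

Block-based variants speed up exactly this scan by deciding several pixels per mask application.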
2D Efficient Unconditionally Stable Meshless FDTD Algorithm
Directory of Open Access Journals (Sweden)
Kang Luo
2016-01-01
Full Text Available This paper presents an efficient weighted-Laguerre-polynomial-based meshless finite-difference time-domain (WLP-MFDTD) method. By decomposing the coefficients of the system matrix and adding a perturbation term, a factorization-splitting scheme is introduced. The huge sparse matrix is transformed into two N×N matrices with 9 unknown elements in each row, regardless of duplicates. Consequently, compared with the conventional implementation, the CPU time and memory requirement can be reduced greatly. The perfectly matched layer absorbing boundary condition is also extended to this approach. A numerical example demonstrates the capability and efficiency of the proposed method.
Delayed-X Lms Algorithm: AN Efficient Anc Algorithm Utilizing Robustness of Cancellation Path Model
Kim, H.-S.; Park, Y.
1998-05-01
In conventional ANC (Active Noise Control), the cancellation path is usually modelled by an FIR filter with many coefficients. In this study, a Delayed-X LMS algorithm is investigated whose computational load is much less than that of the Filtered-X LMS method for long-duct or narrow-band noise cancellation applications. The algorithm is based on the hypothesis that, in such cases, the cancellation path model for the Filtered-X LMS method does not have to be accurate and can be represented by a pure delay. The steady-state weight vector solution in the presence of model error is investigated to show the validity of this deliberately simplified model. The ADE (Adaptive Delay Estimation) method is proposed for effective delay estimation of the cancellation path, and it can be used with the Delayed-X LMS algorithm on-line or off-line. Simulation and experimental results demonstrate the effectiveness of the Delayed-X LMS algorithm for the specified applications.
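The core substitution, using a plain delayed reference instead of a reference filtered through a full secondary-path model, can be sketched in a toy simulation where the cancellation path really is a pure delay. The paths, delay, and step size below are hypothetical, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 5                                        # cancellation path: pure delay D
P = np.array([0.0] * D + [0.6, -0.3, 0.1])   # primary path (includes the delay)
L = 3                                        # adaptive filter length
mu = 0.01

n_samp = 20000
x = rng.standard_normal(n_samp)              # reference noise
w = np.zeros(L)
xbuf = np.zeros(len(P))                      # recent reference, newest first
ybuf = np.zeros(D + 1)                       # controller-output delay line
err = np.zeros(n_samp)

for n in range(n_samp):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    d_n = P @ xbuf                           # disturbance at the error sensor
    y = w @ xbuf[:L]                         # controller output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d_n + ybuf[-1]                       # anti-noise arrives after delay D
    # Delayed-X LMS: the reference is simply delayed by D samples instead
    # of being filtered through a full secondary-path model.
    xd = xbuf[D:D + L]
    w -= mu * e * xd
    err[n] = e

before = np.mean(err[:500] ** 2)             # residual power early on
after = np.mean(err[-5000:] ** 2)            # residual power once adapted
```

Since the true cancellation path here is exactly a delay, the weights converge to the negated primary-path taps and the residual collapses.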
Efficient algorithms for non-linear four-wave interactions
Van Vledder, G.P.
2012-01-01
This paper addresses the ongoing development of efficient methods for computing the non-linear four-wave interactions in operational discrete third-generation wind-wave models. It is generally assumed that these interactions play an important role in the evolution of wind waves.
Efficient algorithms for probing the RNA mutation landscape.
Directory of Open Access Journals (Sweden)
Jérôme Waldispühl
Full Text Available The diversity and importance of the roles played by RNAs in the regulation and development of the cell are now well known and well documented. This broad range of functions is achieved through specific structures that have been (presumably) optimized through evolution. State-of-the-art methods, such as McCaskill's algorithm, use a statistical mechanics framework based on the computation of the partition function over the canonical ensemble of all possible secondary structures of a given sequence. Although secondary structure predictions from thermodynamics-based algorithms are not as accurate as methods employing comparative genomics, the former are the only available tools to investigate novel RNAs, such as the many RNAs of unknown function recently reported by the ENCODE consortium. In this paper, we generalize the McCaskill partition function algorithm to sum over the grand canonical ensemble of all secondary structures of all mutants of the given sequence. Specifically, our new program, RNAmutants, simultaneously computes for each integer k the minimum free energy structure MFE(k) and the partition function Z(k) over all secondary structures of all k-point mutants, even allowing the user to specify certain positions required not to mutate and certain positions required to base-pair or remain unpaired. This technically important extension allows us to study the resilience of an RNA molecule to pointwise mutations. By computing the mutation profile of a sequence, a novel graphical representation of the mutational tendency of nucleotide positions, we analyze the deleterious nature of mutating specific nucleotide positions or groups of positions. We have successfully applied RNAmutants to investigate deleterious mutations (mutations that radically modify the secondary structure) in the Hepatitis C virus cis-acting replication element and to evaluate the evolutionary pressure applied on different regions of the HIV trans-activation response element.
An efficient algorithm for direction finding against unknown mutual coupling.
Wang, Weijiang; Ren, Shiwei; Ding, Yingtao; Wang, Haoyu
2014-10-24
In this paper, a direction-finding algorithm is proposed for use in the presence of unknown mutual coupling. The preliminary direction of arrival (DOA) is estimated using the whole array for high resolution. Further refinement is then conducted by estimating the angularly dependent coefficients (ADCs) with subspace theory. The mutual coupling coefficients are finally determined by solving a least squares problem that utilizes all of the ADCs without discarding any. Simulation results show that the proposed method achieves better performance at a low signal-to-noise ratio (SNR) with a small-sized array, and is more robust, compared with similar methods that iterate between initial DOA estimation and further refinement.
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementations as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation, and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments; in this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.
ECTracker--an efficient algorithm for haplotype analysis and classification.
Lin, Li; Wong, Limsoon; Leong, Tze-Yun; Lai, Pohsan
2007-01-01
This work aims at discovering the genetic variations of hemophilia A patients by examining the combinations of molecular haplotypes present in hemophilia A and normal local populations using data mining methods. Data mining methods capable of extracting understandable and expressive patterns, and of making predictions based on inferences over those patterns, were explored in this work. An algorithm known as ECTracker is proposed, and its performance is compared with common data mining methods such as the artificial neural network, support vector machine, naive Bayes, and decision tree (C4.5). Experimental studies and analyses show that ECTracker has predictive accuracy comparable to methods that can only perform classification, while also producing easily comprehensible and expressive patterns for analysis by experts.
An efficient algorithm for solving the gravity problem of finding a density in a horizontal layer
Akimova, Elena N.; Martyshko, Peter S.; Misilov, Vladimir E.; Kosivets, Rostislav A.
2016-06-01
An efficient algorithm for solving the inverse gravity problem of finding a variable density in a horizontal layer from gravitational data is constructed. After discretization and approximation, the problem reduces to solving a system of linear algebraic equations. The idea of the algorithm is to exploit the block-Toeplitz structure of the coefficient matrix, which drastically reduces both memory usage and computation time. The algorithm was parallelized and implemented on the Uran supercomputer. A model problem with synthetic gravitational data was solved.
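The structural trick can be illustrated in one dimension: a Toeplitz matrix is fully described by its first row and column, and its matrix-vector product can be computed via circulant embedding and the FFT in O(n log n) time and O(n) memory, without ever forming the dense matrix. This is only the 1-D analogue of the block-Toeplitz machinery used in the paper:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix by a vector without forming the matrix.

    `c` is the first column, `r` the first row (with c[0] == r[0]).  The
    Toeplitz matrix is embedded in a 2n x 2n circulant one, whose product
    is a cyclic convolution computed by FFT.
    """
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])      # circulant first column
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

rng = np.random.default_rng(0)
n = 128
c = rng.standard_normal(n)                 # first column
r = c.copy(); r[1:] = rng.standard_normal(n - 1)   # first row, r[0] == c[0]
x = rng.standard_normal(n)

# Dense reference, built only to verify the fast product.
T = np.empty((n, n))
for i in range(n):
    for j in range(n):
        T[i, j] = c[i - j] if i >= j else r[j - i]
fast = toeplitz_matvec(c, r, x)
```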
Optimizing Inventory Using Genetic Algorithm for Efficient Supply Chain Management
P. Radhakrishnan; W. M. Prasad; M. R. Gopalan
2009-01-01
Problem statement: Today, inventory management is considered an important field in supply chain management. When inventory is managed efficiently and effectively throughout the supply chain, the service provided to the customer is ultimately enhanced. Hence, to ensure minimal cost for the supply chain, determining the level of inventory to be held at the various stages of a supply chain is unavoidable. Minimizing the total supply chain cost refers to the reduction o...
Deformation analysis of gellan-gum based bone scaffold using on-the-fly tomography
Czech Academy of Sciences Publication Activity Database
Kytýř, Daniel; Zlámal, Petr; Koudelka_ml., Petr; Fíla, Tomáš; Krčmářová, Nela; Kumpová, Ivana; Vavřík, Daniel; Gantar, A.; Novak, S.
2017-01-01
Roč. 134, November (2017), s. 400-417 ISSN 0264-1275 EU Projects: European Commission(XE) ATCZ38 - Com3d-XCT; European Commission(XE) ATCZ133 Keywords : on-the-fly tomography * gellan-gum * scaffold * digital volume correlation * compression Subject RIV: JJ - Other Materials OBOR OECD: Materials engineering Impact factor: 4.364, year: 2016 http://www.sciencedirect.com/science/article/pii/S026412751730789X
Contribution in Adaptating Web Interfaces to any Device on the Fly: The HCI Proxy
Lardon, Jérémy; Fayolle, Jacques; Gravier, Christophe; Ates, Mikaël
2008-11-01
A lot of work has been done on the adaptation of UIs. In the particular field of Web UI adaptation, many research projects aim at displaying web content designed for PCs on less capable devices. In this paper, we present previous work in the domain and then our proxy architecture, the HCI proxy, built to test solutions to the problem of adapting Web UIs on the fly, for mobile phones, PDAs, and smartphones, but also for TVs through browser-embedding set-top boxes (STBs).
Efficient Grid-based Clustering Algorithm with Leaping Search and Merge Neighbors Method
Liu, Feng; Wen, Peng; Zhu, Erzhou
2017-09-01
The increasing size of data makes the study of clustering algorithms an important topic in data mining. As one of the fastest families of algorithms, grid clustering still suffers from low precision, and its efficiency also needs improvement. To cope with these problems, this paper proposes an efficient grid-based clustering algorithm using leaping search and a merge-neighbors method (LSMN). LSMN first divides the data space into a finite number of grid cells and determines the validity of each cell according to a threshold. Then, a leaping search mechanism finds the valid cells by retrieving all the odd columns and odd rows. Finally, if the number of valid cells exceeds that of invalid cells, the invalid cells are merged in. The time cost is reduced by the leaping search, and the accuracy is improved by re-judging the invalid cells. The experimental results show that the proposed algorithm exhibits relatively better performance when compared with some popularly used algorithms.
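The basic grid-clustering scheme that LSMN refines can be sketched minimally: hash points into cells, keep cells above a density threshold, and merge neighboring valid cells into clusters. The leaping-search and invalid-cell re-judgment mechanisms of LSMN are deliberately not reproduced; the data below are a hypothetical toy:

```python
from collections import defaultdict, deque

def grid_cluster(points, cell, min_pts):
    """Minimal grid-based clustering.

    Points are hashed into square cells; cells holding at least `min_pts`
    points are 'valid', and clusters are the connected groups of valid
    cells (8-neighborhood), found with a BFS merge.
    """
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell), int(p[1] // cell))].append(p)
    valid = {c for c, pts in cells.items() if len(pts) >= min_pts}

    labels = {}
    next_label = 0
    for c in valid:
        if c in labels:
            continue
        queue = deque([c]); labels[c] = next_label
        while queue:                          # merge neighboring valid cells
            cx, cy = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in valid and nb not in labels:
                        labels[nb] = next_label
                        queue.append(nb)
        next_label += 1
    # Each point inherits the label of its cell; sparse cells become None.
    return [labels.get((int(p[0] // cell), int(p[1] // cell))) for p in points]

# Two dense blobs and one stray point.
pts = [(0.1 * i, 0.1 * j) for i in range(5) for j in range(5)]           # blob A
pts += [(5 + 0.1 * i, 5 + 0.1 * j) for i in range(5) for j in range(5)]  # blob B
pts += [(2.5, 2.5)]                                                      # noise
lab = grid_cluster(pts, cell=1.0, min_pts=3)
```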
A Genetic Algorithm for Task Scheduling on NoC Using FDH Cross Efficiency
Directory of Open Access Journals (Sweden)
Song Chai
2013-01-01
Full Text Available A CrosFDH-GA algorithm is proposed for the task scheduling problem on NoC-based MPSoCs with respect to multicriterion optimization. First, four common criteria, namely makespan, data routing energy, average link load, and workload balance, are extracted from the task scheduling problem on the NoC and are used to construct the DEA DMU model. Then FDH analysis is applied to the problem, and an FDH cross-efficiency formulation is derived for evaluating the relative advantage among schedule solutions. Finally, we introduce the DEA approach into a genetic algorithm and propose the CrosFDH-GA scheduling algorithm to find the most efficient schedule solution for a given scheduling problem. The simulation results show that our FDH cross-efficiency formulation effectively evaluates the performance of schedule solutions, and comparative simulations show that CrosFDH-GA produces more metric-balanced schedule solutions than other multicriterion algorithms.
Effective and efficient optics inspection approach using machine learning algorithms
Energy Technology Data Exchange (ETDEWEB)
Abdulla, G; Kegelmeyer, L; Liao, Z; Carr, W
2010-11-02
The Final Optics Damage Inspection (FODI) system automatically acquires images of the final optics at the National Ignition Facility (NIF), and the Optics Inspection (OI) system analyzes them. During each inspection cycle, up to 1000 images acquired by FODI are examined by OI to identify and track damage sites on the optics. The process of tracking growing damage sites on the surface of an optic can be made more effective by identifying and removing signals associated with debris or reflections. The manual process of filtering out these false sites is daunting and time consuming. In this paper we discuss the use of machine learning tools and data mining techniques to help with this task. We describe the process of preparing a data set that can be used for training and for identifying hardware reflections in the image data. To collect training data, the images are first automatically acquired and analyzed with existing software, and then relevant features such as spatial, physical, and luminosity measures are extracted for each site. A subset of these sites is 'truthed', or manually assigned a class, to create training data. A supervised classification algorithm is used to test whether the features can predict the class membership of new sites. A suite of self-configuring machine learning tools called 'Avatar Tools' is applied to classify all sites. For verification we used 10-fold cross-validation and found the accuracy was above 99%. This substantially reduces the number of false alarms that would otherwise be sent for more extensive investigation.
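The verification step, training a supervised classifier on per-site features and scoring it by k-fold cross-validation, can be sketched with a simple nearest-centroid stand-in (the actual Avatar Tools ensemble and NIF feature set are not reproduced; the synthetic features below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted site features: class 0 = damage site,
# class 1 = hardware reflection, separated in a 3-D feature space.
n_per = 200
X = np.vstack([rng.normal(0.0, 0.6, size=(n_per, 3)),
               rng.normal(2.5, 0.6, size=(n_per, 3))])
y = np.array([0] * n_per + [1] * n_per)

def nearest_centroid_accuracy(X, y, k=10):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    correct = 0
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(k) if g != f])
        # Per-class centroids from the training folds only.
        cents = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X[test][:, None, :] - cents[None, :, :], axis=2)
        correct += np.sum(d.argmin(axis=1) == y[test])
    return correct / len(y)

acc = nearest_centroid_accuracy(X, y)
```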
Efficient algorithms for multiscale modeling in porous media
Wheeler, Mary F.
2010-09-26
We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.
A software algorithm/package for control loop configuration and eco-efficiency.
Munir, M T; Yu, W; Young, B R
2012-11-01
Software is a powerful tool to help us analyze industrial information and control processes. In this paper, we present our recent development of a software algorithm/package which can help us select the more eco-efficient control configuration. Nowadays, the eco-efficiency of industrial processes/plants has become more and more important; engineers need a way to integrate control loop configuration with measurements of eco-efficiency. The exergy eco-efficiency factor, a new measure of eco-efficiency for control loop configuration, has been developed. This software algorithm/package combines a commercial simulator, VMGSim, with Excel to calculate the exergy eco-efficiency factor. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Efficient discrete cosine transform model-based algorithm for photoacoustic image reconstruction
Zhang, Yan; Wang, Yuanyuan; Zhang, Chen
2013-06-01
The model-based algorithm is an effective reconstruction method for photoacoustic imaging (PAI). Compared with the analytical reconstruction algorithms, the model-based algorithm is able to provide a more accurate and high-resolution reconstructed image. However, the relatively heavy computational complexity and huge memory storage requirement often impose restrictions on its applications. We incorporate the discrete cosine transform (DCT) in PAI reconstruction and establish a new photoacoustic model. With this new model, an efficient algorithm is proposed for PAI reconstruction. Relatively significant DCT coefficients of the measured signals are used to reconstruct the image. As a result, the calculation can be saved. The theoretical computation complexity of the proposed algorithm is figured out and it is proved that the proposed method is efficient in calculation. The proposed algorithm is also verified through the numerical simulations and in vitro experiments. Compared with former developed model-based methods, the proposed algorithm is able to provide an equivalent reconstruction with the cost of much less time. From the theoretical analysis and the experiment results, it would be concluded that the model-based PAI reconstruction can be accelerated by using the proposed algorithm, so that the practical applicability of PAI may be enhanced.
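The core idea, keeping only the significant DCT coefficients of the measured signals, can be illustrated in one dimension. The orthonormal DCT-II/DCT-III pair and the test signal below are generic stand-ins, not the paper's 2-D photoacoustic model:

```python
import math

def dct(x):
    # Orthonormal DCT-II of a 1-D signal.
    N = len(x)
    def coef(k):
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        return scale * sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                           for n in range(N))
    return [coef(k) for k in range(N)]

def idct(X):
    # Matching inverse transform (DCT-III).
    N = len(X)
    def sample(n):
        return sum((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
                   * X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                   for k in range(N))
    return [sample(n) for n in range(N)]

def truncate(X, keep):
    # Zero all but the `keep` largest-magnitude coefficients.
    top = set(sorted(range(len(X)), key=lambda k: abs(X[k]), reverse=True)[:keep])
    return [X[k] if k in top else 0.0 for k in range(len(X))]

# A signal that is exactly a sum of two DCT basis functions, so keeping
# only 2 of 32 coefficients reconstructs it to machine precision.
N = 32
signal = [math.cos(math.pi * (n + 0.5) * 3 / N)
          + 0.5 * math.cos(math.pi * (n + 0.5) * 7 / N) for n in range(N)]
approx = idct(truncate(dct(signal), keep=2))
err = max(abs(a - b) for a, b in zip(signal, approx))
print(err)
```

Real photoacoustic signals are not exactly sparse in the DCT basis, but most of their energy concentrates in few coefficients, which is what lets the model-based reconstruction drop the rest and save computation.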
High efficiency video coding (HEVC) algorithms and architectures
Budagavi, Madhukar; Sullivan, Gary
2014-01-01
This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...
Efficient Parallel Algorithm for Statistical Ion Track Simulations in Crystalline Materials
Jeon, Byoungseon
2008-01-01
We present an efficient parallel algorithm for statistical Molecular Dynamics simulations of ion tracks in solids. The method is based on the Rare Event Enhanced Domain following Molecular Dynamics (REED-MD) algorithm, which has been successfully applied to studies of, e.g., ion implantation into crystalline semiconductor wafers. We discuss the strategies for parallelizing the method, and we settle on a host-client type polling scheme in which multiple asynchronous client processors continuously feed results to the host, which, in turn, distributes the resulting feedback information to the clients. This real-time feedback consists of, e.g., cumulative damage information or statistics updates necessary for the cloning in the rare event algorithm. We finally demonstrate the algorithm for radiation effects in a nuclear oxide fuel, and we show that the balanced parallel approach achieves high parallel efficiency in multiple processor configurations.
FAST SS-ILM: A COMPUTATIONALLY EFFICIENT ALGORITHM TO DISCOVER SOCIALLY IMPORTANT LOCATIONS
Directory of Open Access Journals (Sweden)
A. S. Dokuz
2017-11-01
Full Text Available Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.
Fast SS-ILM: A Computationally Efficient Algorithm to Discover Socially Important Locations
Dokuz, A. S.; Celik, M.
2017-11-01
Socially important locations are places which are frequently visited by social media users in their social media lifetime. Discovering socially important locations provides valuable information about user behaviour on social media networking sites. However, discovering socially important locations is challenging due to data volume and dimensions, spatial and temporal calculations, location sparseness in social media datasets, and the inefficiency of current algorithms. In the literature, several studies have been conducted to discover important locations; however, the proposed approaches do not work in a computationally efficient manner. In this study, we propose the Fast SS-ILM algorithm, a modification of the SS-ILM algorithm, to mine socially important locations efficiently. Experimental results show that the proposed Fast SS-ILM algorithm decreases the execution time of the socially important location discovery process by up to 20%.
An efficient algorithm for orbital evolution of space debris
Abdel-Aziz, Y.; Abd El-Salam, F.
More than four decades of space exploration have led to the accumulation of significant quantities of debris around the Earth. These objects range in size from a tiny piece of junk to a large inoperable satellite; although small in size, they have high area-to-mass ratios, and consequently their orbits are strongly influenced by solar radiation pressure and atmospheric drag. The growing population of space debris objects in LEO, MEO and GEO thus presents an increasingly serious hazard for the survival of operating spacecraft, particularly satellites and astronomical observatories. The average collision velocity between a spacecraft orbiting in LEO and debris objects is about 10 km/s, and about 3 km/s in GEO. Space debris may significantly disturb satellite operations or cause catastrophic damage to a spacecraft itself. By applying different shielding techniques, spacecraft may be protected against impacts of space debris with diameters smaller than 1 cm. For larger debris objects, the only effective method to avoid the catastrophic consequences of a collision is a manoeuvre that changes the spacecraft orbit. The necessary condition in this case is to evaluate and predict future positions of the spacecraft and space debris with sufficient accuracy. Numerical integration of the equations of motion has been used until now. Existing analytical methods can solve this problem only with low accuracy. Difficulties are caused mainly by the lack of a satisfying analytical solution of the resonance problem for geosynchronous orbit, as well as by the lack of an efficient analytical theory combining luni-solar perturbations and solar radiation pressure with geopotential attraction. Numerical integration is time consuming in some cases, and for qualitative analysis of the motion of satellites and debris it is necessary to apply an analytical solution. This is the reason for searching for an accurate model to evaluate the orbital position of the operating
Energy Efficient Routing Algorithms in Dynamic Optical Core Networks with Dual Energy Sources
DEFF Research Database (Denmark)
Wang, Jiayuan; Fagertun, Anna Manolova; Ruepp, Sarah Renée
2013-01-01
This paper proposes new energy efficient routing algorithms in optical core networks, with the application of solar energy sources and bundled links. A comprehensive solar energy model is described in the proposed network scenarios. Network performance in energy savings, connection blocking...... probability, resource utilization and bundled link usage are evaluated with dynamic network simulations. Results show that algorithms proposed aiming for reducing the dynamic part of the energy consumption of the network may raise the fixed part of the energy consumption meanwhile....
Jin, Dakai; Iyer, Krishna S; Chen, Cheng; Hoffman, Eric A; Saha, Punam K
2015-01-01
Conventional curve skeletonization algorithms using the principle of Blum's transform often produce unwanted spurious branches due to boundary irregularities, digital effects, and other artifacts. This paper presents a new robust and efficient curve skeletonization algorithm for three-dimensional (3-D) elongated fuzzy objects using a minimum cost path approach, which avoids spurious branches without requiring post-pruning. Starting from a root voxel, the method iteratively expands the ske...
Ebeling, H; White, D.A.; Rangarajan, F. V. N.
2006-01-01
An efficient algorithm for adaptive kernel smoothing (AKS) of two-dimensional imaging data has been developed and implemented using the Interactive Data Language (IDL). The functional form of the kernel can be varied (top-hat, Gaussian etc.) to allow different weighting of the event counts registered within the smoothing region. For each individual pixel the algorithm increases the smoothing scale until the signal-to-noise ratio (s.n.r.) within the kernel reaches a preset value. Thus, noise i...
Directory of Open Access Journals (Sweden)
Yao-Liang Chung
2016-11-01
Full Text Available The simultaneous aggregation of multiple component carriers (CCs) for use by a base station constitutes one of the more promising strategies for providing substantially enhanced bandwidths for packet transmissions in 4th and 5th generation cellular systems. To the best of our knowledge, however, few previous studies have undertaken a thorough investigation of various performance aspects of the use of a simple yet effective packet scheduling algorithm in which multiple CCs are aggregated for transmission in such systems. Consequently, the present study presents an efficient packet scheduling algorithm designed on the basis of the proportional fair criterion for use in multiple-CC systems for downlink transmission. The proposed algorithm includes a focus on providing simultaneous transmission support for both real-time (RT) and non-RT traffic. This algorithm can, when applied with sufficiently efficient designs, provide adequate utilization of spectrum resources for the purposes of transmissions, while also improving energy efficiency to some extent. According to simulation results, the performance of the proposed algorithm in terms of system throughput, mean delay, and fairness constitutes a substantial improvement over that of an algorithm in which the CCs are used independently instead of being aggregated.
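A minimal sketch of the proportional fair criterion the algorithm builds on: each slot serves the user with the highest ratio of instantaneous rate to smoothed average rate. The EWMA smoothing factor and the rate trace are assumptions for illustration; the paper's scheduler additionally handles multiple CCs and RT/non-RT traffic classes.

```python
def pf_schedule(rate_trace, alpha=0.1):
    # rate_trace[t][u] = achievable rate of user u in slot t.
    n_users = len(rate_trace[0])
    avg = [1e-6] * n_users              # tiny floor avoids division by zero
    served = []
    for slot_rates in rate_trace:
        # Proportional fair pick: maximize instantaneous / average rate.
        pick = max(range(n_users), key=lambda u: slot_rates[u] / avg[u])
        served.append(pick)
        for u in range(n_users):
            got = slot_rates[u] if u == pick else 0.0
            avg[u] = (1 - alpha) * avg[u] + alpha * got
    return served

# User 0 always has the better channel, yet PF still grants user 1 slots
# once user 0's smoothed average rate has grown.
trace = [[10.0, 5.0]] * 20
order = pf_schedule(trace)
print(order.count(0), order.count(1))
```

This trade-off between throughput and fairness is exactly why PF is a natural base criterion when CCs are aggregated: the metric generalizes to summing per-CC rates for each user.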
An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks
Bayer, Christian
2016-01-06
In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and it is intended to provide a starting point for the second phase which is the Monte Carlo EM Algorithm.
TREAT: A New and Efficient Match Algorithm for AI Production Systems
Miranker, Daniel P
1988-01-01
TREAT: A New and Efficient Match Algorithm for AI Production Systems describes the architecture and software systems embodying the DADO machine, a parallel tree-structured computer designed to provide significant performance improvements over serial computers of comparable hardware complexity in the execution of large expert systems implemented in production system form.This book focuses on TREAT as a match algorithm for executing production systems that is presented and comparatively analyzed with the RETE match algorithm. TREAT, originally designed specifically for the DADO machine architect
A high-efficient significant coefficient scanning algorithm for 3-D embedded wavelet video coding
Song, Haohao; Yu, Songyu; Song, Li; Xiong, Hongkai
2005-07-01
3-D embedded wavelet video coding (3-D EWVC) algorithms have become a vital scheme for state-of-the-art scalable video coding. A major objective in a progressive transmission scheme is to transmit first the most important information, which yields the largest distortion reduction, so traditional 3-D EWVC algorithms scan coefficients in bit-plane order. For significant bits within the same bit-plane, however, these algorithms neglect the fact that coefficients in different subbands contribute differently to distortion. In this paper, we analyze the differing contribution to distortion of significant information bits of the same bit-plane in different subbands and propose a highly efficient significant coefficient scanning algorithm. Experimental results with 3-D SPIHT and 3-D SPECK show that the proposed scanning algorithm can improve the compression performance of traditional 3-D EWVC algorithms, yielding reconstructed videos with higher PSNR and better visual quality at the same bit rate compared to the original significant coefficient scanning algorithms.
On-the-fly Uniformization of Time-Inhomogeneous Infinite Markov Population Models
Directory of Open Access Journals (Sweden)
Aleksandr Andreychenko
2011-07-01
Full Text Available This paper presents an on-the-fly uniformization technique for the analysis of time-inhomogeneous Markov population models. This technique is applicable to models with infinite state spaces and unbounded rates, which are, for instance, encountered in the realm of biochemical reaction networks. To deal with the infinite state space, we dynamically maintain a finite subset of the states where most of the probability mass is located. This approach yields an underapproximation of the original, infinite system. We present experimental results to show the applicability of our technique.
A highly efficient multi-core algorithm for clustering extremely large datasets
Directory of Open Access Journals (Sweden)
Kraus Johann M
2010-04-01
Full Text Available Abstract Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
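The parallelization pattern, splitting the expensive assignment step of k-means across workers and merging the partial labelings, can be sketched as follows. The thread pool is purely illustrative (CPython's GIL limits the actual speedup here); the paper's Java implementation relies on transactional-memory design principles instead.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def assign_chunk(points, centroids):
    # Label each point in this chunk with the index of its nearest centroid.
    return [min(range(len(centroids)), key=lambda c: math.dist(p, centroids[c]))
            for p in points]

def kmeans(points, centroids, iters=10, workers=4):
    for _ in range(iters):
        # Assignment step, split across workers by striding the point list.
        chunks = [points[w::workers] for w in range(workers)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            parts = list(pool.map(assign_chunk, chunks, [centroids] * workers))
        # Stitch the strided chunks back into original point order.
        labels = [0] * len(points)
        for w, part in enumerate(parts):
            for j, lab in enumerate(part):
                labels[w + j * workers] = lab
        # Update step: move each centroid to the mean of its members.
        for c in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

pts = [[0.0, 0.0], [0.1, 0.0], [9.0, 9.0], [9.1, 9.0]]
cents, labels = kmeans(pts, [[0.0, 0.1], [9.0, 9.1]])
print(labels)  # [0, 0, 1, 1]: the two clusters are recovered
```

The update step is the shared-memory synchronization point; that is where a transactional-memory design pays off when many cores write cluster statistics concurrently.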
3D video coding for embedded devices energy efficient algorithms and architectures
Zatt, Bruno; Bampi, Sergio; Henkel, Jörg
2013-01-01
This book shows readers how to develop energy-efficient algorithms and hardware architectures to enable high-definition 3D video coding on resource-constrained embedded devices. Users of the Multiview Video Coding (MVC) standard face the challenge of exploiting its 3D video-specific coding tools for increasing compression efficiency at the cost of increasing computational complexity and, consequently, the energy consumption. This book enables readers to reduce the multiview video coding energy consumption through jointly considering the algorithmic and architectural levels. Coverage includes an introduction to 3D videos and an extensive discussion of the current state-of-the-art of 3D video coding, as well as energy-efficient algorithms for 3D video coding and energy-efficient hardware architecture for 3D video coding. · Discusses challenges related to performance and power in 3D video coding for embedded devices; · Describes energy-efficient algorithms for reduci...
Lin, Dejun
2015-09-21
Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4-16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A
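The kernel invariance described above can be written compactly. In generic Cartesian multipole notation (assumed here for illustration; the paper's exact conventions may differ), the interaction tensors for any kernel f(r) are successive gradients of the same scalar function:

```latex
% Kernel-invariant interaction tensors: each higher multipole order
% applies one more Cartesian gradient to the same kernel f(r).
\begin{align*}
T                          &= f(r)                               && \text{charge--charge}\\
T_{\alpha}                 &= \nabla_{\alpha} f(r)               && \text{charge--dipole}\\
T_{\alpha\beta}            &= \nabla_{\alpha}\nabla_{\beta} f(r) && \text{dipole--dipole}\\
T_{\alpha_1\cdots\alpha_n} &= \nabla_{\alpha_1}\cdots\nabla_{\alpha_n} f(r) && \text{general rank } n
\end{align*}
```

Because only f(r) changes between kernels, the same recursion over gradients serves Coulomb, screened, or any other distance-dependent Green's function, which is the invariance the algorithm exploits.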
Lin, Dejun
2015-09-01
Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4-16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A
Lougovski, A.; Hofheinz, F.; Maus, J.; Schramm, G.; Will, E.; van den Hoff, J.
2014-02-01
The aim of this study is the evaluation of on-the-fly volume-of-intersection computation for system geometry modelling in 3D PET image reconstruction. For this purpose we propose a simple geometrical model in which the cubic image voxels on the given Cartesian grid are approximated with spheres and the rectangular tubes of response (ToRs) are approximated with cylinders. The model was integrated into a fully 3D list-mode PET reconstruction for performance evaluation. In our model the volume of intersection between a voxel and the ToR is only a function of the impact parameter (the distance from the voxel centre to the ToR axis) and is independent of the relative orientation of voxel and ToR. This substantially reduces the computational complexity of the system matrix calculation. Based on phantom measurements it was determined that adjusting the diameters of the spherical voxels and the ToRs in such a way that the actual voxel and ToR volumes are conserved leads to the best compromise between high spatial resolution, low noise, and suppression of Gibbs artefacts in the reconstructed images. Phantom as well as clinical datasets from two different PET systems (Siemens ECAT HR+ and Philips Ingenuity-TF PET/MR) were processed using the developed and the respective vendor-provided (line-of-intersection related) reconstruction algorithms. A comparison of the reconstructed images demonstrated very good performance of the new approach. The evaluation showed the respective vendor-provided reconstruction algorithms to possess 34-41% lower resolution compared to the developed one while exhibiting comparable noise levels. Contrary to explicit point spread function modelling, our model has a simple, straightforward implementation and should be easy to integrate into existing reconstruction software, making it competitive with other existing resolution recovery techniques.
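The key simplification, that the voxel-ToR overlap depends only on the impact parameter, can be sketched as follows. The distance computation is standard geometry; the linear weight profile is a hypothetical placeholder for the actual sphere-cylinder intersection volume used in the paper.

```python
import math

def impact_parameter(voxel_centre, tor_point, tor_dir):
    # Distance from the voxel centre to the ToR axis: |(p - a) x d| / |d|,
    # where a is a point on the axis and d its direction vector.
    px, py, pz = (voxel_centre[i] - tor_point[i] for i in range(3))
    dx, dy, dz = tor_dir
    cx, cy, cz = py * dz - pz * dy, pz * dx - px * dz, px * dy - py * dx
    return (math.sqrt(cx * cx + cy * cy + cz * cz)
            / math.sqrt(dx * dx + dy * dy + dz * dz))

def overlap_weight(b, r_voxel, r_tor):
    # Hypothetical placeholder profile: any function of the impact
    # parameter b alone captures the model's key property. The real
    # system matrix entry is the sphere-cylinder intersection volume,
    # not this linear ramp.
    return max(0.0, 1.0 - b / (r_voxel + r_tor))

b = impact_parameter((1.0, 1.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(b)  # distance of the point (1,1,0) from the z-axis
```

Because orientation drops out, the weight can be tabulated once as a function of b, which is what makes the on-the-fly system matrix calculation cheap.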
Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok
2017-04-19
The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms.
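The core mechanism, shortening the route-request interval for fast-moving nodes and lengthening it for static ones, might look like the following sketch. The linear form and all constants are assumptions for illustration; the paper's algorithm also weighs motion direction and the costs between neighboring nodes.

```python
def request_interval(speed_mps, base_s=60.0, k=2.0, floor_s=2.0):
    # Higher speed -> shorter interval between route-request (e.g. DIS)
    # messages, clamped to a minimum so signalling stays bounded.
    return max(floor_s, base_s / (1.0 + k * speed_mps))

# A static node keeps the long base interval; a fast mover probes often.
for v in (0.0, 1.0, 10.0):
    print(v, request_interval(v))
```

Adapting the interval this way spends energy on control messages only when topology is actually likely to change, which is the source of the reported savings.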
An Efficient Feature Subset Selection Algorithm for Classification of Multidimensional Dataset.
Devaraj, Senthilkumar; Paulraj, S
2015-01-01
Multidimensional medical data classification has recently received increased attention from researchers working on machine learning and data mining. In a multidimensional dataset (MDD) each instance is associated with multiple class values. Due to its complex nature, feature selection and classifiers built from the MDD are typically more expensive or time-consuming. Therefore, we need a robust feature selection technique for selecting the optimum single subset of the features of the MDD for further analysis or to design a classifier. In this paper, an efficient feature selection algorithm is proposed for the classification of MDD. The proposed multidimensional feature subset selection (MFSS) algorithm yields a unique feature subset for further analysis or to build a classifier, and there is a computational advantage on MDD compared with the existing feature selection algorithms. The proposed work is applied to benchmark multidimensional datasets. Using the proposed MFSS, the number of features was reduced to between 3% and 30% of the original feature set. In conclusion, the study results show that MFSS is an efficient feature selection algorithm that does not affect the classification accuracy even for the reduced number of features. The proposed MFSS algorithm is also suitable for both problem transformation and algorithm adaptation, and it has great potential in applications generating multidimensional datasets.
Implementation of Efficient seamless non-broadcast Routing algorithm for Wireless Mesh Network
Kbar, Ghassan; Mansoor, Wathiq
Wireless Mesh Networks have become popular and are used everywhere as an alternative to broadband connections. The ease of configuration of wireless mesh LANs, the mobility of clients, and the large coverage make them an attractive choice for supporting wireless technology in LANs and MANs. However, there are some concerns in assigning multiple channels to different nodes and in having an efficient routing algorithm to route packets seamlessly without affecting network performance. Multiple channel usage has been addressed in previous research papers, but efficient routing algorithms remain to be researched. In this paper an efficient seamless non-broadcast routing algorithm has been developed and implemented in C++ to support wireless mesh networks. This algorithm is based on mapping the mesh wireless routing nodes geographically according to two coordinates. Each node applies this algorithm to find the closest neighboring node that leads to the destination, based on the mapped network, without the need for broadcasts such as those used in traditional routing protocols like RIP and OSPF.
Efficient frequent pattern mining algorithm based on node sets in cloud computing environment
Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.
2017-11-01
The ultimate goal of Data Mining is to determine the hidden information which is useful in making decisions using the large databases collected by an organization. Data Mining involves many tasks that are to be performed during the process. Mining frequent itemsets is one of the most important tasks in the case of transactional databases. These transactional databases contain data at a very large scale, where mining these databases involves the consumption of physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is said to be efficient only if it consumes less memory and time to mine the frequent itemsets from the given large database. With these points in mind, we propose a system which mines frequent itemsets in an optimized way in terms of memory and time, using cloud computing to make the process parallel and providing the application as a service. The complete framework uses a proven, efficient algorithm called the FIN algorithm, which works on Nodesets and a POC (pre-order coding) tree. In order to evaluate the performance of the system we conduct experiments to compare the efficiency of the same algorithm applied in a standalone manner and in a cloud computing environment, on a real-time data set of traffic accidents. The results show that the memory consumption and execution time taken for the process in the proposed system are much lower than those of the standalone system.
PARM--an efficient algorithm to mine association rules from spatial data.
Ding, Qin; Ding, Qiang; Perrizo, William
2008-12-01
Association rule mining, originally proposed for market basket data, has potential applications in many areas. Spatial data, such as remote sensed imagery (RSI) data, is one of the promising application areas. Extracting interesting patterns and rules from spatial data sets, composed of images and associated ground data, can be of importance in precision agriculture, resource discovery, and other areas. However, in most cases, the sizes of the spatial data sets are too large to be mined in a reasonable amount of time using existing algorithms. In this paper, we propose an efficient approach to derive association rules from spatial data using Peano Count Tree (P-tree) structure. P-tree structure provides a lossless and compressed representation of spatial data. Based on P-trees, an efficient association rule mining algorithm PARM with fast support calculation and significant pruning techniques is introduced to improve the efficiency of the rule mining process. The P-tree based Association Rule Mining (PARM) algorithm is implemented and compared with FP-growth and Apriori algorithms. Experimental results showed that our algorithm is superior for association rule mining on RSI spatial data.
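For contrast with the P-tree approach, the support-counting logic that any association rule miner shares can be sketched with a plain Apriori pass over in-memory transactions (PARM's gain is computing these supports from compressed P-trees instead of rescanning the data):

```python
from itertools import combinations

def apriori(transactions, min_support):
    # Plain Apriori: grow candidate itemsets level by level, keeping only
    # those whose support (fraction of transactions containing them)
    # meets the threshold.
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    def support(itemset):
        return sum(itemset <= t for t in transactions) / n
    frequent = {}
    current = [frozenset([i]) for i in sorted({i for t in transactions for i in t})]
    size = 1
    while current:
        kept = [s for s in current if support(s) >= min_support]
        frequent.update((s, support(s)) for s in kept)
        size += 1
        # Join step plus Apriori pruning: drop candidates with any
        # infrequent (size-1)-subset.
        cands = {a | b for a in kept for b in kept if len(a | b) == size}
        current = [s for s in cands
                   if all(frozenset(c) in frequent
                          for c in combinations(s, size - 1))]
    return frequent

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
freq = apriori(tx, min_support=0.5)
print(sorted("".join(sorted(s)) for s in freq))  # ['a', 'ab', 'ac', 'b', 'bc', 'c']
```

On RSI-scale data this rescanning is exactly what becomes prohibitive, which motivates the lossless compressed P-tree representation with fast support calculation.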
On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial.
Andress, Sebastian; Johnson, Alex; Unberath, Mathias; Winkler, Alexander Felix; Yu, Kevin; Fotouhi, Javad; Weidert, Simon; Osgood, Greg; Navab, Nassir
2018-04-01
Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multimodality marker and a simultaneous localization and mapping technique to co-calibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D x-ray images can be rendered as virtual objects in 3-D, providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, design a feasibility study on a semi-anthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.
Indian Academy of Sciences (India)
positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms, such as synthetic (polynomial) division, have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used.
Directory of Open Access Journals (Sweden)
Anupam Mittal
2016-07-01
Today, breakthroughs in wireless technologies have greatly spurred the emergence of industrial wireless sensor networks (IWSNs). To facilitate the adaptation of IWSNs to industrial applications, concerns about full network coverage and connectivity must be addressed to fulfill reliability and real-time requirements. Although connected target coverage algorithms have been studied, both the limitations and the applicability of the various coverage approaches deserve attention from an industry viewpoint. This paper discusses two energy-efficient connected target coverage (CTC) algorithms, CWGC (Communication Weighted Greedy Cover) and OTTC (Overlapped Target and Connected Coverage), based on dynamic nodes, to address the problem of coverage improvement. MATLAB simulations are used to show that the two CTC algorithms with dynamic nodes improve network lifetime, energy consumption, and quality of service, and the results for dynamic nodes are compared with those for static nodes.
Efficient heuristic algorithm used for optimal capacitor placement in distribution systems
Energy Technology Data Exchange (ETDEWEB)
Segura, Silvio; Rider, Marcos J. [Department of Electric Energy Systems, University of Campinas, Campinas, Sao Paulo (Brazil); Romero, Ruben [Faculty of Engineering of Ilha Solteira, Paulista State University, Ilha Solteira, Sao Paulo (Brazil)
2010-01-15
An efficient heuristic algorithm is presented in this work to solve the optimal capacitor placement problem in radial distribution systems. The proposal uses the solution of the mathematical model, after relaxing the integrality of the discrete variables, as a strategy to identify the most attractive bus at which to add capacitors in each step of the heuristic algorithm. The relaxed mathematical model is a non-linear programming problem and is solved using a specialized interior point method. The algorithm also incorporates an additional local search strategy that enables finding a group of quality solutions after small alterations in the optimization strategy. The proposed solution methodology has been implemented and tested on electric systems known in the specialized literature, with a satisfactory outcome compared with metaheuristic methods. (author)
A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm
Mohanty, Prases K.; Parhi, Dayal R.
2014-12-01
Planning of the shortest/optimal route is essential for the efficient operation of an autonomous mobile robot or vehicle. In this paper, Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First, we frame an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending on the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to any kind of complex situation.
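The objective function described in the abstract (obstacle avoidance plus target seeking) can be sketched as a fitness for a candidate waypoint, the "weed" that IWO would evolve. The penalty form, safe radius, and weights below are assumptions for illustration, not the paper's.

```python
import math

def path_objective(point, goal, obstacles, safe_radius=1.0, penalty=100.0):
    """Fitness of a candidate waypoint: distance to the target plus a
    penalty for coming closer than safe_radius to any obstacle.

    A hedged sketch of the kind of objective the paper frames for IWO;
    the weights and penalty shape are illustrative.
    """
    x, y = point
    gx, gy = goal
    cost = math.hypot(gx - x, gy - y)           # target-seeking term
    for ox, oy in obstacles:
        d = math.hypot(ox - x, oy - y)
        if d < safe_radius:                     # obstacle-avoidance term
            cost += penalty * (safe_radius - d)
    return cost

# a waypoint that clears the obstacle scores better than one inside it
clear = path_objective((0.0, 2.0), (5.0, 0.0), [(1.0, 0.0)])
blocked = path_objective((1.2, 0.0), (5.0, 0.0), [(1.0, 0.0)])
```

An IWO (or any other metaheuristic) loop would then minimize this fitness over candidate waypoints.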
An efficient algorithm for DNA fragment assembly in MapReduce.
Xu, Baomin; Gao, Jin; Li, Chunyan
2012-09-28
Fragment assembly is one of the most important problems of sequence assembly. Algorithms for DNA fragment assembly using the de Bruijn graph have been widely used. These algorithms require a large amount of memory and running time to build the de Bruijn graph. Another drawback of the conventional de Bruijn approach is the loss of information. To overcome these shortcomings, this paper proposes a parallel strategy to construct the de Bruijn graph. Its main characteristic is to avoid the division of the de Bruijn graph. A novel fragment assembly algorithm based on our parallel strategy is implemented in the MapReduce framework. The experimental results show that the parallel strategy can effectively improve the computational efficiency and remove the memory limitations of the assembly algorithm based on the Euler superpath. This paper provides a useful attempt at the assembly of large-scale genome sequences using cloud computing. Copyright © 2012 Elsevier Inc. All rights reserved.
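A serial sketch of the core data structure helps fix ideas: nodes are (k-1)-mers and edges are k-mers drawn from the reads. The paper's actual contribution, the MapReduce partitioning strategy that avoids dividing the graph, is not reproduced here.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, directed edges
    connect the prefix and suffix (k-1)-mers of each k-mer in the reads.

    Minimal serial sketch of the structure the paper builds in parallel.
    """
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # edge: prefix -> suffix
    return graph

g = de_bruijn(["ACGTACG"], k=3)
```

For the single read above, the repeated 3-mer ACG yields a duplicated edge AC→CG, which is exactly the multiplicity information an Euler-path assembler consumes.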
An Efficient Return Algorithm for Non-Associated Mohr-Coulomb Plasticity
DEFF Research Database (Denmark)
Clausen, Johan Christian; Damkilde, Lars; Andersen, Lars
2005-01-01
An efficient return algorithm for stress update in numerical plasticity computations is presented. The yield criterion must be linear in principal stress space, and can be composed of any number of yield planes. Each of these yield planes can have an associated or non-associated flow rule...... considerations. The method is exemplified on non-associated Mohr-Coulomb plasticity throughout the paper....
An Efficient Rank Adaptation Algorithm for Cellular MIMO Systems with IRC Receivers
DEFF Research Database (Denmark)
Mahmood, Nurul Huda; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão
2014-01-01
of linear interference rejection combining (IRC) receivers. Typically, rank adaptation algorithms are aimed at balancing the trade-off between increasing the spatial gain, and improving the interference resilience property. In this paper, we propose an efficient and computationally effective rank adaptation...
Al-Mayouf, Yusor Rafid Bahar; Ismail, Mahamod; Abdullah, Nor Fadzilah; Wahab, Ainuddin Wahid Abdul; Mahdi, Omar Adil; Khan, Suleman; Choo, Kim-Kwang Raymond
2016-01-01
Vehicular ad hoc networks (VANETs) are considered an emerging technology in the industrial and educational fields. This technology is essential in the deployment of the intelligent transportation system, which is targeted to improve safety and efficiency of traffic. The implementation of VANETs can be effectively executed by transmitting data among vehicles with the use of multiple hops. However, the intrinsic characteristics of VANETs, such as its dynamic network topology and intermittent connectivity, limit data delivery. One particular challenge of this network is the possibility that the contributing node may only remain in the network for a limited time. Hence, to prevent data loss from that node, the information must reach the destination node via multi-hop routing techniques. An appropriate, efficient, and stable routing algorithm must be developed for various VANET applications to address the issues of dynamic topology and intermittent connectivity. Therefore, this paper proposes a novel routing algorithm called efficient and stable routing algorithm based on user mobility and node density (ESRA-MD). The proposed algorithm can adapt to significant changes that may occur in the urban vehicular environment. This algorithm works by selecting an optimal route on the basis of hop count and link duration for delivering data from source to destination, thereby satisfying various quality of service considerations. The validity of the proposed algorithm is investigated by its comparison with ARP-QD protocol, which works on the mechanism of optimal route finding in VANETs in urban environments. Simulation results reveal that the proposed ESRA-MD algorithm shows remarkable improvement in terms of delivery ratio, delivery delay, and communication overhead. PMID:27855165
An efficient clustering algorithm for partitioning Y-short tandem repeats data
Directory of Open Access Journals (Sweden)
Seman Ali
2012-10-01
Background: Y-Short Tandem Repeats (Y-STR) data consist of many similar and almost similar objects. This characteristic of Y-STR data causes two problems with partitioning: non-unique centroids and local minima problems. As a result, the existing partitioning algorithms produce poor clustering results. Results: Our new algorithm, called k-Approximate Modal Haplotypes (k-AMH), obtains the highest clustering accuracy scores for five out of six datasets, and produces an equal performance for the remaining dataset. Furthermore, clustering accuracy scores of 100% are achieved for two of the datasets. The k-AMH algorithm records the highest mean accuracy score of 0.93 overall, compared to that of the other algorithms: k-Population (0.91), k-Modes-RVF (0.81), New Fuzzy k-Modes (0.80), k-Modes (0.76), k-Modes-Hybrid 1 (0.76), k-Modes-Hybrid 2 (0.75), Fuzzy k-Modes (0.74), and k-Modes-UAVM (0.70). Conclusions: The partitioning performance of the k-AMH algorithm for Y-STR data is superior to that of the other algorithms, owing to its ability to solve the non-unique centroids and local minima problems. Our algorithm is also efficient in terms of time complexity, which is recorded as O(km(n-k)) and considered to be linear.
An Efficient Adaptive Load Balancing Algorithm for Cloud Computing Under Bursty Workloads
Directory of Open Access Journals (Sweden)
S. F. Issawi
2015-06-01
Cloud computing is a recent, emerging technology in the IT industry. It is an evolution of previous models such as grid computing. It enables a wide range of users to access a large shared pool of resources over the internet. In such a complex system, there is a tremendous need for an efficient load balancing scheme in order to satisfy peak user demands and provide high quality of service. One of the challenging problems that degrade the performance of a load balancing process is bursty workloads. Although many studies have proposed different load balancing algorithms, most of them neglect the problem of bursty workloads. Motivated by this problem, this paper proposes a new burstiness-aware load balancing algorithm which can adapt to variations in the request rate by adopting two load balancing algorithms: round-robin (RR) in the burst state and Random in the non-burst state. Fuzzy logic is used to assign each received request to a balanced VM. The algorithm has been evaluated and compared with other algorithms using the Cloud Analyst simulator. Results show that the proposed algorithm improves the average response time and average processing time in comparison with other algorithms.
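The state-switching policy described in the abstract (round-robin under bursts, random assignment otherwise) can be sketched directly. The burst detector and the fuzzy-logic VM selection of the paper are replaced here by a simple request-rate threshold, and all names are illustrative.

```python
import random

class BurstAwareBalancer:
    """Switch load-balancing policy by workload state: round-robin
    during bursts (fair, predictable), random otherwise.

    Simplified sketch: the paper detects bursts from the request stream
    and uses fuzzy logic to pick a balanced VM; here a fixed threshold
    on the observed request rate stands in for both.
    """
    def __init__(self, vms, burst_threshold):
        self.vms = vms
        self.burst_threshold = burst_threshold
        self._rr = 0

    def assign(self, request_rate):
        if request_rate >= self.burst_threshold:     # burst state: RR
            vm = self.vms[self._rr % len(self.vms)]
            self._rr += 1
        else:                                        # non-burst: Random
            vm = random.choice(self.vms)
        return vm

lb = BurstAwareBalancer(["vm0", "vm1", "vm2"], burst_threshold=100)
```

Under a burst (rate ≥ 100 here) requests cycle deterministically through the VMs; below the threshold they scatter randomly.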
Efficient Geo-Computational Algorithms for Constructing Space-Time Prisms in Road Networks
Directory of Open Access Journals (Sweden)
Hui-Ping Chen
2016-11-01
The space-time prism (STP) is a key concept in time geography for analyzing human activity-travel behavior under various space-time constraints. Most existing time-geographic studies use a straightforward algorithm to construct STPs in road networks by using two one-to-all shortest path searches. However, this straightforward algorithm can introduce considerable computational overhead, given that the accessible links in an STP are generally a small portion of the whole network. To address this issue, an efficient geo-computational algorithm, called NTP-A*, is proposed. The proposed NTP-A* algorithm employs the A* and branch-and-bound techniques to discard inaccessible links during the two shortest path searches, and thereby improves STP construction performance. Comprehensive computational experiments are carried out to demonstrate the computational advantage of the proposed algorithm. Several implementation techniques, including the label-correcting technique and the hybrid link-node labeling technique, are discussed and analyzed. Experimental results show that the proposed NTP-A* algorithm can significantly improve STP construction performance in large-scale road networks by a factor of 100, compared with existing algorithms.
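The search primitive behind NTP-A* is plain A* over a weighted road network; a minimal version is sketched below. The paper's branch-and-bound pruning of inaccessible links and its labeling techniques are omitted, and the tiny graph is illustrative.

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """Plain A* shortest-path search on a weighted directed graph.

    graph:  {node: [(neighbor, edge_cost), ...]}
    coords: {node: (x, y)} used for an admissible straight-line heuristic.
    Returns the cost of the cheapest start->goal path (inf if none).
    """
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    open_heap = [(h(start), 0.0, start)]     # (f = g + h, g, node)
    best = {start: 0.0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best.get(node, math.inf):     # stale heap entry
            continue
        for nbr, w in graph.get(node, []):
            ng = g + w
            if ng < best.get(nbr, math.inf):
                best[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr))
    return math.inf

graph = {"A": [("B", 1.0), ("C", 2.5)], "B": [("C", 1.0)], "C": []}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
cost = a_star(graph, coords, "A", "C")
```

On this graph the detour A→B→C (cost 2.0) beats the direct link A→C (cost 2.5).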
I/O-Efficient Algorithms for Problems on Grid-Based Terrains
DEFF Research Database (Denmark)
Arge, Lars Allan; Toma, Laura; Vitter, Jeffrey Scott
2001-01-01
The potential and use of Geographic Information Systems is rapidly increasing due to the increasing availability of massive amounts of geospatial data from projects like NASA's Mission to Planet Earth. However, the use of these massive datasets also exposes scalability problems with existing GIS algorithms. These scalability problems are mainly due to the fact that most GIS algorithms have been designed to minimize internal computation time, while I/O communication often is the bottleneck when processing massive amounts of data. In this paper, we consider I/O-efficient algorithms for problems on grid-based terrains. Detailed grid-based terrain data is rapidly becoming available for much of the earth's surface. We describe [EQUATION] I/O algorithms for several problems on [EQUATION] grids for which only O(N) algorithms were previously known. Here M denotes the size of the main memory and B ...... of the dataset becomes bigger than the available main memory. For example, while our algorithm computes the flow accumulation for the Appalachian Mountains in about three hours, the previously known algorithm takes several weeks.
Hybrid Swarm Intelligence Energy Efficient Clustered Routing Algorithm for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Rajeev Kumar
2016-01-01
Currently, wireless sensor networks (WSNs) are used in many applications, namely, environment monitoring, disaster management, industrial automation, and medical electronics. Sensor nodes carry many limitations like low battery life, small memory space, and limited computing capability. To make a wireless sensor network more energy efficient, swarm intelligence techniques have been applied to resolve many optimization issues in WSNs. In many existing clustering techniques, an artificial bee colony (ABC) algorithm is utilized to collect information from the field periodically. Nevertheless, in event-based applications, ant colony optimization (ACO) is a good solution to enhance the network lifespan. In this paper, we combine both algorithms (i.e., ABC and ACO) and propose a new hybrid ABCACO algorithm to solve a Nondeterministic Polynomial (NP)-hard and finite problem of WSNs. The ABCACO algorithm is divided into three main parts: (i) selection of the optimal number of subregions and further subregion parts, (ii) cluster head selection using the ABC algorithm, and (iii) efficient data transmission using the ACO algorithm. We use a hierarchical clustering technique for data transmission; the data is transmitted from member nodes to the subcluster heads and then from subcluster heads to the elected cluster heads based on some threshold value. Cluster heads use an ACO algorithm to discover the best route for data transmission to the base station (BS). The proposed approach is very useful in designing the framework for forest fire detection and monitoring. The simulation results show that the ABCACO algorithm enhances the stability period by 60% and also improves the goodput by 31% against LEACH and WSNCABC, respectively.
On-The-Fly Computation of Frontal Orbitals in Density Matrix Expansions.
Kruchinina, Anastasia; Rudberg, Elias; Rubensson, Emanuel H
2017-12-01
We propose a method for computation of frontal (homo and lumo) orbitals in recursive polynomial expansion algorithms for the density matrix. Such algorithms give a computational cost that increases only linearly with system size for sufficiently sparse systems, but a drawback compared to the traditional diagonalization approach is that molecular orbitals are not readily available. Our method is based on the idea of using the polynomial of the density matrix expansion as an eigenvalue filter giving large separation between eigenvalues around homo and lumo [J. Chem. Phys. 128, 176101 (2008)]. This filter is combined with a shift-and-square (folded spectrum) method to move the desired eigenvalue to the end of the spectrum. In this work we propose a transparent way to select the recursive expansion iteration and shift for the eigenvector computation that results in a sharp eigenvalue filter. The filter is obtained as a by-product of the density matrix expansion, and there is no significant additional cost associated either with its construction or with its application. This gives a clear-cut and efficient eigenvalue solver that can be used to compute homo and lumo orbitals with sufficient accuracy in a small fraction of the total recursive expansion time. Our algorithms make use of recent homo and lumo eigenvalue estimates that can be obtained at negligible cost [SIAM J. Sci. Comput. 36, B147 (2014)]. We illustrate our method by performing self-consistent field calculations for large scale systems.
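The shift-and-square (folded spectrum) idea is easy to demonstrate: the eigenvalue of A closest to a shift σ becomes the smallest eigenvalue of (A − σI)², so a solver for extremal eigenpairs can target interior states. The sketch below uses a dense eigendecomposition purely for illustration; the paper instead extracts the pair as a by-product of the recursive density-matrix expansion.

```python
import numpy as np

def folded_spectrum_vector(A, sigma):
    """Shift-and-square (folded spectrum): fold the spectrum of the
    symmetric matrix A about sigma so that the eigenvalue closest to
    sigma becomes the *smallest* eigenvalue of (A - sigma*I)^2.

    Dense eigh is used here only to make the idea concrete; any
    extremal-eigenpair solver applied to the folded matrix would do.
    """
    n = A.shape[0]
    shifted = A - sigma * np.eye(n)
    M = shifted @ shifted                 # folded matrix, shares eigenvectors with A
    w, V = np.linalg.eigh(M)              # eigenvalues ascending
    return V[:, 0]                        # eigenvector for eigenvalue nearest sigma

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                         # symmetric test matrix
v = folded_spectrum_vector(A, sigma=0.5)
lam = v @ A @ v                           # Rayleigh quotient: eigenvalue of A near 0.5
```

Because (A − σI)² shares eigenvectors with A, the recovered v is an exact eigenvector of A, and its Rayleigh quotient is the eigenvalue of A closest to the shift.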
Directory of Open Access Journals (Sweden)
Peter Domonkos
2013-01-01
Efficiency evaluations for the change point detection methods used in nine major objective homogenization methods (DOHMs) are presented. The evaluations are conducted using ten different simulated datasets and four efficiency measures: detection skill, skill of linear trend estimation, sum of squared error, and a combined efficiency measure. The test datasets applied have a diverse set of inhomogeneity (IH) characteristics and include one dataset that is similar to the monthly benchmark temperature dataset of the European benchmarking effort known by the acronym COST HOME. The performance of DOHMs is highly dependent on the characteristics of the test datasets and efficiency measures. Measures of skill differ markedly according to the frequency and mean duration of inhomogeneities and vary with the ratio of IH magnitudes to background noise. The study focuses on cases when high-quality relative time series (i.e., the difference between a candidate and a reference series) can be created, but the frequency and intensity of inhomogeneities are high. Results show that in these cases the Caussinus-Mestre method is the most effective, although appreciably good results can also be achieved by the use of several other DOHMs, such as the Multiple Analysis of Series for Homogenisation, the Bayes method, Multiple Linear Regression, and the Standard Normal Homogeneity Test.
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction, and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach was so far not developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
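The partition-then-aggregate structure can be sketched in strongly simplified form: split the signal at near-zero gaps, then fit one Gaussian per fragment by moment matching. The paper fits a full Gaussian *mixture* per fragment with EM; the threshold and the tiny test signal here are illustrative.

```python
import numpy as np

def partition_and_model(mz, intensity, eps=1e-3):
    """Partition a spectrum at near-zero gaps, model each fragment with
    a single Gaussian via moment matching, and aggregate the results.

    Simplified sketch of the paper's partition/decompose/aggregate idea;
    a real implementation would fit a mixture per fragment with EM.
    Returns a list of (mass, mean, stddev) triples, one per fragment.
    """
    active = intensity > eps
    components = []
    # boundaries of contiguous runs of active signal
    padded = np.concatenate(([0], active.view(np.int8), [0]))
    edges = np.flatnonzero(np.diff(padded))
    for start, stop in zip(edges[::2], edges[1::2]):
        x, y = mz[start:stop], intensity[start:stop]
        w = y / y.sum()
        mu = np.sum(w * x)
        sd = np.sqrt(np.sum(w * (x - mu) ** 2))
        components.append((y.sum(), mu, sd))
    return components

# toy "spectrum": two separated intensity bumps
mz = np.arange(10.0)
y = np.zeros(10)
y[2:4] = 1.0
y[6:9] = 2.0
comps = partition_and_model(mz, y)
```

Each bump becomes one independent fragment, so the per-fragment fits can run in parallel and are simply concatenated into the whole-spectrum model.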
An Efficient Algorithm for Clustering of Large-Scale Mass Spectrometry Data.
Saeed, Fahad; Pisitkun, Trairak; Knepper, Mark A; Hoffert, Jason D
2012-10-04
High-throughput spectrometers are capable of producing data sets containing thousands of spectra for a single biological sample. These data sets contain a substantial amount of redundancy from peptides that may get selected multiple times in an LC-MS/MS experiment. In this paper, we present an efficient algorithm, CAMS (Clustering Algorithm for Mass Spectra), for clustering mass spectrometry data which increases both the sensitivity and confidence of spectral assignment. CAMS utilizes a novel metric, called F-set, that allows accurate identification of the spectra that are similar. A graph theoretic framework is defined that allows the F-set metric to be used efficiently for accurate cluster identification. The accuracy of the algorithm is tested on real HCD and CID data sets with varying numbers of peptides. Our experiments show that the proposed algorithm is able to cluster spectra with very high accuracy in a reasonable amount of time for large spectral data sets. Thus, the algorithm is able to decrease the computational time by compressing the data sets while increasing the throughput of the data by interpreting low S/N spectra.
An efficient algorithm to compute row and column counts for sparse Cholesky factorization
Energy Technology Data Exchange (ETDEWEB)
Gilbert, J.R. (Xerox Palo Alto Research Center, CA (United States)); Ng, E.G.; Peyton, B.W. (Oak Ridge National Lab., TN (United States))
1992-09-01
Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.
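What the algorithm predicts can be pinned down with a brute-force reference: factor numerically and count the nonzeros of L, which is exactly the quantity the paper computes symbolically in almost-linear time without ever forming L. The arrowhead test matrix (dense first row and column) is chosen because it produces complete fill-in, so the counts are easy to verify by hand.

```python
import numpy as np

def factor_nonzero_counts(A, tol=1e-12):
    """Row and column nonzero counts of the Cholesky factor L of the
    SPD matrix A, obtained the slow way: factor and count.

    Illustrative reference only; the paper predicts these counts from
    the zero/nonzero structure alone, and numeric cancellation could in
    principle undercount on contrived inputs.
    """
    L = np.linalg.cholesky(A)
    nz = np.abs(L) > tol
    return nz.sum(axis=1), nz.sum(axis=0)   # per-row, per-column counts

# arrowhead matrix: dense first row/column, diagonal elsewhere.
# Eliminating the first (dense) column fills in the entire factor.
n = 5
A = np.eye(n) * 4.0
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = 4.0
row_counts, col_counts = factor_nonzero_counts(A)
```

With the arrow at the top, L is a fully dense lower triangle, so row i holds i+1 nonzeros and column j holds n−j; an exact storage prediction like this is what makes allocation for the sparse factorization efficient.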
MODA: an efficient algorithm for network motif discovery in biological networks.
Omidi, Saeed; Schreiber, Falk; Masoudi-Nejad, Ali
2009-10-01
In recent years, interest has been growing in the study of complex networks. Since Erdős and Rényi (1960) proposed their random graph model about 50 years ago, many researchers have investigated and shaped this field. Many indicators have been proposed to assess the global features of networks. Recently, an active research area has developed in studying local features named motifs as the building blocks of networks. Unfortunately, network motif discovery is a computationally hard problem and finding rather large motifs (larger than 8 nodes) by means of current algorithms is impractical as it demands too much computational effort. In this paper, we present a new algorithm (MODA) that incorporates techniques such as a pattern growth approach for extracting larger motifs efficiently. We have tested our algorithm and found it able to identify larger motifs with more than 8 nodes more efficiently than most of the current state-of-the-art motif discovery algorithms. While most of the algorithms rely on induced subgraphs as motifs of the networks, MODA is able to extract both induced and non-induced subgraphs simultaneously. The MODA source code is freely available at: http://LBB.ut.ac.ir/Download/LBBsoft/MODA/
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
Li, Ning; Cürüklü, Baran; Bastos, Joaquim; Sucasas, Victor; Fernandez, Jose Antonio Sanchez; Rodriguez, Jonathan
2017-01-01
The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remote operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide an efficient and reliable communication network for mission execution, one of the important and necessary issues is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays, low bandwidth, etc., the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power once the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, then whether the transmission power needs to be adjusted or not will be determined based on the AUV’s parameters. Each AUV determines their own transmission power adjustment probability based on the parameter deviations. The larger the deviation, the higher the transmission power adjustment probability is, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms. The simulation results have demonstrated that the PTC is efficient at reducing the transmission power
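The probabilistic adjustment rule described in the abstract can be sketched as a monotone map from power deviation to adjustment probability: the larger the deviation from the optimum, the more likely the AUV is to pay the cost of re-tuning its radio. The exact mapping is not given in the abstract, so the bounded linear form and the parameter names below are assumptions.

```python
import random

def adjust_probability(current_power, optimal_power, sensitivity=1.0):
    """Map the deviation between current and optimal transmission power
    to an adjustment probability in [0, 1].

    Assumed form (not the paper's): probability grows linearly with the
    relative deviation, capped at 1.
    """
    deviation = abs(current_power - optimal_power) / max(optimal_power, 1e-9)
    return min(1.0, sensitivity * deviation)

def maybe_adjust(current_power, optimal_power, rng=random):
    """PTC-style step: probabilistically snap the power to its optimum,
    leaving small deviations mostly untouched to save adjustment cost."""
    if rng.random() < adjust_probability(current_power, optimal_power):
        return optimal_power
    return current_power
```

A node far from its optimum (deviation ≥ 100%) always adjusts, while a node already at its optimum never does, which is the trade-off between transmission cost and adjustment cost the PTC scheme targets.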
Directory of Open Access Journals (Sweden)
Ning Li
2017-05-01
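The probabilistic adjustment rule described in the abstract above can be sketched as follows. The linear deviation-to-probability mapping and the `max_deviation` normalizer are illustrative assumptions, not the paper's exact formula, which derives the probability from several AUV parameters:

```python
import random

def adjustment_probability(current_power, optimal_power, max_deviation):
    """Map the power deviation to an adjustment probability in [0, 1].
    The linear mapping is an assumption for illustration only."""
    deviation = abs(current_power - optimal_power)
    return min(deviation / max_deviation, 1.0)

def ptc_step(current_power, optimal_power, max_deviation, rng=random.random):
    """One PTC decision: adjust transmission power only with a
    probability that grows with the deviation, so small deviations
    rarely trigger a (costly) underwater power adjustment."""
    p = adjustment_probability(current_power, optimal_power, max_deviation)
    if rng() < p:
        return optimal_power  # adjust toward the optimum
    return current_power      # keep current power, save the adjustment cost
```

The larger the deviation, the more likely `ptc_step` snaps the node to its optimal power, matching the behavior described in the abstract.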
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to exploit the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Different from the traditional von Neumann architecture, the deep adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
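The core idea above, dropping most connections and keeping the survivors in compact integer form, can be illustrated with a minimal magnitude-pruning sketch. The keep-ratio threshold rule and single shared scale factor here are assumptions for illustration, not the DANoC scheme itself:

```python
def prune_and_quantize(weights, keep_ratio=0.001, levels=127):
    """Keep only the largest-magnitude fraction of weights and store
    them as small integers plus one shared scale factor (a sketch of
    sparsification + integer representation, not the paper's method)."""
    n_keep = max(1, int(len(weights) * keep_ratio))
    # indices of the n_keep largest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    kept = sorted(order[:n_keep])
    scale = max(abs(weights[i]) for i in kept) / levels
    # sparse representation: (index, quantized integer) pairs + scale
    sparse = [(i, round(weights[i] / scale)) for i in kept]
    return sparse, scale

def reconstruct(sparse, scale, size):
    """Expand the sparse integer representation back to a dense vector."""
    w = [0.0] * size
    for i, q in sparse:
        w[i] = q * scale
    return w
```

With `keep_ratio=0.001` the representation stores roughly 0.1% of the original weights, which is the order of savings the abstract reports.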
Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™.
Gomes, Jeremias M; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H
2015-10-01
We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP's irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations.
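The wavefront pattern itself is easy to state in serial form. Below is a plain-Python sketch of grayscale morphological reconstruction, one of the use cases named above; the paper's contribution is a vectorized, SIMD-friendly variant, which this sketch does not attempt:

```python
from collections import deque

def morph_reconstruct(marker, mask):
    """Grayscale morphological reconstruction by dilation, a classic
    irregular-wavefront computation: values propagate from the marker,
    clipped by the mask, until no pixel can grow any further."""
    h, w = len(marker), len(marker[0])
    out = [row[:] for row in marker]
    queue = deque((y, x) for y in range(h) for x in range(w))
    while queue:  # the active wavefront of elements still to process
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                val = min(out[y][x], mask[ny][nx])
                if val > out[ny][nx]:
                    out[ny][nx] = val
                    queue.append((ny, nx))  # wavefront expands irregularly
    return out
```

The irregularity the abstract refers to is visible here: which pixels re-enter the queue depends entirely on the data, which is what makes a SIMD mapping non-trivial.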
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm based on a randomized selection policy is proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive candidate for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
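A non-preemptive scheduling experiment of the kind described above can be sketched with a small simulation harness. This is an illustrative model, not the PicOS implementation; jobs, the random policy, and the AWT/ATT computation are simplifications:

```python
import random

def simulate(jobs, pick, seed=None):
    """Non-preemptive scheduling simulation. `jobs` is a list of
    (arrival_time, burst_time); `pick` selects the next job among the
    ready ones. Returns (average waiting time, average turnaround time)."""
    rng = random.Random(seed)
    pending = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    clock, waits, turns = 0, [], []
    while pending:
        ready = [i for i in pending if jobs[i][0] <= clock]
        if not ready:                      # idle until the next arrival
            clock = jobs[pending[0]][0]
            continue
        i = pick(ready, jobs, rng)
        arrival, burst = jobs[i]
        waits.append(clock - arrival)      # time spent waiting to start
        clock += burst                     # run to completion (no preemption)
        turns.append(clock - arrival)      # arrival-to-completion time
        pending.remove(i)
    n = len(jobs)
    return sum(waits) / n, sum(turns) / n

def random_pick(ready, jobs, rng):         # the randomized selection policy
    return rng.choice(ready)

def fcfs_pick(ready, jobs, rng):           # first-come, first-served baseline
    return min(ready, key=lambda i: jobs[i][0])
```

Running both policies over the same workload and comparing the returned AWT/ATT pairs mirrors the kind of comparison the abstract reports.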
Dias, Maluge Pubuduni Imali; Wong, Elaine
2013-04-22
In this work, we present a comparative study of two just-in-time (JIT) dynamic bandwidth allocation algorithms (DBAs), designed to improve the energy-efficiency of 10 Gbps Ethernet passive optical networks (10G-EPONs). The algorithms, termed just-in-time with varying polling cycle times (JIT) and just-in-time with fixed polling cycle times (J-FIT), are designed to achieve energy savings when the idle time of an optical network unit (ONU) is less than the sleep-to-active transition time. This is made possible by a vertical-cavity surface-emitting laser (VCSEL) ONU that can transition into sleep or doze mode during its idle time. We evaluate the performance of the algorithms in terms of polling cycle time, power consumption, percentage of energy savings, and average delay. The energy-efficiency of a VCSEL ONU that can transition into sleep or doze mode is compared to an always-on distributed feedback (DFB) laser ONU. Simulation results indicate that both the JIT and J-FIT DBA algorithms improve energy-efficiency, while J-FIT performs better in terms of energy savings at low network loads. The J-FIT DBA, however, results in increased average delay in comparison to the JIT DBA. Nonetheless, this increase in average delay is within the acceptable range to support the quality of service (QoS) requirements of next-generation access networks.
An Efficient Distributed Algorithm for Constructing Spanning Trees in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Rosana Lachowski
2015-01-01
Full Text Available Monitoring and data collection are the two main functions in wireless sensor networks (WSNs). Collected data are generally transmitted via multihop communication to a special node, called the sink. While in a typical WSN nodes have a sink node as the final destination for the data traffic, in an ad hoc network nodes need to communicate with each other. For this reason, routing protocols for ad hoc networks are inefficient for WSNs. Trees, on the other hand, are classic routing structures explicitly or implicitly used in WSNs. In this work, we implement and evaluate distributed algorithms described in the literature for constructing routing trees in WSNs. After identifying the drawbacks and advantages of these algorithms, we propose a new algorithm for constructing spanning trees in WSNs. The performance of the proposed algorithm and the quality of the constructed tree were evaluated in different network scenarios. The results showed that the proposed algorithm is a more efficient solution. Furthermore, the algorithm provides multiple routes to the sensor nodes, to be used as mechanisms for fault tolerance and load balancing.
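The routing-tree structure the abstract describes can be illustrated with a centralized BFS sketch; the paper's algorithms are distributed and message-based, so this only shows the end result, with each node storing its parent, i.e. its next hop toward the sink:

```python
from collections import deque

def build_spanning_tree(adjacency, sink):
    """Breadth-first construction of a spanning tree rooted at the
    sink. `adjacency[u]` lists the neighbours of node u; the returned
    dict maps each node to its parent (next hop toward the sink)."""
    parent = {sink: None}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:   # first discovery fixes the route
                parent[v] = u
                queue.append(v)
    return parent
```

Following parent pointers from any node reaches the sink, which is exactly the multihop data-collection path a WSN routing tree provides.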
An Efficient Combined Meta-Heuristic Algorithm for Solving the Traveling Salesman Problem
Directory of Open Access Journals (Sweden)
Majid Yousefikhoshbakht
2016-08-01
Full Text Available The traveling salesman problem (TSP) is one of the most important NP-hard problems and probably the most famous and extensively studied problem in the field of combinatorial optimization. In this problem, a salesman is required to visit each of n given nodes once and only once, starting from any node and returning to the original place of departure. This paper presents an efficient evolutionary optimization algorithm, called MICALK, developed by combining the imperialist competitive algorithm with the Lin-Kernighan algorithm in order to solve the TSP. MICALK is tested on 44 TSP instances from the literature, ranging from 24 to 1655 nodes, and it finds the best known solutions for 26 of these benchmark problems. Furthermore, the performance of MICALK is compared with several metaheuristic algorithms, including GA, BA, IBA, ICA, GSAP, ABO, PSO and BCO, on 32 instances from TSPLIB. The results indicate that MICALK performs well and is quite competitive with the above algorithms.
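MICALK itself combines the imperialist competitive algorithm with Lin-Kernighan, which is beyond a short sketch; as a minimal stand-in, the classic 2-opt local search below illustrates the kind of tour-improvement move such metaheuristics rely on:

```python
def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a tour segment whenever doing so shortens the
    tour; stop at a local optimum (the simplest Lin-Kernighan-style move)."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best) + 1):
                candidate = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(candidate, dist) < tour_length(best, dist):
                    best, improved = candidate, True
    return best
```

Lin-Kernighan generalizes this to variable-depth sequences of such edge exchanges, which is why combining it with a population-based metaheuristic like ICA is effective.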
DEFF Research Database (Denmark)
Petersen, Mette Højgaard; Edlund, Kristian; Hansen, Lars Henrik
2013-01-01
of servicing a portfolio of flexible consumers by use of a fluctuating power supply. Based on the developed taxonomy we first prove that no causal optimal dispatch strategies exist for the considered problem. We then present two heuristic algorithms for solving the balancing task: Predictive Balancing...... and Agile Balancing. Predictive Balancing is a traditional moving-horizon algorithm, where power is dispatched based on perfect predictions of the power supply. Agile Balancing, on the other hand, is strictly non-predictive. It is, however, explicitly designed to exploit the heterogeneity of the flexible....... As a further advantage it is demonstrated that Agile Balancing is extremely computationally efficient, since it is based on sorting....
A User-Centric WS-Mediator Framework for on-the-fly Web Service Composition
Directory of Open Access Journals (Sweden)
T. Zhang
2012-06-01
Full Text Available Nowadays, effective and adaptive dynamic Web service composition is a major challenge for a real success of Web services among ordinary users. For the latter, and from the viewpoint of a user-centric paradigm, existing work is limited in its agility to create a composed service on the fly according to the desire/need of an end-user at a given time/place. This article presents our approach, which consists in providing a comprehensive framework for a user-centric WS-mediator capable of dynamic service composition. It is based on a composition engine which follows the user's specific needs and which yields a composed service through a WS knowledge base. Users can mash up the services at run time with their own logic and have a fully dynamic composition with context adaptation through a WS-mediator that is also capable of supporting the Semantic Web.
Topologically correct quantum nonadiabatic formalism for on-the-fly dynamics
Joubert-Doriol, Loic; Ryabinkin, Ilya G; Izmaylov, Artur F
2016-01-01
On-the-fly quantum nonadiabatic dynamics for large systems greatly benefits from the adiabatic representation readily available from electronic structure programs. However, conical intersections, which frequently occur in this representation, introduce non-trivial geometric or Berry phases that require special treatment for adequate modelling of the nuclear dynamics. We analyze two approaches for nonadiabatic dynamics using the time-dependent variational principle and the adiabatic representation. The first approach employs adiabatic electronic functions with global parametric dependence on the nuclear coordinates. The second approach uses adiabatic electronic functions obtained only at the centres of moving localized nuclear basis functions (e.g. frozen-width Gaussians). Unless a gauge transformation is used to enforce single-valued boundary conditions, the first approach fails to capture the geometric phase. In contrast, the second approach accounts for the geometric phase naturally because of the absenc...
Runtime Verification Based on Executable Models: On-the-Fly Matching of Timed Traces
Directory of Open Access Journals (Sweden)
Mikhail Chupilko
2013-03-01
Full Text Available Runtime verification is checking whether a system execution satisfies or violates a given correctness property. A procedure that automatically, and typically on the fly, verifies conformance of the system's behavior to the specified property is called a monitor. Nowadays, a variety of formalisms are used to express properties of the observed behavior of computer systems, and many methods have been proposed to construct monitors. However, advanced formalisms and methods are often not needed, because an executable model of the system is available. The original purpose and structure of the model are unimportant; what is required is that the system and its model have similar sets of interfaces. In this case, monitoring is carried out as follows. Two "black boxes", the system and its reference model, are executed in parallel and stimulated with the same input sequences; the monitor dynamically captures their output traces and tries to match them. The main problem is that a model is usually more abstract than the real system, both in terms of functionality and timing. Therefore, trace-to-trace matching is not straightforward and must allow the system to produce events in a different order or even miss some of them. The paper studies on-the-fly conformance relations for timed systems (i.e., systems whose inputs and outputs are distributed along the time axis). It also suggests a practice-oriented methodology for creating and configuring monitors for timed systems based on executable models. The methodology has been successfully applied to a number of industrial projects of simulation-based hardware verification.
Indian Academy of Sciences (India)
In the description of algorithms and programming languages, what is the role of control abstraction? • What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...
Scales of Time Where the Quantum Discord Allows an Efficient Execution of the DQC1 Algorithm
Directory of Open Access Journals (Sweden)
M. Ávila
2014-01-01
Full Text Available The power of one qubit deterministic quantum processor (DQC1) (Knill and Laflamme, 1998) generates a nonclassical correlation known as quantum discord. The DQC1 algorithm executes in an efficient way with a characteristic time given by τ = Tr[U_n]/2^n, where U_n is an n-qubit unitary gate. For pure states, quantum discord means entanglement, while for mixed states such a quantity is more than entanglement. Quantum discord can be thought of as the mutual information between two systems. Within the quantum discord approach, the role of time in an efficient evaluation of τ is discussed. It is found that the smaller the value of t/T is, where t is the time of execution of the DQC1 algorithm and T is the scale of time where the nonclassical correlations prevail, the more efficient the calculation of τ is. A Mössbauer nucleus might be a good processor of the DQC1 algorithm, while a nuclear spin chain would not be efficient for the calculation of τ.
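The quantity the DQC1 processor estimates, the normalized trace τ = Tr[U_n]/2^n, can be computed directly in the classical small-n regime, which makes the definition concrete:

```python
def normalized_trace(unitary):
    """tau = Tr[U_n] / 2**n for an n-qubit unitary U_n given as a
    2**n x 2**n matrix (list of rows). Classically feasible only for
    small n; DQC1 estimates this quantity for large gates."""
    dim = len(unitary)
    return sum(unitary[i][i] for i in range(dim)) / dim
```

For the identity gate τ is 1, and for any traceless gate (e.g. a Pauli-Z on one of the qubits) τ is 0, which is the range of values the DQC1 circuit's measurement statistics encode.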
Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms
Johansson, Niklas; Larsson, Jan-Åke
2017-09-01
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problem, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable in a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problem do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.
Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang
2017-12-01
Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We use discretization of the horizon to achieve good efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast as or faster than R2. Although SVP performs worse than the reference-plane and depth-map algorithms with respect to efficiency, it is superior to these other two algorithms in accuracy.
New algorithm for efficient pattern recall using a static threshold with the Steinbuch Lernmatrix
Juan Carbajal Hernández, José; Sánchez Fernández, Luis P.
2011-03-01
An associative memory is a binary relationship between inputs and outputs, stored in a matrix M. The fundamental purpose of an associative memory is to recover correct output patterns from input patterns, which can be altered by additive, subtractive or combined noise. The Steinbuch Lernmatrix, developed in 1961, was the first associative memory and is used as a pattern recognition classifier. However, a misclassification problem arises when crossbar saturation occurs. A new algorithm that corrects the misclassification in the Lernmatrix is proposed in this work. The results of crossbar saturation with fundamental patterns demonstrate better pattern-recall performance with the new algorithm. Experiments with real data show a more efficient classifier when the algorithm is introduced in the original Lernmatrix. Therefore, the thresholded Lernmatrix memory emerges as a suitable alternative classifier for use in the developing pattern processing field.
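A heavily simplified binary Lernmatrix sketch clarifies the classifier structure the abstract refers to. The ±1 learning rule and maximum-response recall below are a textbook simplification, not the corrected, statically thresholded algorithm the paper proposes:

```python
def train(patterns, n_classes, n_features, eps=1):
    """Steinbuch-style learning rule (simplified): reinforce a
    (class, feature) entry where the input bit is 1, penalize where
    it is 0. `patterns` is a list of (binary_vector, class_index)."""
    M = [[0] * n_features for _ in range(n_classes)]
    for x, cls in patterns:
        for j in range(n_features):
            M[cls][j] += eps if x[j] else -eps
    return M

def recall(M, x):
    """Classify by the maximum-response rule: the class rows of M
    reaching the maximal inner product with x fire (output 1)."""
    scores = [sum(m, xi_m) if False else sum(mv * xv for mv, xv in zip(row, x))
              for row in M for m, xi_m in [(0, 0)]]
    top = max(scores)
    return [1 if s == top else 0 for s in scores]
```

Crossbar saturation corresponds to M filling up so that several rows tie at the maximum, which is the misclassification the paper's algorithm corrects.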
Energy Technology Data Exchange (ETDEWEB)
Atanassov, E.; Dimitrov, D., E-mail: d.slavov@bas.bg, E-mail: emanouil@parallel.bas.bg, E-mail: gurov@bas.bg; Gurov, T. [Institute of Information and Communication Technologies, BAS, Acad. G. Bonchev str., bl. 25A, 1113 Sofia (Bulgaria)
2015-10-28
The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
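A minimal quasi-Monte Carlo option pricer shows how a low-discrepancy (Halton/van der Corput) sequence slots into the algorithms evaluated above; the contract parameters are illustrative, and this sketch ignores the energy and space measurements that are the paper's focus:

```python
import math
from statistics import NormalDist

def halton(n, base=2):
    """First n points of the base-`base` van der Corput (1-D Halton)
    low-discrepancy sequence, all strictly inside (0, 1)."""
    seq = []
    for i in range(1, n + 1):
        f, r, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq.append(r)
    return seq

def qmc_call_price(s0, k, r, sigma, t, n=20000):
    """European call under Black-Scholes dynamics, priced by
    quasi-Monte Carlo: Halton points mapped through the inverse
    normal CDF to sample the terminal asset price."""
    inv = NormalDist().inv_cdf
    drift = (r - 0.5 * sigma * sigma) * t
    vol = sigma * math.sqrt(t)
    payoffs = (max(s0 * math.exp(drift + vol * inv(u)) - k, 0.0)
               for u in halton(n))
    return math.exp(-r * t) * sum(payoffs) / n
```

Swapping `halton(n)` for pseudorandom uniforms gives the plain Monte Carlo variant, so the two approaches compared in the paper differ only in this one line.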
An efficient iterative algorithm for computation of scattering from dielectric objects.
Energy Technology Data Exchange (ETDEWEB)
Liao, L.; Gopalsami, N.; Venugopal, A.; Heifetz, A.; Raptis, A. C. (Nuclear Engineering Division)
2011-02-14
We have developed an efficient iterative algorithm for electromagnetic scattering from arbitrary but relatively smooth dielectric objects. The algorithm iteratively adapts the equivalent surface currents until the electromagnetic fields inside and outside the dielectric objects match the boundary conditions. Theoretical convergence is analyzed for two examples that solve scattering of plane waves incident upon air/dielectric slabs of semi-infinite and finite thickness. We applied the iterative algorithm to the simulation of a dielectric slab with a sinusoidal perturbation on one side, and the method converged for such non-smooth surfaces. We next simulated the shift in the radiation pattern of a 6-inch dielectric lens for different offsets of the feed antenna on the focal plane. The result is compared to that of geometrical optics (GO).
An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding
Directory of Open Access Journals (Sweden)
Cheng-Yu Yeh
2017-04-01
Full Text Available The adaptive multi-rate wideband (AMR-WB) speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ) of immittance spectral frequency (ISF) coefficients takes a considerable computational load in AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ) algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. The search uses a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches need to be executed. In terms of overall search performance, this work is experimentally validated as superior to multiple triangular inequality elimination (MTIE), TIE with dynamic and intersection mechanisms (DI-TIE), and the equal-average equal-variance equal-norm nearest neighbor search (EEENNS) approach. With a full search algorithm as the benchmark for overall search load, this work provides an 87% search load reduction at a quantization accuracy threshold of 0.96, far beyond the 55% of MTIE, 76% of the EEENNS approach, and 83% of DI-TIE.
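The elimination-based searches compared above all prune codewords with cheap bounds before computing full distances. A generic norm-based rejection test, in the spirit of the EEENNS family but not the BSS-VQ algorithm itself, can be sketched as:

```python
import math

def nearest_codeword(x, codebook):
    """Nearest-neighbour codeword search with a norm-based rejection
    test: since |norm(x) - norm(c)| <= dist(x, c), any codeword whose
    norm differs from norm(x) by at least the best distance so far
    cannot win and is skipped without a full distance computation."""
    xnorm = math.sqrt(sum(v * v for v in x))
    # in a real codec the codewords are sorted by norm offline
    indexed = sorted(range(len(codebook)),
                     key=lambda i: math.sqrt(sum(v * v
                                                 for v in codebook[i])))
    best_i, best_d = -1, float("inf")
    for i in indexed:
        c = codebook[i]
        cnorm = math.sqrt(sum(v * v for v in c))
        if abs(xnorm - cnorm) >= best_d:
            continue  # eliminated by the triangle-inequality bound
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, c)))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

The search-load reductions the abstract quotes count exactly these skipped full-distance computations relative to a brute-force full search.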
An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.
Yoon, Yourim; Kim, Yong-Hyuk
2013-10-01
Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, we need a more intelligent way of deploying sensors. We found that, mathematically, the phenotype space of the problem is a quotient space of the genotype space. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a scheme that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown in a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed significant improvement in solution quality.
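The Monte Carlo evaluation function mentioned above can be sketched as a coverage estimator. The unit square field, disk sensing model, and sample count are illustrative assumptions, not the paper's exact setup:

```python
import random

def coverage_fraction(sensors, radius, field=1.0, samples=20000, seed=1):
    """Monte Carlo estimate of the fraction of a square field covered
    by the union of sensor disks. `sensors` is a list of (x, y)
    positions; increasing `samples` trades time for accuracy, which is
    the knob the paper's progressive-sampling scheme tunes."""
    rng = random.Random(seed)
    r2 = radius * radius
    hit = 0
    for _ in range(samples):
        px, py = rng.uniform(0, field), rng.uniform(0, field)
        if any((px - sx) ** 2 + (py - sy) ** 2 <= r2
               for sx, sy in sensors):
            hit += 1
    return hit / samples
```

A genetic algorithm would call this estimator as the fitness of each candidate deployment, starting with few samples in early generations and more in later ones.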
An Efficient Imperialist Competitive Algorithm for Solving the QFD Decision Problem
Directory of Open Access Journals (Sweden)
Xue Ji
2016-01-01
Full Text Available Determining the engineering characteristics and their corresponding actual fulfillment levels is an important QFD decision problem. With the increasing complexity of actual engineering problems, the corresponding QFD matrices become much larger, and the time spent on analyzing these matrices and making decisions can become unacceptable. In this paper, a solution for efficiently solving the QFD decision problem is proposed. The QFD decision problem is reformulated as a mixed integer nonlinear programming (MINLP) model, which aims to maximize overall customer satisfaction under the enterprise's capability, cost, and resource constraints. An improved algorithm, G-ICA, a combination of the Imperialist Competitive Algorithm (ICA) and the genetic algorithm (GA), is then proposed to tackle this model. The G-ICA is compared with other mature algorithms by solving 7 numerical MINLP problems and 4 adapted QFD decision problems of different scales. The results verify the satisfactory global optimization performance and time performance of the G-ICA. Meanwhile, the proposed algorithm's superior ability to guarantee decision-making accuracy and efficiency is also demonstrated.
A multi-phase genetic algorithm for the efficient management of multi-chiller systems
Energy Technology Data Exchange (ETDEWEB)
Beghi, Alessandro; Rampazzo, Mirco [Dipartimento di Ingegneria dell' Informazione, Universita di Padova, via Gradenigo 6/B, I-35131 Padova (Italy); Cecchinato, Luca [Dipartimento di Fisica Tecnica, Universita di Padova, via Venezia 1, I-35131 Padova (Italy)
2011-03-15
In HVAC plants of medium-high cooling capacity, multiple-chiller systems are often employed. In such systems, chillers are independent of each other to provide standby capacity, operational flexibility, and less disruptive maintenance. However, the problem of efficiently managing multiple-chiller systems is complex in many respects. In particular, the electrical energy consumption of the chiller plant markedly increases if the chillers are managed improperly; therefore, significant energy savings can be achieved by optimizing the chiller operations of HVAC systems. In this paper a unified method for Multi-Chiller Management optimization is presented that deals simultaneously with the Optimal Chiller Loading and Optimal Chiller Sequencing problems. The main objective is to reduce both power consumption and operating costs. The approach is based on a cooling load estimation algorithm, and the optimization step is performed by means of a multi-phase genetic algorithm, which provides an efficient and suitable approach to solving this kind of complex multi-objective optimization problem. The performance of the algorithm is evaluated using a dynamic simulation environment developed in Matlab/Simulink, where the plant dynamics are accurately described. It is shown that the proposed algorithm gives superior performance with respect to standard approaches, in terms of both energy performance and load profile tracking. (author)
Ltaief, Hatem
2011-08-31
This paper presents the power profile of two high performance dense linear algebra libraries, LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine-grained task parallelism that recasts the computation to operate on submatrices called tiles. In this way tile algorithms are formed. We show results from the power profiling of the most common routines, which permits us to clearly identify the different phases of the computations. This allows us to isolate the bottlenecks in terms of energy efficiency. Our results show that PLASMA surpasses LAPACK not only in terms of performance but also in terms of energy efficiency. © 2011 Springer-Verlag.
Liu, Fei; Zhang, Xi; Jia, Yan
2015-01-01
In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
Mariano, Artur; Lee, Dongwook; Gerstlauer, Andreas; Chiou, Derek
2013-01-01
Part 4: Performance Analysis; International audience; Minimum spanning tree (MST) problems play an important role in many networking applications, such as routing and network planning. In many cases, such as wireless ad-hoc networks, this requires efficient high-performance and low-power implementations that can run at regular intervals in real time on embedded platforms. In this paper, we study custom software and hardware realizations of one common algorithm for MST computations, Prim’s alg...
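As a point of reference for Prim's algorithm studied above, the standard priority-queue formulation in plain Python (not the paper's custom software/hardware realizations) looks like this:

```python
import heapq

def prim_mst(adjacency, start=0):
    """Prim's algorithm: repeatedly add the cheapest edge that leaves
    the growing tree. `adjacency[u]` is a list of (weight, v) pairs;
    returns the MST edges and their total weight (graph assumed
    connected and undirected)."""
    visited = {start}
    frontier = [(w, start, v) for w, v in adjacency[start]]
    heapq.heapify(frontier)
    edges, total = [], 0
    while frontier and len(visited) < len(adjacency):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue               # stale frontier entry, skip
        visited.add(v)
        edges.append((u, v))
        total += w
        for w2, v2 in adjacency[v]:
            if v2 not in visited:
                heapq.heappush(frontier, (w2, v, v2))
    return edges, total
```

The heap operations dominate the running time, which is why hardware realizations of Prim's algorithm focus on the priority-queue structure.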
Distributed Energy-Efficient Topology Control Algorithm in Home M2M Networks
Chao-Yang Lee; Chu-Sing Yang
2012-01-01
Because machine-to-machine (M2M) technology enables machines to communicate with each other without human intervention, it could play a big role in sensor network systems. Through wireless sensor network (WSN) gateways, various information can be collected by sensors for M2M systems. For home M2M networks, this study proposes a distributed energy-efficient topology control algorithm for both topology construction and topology maintenance. Topology control is an effective method of enhancing e...
Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU
Directory of Open Access Journals (Sweden)
Jinwei Wang
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
An efficient algorithm for updating regular expression indexes in RDF databases.
Lee, Jinsoo; Kasperovics, Romans; Han, Wook-Shin; Lee, Jeong-Hoon; Kim, Min Soo; Cho, Hune
2015-01-01
The Resource Description Framework (RDF) is widely used for sharing biomedical data, such as gene ontology or the online protein database UniProt. SPARQL is a native query language for RDF, featuring regular expressions in queries for which exact values are either irrelevant or unknown. The use of regular expression indexes in SPARQL query processing improves the performance of queries containing regular expressions by up to two orders of magnitude. In this study, we address the update operation for regular expression indexes in RDF databases. We identify major performance problems of straightforward index update algorithms and propose a new algorithm that utilises unique properties of regular expression indexes to increase performance. Our contributions can be summarised as follows: (1) we propose an efficient update algorithm for regular expression indexes in RDF databases, (2) we build a prototype system for the proposed algorithm in C++ and (3) we conduct extensive experiments demonstrating the improvement of our algorithm over the straightforward approaches by an order of magnitude.
Directory of Open Access Journals (Sweden)
O. Ahmed
2011-01-01
Packet classification plays a crucial role for a number of network services such as policy-based routing, firewalls, and traffic billing, to name a few. However, classification can be a bottleneck in the above-mentioned applications if not implemented properly and efficiently. In this paper, we propose PCIU, a novel classification algorithm, which improves upon previously published work. PCIU provides lower preprocessing time, lower memory consumption, ease of incremental rule update, and reasonable classification time compared to state-of-the-art algorithms. The proposed algorithm was evaluated and compared to RFC and HiCut using several benchmarks. Results obtained indicate that PCIU outperforms these algorithms in terms of speed, memory usage, incremental update capability, and preprocessing time. The algorithm, furthermore, was improved and made more accessible for a variety of applications through implementation in hardware. Two such implementations are detailed and discussed in this paper. The results indicate that a hardware/software codesign approach results in a PCIU solution that is slower but easier to optimize and improve within time constraints. A hardware accelerator based on an ESL approach using Handel-C, on the other hand, resulted in a 31x speed-up over a pure software implementation running on a state-of-the-art Xeon processor.
An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes
Karimi, Mehdi
2011-01-01
This paper presents an efficient algorithm for finding the dominant trapping sets of a low-density parity-check (LDPC) code. The algorithm can be used to estimate the error floor of LDPC codes or as part of the apparatus to design LDPC codes with low error floors. For regular codes, the algorithm is initiated with a set of short cycles as the input. For irregular codes, in addition to short cycles, variable nodes with low degree and cycles with low approximate cycle extrinsic message degree (ACE) are also used as the initial inputs. The initial inputs are then expanded recursively to dominant trapping sets of increasing size. At the core of the algorithm lies the analysis of the graphical structure of dominant trapping sets and the relationship of such structures to short cycles, low-degree variable nodes and cycles with low ACE. The algorithm is universal in the sense that it can be used for an arbitrary graph and that it can be tailored to find other graphical objects, such as absorbing sets and Zyablov-...
Directory of Open Access Journals (Sweden)
Dayasindhu Dey
2016-11-01
The Density Matrix Renormalization Group (DMRG) is a state-of-the-art numerical technique for one-dimensional quantum many-body systems, but calculating accurate results for a system with Periodic Boundary Conditions (PBC) with the conventional DMRG has been a challenging job since the inception of DMRG. The recent development of the Matrix Product State (MPS) algorithm gives a new approach to finding accurate results for the one-dimensional PBC system. The most efficient implementation of the MPS algorithm can scale as O(p × m^3), where p can vary from 4 to m^2. In this paper, we propose a new DMRG algorithm, which is very similar to the conventional DMRG and gives comparable accuracy to that of MPS. The computational effort of the new algorithm goes as O(m^3), and the conventional DMRG code can be easily modified for the new algorithm. Received: 2 August 2016, Accepted: 12 October 2016; Edited by: K. Hallberg; DOI: http://dx.doi.org/10.4279/PIP.080006 Cite as: D Dey, D Maiti, M Kumar, Papers in Physics 8, 080006 (2016).
An Efficient Hybrid Face Recognition Algorithm Using PCA and GABOR Wavelets
Directory of Open Access Journals (Sweden)
Hyunjong Cho
2014-04-01
With the rapid development of computers and the increasing mass use of high-tech mobile devices, vision-based face recognition has advanced significantly. However, it is hard to conclude that the performance of computers surpasses that of humans, as humans have generally exhibited better performance in challenging situations involving occlusion or variations. Motivated by the recognition method of humans, who utilize both holistic and local features, we present a computationally efficient hybrid face recognition method that employs dual-stage holistic and local feature-based recognition algorithms. In the first coarse recognition stage, the proposed algorithm utilizes Principal Component Analysis (PCA) to identify a test image. The recognition ends at this stage if the confidence level of the result turns out to be reliable. Otherwise, the algorithm uses this result for filtering out top candidate images with a high degree of similarity and passes them to the next fine recognition stage, where Gabor filters are employed. As is well known, recognizing a face image with Gabor filters is a computationally heavy task. The contribution of our work is in proposing a flexible dual-stage algorithm that enables fast, hybrid face recognition. Experimental tests were performed with the Extended Yale Face Database B to verify the effectiveness and validity of the research, and we obtained better recognition results under illumination variations, not only in terms of computation time but also in terms of the recognition rate, in comparison to PCA- and Gabor wavelet-based recognition algorithms.
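The dual-stage control flow described above (a cheap holistic match for every gallery image, with the expensive local match applied only to top candidates when confidence is low) can be sketched generically. The ratio test, the threshold value and the toy feature vectors below are illustrative assumptions, not the paper's actual PCA/Gabor pipeline:

```python
import math

def coarse_fine_match(q_coarse, g_coarse, q_fine, g_fine, ratio=0.6, top_k=3):
    """Two-stage nearest-neighbor matcher.

    Stage 1: cheap distances (stand-in for PCA projections) over the gallery.
    If the best match beats the runner-up by a wide margin, accept it.
    Stage 2: expensive distances (stand-in for Gabor features), but only
    over the top-k coarse candidates. Returns the gallery index.
    """
    d = [math.dist(q_coarse, g) for g in g_coarse]
    order = sorted(range(len(d)), key=d.__getitem__)
    if d[order[0]] < ratio * d[order[1]]:   # confident: large coarse-stage margin
        return order[0]
    cand = order[:top_k]                    # ambiguous: refine the shortlist
    fd = {i: math.dist(q_fine, g_fine[i]) for i in cand}
    return min(fd, key=fd.get)
```

The design point this illustrates is that the heavy feature extraction runs on at most `top_k` images instead of the whole gallery, which is where the reported speed-up comes from.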
Liu, Kuojuey Ray
1990-01-01
Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.
CLUSTAG & WCLUSTAG: Hierarchical Clustering Algorithms for Efficient Tag-SNP Selection
Ao, Sio-Iong
More than 6 million single nucleotide polymorphisms (SNPs) in the human genome have been genotyped by the HapMap project. Although only a proportion of these SNPs are functional, all can be considered as candidate markers for indirect association studies to detect disease-related genetic variants. The complete screening of a gene or a chromosomal region is nevertheless an expensive undertaking for association studies. A key strategy for improving the efficiency of association studies is to select a subset of informative SNPs, called tag SNPs, for analysis. In this chapter, hierarchical clustering algorithms have been proposed for efficient tag SNP selection.
Directory of Open Access Journals (Sweden)
Y. Tang
2006-01-01
This study provides a comprehensive assessment of the relative effectiveness of state-of-the-art evolutionary multiobjective optimization (EMO) tools in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving, which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small...
Berends, Constantijn J.; van de Wal, Roderik S. W.
2016-12-01
Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process, a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a standard flood-fill algorithm in terms of computational efficiency. As an example, we determine the land-ocean mask for a 1 km resolution digital elevation model (DEM) of North America and Greenland, a geographical area of roughly 7000 by 5000 km (roughly 35 million elements), about half of which is covered by ocean. Determining the land-ocean mask with our improved flood-fill algorithm reduces computation time by 90 % relative to using a standard stack-based flood-fill algorithm. This implies that it is now feasible to include the calving of ice in lakes as a dynamical process inside an ice-sheet model. We demonstrate this by using bedrock elevation, ice thickness and geoid perturbation fields from the output of a coupled ice-sheet-sea-level equation model at 30 000 years before present and determine the extent of Lake Agassiz, using both the standard and improved versions of the flood-fill algorithm. We show that several optimizations to the flood-fill algorithm used for filling a depression up to a water level, which is not defined beforehand, decrease the computation time by up to 99 %. The resulting reduction in computation time allows determination of the extent and volume of depressions in a DEM over large geographical grids or repeatedly over long periods of time, where computation time might otherwise be a limiting factor. The algorithm can be used for all glaciological and hydrological models, which need to trace the evolution over time of lakes or drainage basins in general.
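The paper's specific optimizations are not reproduced in this record. As one standard example of the kind of improvement possible over a naive per-cell stack-based fill, a scanline flood fill processes an entire horizontal run per stack pop, which cuts stack traffic dramatically on mask-like grids; the mask values here (0 = water, 1 = land, 2 = filled) are illustrative:

```python
def scanline_flood_fill(grid, seed, fill, target):
    """Fill the connected region of `target` cells containing `seed` with `fill`.

    Instead of pushing every neighbor cell, each pop expands to the full
    horizontal run of `target` cells, fills it, and seeds the rows above
    and below. Modifies `grid` (list of lists) in place.
    """
    h, w = len(grid), len(grid[0])
    sr, sc = seed
    if grid[sr][sc] != target:
        return grid
    stack = [(sr, sc)]
    while stack:
        r, c = stack.pop()
        if grid[r][c] != target:
            continue  # already filled via another run
        lo = c
        while lo > 0 and grid[r][lo - 1] == target:
            lo -= 1
        hi = c
        while hi < w - 1 and grid[r][hi + 1] == target:
            hi += 1
        for cc in range(lo, hi + 1):   # fill the whole run at once
            grid[r][cc] = fill
        for rr in (r - 1, r + 1):      # seed adjacent rows above/below the run
            if 0 <= rr < h:
                for cc in range(lo, hi + 1):
                    if grid[rr][cc] == target:
                        stack.append((rr, cc))
    return grid
```

Filling an ocean mask this way leaves enclosed depressions (lakes) untouched, which is exactly why a flood fill, rather than a simple elevation threshold, is needed for the land-ocean mask described above.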
Multiway simple cycle separators and I/O-efficient algorithms for planar graphs
DEFF Research Database (Denmark)
Arge, L.; Walderveen, Freek van; Zeh, Norbert
2013-01-01
We revisit I/O-efficient solutions to a number of fundamental problems on planar graphs: single-source shortest paths, topological sorting, and computing strongly connected components. Existing I/O-efficient solutions to these problems pay for I/O efficiency using excessive computation time in internal memory, thereby completely negating the performance gain achieved by minimizing the number of disk accesses. In this paper, we show how to make these algorithms simultaneously efficient in internal and external memory so they achieve I/O complexity O(sort(N)) and take O(N log N) time in internal memory, where sort(N) is the number of I/Os needed to sort N items in external memory. The key, and the main technical contribution of this paper, is a multiway version of Miller's simple cycle separator theorem. We show how to compute these separators in linear time in internal memory, and using O(sort...
Taheri, Mahboobeh; Mohebbi, Ali
2008-08-30
In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.
Fuzzy-Logic Based Distributed Energy-Efficient Clustering Algorithm for Wireless Sensor Networks
Zhang, Ying; Wang, Jun; Han, Dezhi; Wu, Huafeng; Zhou, Rundong
2017-01-01
Due to their high energy efficiency and scalability, clustering routing algorithms have been widely used in wireless sensor networks (WSNs). In order to gather information more efficiently, each sensor node transmits data to the Cluster Head (CH) to which it belongs, by multi-hop communication. However, multi-hop communication in the cluster brings the problem of excessive energy consumption of the relay nodes which are closer to the CH. These nodes' energy will be consumed more quickly than that of the farther nodes, which negatively affects load balance across the whole network. Therefore, we propose an energy-efficient distributed clustering algorithm based on a fuzzy approach with non-uniform distribution (EEDCF). During CH election, we take nodes' energies, nodes' degree and neighbor nodes' residual energies into consideration as the input parameters. In addition, we take advantage of the Takagi, Sugeno and Kang (TSK) fuzzy model instead of the traditional method as our inference system to make the quantitative analysis more reasonable. In our scheme, each sensor node calculates the probability of being a CH with the help of the fuzzy inference system in a distributed way. The experimental results indicate that the EEDCF algorithm is better than some current representative methods in aspects of data transmission, energy consumption and lifetime of networks. PMID:28671641
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.
An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows
Liang, Tengfei
2013-01-01
Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.
Evaluating Efficiency of Parallel Algorithms of Transformation Operations with Graph Model
Directory of Open Access Journals (Sweden)
G. S. Ivanova
2014-01-01
The usage of graphs in the analysis and design of complex large-scale system structures, which require a significant computing capacity, has led to the need to seek new methods both of graph model representation and of graph operation implementation. To reduce the execution time of algorithms, parallel computing systems are appropriate to use. In this case, to achieve the maximum acceleration, graph processing is implemented in the hardware and software parts of the system. In this article, analysis of graph model operations is performed in the context of their implementation in a parallel computing system, based on an abstract description of the graph by sets, which allows the use of various parallel processing systems regardless of their architectural features. The graph transformation operations most common in algorithms were considered. As a result of the analysis, a set of elementary operations on graph structures, which constitute graph operations, was revealed, and parallel algorithms for the graph operations were realized. Efficiency evaluation of the parallel algorithms, expressed as speedup for each operation, showed a high degree of graph processing acceleration compared with sequential operation algorithms. The proposed realization can be used to solve time-consuming large-scale tasks on parallel computing systems. At the same time, in a particular parallel processing system it is also possible to parallelize the elementary operations, thereby greatly reducing the execution time of an operation in general. Further research is focused on describing and complementing the current implementation with complex parallel graph transformation operations such as intersection, union, and composition of graphs, as well as analysis operations for various graph characteristics. That will expand the set of elementary operations and will provide an opportunity to evaluate more accurately the efficiency of parallel computing systems for processing graph models.
Indian Academy of Sciences (India)
, i is referred to as the loop-index, and 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers are shown in Figure 1.
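The excerpt above describes a counted while-loop schema driving a sorting algorithm. Since Table 1 is not reproduced here, the sketch below uses a selection-style sort purely for illustration, written to mirror the schema `while i ≤ N do stat-body; i := i + 1 endwhile`:

```python
def sort_numbers(a):
    """Selection-style sort expressed with the counted while-loop schema.

    The outer loop-index i advances once per pass (i := i + 1); the
    stat-body finds the minimum of the unsorted tail and swaps it into
    position i. Returns a new sorted list.
    """
    a = list(a)
    n = len(a)
    i = 0
    while i < n:            # loop-index i; what follows is the stat-body
        m = i
        j = i + 1
        while j < n:        # scan the tail for the minimum
            if a[j] < a[m]:
                m = j
            j += 1
        a[i], a[m] = a[m], a[i]
        i += 1              # i := i + 1
    return a
```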
Directory of Open Access Journals (Sweden)
Adamu Murtala Zungeru
2013-01-01
The main problem for event gathering in wireless sensor networks (WSNs) is the restricted communication range of each node. Due to the restricted communication range and high network density, event forwarding in WSNs is very challenging and requires multihop data forwarding. Currently, the energy-efficient ant-based routing (EEABR) algorithm, based on the ant colony optimization (ACO) metaheuristic, is one of the state-of-the-art energy-aware routing protocols. In this paper, we propose three improvements to the EEABR algorithm to further improve its energy efficiency. The improvements to the original EEABR are based on the following: (1) a new scheme to intelligently initialize the routing tables, giving priority to neighboring nodes that simultaneously could be the destination, (2) intelligent update of routing tables in case of a node or link failure, and (3) reducing the flooding ability of ants for congestion control. The energy efficiency improvements are significant, particularly for dynamic routing environments. Experimental results using the RMASE simulation environment show that the proposed method increases the energy efficiency by up to 9% and 64% in converge-cast and target-tracking scenarios, respectively, over the original EEABR without incurring a significant increase in complexity. The method is also compared and found to outperform other swarm-based routing protocols such as sensor-driven and cost-aware ant routing (SC) and Beesensor.
Su, Bo-Han; Shen, Meng-Yu; Harn, Yeu-Chern; Wang, San-Yuan; Schurz, Alioune; Lin, Chieh; Lin, Olivia A; Tseng, Yufeng J
2017-11-15
The identification of chemical structures in natural product mixtures is an important task in drug discovery but is still a challenging problem, as structural elucidation is a time-consuming process and is limited by the available mass spectra of known natural products. Computer-aided structure elucidation (CASE) strategies seek to automatically propose a list of possible chemical structures in mixtures by utilizing chromatographic and spectroscopic methods. However, current CASE tools still cannot automatically solve structures for experienced natural product chemists. Here, we formulated the structural elucidation of natural products in a mixture as a computational problem by extending a list of scaffolds using a weighted side chain list after analyzing a collection of 243,130 natural products, and designed an efficient algorithm to precisely identify the chemical structures. The complexity of this problem is NP-complete. A dynamic programming (DP) algorithm can solve this NP-complete problem in pseudo-polynomial time after converting floating-point molecular weights into integers. However, the running time of the DP algorithm degrades exponentially as the precision of the mass spectrometry experiment grows. To solve the problem in polynomial time, we proposed a novel iterative DP algorithm that can quickly recognize the chemical structures of natural products. By utilizing this algorithm to elucidate the structures of four natural products that were experimentally and structurally determined, the algorithm can search for the exact solutions, and the running time was shown to be polynomial for average cases. The proposed method improved the speed of the structural elucidation of natural products and helps broaden the spectrum of available compounds that could be applied as new drug candidates. A web service built for structural elucidation studies is freely accessible via the following link: http://csccp.cmdm.tw/
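The pseudo-polynomial DP mentioned above (not the paper's faster iterative variant) can be illustrated with a subset-sum-style table over scaled integer masses. The side-chain masses, the scale factor and the function names below are all hypothetical stand-ins for the paper's weighted side chain list:

```python
def decompose_mass(target, weights, scale=1000):
    """Find one multiset of side-chain masses (with repetition) summing to target.

    Floating-point masses are scaled to integers (precision = 1/scale Da),
    mirroring the integer conversion described in the abstract. Returns a
    list of the original masses, or None if no decomposition exists.
    """
    t = round(target * scale)
    ws = [round(w * scale) for w in weights]
    ok = [False] * (t + 1)        # ok[m]: mass m is reachable
    back = [None] * (t + 1)       # back[m]: weight index used to reach m
    ok[0] = True
    for m in range(1, t + 1):
        for i, w in enumerate(ws):
            if w <= m and ok[m - w]:
                ok[m] = True
                back[m] = i
                break
    if not ok[t]:
        return None
    out, m = [], t                # walk back pointers to recover one solution
    while m:
        i = back[m]
        out.append(weights[i])
        m -= ws[i]
    return out
```

Note that the table size, and hence the running time O(t · k), grows linearly with `scale`: raising the mass precision blows up the integer target, which is exactly the degradation the paper's iterative DP is designed to avoid.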
Jin, Dakai; Iyer, Krishna S; Chen, Cheng; Hoffman, Eric A; Saha, Punam K
2016-06-01
Conventional curve skeletonization algorithms using the principle of Blum's transform often produce unwanted spurious branches due to boundary irregularities, digital effects, and other artifacts. This paper presents a new robust and efficient curve skeletonization algorithm for three-dimensional (3-D) elongated fuzzy objects using a minimum cost path approach, which avoids spurious branches without requiring post-pruning. Starting from a root voxel, the method iteratively expands the skeleton by adding new branches in each iteration that connect the farthest quench voxel to the current skeleton using a minimum cost path. The path-cost function is formulated using a novel measure of local significance factor defined by the fuzzy distance transform field, which forces the path to stick to the centerline of an object. The algorithm terminates when dilated skeletal branches fill the entire object volume or the current farthest quench voxel fails to generate a meaningful skeletal branch. Accuracy of the algorithm has been evaluated using computer-generated phantoms with known skeletons. Performance of the method in terms of false and missing skeletal branches, as defined by human experts, has been examined using in vivo CT imaging of human intrathoracic airways. Results from both experiments have established the superiority of the new method as compared to the existing methods in terms of accuracy as well as robustness in detecting true and false skeletal branches. The new algorithm makes a significant reduction in computation complexity by enabling detection of multiple new skeletal branches in one iteration. Specifically, this algorithm reduces the number of iterations from the number of terminal tree branches to the worst case performance of tree depth. In fact, experimental results suggest that, on average, the order of computation complexity is reduced to the logarithm of the number of terminal branches of a tree-like object.
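The minimum cost path at the core of each branch-extraction step can be illustrated with a Dijkstra-style search over a cost field. The 2-D grid and uniform neighbor costs below are an illustrative simplification; the paper's actual cost is the fuzzy-distance-based local significance factor over 3-D voxels:

```python
import heapq

def min_cost_path(cost, start, goal):
    """Dijkstra search on a 2-D cost field with 4-connectivity.

    cost[r][c] is the price of entering cell (r, c); low-cost cells play
    the role of centerline voxels. Returns (path, total_cost) where path
    is the list of cells from start to goal.
    """
    h, w = len(cost), len(cost[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    prev = {}
    dist[start[0]][start[1]] = cost[start[0]][start[1]]
    pq = [(dist[start[0]][start[1]], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                nd = d + cost[rr][cc]
                if nd < dist[rr][cc]:
                    dist[rr][cc] = nd
                    prev[(rr, cc)] = (r, c)
                    heapq.heappush(pq, (nd, (rr, cc)))
    path, node = [], goal
    while node != start:   # walk predecessors back to the start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal[0]][goal[1]]
```

Because the search minimizes accumulated cost rather than geometric length, the path detours through low-cost (centerline-like) cells, which is the mechanism that suppresses spurious off-center branches.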
Directory of Open Access Journals (Sweden)
Shaat Musbah
2010-01-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under both total power and interference introduced to the primary users (PUs) constraints. The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near optimal performance and proves the efficiency of using FBMC in the CR context.
Directory of Open Access Journals (Sweden)
Sandeep Pirbhulal
2015-06-01
A Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body that monitor vital signals, such as Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in a BSN delivers major information; therefore, it is very significant to provide data confidentiality and security. All existing approaches to secure BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume a large amount of energy, power and memory during data transmission. However, it is indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) in a simple key generation process to secure BSNs. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES) and Rivest-Shamir-Adleman (RSA). Simulation is performed in Matlab and results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy and total power consumption.
Becker, Hanna; Albera, Laurent; Comon, Pierre; Kachenoura, Amar; Merlet, Isabelle
2017-01-01
As a noninvasive technique, electroencephalography (EEG) is commonly used to monitor the brain signals of patients with epilepsy, such as interictal epileptic spikes. However, the recorded data are often corrupted by artifacts originating, for example, from muscle activities, which may have much higher amplitudes than the interictal epileptic signals of interest. To remove these artifacts, a number of independent component analysis (ICA) techniques have been successfully applied. In this paper, we propose a new deflation ICA algorithm, called penalized semialgebraic unitary deflation (P-SAUD), that improves upon classical ICA methods by leading to a considerably reduced computational complexity at equivalent performance. This is achieved by employing a penalized semialgebraic extraction scheme, which permits us to identify the epileptic components of interest (interictal spikes) first and obviates the need of extracting subsequent components. The proposed method is evaluated on physiologically plausible simulated EEG data and actual measurements of three patients. The results are compared to those of several popular ICA algorithms as well as second-order blind source separation methods, demonstrating that P-SAUD extracts the epileptic spikes with the same accuracy as the best ICA methods, but reduces the computational complexity by a factor of 10 for 32-channel recordings. This superior computational efficiency is of particular interest considering the increasing use of high-resolution EEG recordings, whose analysis requires algorithms with low computational cost.
Gan, Zecheng
2013-01-01
Computer simulation with Monte Carlo is an important tool to investigate the function and equilibrium properties of many systems of biological and soft matter materials soluble in solvents. The appropriate treatment of long-range electrostatic interactions is essential for these charged systems, but remains a challenging problem for large-scale simulations. We have developed an efficient Barnes-Hut treecode algorithm for electrostatic evaluation in Monte Carlo simulations of Coulomb many-body systems. The algorithm is based on a divide-and-conquer strategy and a fast update of the octree data structure in each trial move through a local adjustment procedure. We test the accuracy of the tree algorithm, and use it in computer simulations of the electric double layer near a spherical interface. It has been shown that the computational cost of the Monte Carlo method with treecode acceleration scales as $\log N$ in each move. For a typical system with ten thousand particles, by using the new algorithm, the speed has b...
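The monopole approximation at the heart of a Barnes-Hut treecode can be illustrated with a one-level sketch in Python. This is an illustrative toy, not the paper's octree with local trial-move updates: particles in the same or adjacent cells interact directly, while each distant cell is collapsed to its total charge at the cell's mean particle position.

```python
import math
import random

def direct_energy(pos, q):
    """Reference O(N^2) pairwise Coulomb energy (unit prefactor)."""
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            E += q[i] * q[j] / math.dist(pos[i], pos[j])
    return E

def cell_energy(pos, q, ncell=3, box=1.0):
    """One-level monopole approximation: near cells are summed directly,
    far cells are replaced by (total charge, centroid). A real Barnes-Hut
    treecode applies this idea recursively on an octree."""
    h = box / ncell
    cells = {}
    for i, p in enumerate(pos):
        key = tuple(min(int(c / h), ncell - 1) for c in p)
        cells.setdefault(key, []).append(i)
    moments = {}
    for key, idx in cells.items():
        Q = sum(q[i] for i in idx)
        ctr = tuple(sum(pos[i][d] for i in idx) / len(idx) for d in range(3))
        moments[key] = (Q, ctr)
    E = 0.0
    for key, idx in cells.items():
        for other, (Q, ctr) in moments.items():
            near = all(abs(a - b) <= 1 for a, b in zip(key, other))
            for i in idx:
                if near:  # direct sum over near-field particles
                    for j in cells[other]:
                        if i != j:
                            E += 0.5 * q[i] * q[j] / math.dist(pos[i], pos[j])
                else:     # far field: single particle-cell monopole term
                    E += 0.5 * q[i] * Q / math.dist(pos[i], ctr)
    return E
```

The 0.5 factors compensate for each ordered pair (or particle-cell pair) being visited twice, so near-field energies match the direct sum exactly and only far-field terms carry the multipole truncation error.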
Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de
2017-11-05
Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset, and while achieving around the same Normalized Mean Error (NME) as a traditional adaptive sampling algorithm (ASA), DDASA saves 5.31% more battery energy.
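The idea of data-driven adaptive sampling can be sketched as a simple rule that tightens the sampling interval when the monitored parameter (e.g., DO or turbidity) changes quickly and relaxes it when the signal is stable. The threshold and the halving/doubling rule below are illustrative assumptions, not the published DDASA update law.

```python
def adaptive_interval(prev, curr, interval, lo=1.0, hi=60.0, thresh=0.5):
    """Hypothetical DDASA-style rule (illustrative, not the paper's):
    if the reading changed by more than `thresh` since the last sample,
    halve the sampling interval (sample more often); otherwise double it
    toward the maximum interval `hi` to save battery energy."""
    if abs(curr - prev) > thresh:
        return max(lo, interval / 2)   # fast dynamics: sample more often
    return min(hi, interval * 2)       # stable signal: sample less often
```

Clamping to `[lo, hi]` keeps the node from either saturating its radio or missing events entirely; a real deployment would tune these bounds to the sensor's power budget.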
I/O-Efficient Algorithms for Computing Contour Lines on a Terrain
DEFF Research Database (Denmark)
Agarwal, Pankaj Kumar; Arge, Lars; Sadri, Bardia
2008-01-01
each other or collapse to a point. We present I/O-efficient algorithms for the following two problems related to computing contours of M: (i) Given a sequence l1 < ... < ls of heights, an algorithm that reports all contours of M at heights l1, ..., ls using O(sort(N) + T/B) I/Os, where T is the total number of edges in the output contours, B is the "block size," and sort(N) is the number of I/Os needed to sort N elements. The algorithm uses O(N/B) disk blocks. Each contour is generated individually with its composing segments sorted in clockwise or counterclockwise order. ... Moreover, our algorithm generates information on how the contours are nested. (ii) We can preprocess M, using O(sort(N)) I/Os, into a linear-size data structure so that all contours at a given height can be reported using O(log_B N + T/B) I/Os, where T is the output size. Each contour is generated...
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state accordingly, where a reaction is chosen with probability proportional to its propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
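The rejection step on propensity bounds can be sketched as follows. This is a minimal illustration: the real algorithm additionally groups reactions by bound magnitude so that candidate selection is O(1), rather than the linear search used here.

```python
import random

def select_reaction(bounds, propensity, rng=random.random):
    """Rejection sampling on propensity upper bounds: draw a candidate
    reaction with probability proportional to its bound, then accept it
    with probability (true propensity / bound). The scheme is exact and
    avoids recomputing every propensity at each step, because only the
    candidate's true propensity is evaluated."""
    total = sum(bounds)
    while True:
        u = rng() * total
        for j, b in enumerate(bounds):   # linear search for the candidate
            if u < b:
                break
            u -= b
        if rng() * b < propensity(j):    # rejection test against the bound
            return j
```

Because rejected candidates simply trigger another draw, the returned reaction index is distributed exactly according to the true propensities, as long as every bound dominates its propensity.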
A new hardware-efficient algorithm and reconfigurable architecture for image contrast enhancement.
Huang, Shih-Chia; Chen, Wen-Chieh
2014-10-01
Contrast enhancement is crucial when generating high quality images for image processing applications, such as digital image or video photography, liquid crystal display processing, and medical image analysis. In order to achieve real-time performance for high-definition video applications, it is necessary to design efficient contrast enhancement hardware architecture that meets the needs of real-time processing. In this paper, we propose a novel hardware-oriented contrast enhancement algorithm which can be implemented effectively in hardware. To make the algorithm suitable for hardware implementation, approximation techniques are proposed that reduce the complex computations performed during contrast enhancement. The proposed hardware-oriented contrast enhancement algorithm achieves good image quality, as shown by qualitative and quantitative analyses. To decrease hardware cost and improve hardware utilization for real-time performance, a reduction in circuit area is achieved through the use of a parameter-controlled reconfigurable architecture. The experimental results show that the proposed hardware-oriented contrast enhancement algorithm can provide an average frame rate of 48.23 frames/s at high-definition resolution 1920 × 1080.
An efficient algorithm for classical density functional theory in three dimensions: ionic solutions.
Knepley, Matthew G; Karpeev, Dmitry A; Davidovits, Seth; Eisenberg, Robert S; Gillespie, Dirk
2010-03-28
Classical density functional theory (DFT) of fluids is a valuable tool to analyze inhomogeneous fluids. However, few numerical solution algorithms for three-dimensional systems exist. Here we present an efficient numerical scheme for fluids of charged, hard spheres that uses O(N log N) operations and O(N) memory, where N is the number of grid points. This system-size scaling is significant because of the very large N required for three-dimensional systems. The algorithm uses fast Fourier transforms (FFTs) to evaluate the convolutions of the DFT Euler-Lagrange equations and Picard (iterative substitution) iteration with line search to solve the equations. The pros and cons of this FFT/Picard technique are compared to those of alternative solution methods that use real-space integration of the convolutions instead of FFTs and Newton iteration instead of Picard. For the hard-sphere DFT, we use fundamental measure theory. For the electrostatic DFT, we present two algorithms. One is for the "bulk-fluid" functional of Rosenfeld [Y. Rosenfeld, J. Chem. Phys. 98, 8126 (1993)] that uses O(N log N) operations. The other is for the "reference fluid density" (RFD) functional [D. Gillespie et al., J. Phys.: Condens. Matter 14, 12129 (2002)]. This functional is significantly more accurate than the bulk-fluid functional, but the RFD algorithm requires O(N^2) operations.
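A Picard iteration of the kind described can be sketched for a toy one-dimensional fixed-point equation of Euler-Lagrange form, rho = rho0 * exp(-(w convolved with rho)). The convolution is written directly for brevity where the paper's scheme would use FFTs, and a fixed damping parameter stands in for the line search; the equation and parameters are illustrative, not the paper's functionals.

```python
import math

def circular_conv(a, b):
    """O(N^2) circular convolution; a production code would use FFTs to
    bring this to O(N log N), which is the point of the paper's scheme."""
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

def picard_solve(rho0, w, mix=0.1, tol=1e-10, maxit=5000):
    """Damped Picard (iterative substitution) iteration for the toy
    equation rho = rho0 * exp(-(w conv rho)); `mix` damps the update in
    place of a line search."""
    rho = list(rho0)
    for _ in range(maxit):
        conv = circular_conv(w, rho)
        new = [r0 * math.exp(-c) for r0, c in zip(rho0, conv)]
        err = max(abs(a - b) for a, b in zip(new, rho))
        rho = [(1 - mix) * r + mix * nw for r, nw in zip(rho, new)]
        if err < tol:
            return rho
    return rho
```

At convergence the returned profile satisfies the fixed-point equation to within the tolerance, which is easy to verify by substituting it back in.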
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
Lyakh, Dmitry I.
2015-04-01
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). Particular attention is paid to higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
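The cache-blocking idea can be illustrated for the simplest case, a 2-index tensor (matrix) transpose over flat row-major storage. The tile size `bs` plays the role of the cache block on a CPU or the shared-memory tile on a GPU; this is a scalar sketch of the access pattern, not the library's optimized kernel.

```python
def blocked_transpose(a, n, m, bs=32):
    """Cache-blocked out-of-place transpose of an n x m row-major matrix
    stored as a flat list. Processing bs x bs tiles keeps both the reads
    (row-major in `a`) and the writes (row-major in `out`) within a small
    working set, which is the cache/shared-memory optimization idea."""
    out = [0] * (n * m)
    for i0 in range(0, n, bs):
        for j0 in range(0, m, bs):
            for i in range(i0, min(i0 + bs, n)):
                for j in range(j0, min(j0 + bs, m)):
                    out[j * n + i] = a[i * m + j]
    return out
```

A higher-dimensional tensor transpose generalizes this by permuting index strides instead of swapping two of them, but the tiling principle is the same.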
DEFF Research Database (Denmark)
Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik
2016-01-01
This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms....... This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to the maximum likelihood estimation based on finite difference gradient computation, we get a significant speedup...
Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery.
Reichard, Daniel; Bodenstedt, Sebastian; Suwelack, Stefan; Mayer, Benjamin; Preukschas, Anas; Wagner, Martin; Kenngott, Hannes; Müller-Stich, Beat; Dillmann, Rüdiger; Speidel, Stefanie
2015-10-01
The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention, e.g., using augmented reality. To display preoperative data, soft tissue deformations that occur during surgery have to be taken into consideration. Laparoscopic sensors, such as stereo endoscopes, can be used to create a three-dimensional reconstruction of stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just one frame, in general, will not provide enough detail to register preoperative data, since every frame only contains a part of an organ surface. A correct assignment to the preoperative model is possible only if the patch geometry can be unambiguously matched to a part of the preoperative surface. We propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. Using graphics processing unit-based methods, we achieved four frames per second. We evaluated the system with in silico, phantom, ex vivo, and in vivo (porcine) data, using different methods for estimating the camera pose (optical tracking, iterative closest point, and a combination). The results indicate that the proposed method is promising for on-the-fly organ reconstruction and registration.
Efficient hybrid evolutionary algorithm for optimization of a strip coiling process
Pholdee, Nantiwat; Park, Won-Woong; Kim, Dong-Kyu; Im, Yong-Taek; Bureerat, Sujin; Kwon, Hyuck-Cheol; Chun, Myung-Sik
2015-04-01
This article proposes an efficient metaheuristic based on hybridization of teaching-learning-based optimization and differential evolution for optimization to improve the flatness of a strip during a strip coiling process. Differential evolution operators were integrated into the teaching-learning-based optimization with a Latin hypercube sampling technique for generation of an initial population. The objective function was introduced to reduce axial inhomogeneity of the stress distribution and the maximum compressive stress calculated by Love's elastic solution within the thin strip, which may cause an irregular surface profile of the strip during the strip coiling process. The hybrid optimizer and several well-established evolutionary algorithms (EAs) were used to solve the optimization problem. The comparative studies show that the proposed hybrid algorithm outperformed other EAs in terms of convergence rate and consistency. It was found that the proposed hybrid approach was powerful for process optimization, especially with a large-scale design problem.
An Efficient Algorithm for the Reflexive Solution of the Quaternion Matrix Equation AXB + CX^H D = F
Directory of Open Access Journals (Sweden)
Ning Li
2013-01-01
Full Text Available We propose an iterative algorithm for finding the reflexive solution of the quaternion matrix equation AXB + CX^H D = F. When the matrix equation is consistent over a reflexive matrix X, a reflexive solution can be obtained within finitely many iteration steps in the absence of roundoff errors. By the proposed iterative algorithm, the least Frobenius norm reflexive solution of the matrix equation can be derived when an appropriate initial iterative matrix is chosen. Furthermore, the optimal approximate reflexive solution to a given reflexive matrix X0 can be derived by finding the least Frobenius norm reflexive solution of a new corresponding quaternion matrix equation. Finally, two numerical examples are given to illustrate the efficiency of the proposed methods.
Einstein, Daniel R; Kuprat, Andrew P; Jiao, Xiangmin; Carson, James P; Einstein, David M; Jacob, Richard E; Corley, Richard A
2013-01-01
Geometries for organ scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that is relevant to such simulations, either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: (i) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; (ii) the mapping of serial cryosection histology data to an unstructured mouse brain grid; and (iii) the mapping of computed tomography-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case. Copyright © 2012 John Wiley & Sons, Ltd.
New Design Methods And Algorithms For High Energy-Efficient And Low-cost Distillation Processes
Energy Technology Data Exchange (ETDEWEB)
Agrawal, Rakesh [Purdue Univ., West Lafayette, IN (United States)
2013-11-21
This project sought and successfully answered two big challenges facing the creation of low-energy, cost-effective, zeotropic multi-component distillation processes: first, identification of an efficient search space that includes all the useful distillation configurations and no undesired configurations; second, development of an algorithm to search the space efficiently and generate an array of low-energy options for industrial multi-component mixtures. Such mixtures are found in large-scale chemical and petroleum plants. Commercialization of our results was addressed by building a user interface allowing practical application of our methods for industrial problems by anyone with basic knowledge of distillation for a given problem. We also provided our algorithm to a major U.S. Chemical Company for use by the practitioners. The successful execution of this program has provided methods and algorithms at the disposal of process engineers to readily generate low-energy solutions for a large class of multicomponent distillation problems in a typical chemical and petrochemical plant. In a petrochemical complex, the distillation trains within crude oil processing, hydrotreating units containing alkylation, isomerization, reformer, LPG (liquefied petroleum gas) and NGL (natural gas liquids) processing units can benefit from our results. Effluents from naphtha crackers and ethane-propane crackers typically contain mixtures of methane, ethylene, ethane, propylene, propane, butane and heavier hydrocarbons. We have shown that our systematic search method with a more complete search space, along with the optimization algorithm, has a potential to yield low-energy distillation configurations for all such applications with energy savings up to 50%.
Protein alignment algorithms with an efficient backtracking routine on multiple GPUs
Directory of Open Access Journals (Sweden)
Kierzynka Michal
2011-05-01
Full Text Available Abstract Background Pairwise sequence alignment methods are widely used in biological research. The increasing number of sequences is perceived as one of the upcoming challenges for sequence alignment methods in the near future. To overcome this challenge several GPU (Graphics Processing Unit) computing approaches have been proposed lately. These solutions show the great potential of a GPU platform but in most cases address the problem of sequence database scanning and compute only the alignment score, whereas the alignment itself is omitted. Thus, the need arose to implement the global and semiglobal Needleman-Wunsch and the Smith-Waterman algorithms with a backtracking procedure, which is needed to construct the alignment. Results In this paper we present a solution that performs the alignment of every given sequence pair, which is a required step for progressive multiple sequence alignment methods, as well as for DNA recognition at the DNA assembly stage. Performed tests show that the implementation, with performance up to 6.3 GCUPS on a single GPU for affine gap penalties, is very efficient in comparison to other CPU- and GPU-based solutions. Moreover, multiple GPUs support with load balancing makes the application very scalable. Conclusions The article shows that the backtracking procedure of the sequence alignment algorithms may be designed to fit in with the GPU architecture. Therefore, our algorithm, apart from scores, is able to compute pairwise alignments. This opens a wide range of new possibilities, allowing other methods from the area of molecular biology to take advantage of the new computational architecture. Performed tests show that the efficiency of the implementation is excellent. Moreover, the speed of our GPU-based algorithms can be almost linearly increased when using more than one graphics card.
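For reference, the Needleman-Wunsch recurrence together with the backtracking step (the part the paper maps onto the GPU) looks as follows in a scalar sketch. A linear gap penalty is used for brevity, whereas the paper supports affine gaps.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment: fill the DP score matrix H, then backtrack from
    the bottom-right corner to reconstruct one optimal alignment."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        H[i][0] = i * gap
    for j in range(1, m + 1):
        H[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(H[i - 1][j - 1] + s, H[i - 1][j] + gap, H[i][j - 1] + gap)
    # backtracking: retrace which of the three moves produced each cell
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and H[i][j] == H[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and H[i][j] == H[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return H[n][m], ''.join(reversed(out_a)), ''.join(reversed(out_b))
```

The GPU implementation's challenge is precisely this second phase: the score fill parallelizes along anti-diagonals, but the backtrace is an inherently sequential walk per sequence pair.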
Directory of Open Access Journals (Sweden)
Thang Trung Nguyen
2016-01-01
Full Text Available This paper proposes an efficient Cuckoo-Inspired Meta-Heuristic Algorithm (CIMHA for solving multi-objective short-term hydrothermal scheduling (ST-HTS problem. The objective is to simultaneously minimize the total cost and emission of thermal units while all constraints such as power balance, water discharge, and generation limitations must be satisfied. The proposed CIMHA is a newly developed meta-heuristic algorithm inspired by the intelligent reproduction strategy of the cuckoo bird. It is efficient for solving optimization problems with complicated objective and constraints because the method has few control parameters. The proposed method has been tested on different systems with various numbers of objective functions, and the obtained results have been compared to those from other methods available in the literature. The result comparisons have indicated that the proposed method is more efficient than many other methods for the test systems in terms of total cost, total emission, and computational time. Therefore, the proposed CIMHA can be a favorable method for solving the multi-objective ST-HTS problems.
Jozsa, Richard
2003-08-01
Pell's equation is x² − dy² = 1, where d is a square-free integer and we seek positive integer solutions x, y > 0. Let (x0, y0) be the smallest solution (i.e., the one with smallest A = x0 + y0·√d). Lagrange showed that every solution can easily be constructed from A, so given d it suffices to compute A. It is known that A can be exponentially large in d, so just to write down A we need time exponential in the input size log d. Hence we introduce the regulator R = ln A and ask for the value of R to n decimal places. The best known classical algorithm has sub-exponential running time O(exp(√(log d)) poly(n)). Hallgren's quantum algorithm gives the result in polynomial time O(poly(log d) poly(n)) with probability 1/poly(log d). The idea of the algorithm falls into two parts: using the formalism of algebraic number theory, we convert the problem of solving Pell's equation into the problem of determining R as the period of a function on the real numbers. Then we generalise the quantum Fourier transform period finding algorithm to work in this situation of an irrational period on the (not finitely generated) abelian group of real numbers. This paper is intended to be accessible to a reader having no prior acquaintance with algebraic number theory; we give a self-contained account of all the necessary concepts and we give elementary proofs of all the results needed. Then we go on to describe Hallgren's generalisation of the quantum period finding algorithm, which provides the efficient computational solution of Pell's equation in the above sense.
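The classical route to the fundamental solution uses the continued fraction expansion of √d. The sketch below illustrates exactly the computation whose output (and hence running time) can be exponential in log d, which is why the quantum algorithm targets the regulator R = ln A instead of (x0, y0) themselves.

```python
import math

def pell_fundamental(d):
    """Smallest positive solution of x^2 - d*y^2 = 1 via the continued
    fraction expansion of sqrt(d): generate convergents p/q until one
    satisfies Pell's equation."""
    a0 = math.isqrt(d)
    if a0 * a0 == d:
        raise ValueError("d must not be a perfect square")
    m, den, a = 0, 1, a0
    p_prev, p = 1, a0              # convergent numerators
    q_prev, q = 0, 1               # convergent denominators
    while p * p - d * q * q != 1:
        m = den * a - m            # standard CF recurrence for sqrt(d)
        den = (d - m * m) // den
        a = (a0 + m) // den
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
    return p, q

def regulator(d):
    """R = ln(x0 + y0*sqrt(d)): a number with only ~log(A) digits even
    when x0 and y0 themselves are astronomically large."""
    x, y = pell_fundamental(d)
    return math.log(x + y * math.sqrt(d))
```

For small d this loop terminates quickly (d = 2 gives (3, 2)), but already modest d such as 61 produce fundamental solutions with many digits, foreshadowing the exponential blow-up discussed above.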
Efficient mesh motion using radial basis functions with volume grid points reduction algorithm
Xie, Liang; Liu, Hong
2017-11-01
As one of the most robust mesh deformation techniques available, radial basis function (RBF) mesh deformation has been widely accepted. However, for volume mesh deformation driven by surface motion, the RBF system may become impractical for large meshes due to the large number of both surface (control) points and volume points. A surface point selection procedure based on the greedy algorithm yields an efficient implementation of the RBF-based mesh deformation procedure: the greedy algorithm reduces the number of surface points involved in the RBF interpolation while maintaining acceptable accuracy, as shown in the literature. To improve the efficiency of the RBF method further, we address the question of how to reduce the number of volume points that need to be moved. In this paper, we propose an algorithm for volume point reduction based on a wall-distance-based restricting function added to the formulation of the RBF interpolation. This restricting function is first introduced in the present article. To support large deformations, a multi-level subspace interpolation is essential, although this technique was previously used to improve the efficiency of the surface point selection procedure in the existing literature. The key point of this technique is to set the error of the previous interpolation step as the target of the current step while gradually restricting the interpolation region. Because the error tolerance is decreased hierarchically, the number of surface points increases but the number of volume points that need to be moved is gradually reduced. The CPU cost of updating the mesh motion is therefore reduced, since it scales with the product of these two numbers. The computational requirement of the proposed procedure is significantly lower than that of the standard procedure, as demonstrated by several examples.
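The basic RBF interpolation of surface displacements onto volume points can be sketched in one dimension. A compactly supported basis already confines the motion to points near the surface, which is the effect the proposed wall-distance restricting function generalizes; the Wendland C2 basis, the support radius, and the tiny dense solver below are illustrative choices, not the paper's formulation.

```python
def rbf(r, R=2.0):
    """Wendland C2 compactly supported basis: zero beyond radius R, so
    volume points far from every control point receive no displacement."""
    t = r / R
    return (1 - t) ** 4 * (4 * t + 1) if t < 1 else 0.0

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (adequate for the
    handful of control points used here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def rbf_deform(ctrl, disp, pts):
    """Fit RBF weights to the surface (control) point displacements, then
    evaluate the interpolant at the volume points (1-D coordinates)."""
    A = [[rbf(abs(ci - cj)) for cj in ctrl] for ci in ctrl]
    w = solve(A, disp)
    return [sum(wk * rbf(abs(p - ck)) for wk, ck in zip(w, ctrl)) for p in pts]
```

By construction the interpolant reproduces the prescribed displacements at the control points exactly, while points outside the support of all basis functions stay fixed, which is the behavior the volume point reduction exploits.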
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires on the order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
Directory of Open Access Journals (Sweden)
A. Schroeder
2012-09-01
Full Text Available This paper proposes a compression of far field matrices in the fast multipole method and its multilevel extension for electromagnetic problems. The compression is based on a spherical harmonic representation of radiation patterns in conjunction with a radiating mode expression of the surface current. The method is applied to study near field effects and the far field of an antenna placed on a ship surface. Furthermore, the electromagnetic scattering of an electrically large plate is investigated. It is demonstrated, that the proposed technique leads to a significant memory saving, making multipole algorithms even more efficient without compromising the accuracy.
Directory of Open Access Journals (Sweden)
Sergey Kharitonov
2015-06-01
Full Text Available Optimal usage of transport infrastructure is an important aspect of the development of the national economy of the Russian Federation. The development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation for one transport subsystem, namely airport infrastructure. The work also evaluates the potential of algorithmic computational mechanisms to improve the tools of public administration for transport subsystems.
An Efficient Topology-Based Algorithm for Transient Analysis of Power Grid
Yang, Lan
2015-08-10
In the design flow of integrated circuits, chip-level verification is an important step that checks whether the performance is as expected. Power grid verification is one of the most expensive and time-consuming steps of chip-level verification, due to the extremely large size of the grid. Efficient power grid analysis technology is in high demand, as it saves computing resources and enables faster iteration. In this paper, a topology-based power grid transient analysis algorithm is proposed. Nodal analysis is adopted to analyze the topology, which is mathematically equivalent to iteratively solving a positive semi-definite linear system. The convergence of the method is proved.
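Iteratively solving the positive (semi-)definite system arising from nodal analysis is commonly done with a Krylov method. A textbook conjugate-gradient sketch is given below; it is dense and unpreconditioned for brevity, whereas a practical power grid solver would exploit the sparsity of the grid matrix (and is not necessarily the paper's iteration).

```python
def conjugate_gradient(A, b, tol=1e-10, maxit=1000):
    """Conjugate gradients for a symmetric positive-definite system Ax = b,
    the kind of system nodal analysis of a power grid produces at each
    transient time step."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0 initially
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol ** 2:     # residual norm small enough: done
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps, and with a good preconditioner far fewer, which is what makes iterative solves attractive for grids with millions of nodes.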
An efficient algorithm for some highly nonlinear fractional PDEs in mathematical physics.
Ahmad, Jamshad; Mohyud-Din, Syed Tauseef
2014-01-01
In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs) and subsequently Reduced Differential Transform Method (RDTM) is applied on the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by making use of inverse transformation which yields it in terms of original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature.
Weighing Efficiency-Robustness in Supply Chain Disruption by Multi-Objective Firefly Algorithm
Directory of Open Access Journals (Sweden)
Tong Shu
2016-03-01
Full Text Available This paper investigates various supply chain disruptions in terms of scenario planning, including node disruption and chain disruption, namely disruptions in distribution centers and disruptions between manufacturing centers and distribution centers. It also considers simultaneous disruptions of one or more nodes and of one or more chains, and develops the corresponding mathematical models, exemplified for multiple manufacturing centers and diverse products. Robustness of the supply chain network design is examined by weighing efficiency against robustness under disruption. Efficiency is represented by operating cost; robustness is indicated by the expected disruption cost; and the trade-off is computed with the multi-objective firefly algorithm for consistency in the results. It is shown that the total cost achieved by the optimal objective function is lower than that of the most efficient supply chain configuration. In other words, the decrease in expected disruption cost obtained by improving robustness is greater than the increase in operating cost from reduced efficiency, yielding a cost advantage. Consequently, by approximating the Pareto front of the efficiency-robustness trade-off, enterprises can choose a balance of efficiency and robustness appropriate for their longer-term development.
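The trade-off above is computed with the firefly metaheuristic. As a rough, hedged illustration of the underlying mechanics (a minimal single-objective firefly iteration, not the authors' multi-objective variant; the quadratic cost is a stand-in for the combined operating/disruption cost):

```python
import math
import random

def firefly_step(pop, f, beta0=1.0, gamma=1.0, alpha=0.1):
    """One firefly iteration: each firefly moves toward every brighter
    (lower-cost) one, with attractiveness decaying with distance."""
    new_pop = []
    for xi in pop:
        x = list(xi)
        for xj in pop:
            if f(xj) < f(tuple(x)):            # xj is brighter
                r2 = sum((a - b) ** 2 for a, b in zip(x, xj))
                beta = beta0 * math.exp(-gamma * r2)
                x = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                     for a, b in zip(x, xj)]
        new_pop.append(tuple(x))
    return new_pop

random.seed(0)
cost = lambda x: sum(v * v for v in x)         # stand-in cost function
pop = [tuple(random.uniform(-2, 2) for _ in range(2)) for _ in range(8)]
init_best = min(cost(x) for x in pop)
for _ in range(50):
    pop = firefly_step(pop, cost)
best = min(pop, key=cost)
```

Note that the brightest firefly never moves during a step, so the best cost in the population is non-increasing across iterations.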
An efficient algorithm using matrix methods to solve wind tunnel force-balance equations
Smith, D. L.
1972-01-01
An iterative procedure applying matrix methods to accomplish an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data has been developed. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
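The abstract does not give the balance equations explicitly; a common form is a linear sensitivity matrix plus small nonlinear interaction terms, R = S F + C(F), solved by fixed-point iteration. The following sketch assumes that form (the 2-component matrix and interaction coefficients are invented for illustration):

```python
import numpy as np

def solve_balance(R, S, interaction, tol=1e-10, max_iter=100):
    """Fixed-point iteration for balance loads F:
       R = S @ F + interaction(F)  =>  F <- S^{-1} (R - interaction(F))."""
    S_inv = np.linalg.inv(S)
    F = S_inv @ R                        # first estimate ignores interactions
    for _ in range(max_iter):
        F_new = S_inv @ (R - interaction(F))
        if np.max(np.abs(F_new - F)) < tol:   # convergence accuracy limit
            return F_new
        F = F_new
    return F

# Hypothetical 2-component balance with small quadratic interactions
S = np.array([[1.0, 0.05],
              [0.02, 1.0]])              # primary sensitivities
def interaction(F):                       # second-order interaction terms
    return np.array([0.01 * F[1] ** 2, 0.005 * F[0] * F[1]])

R = np.array([2.0, 1.0])                  # measured balance readings
F = solve_balance(R, S, interaction)
```

When the interaction terms are small relative to the primary sensitivities (the convergence criterion the abstract alludes to), the iteration contracts to a unique solution.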
Implementing O(N) N-Body Algorithms Efficiently in Data-Parallel Languages
Directory of Open Access Journals (Sweden)
Yu Hu
1996-01-01
Full Text Available The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Communication accounts for about 10-20% of the total execution time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.
Directory of Open Access Journals (Sweden)
Syed Bilal Hussain Shah
2017-01-01
Full Text Available In Wireless Sensor Networks (WSNs), researchers' main focus is on energy preservation and prolonging network lifetime. More energy resources are required in remote applications of WSNs, where some nodes die early, shortening the lifetime and decreasing the stability of the network. This is mainly caused by non-optimal Cluster Head (CH) selection based on a single criterion and by uneven distribution of energy. We propose a new clustering protocol for both homogeneous and heterogeneous environments, named Optimized Path planning algorithm with Energy efficiency and Extending Network lifetime in WSN (OPEN). In the proposed protocol, a timer-value concept is used for CH selection based on multiple criteria. Simulation results show that OPEN outperforms existing protocols in terms of network lifetime, throughput and stability. The results explicitly explain the cluster head selection of the OPEN protocol and its efficient solution to the uneven energy distribution problem.
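The abstract does not state OPEN's timer formula; the sketch below shows the general timer-value idea with an invented multi-criteria score (weights, normalizers and node values are all hypothetical): stronger candidates get shorter timers, so they announce themselves as CH first.

```python
def ch_timer(energy, dist_to_sink, degree, e_max, d_max, deg_max, t_max=1.0):
    """Illustrative multi-criteria timer (not OPEN's exact formula):
    high residual energy, proximity to sink and many neighbours all
    shorten the timer."""
    score = (energy / e_max + (1 - dist_to_sink / d_max) + degree / deg_max) / 3
    return t_max * (1 - score)

# node -> (residual energy in J, distance to sink in m, neighbour count)
nodes = {"n1": (0.9, 40.0, 6), "n2": (0.5, 20.0, 4), "n3": (0.7, 80.0, 8)}
timers = {n: ch_timer(e, d, k, e_max=1.0, d_max=100.0, deg_max=10)
          for n, (e, d, k) in nodes.items()}
cluster_head = min(timers, key=timers.get)   # first timer to expire wins
```

Because the timer depends on several criteria at once, a node that is merely close to the sink but low on energy does not monopolize the CH role, which is the failure mode of single-criterion selection.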
Müllegger, Andreas; Ryba, Tracey
2017-02-01
Standardized production systems which can be implemented, programmed, maintained and sourced in a simple and efficient way are key for successful global production of automobiles or related parts at component suppliers. This is also valid for systems built on laser-based processes. One of the key applications is remote laser welding (RLW) of "Body in White" (BIW) parts (such as hang-on parts, B-pillars, side frames, etc.), but also built-in components (such as car seats, batteries, etc.). The majority of RLW applications are based on the implementation of a 3-D scanner optic (e.g. the PFO 3D from TRUMPF) which positions the laser beam on the various component surfaces to be welded. Over the past 10 years it has been proven that the most efficient way to build up the RLW process is a system where an industrial robot and a scanner optic are combined in one production cell. They usually cooperate in an "On-The-Fly" (OTF) process, as this ensures minimum cycle times. There are several technologies on the market which can coordinate both the robot and the scanner in OTF mode, but none of them meets all requirements of globally standardized production solutions. With the introduction of the I-PFO (Intelligent Programmable Focusing Optics) technology the situation has changed. It is now possible to program or adapt complex remote processes in a fast and easy way by the "Teach-in" function via the robot teach pendant. Additionally, a 3D offline designer software is an option for this system. It automatically creates the ideal remote process based on the part, fixture, production cell and required process parameters. The I-PFO technology does not need additional hardware because it runs on the controller within the PFO 3D. Furthermore, it works together with different types of industrial robots (e.g. ABB, Fanuc and KUKA), which allows the highest flexibility in the production planning phase. Finally a single TRUMPF laser source can supply
On resource-efficient algorithm for non-linear systems approximate reachability set construction
Parshikov, G. V.; Matviychuk, A. R.
2017-10-01
The research considers a numerical method for the reachability set construction problem for a non-linear dynamical system in n-dimensional Euclidean space. The study deals with a dynamical system on a finite time interval, described by a differential equation satisfying a set of defined conditions. Existing step-by-step pixel methods are based on sampling the time interval and applying a step-by-step reachability set construction procedure at every time moment in the partition. These methods make it possible to solve the approximate reachability set construction problem for complex non-linear systems that have no analytical solutions. However, applying them causes a sharp increase in the number of points used for constructing the reachability set at the next step of the time partition. This results in increased calculation time as well as exhaustion of the computing device's memory. To reduce the calculation time and satisfy the memory constraints of the device used, we developed a set filtration algorithm based on a particular way of picking the points that are used in the next step of the reachability set construction. Moreover, the computations are moved from the CPU to the GPU using CUDA, which allows us to run computations with hundreds of parallel threads. In this research, we describe the algorithm and give some information about its efficiency.
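The paper's exact filtration rule is not given in the abstract; one simple realization of the idea, in a pixel-method spirit, is to keep a single representative point per grid cell so the point count carried to the next time step is bounded by the number of cells:

```python
import random

def filter_points(points, h):
    """Keep one representative per grid cell of size h -- a simple
    stand-in for a set-filtration step that caps the number of points
    propagated to the next time step."""
    seen = {}
    for p in points:
        cell = tuple(int(round(c / h)) for c in p)
        seen.setdefault(cell, p)       # first point wins the cell
    return list(seen.values())

random.seed(1)
cloud = [(random.random(), random.random()) for _ in range(10000)]
reduced = filter_points(cloud, 0.05)   # at most 21 x 21 cells survive
```

The reduced set approximates the same region to within the cell size h, while the memory footprint no longer grows multiplicatively with each integration step.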
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
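The core space-mapping loop can be sketched in one dimension. In this hedged toy example (both models and the specification are invented; the "fine model" stands in for a full-wave FDFD solve), parameter extraction finds the coarse-model input reproducing the fine response, and the aggressive space-mapping update shifts the design accordingly:

```python
def extract(model, target_resp, lo=0.0, hi=10.0, iters=60):
    """Find the input whose response matches target_resp
    (model assumed monotonically increasing on [lo, hi])."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if model(mid) < target_resp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

coarse = lambda x: x ** 2                  # cheap transmission-line-style model
fine   = lambda x: (0.9 * x - 0.2) ** 2    # stand-in for the costly FDFD model
T = 4.0                                    # design specification on the response

x_c = extract(coarse, T)                   # coarse-model optimum: x = 2
x = x_c
for _ in range(8):                         # only a few fine-model evaluations
    p = extract(coarse, fine(x))           # parameter extraction
    x += x_c - p                           # aggressive space-mapping update
```

Each loop iteration costs one fine-model evaluation plus cheap coarse-model work, which is exactly where the reported computation savings come from.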
An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS
Directory of Open Access Journals (Sweden)
Shou Yu-Wen
2010-01-01
Full Text Available We propose an efficient algorithm for removing shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a brand-new and complete structure of feature combination and analysis for orientating and labeling moving shadows, so as to extract the defined objects in foregrounds more easily in each snapshot of the original video files, which are acquired in real traffic situations. Moreover, we make use of a Gaussian Mixture Model (GMM) for background removal and detection of moving shadows in our tested images, and define two indices for characterizing non-shadowed regions, where one indicates the characteristics of lines and the other is characterized by the gray-scale information of the images, which helps us build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving shadow algorithm, we apply it in a practical application of traffic flow detection in ITS (Intelligent Transportation Systems): vehicle counting. Our algorithm achieves a fast processing speed of 13.84 ms/frame and improves the accuracy rate by 4%-10% for our three tested videos in the vehicle-counting experiments.
An efficient similarity measure for content based image retrieval using memetic algorithm
Directory of Open Access Journals (Sweden)
Mutasem K. Alsmadi
2017-06-01
Full Text Available Content based image retrieval (CBIR) systems work by retrieving images related to the query image (QI) from huge databases. Available CBIR systems extract limited feature sets, which confines retrieval efficacy. In this work, an extensive set of robust and important features was extracted from the image database and stored in a feature repository. This feature set is composed of a color signature together with shape and color texture features. Features are extracted from the given QI in the same fashion. A novel similarity evaluation using a meta-heuristic algorithm called a memetic algorithm (a genetic algorithm with great deluge) is then performed between the features of the QI and those of the database images. The proposed CBIR system is assessed by querying a number of images (from the test dataset), and the efficiency of the system is evaluated by calculating precision-recall values for the results. The results were superior to other state-of-the-art CBIR systems with regard to precision.
Directory of Open Access Journals (Sweden)
Suman Sutradhar
2016-01-01
Full Text Available In this paper, a novel hybridization of two efficient metaheuristic algorithms is proposed for energy system analysis and modelling, based on a hydro and thermal power system in both single- and multiobjective environments. The scheduling of hydro and thermal power is modelled descriptively, including the handling of various practical nonlinear constraints. The main goal of the proposed modelling is to minimize the total production cost (a highly nonlinear and nonconvex problem) and emission while satisfying the hydro and thermal unit commitment limitations involved. The cascaded reservoirs of the hydro subsystem and the intertemporal constraints on the thermal units, along with the nonlinear, nonconvex, mixed-integer, mixed-binary objective function, make the search space highly complex. To solve such a complicated system, a hybridization of Gray Wolf Optimization and the Artificial Bee Colony algorithm, h-ABC/GWO, is used for better exploration and exploitation in the multidimensional search space. Two different test systems are used for modelling and analysis. Experimental results demonstrate the superior performance of the proposed algorithm as compared to other recently reported ones in terms of convergence and quality of solutions.
Directory of Open Access Journals (Sweden)
Shoaib Ehsan
2015-07-01
Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (by nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
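The recursive equations the hardware algorithms decompose are the standard ones, s(x, y) = s(x, y-1) + i(x, y) and ii(x, y) = ii(x-1, y) + s(x, y); a serial software reference (not the paper's parallel hardware form) looks like this:

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0..x] x [0..y],
    built serially from the standard recursive equations."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0                      # s(x, y): running row sum
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle using only four lookups --
    the constant-time rectangular feature that SURF exploits."""
    total = ii[y1][x1]
    if x0: total -= ii[y1][x0 - 1]
    if y0: total -= ii[y0 - 1][x1]
    if x0 and y0: total += ii[y0 - 1][x0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
```

The serial dependency of `row_sum` on the previous pixel and of `ii[y][x]` on the row above is exactly what the paper's decomposition breaks apart to compute several values per clock cycle.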
Confocal signal evaluation algorithms for surface metrology: uncertainty and numerical efficiency.
Rahlves, Maik; Roth, Bernhard; Reithmeier, Eduard
2017-07-20
Confocal microscopy is one of the dominating measurement techniques in surface metrology, with an enhanced lateral resolution compared to alternative optical methods. However, the axial resolution in confocal microscopy is strongly dependent on the accuracy of signal evaluation algorithms, which are limited by random noise. Here, we discuss the influence of various noise sources on confocal intensity signal evaluating algorithms, including center-of-mass, parabolic least-square fit, and cross-correlation-based methods. We derive results in closed form for the uncertainty in height evaluation on surface microstructures, also accounting for the number of axially measured intensity values and a threshold that is commonly applied before signal evaluation. The validity of our results is verified by numerical Monte Carlo simulations. In addition, we implemented all three algorithms and analyzed their numerical efficiency. Our results can serve as guidance for a suitable choice of measurement parameters in confocal surface topography measurement, and thus lead to a shorter measurement time in practical applications.
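Of the three evaluated estimators, center-of-mass is the simplest; a sketch with a thresholded axial intensity stack (the sample values are invented) illustrates why the threshold matters, since it suppresses noise-floor samples that would otherwise bias the centroid:

```python
def centroid_height(z, intensity, threshold=0.0):
    """Center-of-mass estimate of the axial peak position from a
    confocal intensity stack; values at or below threshold are discarded."""
    num = den = 0.0
    for zi, I in zip(z, intensity):
        if I > threshold:
            num += zi * I
            den += I
    return num / den

z = [0.0, 0.1, 0.2, 0.3, 0.4]      # axial scan positions (micrometres)
I = [1.0, 4.0, 9.0, 4.0, 1.0]      # symmetric response peaked at z = 0.2
h = centroid_height(z, I, threshold=0.5)
```

For a symmetric, noise-free response the centroid recovers the true peak; the paper's closed-form uncertainty results quantify how random noise and the threshold perturb this estimate.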
Wang, Jin; Li, Bin; Xia, Feng; Kim, Chang-Seob; Kim, Jeong-Uk
2014-01-01
Traffic patterns in wireless sensor networks (WSNs) usually follow a many-to-one model. Sensor nodes close to static sinks deplete their limited energy more rapidly than other sensors, since they have more data to forward during multihop transmission. This causes network partition, isolated nodes and a much shortened network lifetime. Thus, how to balance energy consumption among sensor nodes is an important research issue. In recent years, exploiting sink mobility in WSNs has attracted much research attention because it can not only improve energy efficiency but also prolong network lifetime. In this paper, we propose an energy efficient distance-aware routing algorithm with multiple mobile sinks for WSNs, where sink nodes move at a certain speed along the network boundary to collect monitored data. We study the influence of multiple mobile sink nodes on energy consumption and network lifetime, focusing on the selection of the number of mobile sink nodes and of the parking positions, as well as their impact on the performance metrics above. Both the number of mobile sink nodes and the selection of parking positions have an important influence on network performance. Simulation results show that our proposed routing algorithm outperforms traditional routing algorithms in terms of energy consumption. PMID:25196015
Directory of Open Access Journals (Sweden)
Álvaro Gutiérrez
2011-11-01
Full Text Available Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA's control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost, measured as the average distance traveled by the robots, is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robot distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
2017-08-01
Molecular dynamics simulations are useful for obtaining thermodynamic and kinetic properties of biomolecules, but they are limited by the time-scale barrier. That is, we may not obtain properties efficiently because we need to run simulations of microseconds or longer using femtosecond time steps. To overcome this barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "Concurrent Adaptive Sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to one or two collective variables, unlike most reaction-coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance sampling in the high-dimensional space. This is especially useful for systems in which the right reaction coordinates are unknown, in which case many collective variables can be used to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling of the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and biomolecules, specifically penta-alanine and a triazine trimer.
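The WE machinery that CAS builds on maintains a population of weighted walkers per macrostate, splitting heavy walkers and merging light ones so that each bin keeps a fixed walker count while total probability is conserved exactly. A minimal sketch of that resampling step (bin contents and target count are invented; real WE merges by weighted random selection, as here):

```python
import random

def resample_bin(walkers, target=4):
    """WE-style split/merge inside one macrostate bin: keep `target`
    walkers while conserving total statistical weight exactly.
    Each walker is a (position, weight) pair."""
    walkers = list(walkers)
    # split the heaviest walker until we have enough walkers
    while len(walkers) < target:
        x, w = max(walkers, key=lambda p: p[1])
        walkers.remove((x, w))
        walkers += [(x, w / 2), (x, w / 2)]
    # merge the two lightest walkers until we have few enough
    while len(walkers) > target:
        walkers.sort(key=lambda p: p[1])
        (x1, w1), (x2, w2) = walkers[0], walkers[1]
        keep = x1 if random.random() < w1 / (w1 + w2) else x2
        walkers = [(keep, w1 + w2)] + walkers[2:]
    return walkers

random.seed(0)
bin_walkers = [(0.11, 0.5), (0.13, 0.25), (0.12, 0.125),
               (0.10, 0.0625), (0.14, 0.0625)]
out = resample_bin(bin_walkers, target=4)
```

CAS's contribution sits one level up from this step: it chooses the macrostates adaptively in a high-dimensional collective-variable space rather than on a fixed one- or two-dimensional grid.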
A complexity-efficient and one-pass image compression algorithm for wireless capsule endoscopy.
Liu, Gang; Yan, Guozheng; Zhao, Shaopeng; Kuang, Shuai
2015-01-01
As an important part of the application-specific integrated circuit (ASIC) in wireless capsule endoscopy (WCE), an efficient compressor is crucial for image transmission and power consumption. In this paper, a complexity-efficient, one-pass image compression method is proposed for WCE with Bayer-format images. The algorithm is modified from the standard lossless algorithm JPEG-LS. First, a causal interpolation is used to acquire the context template of the current pixel to be encoded, thus determining different encoding modes. Second, a gradient predictor, instead of the median predictor, is designed to improve the accuracy of the predictions. Third, the gradient context is quantized to obtain the context index (Q). Finally, the encoding is carried out in the different modes. Experimental and comparative results show that the proposed near-lossless compression method provides a high compression rate (2.315) and high image quality (46.31 dB) compared with other methods. It performs well in the designed wireless capsule system and could be applied in other imaging fields.
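The median predictor the authors replace is the standard JPEG-LS MED predictor, which switches between the left neighbour a, the upper neighbour b, and the planar estimate a + b - c depending on the upper-left neighbour c (this is the baseline, not the paper's gradient predictor):

```python
def med_predict(a, b, c):
    """JPEG-LS median (MED) predictor.
    a = left neighbour, b = above neighbour, c = above-left neighbour.
    Acts as a simple edge detector: picks the neighbour on the flat
    side of a detected horizontal/vertical edge."""
    if c >= max(a, b):        # edge above-left is a local maximum
        return min(a, b)
    if c <= min(a, b):        # edge above-left is a local minimum
        return max(a, b)
    return a + b - c          # smooth region: planar prediction
```

For example, `med_predict(10, 20, 25)` detects a falling edge and predicts 10, while `med_predict(10, 20, 15)` treats the region as smooth and predicts 15. The paper's gradient predictor aims at the Bayer mosaic, where adjacent same-color samples are two pixels apart and MED's assumptions weaken.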
A comparative study of the A* heuristic search algorithm used to solve efficiently a puzzle game
Iordan, A. E.
2018-01-01
The puzzle game presented in this paper consists of polyhedra (prisms, pyramids or pyramidal frustums) which can be moved using the freely available spaces. The problem requires finding the minimum number of moves needed for the game to reach a goal configuration from an initial configuration. Because the problem is sufficiently complex, the principal difficulty in solving it is the dimension of the search space, which makes a heuristic search necessary. Improving the search method consists in determining a strong estimate through the heuristic function, which guides the search process toward the most promising side of the search tree. The comparative study is carried out between the Manhattan heuristic and the Hamming heuristic using the A* search algorithm implemented in Java. This paper also presents the stages needed in the object-oriented development of software to solve this puzzle game efficiently. The modelling of the software is achieved through specific UML diagrams representing the phases of analysis, design and implementation, the system thus being described in a clear and practical manner. To confirm the theoretical result that the Manhattan heuristic is more efficient, the space complexity criterion was used. Space complexity was measured by the number of nodes generated in the search tree, the number of expanded nodes and the effective branching factor. The experimental results obtained using the Manhattan heuristic show improvements in the space complexity of the A* algorithm compared with the Hamming heuristic.
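The A*-with-Manhattan-heuristic setup can be illustrated on the classic 8-puzzle rather than the paper's polyhedra game (same search structure, simpler state): the heuristic sums each tile's Manhattan distance from its goal cell, which never overestimates the remaining move count, so A* returns the minimum number of moves.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 denotes the blank

def manhattan(state):
    """Admissible heuristic: sum of tile distances to their goal cells."""
    d = 0
    for i, t in enumerate(state):
        if t:
            d += abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
    return d

def neighbors(state):
    """States reachable by sliding one tile into the blank."""
    z = state.index(0)
    r, c = divmod(z, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            j = nr * 3 + nc
            s[z], s[j] = s[j], s[z]
            yield tuple(s)

def astar(start, h=manhattan):
    """Returns the minimum number of moves from start to GOAL."""
    open_heap = [(h(start), 0, start)]         # (f = g + h, g, state)
    best_g = {start: 0}
    while open_heap:
        f, g, s = heapq.heappop(open_heap)
        if s == GOAL:
            return g
        for n in neighbors(s):
            if g + 1 < best_g.get(n, 1 << 30):
                best_g[n] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(n), g + 1, n))
    return -1                                   # unsolvable configuration

moves = astar((1, 2, 3, 4, 5, 6, 0, 7, 8))     # two slides from the goal
```

Swapping `manhattan` for the Hamming heuristic (count of misplaced tiles) keeps A* correct but weakens the estimate, which is exactly the space-complexity gap the paper measures.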
CK-LPA: Efficient community detection algorithm based on label propagation with community kernel
Lin, Zhen; Zheng, Xiaolin; Xin, Nan; Chen, Deren
2014-12-01
With the rapid development of Web 2.0 and the rise of online social networks, finding community structures from user data has become a hot topic in network analysis. Although research achievements are numerous at present, most of these achievements cannot be adopted in large-scale social networks because of heavy computation. Previous studies have shown that label propagation is an efficient means to detect communities in social networks and is easy to implement; however, some drawbacks, such as low accuracy, high randomness, and the formation of a “monster” community, have been found. In this study, we propose an efficient community detection method based on the label propagation algorithm (LPA) with community kernel (CK-LPA). We assign a corresponding weight to each node according to node importance in the whole network and update node labels in sequence based on weight. Then, we discuss the composition of weights, the label updating strategy, the label propagation strategy, and the convergence conditions. Compared with the primitive LPA, existing drawbacks are solved by CK-LPA. Experiments and benchmarks reveal that our proposed method sustains nearly linear time complexity and exhibits significant improvements in the quality aspect of static community detection. Hence, the algorithm can be applied in large-scale social networks.
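For contrast with CK-LPA's weighted, ordered updates, the primitive LPA baseline it improves on can be sketched as a synchronous majority vote (ties broken deterministically here by smallest label; real LPA breaks them randomly, and the two-clique graph is an invented example):

```python
def label_propagation(adj, max_iter=50):
    """Minimal synchronous label propagation: every node adopts the most
    frequent label among its neighbours (ties -> smallest label)."""
    labels = {v: v for v in adj}
    for _ in range(max_iter):
        new = {}
        for v in adj:
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            top = max(counts.values())
            new[v] = min(l for l, c in counts.items() if c == top)
        if new == labels:        # converged
            break
        labels = new
    return labels

# Two 5-cliques joined by a single bridge edge (4 - 5)
clique = lambda vs: {v: [u for u in vs if u != v] for v in vs}
adj = {**clique(range(5)), **clique(range(5, 10))}
adj[4].append(5)
adj[5].append(4)
labels = label_propagation(adj)
```

On this graph the vote settles into the two cliques in a few rounds; CK-LPA's node weights and fixed update order are aimed at the cases where plain LPA instead floods one label across the network (the "monster" community) or oscillates.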
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K.; Siegel, Andrew R.
2017-04-16
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
Fayyaz S, S Kiavash; Liu, Xiaoyue Cathy; Zhang, Guohui
2017-01-01
The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations with constrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data have been gaining popularity for between-station travel time estimation due to their interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks to allow transit agencies and researchers to perform high-resolution transit performance analysis.
Indian Academy of Sciences (India)
Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion, which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M numbers.
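The summation problem mentioned above has a natural recursive formulation; a minimal sketch (our illustration, not the article's own code):

```python
def sum_first(m):
    """Return 1 + 2 + ... + m, computed recursively.

    Base case: the empty sum for m == 0 is 0.
    Recursive case: the sum up to m is m plus the sum up to m - 1.
    """
    if m == 0:
        return 0
    return m + sum_first(m - 1)
```

For example, `sum_first(100)` returns 5050, matching the closed form m*(m+1)/2.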
Indian Academy of Sciences (India)
number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m × n and n × p respectively. Then, multiplication of A by B, denoted A × B, is defined by matrix C of dimension m × p, where.
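The definition above translates directly into the classic triple loop; a sketch in Python (the article itself uses a different notation):

```python
def matmul(a, b):
    """Multiply an m x n matrix by an n x p matrix (given as lists of rows).

    C[i][j] is the dot product of row i of A with column j of B.
    """
    m, n = len(a), len(a[0])
    assert len(b) == n, "inner dimensions must agree"
    p = len(b[0])
    c = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c
```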
Energy Technology Data Exchange (ETDEWEB)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
2017-08-21
Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microsecond or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the "Concurrent Adaptive Sampling (CAS) algorithm," has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
Energy Technology Data Exchange (ETDEWEB)
Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel
2017-07-01
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
An improved modified LEACH-C algorithm for energy efficient routing in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Amit Mansukh lal Parmar
2016-02-01
Full Text Available Wireless Sensor Networks (WSNs) are mainly characterized by their limited power supply, so an energy-efficient infrastructure is increasingly important, since it impacts network lifetime. This paper focuses on hierarchical clustering, because multi-hop short-range communication between wireless sensor nodes is more energy efficient than single-hop long-range communication. Among the many hierarchical clustering protocols, this paper considers the well-known Low-Energy Adaptive Clustering Hierarchy (LEACH) [1]. Centralized Low-Energy Adaptive Clustering Hierarchy (LEACH-C) and Advanced Low-Energy Adaptive Clustering Hierarchy (ALEACH) are energy-efficient clustering routing protocols belonging to the hierarchical routing family. In this paper we propose Modified LEACH-C to improve the performance of existing LEACH-C in topologies where LEACH-C does not perform well, by calculating the distances between cluster head (CH) and member nodes and between the base station (BS) and member nodes, and by forming non-overlapping clusters through the assignment of proper IDs during cluster creation. This makes the routing protocol more energy efficient and prolongs the lifetime of a wireless sensor network. Simulation results demonstrate that Modified LEACH-C improves network lifetime compared with the LEACH-C algorithm.
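One plausible reading of the distance rule sketched in the abstract (our interpretation; the paper's exact criterion may differ): a member node compares its distance to the nearest cluster head against its distance to the base station and transmits via whichever is closer.

```python
import math

def assign_members(nodes, heads, base_station):
    """Assign each member node to its nearest cluster head, unless the
    base station is closer, in which case the node transmits directly.

    All positions are (x, y) tuples; returns {node: chosen sink}.
    This is an illustrative sketch, not the Modified LEACH-C protocol.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    assignment = {}
    for node in nodes:
        ch = min(heads, key=lambda h: dist(node, h))
        assignment[node] = ch if dist(node, ch) < dist(node, base_station) else base_station
    return assignment
```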
Efficient Algorithm and Architecture of Critical-Band Transform for Low-Power Speech Applications
Directory of Open Access Journals (Sweden)
Gan Woon-Seng
2007-01-01
Full Text Available An efficient algorithm and its corresponding VLSI architecture for the critical-band transform (CBT) are developed to approximate the critical-band filtering of the human ear. The CBT consists of a constant-bandwidth transform in the lower frequency range and a Brown constant-Q transform (CQT) in the higher frequency range. The corresponding VLSI architecture is proposed to achieve significant power efficiency by reducing the computational complexity, using pipeline and parallel processing, and applying the supply voltage scaling technique. A 21-band Bark-scale CBT processor with a sampling rate of 16 kHz is designed and simulated. Simulation results verify its suitability for performing short-time spectral analysis on speech. It has a better fit to the human ear's critical-band analysis, requires significantly fewer computations, and is therefore more energy efficient than other methods. With a 0.35 μm CMOS technology, it processes a 160-point speech frame in 4.99 milliseconds at 234 kHz. The power dissipation is 15.6 μW at 1.1 V. It achieves an 82.1% power reduction as compared to a benchmark 256-point FFT processor.
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.
2017-11-01
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
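The rank-1 Sherman-Morrison update that the delayed scheme generalizes can be sketched as follows (the textbook formula, not the authors' rank-K delayed algorithm):

```python
import numpy as np

def sherman_morrison(a_inv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} in O(n^2) operations,
    avoiding the O(n^3) cost of re-inverting from scratch:

        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
    """
    au = a_inv @ u            # A^{-1} u, an n-vector
    va = v @ a_inv            # v^T A^{-1}, an n-vector
    return a_inv - np.outer(au, va) / (1.0 + v @ au)
```

Applying K such updates one at a time is dominated by matrix-vector work; the paper's delayed scheme batches accepted moves so they can be applied en bloc with higher-arithmetic-intensity matrix-matrix operations.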
An algorithm for efficient metal artifact reductions in permanent seed implants
Energy Technology Data Exchange (ETDEWEB)
Xu Chen; Verhaegen, Frank; Laurendeau, Denis; Enger, Shirin A.; Beaulieu, Luc [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, Universite Laval, Centre Hospitalier Universitaire de Quebec, 11 Cote du Palais, Quebec, Quebec G1R 2J6 (Canada) and Departement de Genie Electrique et Genie Informatique, Laboratoire de Vision et Systemes Numeriques, Universite Laval, Quebec, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW-School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Oncology Department, Montreal General Hospital, McGill University, 1650 Cedar Avenue, Montreal, Quebec H3G 1A4 (Canada); Departement de Genie Electrique et Genie Informatique, Laboratoire de Vision et Systemes Numeriques, Universite Laval, Quebec, Quebec G1K 7P4 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, Universite Laval, Centre Hospitalier Universitaire de Quebec, 11 Cote du Palais, Quebec, Quebec G1R 2J6 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, Universite Laval, Centre Hospitalier Universitaire de Quebec, 11 Cote du Palais, Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec, Quebec G1K 7P4 (Canada)
2011-01-15
Purpose: In permanent seed implants, 60 to more than 100 small metal capsules are inserted in the prostate, creating artifacts in x-ray computed tomography (CT) imaging. The goal of this work is to develop an automatic method for metal artifact reduction (MAR) from small objects such as brachytherapy seeds for clinical applications. Methods: The approach for MAR is based on the interpolation of missing projections by directly using raw helical CT data (sinogram). First, an initial image is reconstructed from the raw CT data. Then, the metal objects segmented from the reconstructed image are reprojected back into the sinogram space to produce a metal-only sinogram. The Steger method is used to determine precisely the position and edges of the seed traces in the raw CT data. By combining the use of Steger detection and reprojections, the missing projections are detected and replaced by interpolation of non-missing neighboring projections. Results: In both phantom experiments and patient studies, the missing projections have been detected successfully and the artifacts caused by metallic objects have been substantially reduced. The performance of the algorithm has been quantified by comparing the uniformity between the uncorrected and the corrected phantom images. The results of the artifact reduction algorithm are indistinguishable from the true background value. Conclusions: An efficient algorithm for MAR in seed brachytherapy was developed. The test results obtained using raw helical CT data for both phantom and clinical cases have demonstrated that the proposed MAR method is capable of accurately detecting and correcting artifacts caused by a large number of very small metal objects (seeds) in sinogram space. This should enable a more accurate use of advanced brachytherapy dose calculations, such as Monte Carlo simulations.
On efficient randomized algorithms for finding the PageRank vector
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
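The random-walk idea behind the first method can be illustrated with a minimal Monte Carlo PageRank estimator (a generic sketch of MCMC PageRank, not the authors' algorithm or their row-structure optimization):

```python
import random

def pagerank_mc(links, n_walks=20000, damping=0.85, seed=0):
    """Estimate PageRank by running restarting random walks.

    Each walk starts at a uniformly random node, follows a random
    out-link with probability `damping`, and stops otherwise (or at a
    dangling node); the distribution of endpoints estimates PageRank.
    """
    rng = random.Random(seed)
    nodes = list(links)
    counts = dict.fromkeys(nodes, 0)
    for _ in range(n_walks):
        v = rng.choice(nodes)
        while links[v] and rng.random() < damping:
            v = rng.choice(links[v])
        counts[v] += 1
    return {v: c / n_walks for v, c in counts.items()}
```

The estimate sums to one by construction and concentrates on well-linked nodes; the paper's contribution is, among other things, sharper running-time bounds for walks of this kind.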
Fast-SL: an efficient algorithm to identify synthetic lethal sets in metabolic networks.
Pratapa, Aditya; Balachandran, Shankar; Raman, Karthik
2015-10-15
Synthetic lethal sets are sets of reactions/genes where only the simultaneous removal of all reactions/genes in the set abolishes growth of an organism. Previous approaches to identify synthetic lethal genes in genome-scale metabolic networks have built on the framework of flux balance analysis (FBA), extending it either to exhaustively analyze all possible combinations of genes or to formulate the problem as a bi-level mixed integer linear programming (MILP) problem. We here propose an algorithm, Fast-SL, which surmounts the computational complexity of previous approaches by iteratively reducing the search space for synthetic lethals, resulting in a substantial reduction in running time, even for higher-order synthetic lethals. We performed synthetic reaction and gene lethality analysis, using Fast-SL, for genome-scale metabolic networks of Escherichia coli, Salmonella enterica Typhimurium and Mycobacterium tuberculosis. Fast-SL also rigorously identifies synthetic lethal gene deletions, uncovering synthetic lethal triplets that were not reported previously. We confirm that the triple lethal gene sets obtained for the three organisms have a precise match with the results obtained through exhaustive enumeration of lethals performed on a computer cluster. We also parallelized our algorithm, enabling the identification of synthetic lethal gene quadruplets for all three organisms in under 6 h. Overall, Fast-SL enables an efficient enumeration of higher-order synthetic lethals in metabolic networks, which may help uncover previously unknown genetic interactions and combinatorial drug targets. The MATLAB implementation of the algorithm, compatible with COBRA toolbox v2.0, is available at https://github.com/RamanLab/FastSL. Contact: kraman@iitm.ac.in. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
A Low Power Consumption Algorithm for Efficient Energy Consumption in ZigBee Motes.
Vaquerizo-Hdez, Daniel; Muñoz, Pablo; R-Moreno, María D; F Barrero, David
2017-09-22
Wireless Sensor Networks (WSNs) are becoming increasingly popular since they can gather information from different locations without wires. This advantage is exploited in applications such as robotic systems, telecare, domotics or smart cities, among others. To gain independence from the electricity grid, WSN devices are equipped with batteries; their operational time is therefore determined by how long the batteries can power the device. As a consequence, engineers must consider low energy consumption a critical objective when designing WSNs. Several approaches can be taken to make efficient use of energy in WSNs, for instance low-duty-cycling sensor networks (LDC-WSNs). Based on LDC-WSNs, we present LOKA, a LOw power Konsumption Algorithm to minimize WSN energy consumption using different power modes in a sensor mote. The contribution of the work is a novel algorithm called LOKA that implements two duty-cycling mechanisms using the end-device of the ZigBee protocol (of the Application Support Sublayer) and an external microcontroller (Cortex M0+) in order to minimize the energy consumption of delay-tolerant networking. Experiments show that using LOKA, the energy required by the sensor device is reduced to half with respect to the same sensor device without LOKA.
A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot
Directory of Open Access Journals (Sweden)
Lingbo Cheng
2014-12-01
Full Text Available This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to handle the target affine transformations caused by microgravity, the strong reflection and refraction of sunlight and lamplight in the cabin, and occlusion by other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is proposed to extract enough local feature points with full affine invariance, and stable matching points are obtained from these points for target recognition using the Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, an effective and appropriate 3D grasping scope of the target is defined, and we determine and evaluate the grasping precision with the estimated affine transformation parameters presented in this paper. Finally, the threshold of RANSAC is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle over which the robot can obtain effective image data are evaluated using the Root-Mean-Square Error (RMSE). An experimental system to simulate the illumination environment in a space station was established. Extensive experiments were carried out, and the results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.
An efficient self-organizing map designed by genetic algorithms for the traveling salesman problem.
Jin, Hui-Dong; Leung, Kwong-Sak; Wong, Man-Leung; Xu, Z B
2003-01-01
As a typical combinatorial optimization problem, the traveling salesman problem (TSP) has attracted extensive research interest. In this paper, we develop a self-organizing map (SOM) with a novel learning rule. It is called the integrated SOM (ISOM) since its learning rule integrates the three learning mechanisms in the SOM literature. Within a single learning step, the excited neuron is first dragged toward the input city, then pushed to the convex hull of the TSP, and finally drawn toward the middle point of its two neighboring neurons. A genetic algorithm is successfully specified to determine the elaborate coordination among the three learning mechanisms as well as a suitable parameter setting. The evolved ISOM (eISOM) is examined on three sets of TSP instances to demonstrate its power and efficiency. The computational complexity of the eISOM is quadratic, which is comparable to other SOM-like neural networks. Moreover, the eISOM can generate more accurate solutions than several typical approaches for the TSP, including the SOM developed by Budinich, the expanding SOM, the convex elastic net, and the FLEXMAP algorithm. Though its solution accuracy is not yet comparable to some sophisticated heuristics, the eISOM is one of the most accurate neural networks for the TSP.
Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2014-07-01
Feature selection is a very important aspect of machine learning. It entails the search for an optimal subset from a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as the filter method. As for the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNN model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
An Efficient MapReduce-Based Parallel Clustering Algorithm for Distributed Traffic Subarea Division
Directory of Open Access Journals (Sweden)
Dawen Xia
2015-01-01
Full Text Available Traffic subarea division is vital for traffic system management and traffic network analysis in intelligent transportation systems (ITSs). Since existing methods may not be suitable for big traffic data processing, this paper presents a MapReduce-based Parallel Three-Phase K-Means (Par3PKM) algorithm for solving the traffic subarea division problem on the widely adopted Hadoop distributed computing platform. Specifically, we first modify the distance metric and initialization strategy of K-Means and then employ a MapReduce paradigm to redesign the optimized K-Means algorithm for parallel clustering of large-scale taxi trajectories. Moreover, we propose a boundary identifying method to connect the borders of the clustering results for each cluster. Finally, we divide the traffic subareas of Beijing based on real-world trajectory data sets generated by 12,000 taxis over a period of one month using the proposed approach. Experimental evaluation results indicate that, compared with K-Means, Par2PK-Means, and ParCLARA, Par3PKM achieves higher efficiency and accuracy and better scalability, and can effectively divide traffic subareas with big taxi trajectory data.
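The map and reduce phases of a MapReduce K-Means iteration can be sketched as follows (a generic single-process illustration of the paradigm, not the Par3PKM implementation or its modified distance metric):

```python
from collections import defaultdict

def kmeans_iteration(points, centroids):
    """One MapReduce-style K-Means step.

    Map: assign each point to its nearest centroid (emit (cluster, point)).
    Reduce: average the points of each cluster into a new centroid.
    Points and centroids are coordinate tuples of equal dimension.
    """
    def dist2(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))

    # map phase: group points by nearest centroid
    groups = defaultdict(list)
    for p in points:
        k = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
        groups[k].append(p)

    # reduce phase: recompute each centroid as its cluster mean
    new_centroids = list(centroids)
    for k, pts in groups.items():
        new_centroids[k] = tuple(sum(x) / len(pts) for x in zip(*pts))
    return new_centroids
```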
An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays
Directory of Open Access Journals (Sweden)
Laurenzi Ian J
2009-12-01
Full Text Available Abstract Background Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net.
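The population-level stochastic simulation the authors describe can be illustrated, at toy scale, with a Gillespie-style simulation of a single reversible hybridization reaction A + B ⇌ AB (our illustrative sketch; the paper handles networks with vastly more reactions):

```python
import random

def gillespie_hybridization(a, b, ab, k_on, k_off, t_max, seed=0):
    """Simulate A + B <-> AB with Gillespie's direct method.

    a, b, ab are molecule counts; k_on and k_off are stochastic rate
    constants. Returns the counts at time t_max.
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        r_on, r_off = k_on * a * b, k_off * ab
        total = r_on + r_off
        if total == 0.0:
            return a, b, ab              # nothing can react
        t += rng.expovariate(total)      # exponential waiting time
        if t > t_max:
            return a, b, ab
        if rng.random() * total < r_on:  # pick a reaction by its propensity
            a, b, ab = a - 1, b - 1, ab + 1
        else:
            a, b, ab = a + 1, b + 1, ab - 1
```

Note that molecule conservation (a + ab and b + ab constant) holds by construction, a useful sanity check when scaling such simulations up.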
Energy Technology Data Exchange (ETDEWEB)
Unsal, Alparslan, E-mail: alparslanunsal@yahoo.com [Adnan Menderes University, Faculty of Medicine, Department of Radiology, 09100 Aydin (Turkey); Caliskan, Eda Kazak [Adnan Menderes University, Faculty of Medicine, Department of Radiology, 09100 Aydin (Turkey); Erol, Haluk [Adnan Menderes University, Faculty of Medicine, Department of Urology, 09100 Aydin (Turkey); Karaman, Can Zafer [Adnan Menderes University, Faculty of Medicine, Department of Radiology, 09100 Aydin (Turkey)
2011-07-15
Purpose: To assess the efficiency of the following imaging algorithm, comprising intravenous urography (IVU) or computed tomography urography (CTU) based on ultrasonographic (US) selection, in the radiological management of hematuria. Materials and methods: One hundred and forty-one patients with hematuria were prospectively evaluated. Group 1 included 106 cases with normal or nearly normal US results, which were then examined with IVU. Group 2 comprised the remaining 35 cases that had any urinary tract abnormality, and they were directed to CTU. Radiological results were compared with the clinical diagnosis. Results: Ultrasonography and IVU results of 97 cases were congruent in group 1. In the remaining 9 patients, eight simple cysts were detected with US and 1 non-obstructing ureter stone was detected with IVU. The only discordant case in the clinical comparison was found to have urinary bladder cancer on conventional cystoscopy. Ultrasonography and CTU results were congruent in 30 cases. In the remaining 5 patients, additional lesions were detected with CTU (3 ureter stones, 1 ureter TCC, 1 advanced RCC). Ultrasonography + CTU combination results were all concordant with the clinical diagnosis. Except for 1 case, radio-clinical agreement was achieved. Conclusion: Cross-sectional imaging modalities are preferred in the evaluation of hematuria. CTU is the method of choice; however, its limitations preclude using CTU as a first-line or screening test. Ultrasonography is now being accepted as a first-line imaging modality, with increased sensitivity in mass detection compared to IVU. The US-guided imaging algorithm can be used effectively in the radiological approach to hematuria.
Efficient parameterization of cardiac action potential models using a genetic algorithm
Cairns, Darby I.; Fenton, Flavio H.; Cherry, E. M.
2017-09-01
Finding appropriate values for parameters in mathematical models of cardiac cells is a challenging task. Here, we show that it is possible to obtain good parameterizations in as little as 30-40 s when as many as 27 parameters are fit simultaneously using a genetic algorithm and two flexible phenomenological models of cardiac action potentials. We demonstrate how our implementation works by considering cases of "model recovery" in which we attempt to find parameter values that match model-derived action potential data from several cycle lengths. We assess performance by evaluating the parameter values obtained, action potentials at fit and non-fit cycle lengths, and bifurcation plots for fidelity to the truth as well as consistency across different runs of the algorithm. We also fit the models to action potentials recorded experimentally using microelectrodes and analyze performance. We find that our implementation can efficiently obtain model parameterizations that are in good agreement with the dynamics exhibited by the underlying systems that are included in the fitting process. However, the parameter values obtained in good parameterizations can exhibit a significant amount of variability, raising issues of parameter identifiability and sensitivity. Along similar lines, we also find that the two models differ in terms of the ease of obtaining parameterizations that reproduce model dynamics accurately, most likely reflecting different levels of parameter identifiability for the two models.
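As a hedged illustration of the fitting strategy this abstract describes, the sketch below uses a simple genetic algorithm to recover the two parameters of a toy exponential-decay model from synthetic data. The surrogate model, population size, and operators are illustrative assumptions, not the phenomenological action-potential models from the paper.

```python
import math
import random

random.seed(1)

# Hypothetical surrogate model v(t) = a * exp(-b * t); the paper's
# phenomenological action-potential models are far richer.
TRUE = (2.0, 0.5)
TS = [0.1 * i for i in range(50)]

def model(params, t):
    a, b = params
    return a * math.exp(-b * t)

DATA = [model(TRUE, t) for t in TS]  # synthetic "model recovery" trace

def fitness(params):
    # Sum of squared errors against the synthetic trace (lower is better).
    return sum((model(params, t) - d) ** 2 for t, d in zip(TS, DATA))

def genetic_fit(pop_size=40, gens=60, bounds=(0.0, 3.0)):
    pop = [(random.uniform(*bounds), random.uniform(*bounds)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)  # crossover: average two parents
            children.append(tuple((x + y) / 2 + random.gauss(0, 0.05)
                                  for x, y in zip(p1, p2)))
        pop = elite + children
    return min(pop, key=fitness)

best = genetic_fit()
```

The fit should land near the planted values; as the abstract notes, separate runs of such an algorithm can land on different but similarly good parameterizations, which is the identifiability issue the authors discuss.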
Xu, Huihui; Jiang, Mingyan
2015-07-01
Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-03-28
Non-equispaced Fast Fourier Transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipelining techniques. The algorithm has been coded in C with pragma directives to optimize the architecture of the system. We have used the novel Software-Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that the hardware-accelerated implementation significantly outperforms the software-based one.
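For orientation, the direct non-equispaced DFT that the NFFT accelerates can be written in a few lines. This is the O(NM) reference computation, not the FPGA implementation described above; node and frequency conventions below follow one common NFFT formulation and are an assumption.

```python
import cmath

def ndft(samples, nodes, n_freq):
    """Direct non-equispaced DFT: f_hat[k] = sum_j f_j * exp(-2*pi*i*k*x_j).

    This is the O(N*M) reference computation that the NFFT approximates in
    O(N log N); the nodes x_j lie in [-1/2, 1/2) and need not be equispaced.
    """
    ks = range(-n_freq // 2, n_freq // 2)
    return [sum(f * cmath.exp(-2j * cmath.pi * k * x)
                for f, x in zip(samples, nodes)) for k in ks]

# Sanity check: with equispaced nodes the NDFT reduces to an ordinary DFT.
nodes = [j / 8 - 0.5 for j in range(8)]
samples = [1.0] * 8  # constant signal -> all energy at frequency k = 0
spec = ndft(samples, nodes, 8)
```

For the constant signal, only the k = 0 coefficient (index 4 in the range(-4, 4) ordering) is nonzero.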
Barbu, Alina L.; Laurent-Varin, Julien; Perosanz, Felix; Mercier, Flavien; Marty, Jean-Charles
2018-01-01
The implementation into the GINS CNES geodetic software of a more efficient filter was needed to satisfy users who want to compute high-rate GNSS PPP solutions. We selected the SRI approach and a QR factorization technique, including an innovative algorithm that optimizes the matrix reduction step. A full description of this algorithm is given for future users. The new capabilities of the software were tested using a set of 1 Hz data from the Japanese GEONET network covering the Mw 9.0 2011 Tohoku earthquake. The station coordinate solutions agreed at the sub-decimeter level with previous publications as well as with solutions we computed with the Natural Resources Canada software. An additional benefit of the SRI filter implementation is the capability to estimate high-rate tropospheric parameters as well. As the CPU time to estimate a 1 Hz kinematic solution from 1 h of data is now less than 1 min, we could produce series of coordinates for the full 1300 stations of the Japanese network. The corresponding movie shows the impressive co-seismic deformation as well as the wave propagation along the island. The processing was straightforward using a cluster of PCs, which illustrates the new potential of the GINS software for massive-network high-rate PPP processing.
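The square-root-information idea behind this filter can be illustrated with a heavily simplified measurement update via QR factorization. The 2-state example, the weak prior, and the identity measurement matrix are assumptions for illustration only; this is not the GINS matrix-reduction algorithm.

```python
import numpy as np

def sri_update(R, z, H, y):
    """One SRI measurement update via QR factorization.

    State information is carried as the square-root pair (R, z), where
    R^T R is the information matrix and R x = z at the optimum.  Stacking
    the (whitened) measurement rows (H, y) and re-triangularizing with QR
    is the numerically stable alternative to forming normal equations --
    the same basic idea as the filter above, though much simplified.
    """
    n = R.shape[1]
    A = np.vstack([np.column_stack([R, z]), np.column_stack([H, y])])
    Q, T = np.linalg.qr(A)
    return T[:n, :n], T[:n, n:]

# Hypothetical 2-state example: a weak prior plus two direct measurements.
R0 = 1e-3 * np.eye(2)              # nearly uninformative prior
z0 = np.zeros((2, 1))
H = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([[3.0], [4.0]])
R1, z1 = sri_update(R0, z0, H, y)
x_hat = np.linalg.solve(R1, z1)    # recovered state estimate
```

Because the prior carries almost no information, the estimate essentially reproduces the measurements.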
An Efficient Exact Algorithm for the Motif Stem Search Problem over Large Alphabets.
Yu, Qiang; Huo, Hongwei; Vitter, Jeffrey Scott; Huan, Jun; Nekrich, Yakov
2015-01-01
In recent years, there has been increasing interest in planted (l, d) motif search (PMS), with applications to discovering significant segments in biological sequences. However, there has been little discussion of PMS over large alphabets. This paper focuses on motif stem search (MSS), which was recently introduced to search for motifs on large-alphabet inputs. A motif stem is an l-length string with some wildcards. The goal of the MSS problem is to find a set of stems that represents a superset of all (l, d) motifs present in the input sequences, and the superset is expected to be as small as possible. The three main contributions of this paper are as follows: (1) We build the motif stem representation more precisely by using regular expressions. (2) We give a method for generating all possible motif stems without redundant wildcards. (3) We propose an efficient exact algorithm, called StemFinder, for solving the MSS problem. Compared with previous MSS algorithms, StemFinder runs much faster and reports fewer stems, which represent a smaller superset of all (l, d) motifs. StemFinder is freely available at http://sites.google.com/site/feqond/stemfinder.
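For contrast with the stem-based approach, a brute-force planted (l, d) motif search over a small DNA alphabet can be written directly. This naive enumeration over all |Σ|^l candidates is exactly what becomes infeasible over large alphabets, which motivates stems; the toy sequences below are assumptions.

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def planted_motif_search(seqs, l, d, alphabet="ACGT"):
    """Brute-force (l, d) motif search: return every l-mer that occurs in
    ALL input sequences with at most d mismatches.  Exponential in l, so
    this is only a toy reference point for the stem-based search above,
    which avoids enumerating candidates over large alphabets."""
    motifs = []
    for cand in map("".join, product(alphabet, repeat=l)):
        if all(any(hamming(cand, s[i:i + l]) <= d
                   for i in range(len(s) - l + 1)) for s in seqs):
            motifs.append(cand)
    return motifs

seqs = ["ACGTG", "ACCTG", "AGGTG"]
found = planted_motif_search(seqs, l=3, d=1)
```

Here "GTG" occurs exactly in the first and third sequence and within one mismatch of "CTG" in the second, so it is reported.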
Efficient Data-Structures and Algorithms for a Coloured Petri Nets Simulator
DEFF Research Database (Denmark)
Mortensen, Kjeld Høyer
2001-01-01
occurrence scheduler algorithm so that we use lazy calculation of event lists. We only keep track of disabled transitions which we have discovered during the search for an enabled transition, and use the locality principle for an occurring transition in order to minimise the changes of enabling status...... of other transitions. Secondly, we have improved the data-structures which hold multi-sets for markings. A kind of weight-balanced tree, called BB-trees, is used instead of lists as in the original version of the simulator. Although these trees are more difficult to maintain at run......-time they are surprisingly efficient, even for small multi-sets. Thirdly, we have improved the search for enabled binding elements. We use the first enabled binding element we find in a fair search and make it occur immediately, instead of calculating all bindings and then randomly selecting one. The search is guided by a binding...
An Efficient Inverse Kinematic Algorithm for a PUMA560-Structured Robot Manipulator
Directory of Open Access Journals (Sweden)
Huashan Liu
2013-05-01
Full Text Available Abstract This paper presents an efficient inverse kinematics (IK) approach featuring fast computing performance for a PUMA560-structured robot manipulator. Using properties of orthogonal and block matrices, the complex IK matrix equations are transformed into eight purely algebraic equations containing the six unknown joint angle variables, which makes the solution compact without computing the inverses of the 4×4 homogeneous transformation matrices. Moreover, the appropriate combination of related equations ensures that the solutions are free of extraneous roots in the solving process, and the wrist singularity problem of the robot is also addressed. Finally, a case study is given to show the effectiveness of the proposed algorithm.
Directory of Open Access Journals (Sweden)
Adeel Anjum
2017-01-01
Full Text Available Privacy-Preserving Data Publishing (PPDP) has become a critical issue for companies and organizations that would release their data. k-Anonymization was proposed as a first generalization model to guarantee against identity disclosure of individual records in a data set. Point access methods (PAMs) have not been well studied for the problem of data anonymization. In this article, we propose yet another approximation algorithm for anonymization, coined BangA, that combines useful features from point access methods and clustering. Hence, it achieves fast computation and scalability like a PAM, and very high quality thanks to its density-based clustering step. Extensive experiments show the efficiency and effectiveness of our approach. Furthermore, we provide guidelines for extending BangA to achieve a relaxed form of differential privacy, which provides stronger privacy guarantees than traditional privacy definitions.
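A toy version of the clustering step behind k-anonymization can be sketched in one dimension: cut sorted records into clusters of at least k and publish each record as its cluster's range. BangA's point-access-method partitioning and quality metrics are not reproduced, and the grouping rule here is an assumption.

```python
def k_anonymize(ages, k):
    """Toy 1-D k-anonymization: sort the values, cut them into clusters of
    at least k records, and publish each record as its cluster's [min, max]
    range, so every published value is shared by >= k records."""
    order = sorted(range(len(ages)), key=lambda i: ages[i])
    out = [None] * len(ages)
    for start in range(0, len(order), k):
        chunk = order[start:start + k]
        if len(chunk) < k:                 # fold a short tail into the previous cluster
            chunk = order[start - k:]
        lo = min(ages[i] for i in chunk)
        hi = max(ages[i] for i in chunk)
        for i in chunk:
            out[i] = (lo, hi)
    return out

ages = [23, 45, 27, 31, 52, 48]
groups = k_anonymize(ages, k=2)
```

Each published range covers its record's true value, and no range is unique to a single individual.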
Directory of Open Access Journals (Sweden)
José Antonio Martín H
Full Text Available Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object that can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by the parameter α∈N. Nevertheless, here it is proved that the probability of requiring a value of α>k to obtain a solution for a random graph decreases exponentially: P(α>k) ≤ 2^(-(k+1)), making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expectations.
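For comparison, an exhaustive backtracking 3-colorer also yields a "certificate" in both directions (a proper coloring, or an exhausted search tree), but in exponential time; it is a baseline sketch, not the parametric method of the article.

```python
def three_color(adj):
    """Backtracking 3-colorer: returns a proper coloring if one exists,
    otherwise None after exhausting the search tree (exponential time)."""
    nodes = list(adj)
    color = {}
    def ok(v, c):
        # c is admissible for v if no already-colored neighbor uses it
        return all(color.get(u) != c for u in adj[v])
    def solve(i):
        if i == len(nodes):
            return True
        v = nodes[i]
        for c in range(3):
            if ok(v, c):
                color[v] = c
                if solve(i + 1):
                    return True
                del color[v]
        return False
    return dict(color) if solve(0) else None

# A 5-cycle is 3-colorable; the complete graph K4 is not.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
c5_col = three_color(c5)
k4_col = three_color(k4)
```

The returned coloring is a positive certificate; None corresponds to the "absent" case, certified here only by exhaustive search.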
PID-controller with predictor and auto-tuning algorithm: study of efficiency for thermal plants
Kuzishchin, V. F.; Merzlikina, E. I.; Hoang, Van Va
2017-09-01
The problem of estimating the efficiency of an automatic control system (ACS) with a Smith predictor and a PID algorithm for thermal plants is considered. In order to use the predictor, it is proposed to include an auto-tuning module (ATC) in the controller; the module calculates the parameters of a second-order plant model with time delay. The study was conducted using programmable logic controllers (PLCs), one of which performed the control, ATC, and predictor functions. A simulation model was used as the control plant, in two variants: one built on the basis of a separate PLC, and the other a physical model of a thermal plant in the form of an electrical heater. The efficiency of the ACS with the predictor was analyzed for several variants of the second-order plant model with time delay, based on a comparison of transient processes in the system when the set point was changed and when a disturbance influenced the control plant. Recommendations are given on correcting the PID parameters when the predictor is used, by means of a correcting coefficient k for the PID parameters. It is shown that, when the set point is changed, the use of the predictor is effective, taking into account the parameter correction with k = 2. When disturbances influence the plant, the use of the predictor is doubtful, because the transient process is too long. The reason is that, in the neighborhood of zero frequency, the amplitude-frequency characteristic (AFC) of the system with the predictor has an ascent in comparison with the AFC of the system without the predictor.
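The benefit of the predictor for set-point changes can be reproduced in a minimal discrete simulation: the predictor feeds back a delay-free internal model so the PI controller effectively sees a plant without dead time. The first-order plant, gains, and dead time below are illustrative assumptions, not the PLC configuration from the study.

```python
def simulate(use_predictor, steps=400):
    """PI loop on a first-order plant with input dead time, with and
    without a Smith predictor (perfect internal model assumed)."""
    dt, tau, K, delay = 0.1, 1.0, 1.0, 15      # delay = 15 steps = 1.5 s
    kp, ki = 2.0, 1.0
    r = 1.0                                    # set-point step
    y, ym, integ = 0.0, 0.0, 0.0               # plant, delay-free model, integrator
    buf_u = [0.0] * delay                      # plant input transport delay
    buf_ym = [0.0] * delay                     # delayed copy of the model output
    ys = []
    for _ in range(steps):
        # Smith predictor: feed back y + (model - delayed model)
        feedback = y + (ym - buf_ym[0]) if use_predictor else y
        e = r - feedback
        integ += e * dt
        u = kp * e + ki * integ
        y += dt / tau * (K * buf_u[0] - y)     # plant sees delayed input
        buf_u = buf_u[1:] + [u]
        ym += dt / tau * (K * u - ym)          # internal model, no dead time
        buf_ym = buf_ym[1:] + [ym]
        ys.append(y)
    return ys

with_sp = simulate(True)
without_sp = simulate(False)
```

With these gains the uncompensated loop oscillates under the 1.5 s dead time, while the predictor-compensated loop settles cleanly, mirroring the set-point result reported above.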
An Efficient Algorithm for Fault Location on Mixed Line-Cable Transmission Corridors
Popov, M.; Rietveld, G.; Radojevic, Z.; Terzija, V.
2013-01-01
This paper presents a fault location algorithm that can be used to accurately locate a fault at any place along mixed line-cable transmission corridors. The algorithm is an impedance-based, line/cable-parameter-dependent algorithm. The fault location algorithm is derived using distributed line
An efficient algorithm for TUCKALS3 on data with large numbers of observation units
Kiers, Henk A.L.; Kroonenberg, P.M.; ten Berge, Jos M.F.
A modification of the TUCKALS3 algorithm is proposed that handles three-way arrays of order I x J x K for any I. When I is much larger than JK, the modified algorithm needs less work space to store the data during the iterative part of the algorithm than does the original algorithm. Because of this
Efficient algorithms for optimal arrival scheduling and air traffic flow management
Saraf, Aditya
The research presented in this dissertation is motivated by the need for new, efficient algorithms for the solution of two important problems currently faced by the air-traffic control community: (i) optimal scheduling of aircraft arrivals at congested airports, and (ii) optimal National Airspace System (NAS) wide traffic flow management. In the first part of this dissertation, we present an optimal airport arrival scheduling algorithm, which works within a hierarchical scheduling structure. This structure consists of schedulers at multiple points along the arrival-route. Schedulers are linked through acceptance-rate constraints, which are passed up from downstream metering-points. The innovation in this scheduling algorithm is that these constraints are computed by using an Eulerian model-based optimization scheme. This rate computation removes inefficiencies introduced in the schedule through ad hoc acceptance-rate computations. The scheduling process at every metering-point uses its optimal acceptance-rate as a constraint and computes optimal arrival sequences by using a combinatorial search-algorithm. We test this algorithm in a dynamic air-traffic environment, which can be customized to emulate different arrival scenarios. In the second part of this dissertation, we introduce a novel two-level control system for optimal traffic-flow management. The outer-level control module of this two-level control system generates an Eulerian-model of the NAS by aggregating aircraft into interconnected control-volumes. Using this Eulerian model of the airspace, control strategies like Model Predictive Control are applied to find the optimal inflow and outflow commands for each control-volume so that efficient flows are achieved in the NAS. Each control-volume has its separate inner-level control-module. The inner-level control-module takes in the optimal inflow and outflow commands generated by the outer control-module as reference inputs and uses hybrid aircraft models to
Efficient DFSA Algorithm in RFID Systems for the Internet of Things
Directory of Open Access Journals (Sweden)
Hsing-Wen Wang
2015-01-01
Full Text Available Radio Frequency IDentification (RFID), used in business applications and international business management, can create and sustain competitive advantage; it is also one of the wireless telecommunication techniques for recognizing objects that underpin Internet of Things (IoT) technologies. In the construction of an IoT network, RFID technologies play the role of front-end data collection via tag identification, as the basis of the IoT. Hence, the adoption of RFID technologies is spurring innovation and the development of the IoT. However, in RFID systems, one of the most important challenges is collision resolution between tags when they transmit their data to the reader simultaneously. Hence, in this paper I develop an efficient scheme to estimate the number of unidentified tags for Dynamic Framed Slotted Aloha (DFSA)-based RFID systems, with a view to increasing system performance. In addition to theoretical analysis, simulations are conducted to evaluate the performance of the proposed scheme. The simulation results reveal that the proposed scheme works very well, providing a substantial performance improvement in the RFID system. The proposed algorithm promotes business effectiveness and efficiency when applying RFID technologies to the IoT.
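A classic baseline for the backlog-estimation step in DFSA is Schoute's rule, which assumes roughly 2.39 tags per collision slot under a Poisson model. The sketch below applies it to one simulated frame; it is a textbook baseline, not the estimator proposed in the paper.

```python
import random

def next_frame_size(collisions):
    """Schoute-style backlog estimate: each collision slot holds about
    2.39 tags on average under a Poisson assumption, so the unread-tag
    estimate (and hence the next frame size) is 2.39 * collisions."""
    return max(1, round(2.39 * collisions))

def simulate_frame(n_tags, frame, seed=7):
    # Tags pick reply slots uniformly at random; classify each slot.
    rng = random.Random(seed)
    slots = [0] * frame
    for _ in range(n_tags):
        slots[rng.randrange(frame)] += 1
    empty = slots.count(0)
    success = slots.count(1)
    collision = frame - empty - success
    return empty, success, collision

empty, success, collision = simulate_frame(n_tags=100, frame=100)
estimate = next_frame_size(collision)
```

With 100 tags in a 100-slot frame, roughly a quarter of the slots collide, so the rule estimates a backlog of around 60 unread tags for the next frame.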
Energy efficient model based algorithm for control of building HVAC systems.
Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N
2015-11-01
Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on its state space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real time data during closed loop control. The control strategies are ported to a PIC32mx series microcontroller platform. The building model is implemented in MATLAB, and hardware in loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional integral (PI) controllers, the explicit MPC improves both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
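The RC framework mentioned above can be illustrated with a single-zone, first-order model driven by supply-water and ambient temperatures; the resistances and capacitance below are hypothetical placeholders for the PSO-identified values, and the forward-Euler integration is a simplification.

```python
def simulate_room(t_supply, t_ambient, hours, dt=60.0):
    """Single-zone RC model: C dT/dt = (Ts - T)/Rs + (Ta - T)/Ra.

    Rs couples the room to the supply-water loop, Ra to the ambient air.
    Parameter values are illustrative, not identified from data."""
    R_supply, R_amb, C = 0.05, 0.2, 1.0e5   # K/W, K/W, J/K (hypothetical)
    T = 20.0                                 # initial room temperature, deg C
    for _ in range(int(hours * 3600 / dt)):
        dT = ((t_supply - T) / R_supply + (t_ambient - T) / R_amb) / C
        T += dT * dt                         # forward-Euler step
    return T

steady = simulate_room(t_supply=45.0, t_ambient=5.0, hours=24)
```

The steady state is the conductance-weighted average (45/Rs + 5/Ra)/(1/Rs + 1/Ra) = 37 degrees C, which the simulation reaches well within 24 hours given the roughly 1 h time constant.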
Yu, Zhenhua; Fu, Xiao; Cai, Yuanli; Vuran, Mehmet C
2011-01-01
A reliable energy-efficient multi-level routing algorithm in wireless sensor networks is proposed. The proposed algorithm considers the residual energy, number of the neighbors and centrality of each node for cluster formation, which is critical for well-balanced energy dissipation of the network. In the algorithm, a knowledge-based inference approach using fuzzy Petri nets is employed to select cluster heads, and then the fuzzy reasoning mechanism is used to compute the degree of reliability in the route sprouting tree from cluster heads to the base station. Finally, the most reliable route among the cluster heads can be constructed. The algorithm not only balances the energy load of each node but also provides global reliability for the whole network. Simulation results demonstrate that the proposed algorithm effectively prolongs the network lifetime and reduces the energy consumption.
A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks
Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie
2017-02-01
One of the key problems in social network analysis is influence maximization, which has great significance both in theory and in practical applications. Given a complex network and a positive integer k, the task is to find the k nodes that trigger the largest expected number of the remaining nodes. Mature algorithms are mainly divided into propagation-based and topology-based algorithms. The propagation-based algorithms optimize the influence spread process, so their influence spread significantly outperforms that of the topology-based algorithms, but they can still take days to complete on large networks. In contrast, topology-based algorithms rely on intuitive parameter statistics and static topological properties. Their running times are extremely short, but their influence spread results are unstable. In this paper, we propose a novel topology-based algorithm based on local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds it. Moreover, the running time of our algorithm is millions of times shorter than that of propagation-based algorithms. Our experimental results show that our algorithm has good and stable performance under the IC and LT models.
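A loose sketch in the spirit of LIR: compute, for each node, how many neighbors have strictly larger degree (its "local index"), and prefer local degree maxima, breaking ties by degree. The exact rank definition in the paper may differ; this only conveys the topology-based flavor.

```python
def top_k_local(adj, k):
    """Degree-based local-index seed selection: nodes whose local index is
    0 are local degree maxima (hubs of their neighborhood) and are picked
    first; ties are broken by raw degree.  A loose sketch, not the paper's
    exact LIR definition."""
    deg = {v: len(ns) for v, ns in adj.items()}
    li = {v: sum(deg[u] > deg[v] for u in ns) for v, ns in adj.items()}
    ranked = sorted(adj, key=lambda v: (li[v], -deg[v]))
    return ranked[:k]

# Two star-like communities joined by a bridge: the two hubs should win.
adj = {
    "h1": ["a", "b", "c", "h2"], "a": ["h1"], "b": ["h1"], "c": ["h1"],
    "h2": ["x", "y", "h1"], "x": ["h2"], "y": ["h2"],
}
seeds = top_k_local(adj, k=2)
```

Unlike a plain degree ranking, the local index keeps the two seeds in different neighborhoods, which is the intuition behind spreading seeds across communities.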
LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.
El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher
2016-11-01
The deluge of currently sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data at high speed and fixed cost, but lack the computational resources to store, process and analyze them. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive amount of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves assembly accuracy and contiguity comparable to other competing tools. Our method reduces memory usage by [Formula: see text] compared to the resource-efficient assemblers, using benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. https://github.com/SaraEl-Metwally/LightAssembler Contact: sarah_almetwally4@mans.edu.eg Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
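The membership structure underlying such assemblers can be illustrated with a minimal Bloom filter over k-mers. The cache-oblivious layout, spaced sampling, and statistical test from LightAssembler are not reproduced; filter size, hash count, and the toy read are assumptions.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter for k-mers: a bit array plus n_hashes salted
    hash functions.  Membership queries can yield false positives but
    never false negatives, which is what makes the structure usable as a
    compact k-mer set in memory-constrained assembly."""
    def __init__(self, m_bits=1 << 16, n_hashes=3):
        self.m, self.k = m_bits, n_hashes
        self.bits = bytearray(m_bits // 8)
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.blake2b(item.encode(), digest_size=8,
                                salt=bytes([i])).digest()
            yield int.from_bytes(h, "big") % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def kmers(read, k):
    return (read[i:i + k] for i in range(len(read) - k + 1))

bf = Bloom()
for km in kmers("ACGTACGTGGA", 4):
    bf.add(km)
```

Every inserted k-mer is guaranteed to test positive; with this filter size the false-positive rate for absent k-mers is negligible.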
An efficient algorithm for systematic analysis of nucleotide strings suitable for siRNA design.
Baranova, Ancha; Bode, Jonathan; Manyam, Ganiraju; Emelianenko, Maria
2011-05-27
The "off-target" silencing effect hinders the development of siRNA-based therapeutic and research applications. Existing solutions for finding possible locations of siRNA seats within a large database of genes are either too slow, miss a portion of the targets, or are simply not designed to handle a very large number of queries. We propose a new approach that reduces the computational time compared to existing techniques. The proposed method employs tree-based storage in the form of a modified truncated suffix tree to sort all possible short substrings within a given set of strings (i.e., the transcriptome). Using the new algorithm, we pre-computed a list of the best siRNA locations within each human gene ("siRNA seats"). siRNAs designed to reside within siRNA seats are less likely to hybridize off-target. These siRNA seats can be used as an input for the traditional "set-of-rules" type of siRNA design software. The list of siRNA seats is available through a publicly available database located at http://web.cos.gmu.edu/~gmanyam/siRNA_db/search.php In an attempt to perform top-down prediction of human siRNAs with minimized off-target hybridization, we developed an efficient algorithm that employs suffix-tree-based storage of the substrings. Applications of this approach are not limited to optimal siRNA design, but can also be useful for other tasks involving selection of characteristic strings specific to individual genes. These strings could then be used as siRNA seats, as specific probes for gene expression studies by oligonucleotide-based microarrays, for the design of molecular beacon probes for Real-Time PCR and, generally, any type of PCR primers.
An efficient algorithm for systematic analysis of nucleotide strings suitable for siRNA design
Directory of Open Access Journals (Sweden)
Bode Jonathan
2011-05-01
Full Text Available Abstract Background The "off-target" silencing effect hinders the development of siRNA-based therapeutic and research applications. Existing solutions for finding possible locations of siRNA seats within a large database of genes are either too slow, miss a portion of the targets, or are simply not designed to handle a very large number of queries. We propose a new approach that reduces the computational time compared to existing techniques. Findings The proposed method employs tree-based storage in the form of a modified truncated suffix tree to sort all possible short substrings within a given set of strings (i.e., the transcriptome). Using the new algorithm, we pre-computed a list of the best siRNA locations within each human gene ("siRNA seats"). siRNAs designed to reside within siRNA seats are less likely to hybridize off-target. These siRNA seats can be used as an input for the traditional "set-of-rules" type of siRNA design software. The list of siRNA seats is available through a publicly available database located at http://web.cos.gmu.edu/~gmanyam/siRNA_db/search.php Conclusions In an attempt to perform top-down prediction of human siRNAs with minimized off-target hybridization, we developed an efficient algorithm that employs suffix-tree-based storage of the substrings. Applications of this approach are not limited to optimal siRNA design, but can also be useful for other tasks involving selection of characteristic strings specific to individual genes. These strings could then be used as siRNA seats, as specific probes for gene expression studies by oligonucleotide-based microarrays, for the design of molecular beacon probes for Real-Time PCR and, generally, any type of PCR primers.
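A hash-based stand-in for the truncated-suffix-tree index above gives the same kind of output, substrings unique to a single gene, with worse asymptotics; the substring length and toy sequences are assumptions.

```python
from collections import defaultdict

def unique_seats(genes, k):
    """Map every length-k substring to the genes containing it, then keep
    the substrings unique to exactly one gene.  A hash-table stand-in for
    the truncated suffix tree: same output, worse time/space behavior on
    transcriptome-scale inputs."""
    owners = defaultdict(set)
    for name, seq in genes.items():
        for i in range(len(seq) - k + 1):
            owners[seq[i:i + k]].add(name)
    return {sub: next(iter(gs)) for sub, gs in owners.items() if len(gs) == 1}

genes = {"geneA": "ACGTACGA", "geneB": "TTGTACGC"}
seats = unique_seats(genes, k=4)
```

Substrings shared by both genes (such as "GTAC") are excluded, which is precisely the property that makes the surviving strings candidates for gene-specific probes or primers.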
Unsal, Alparslan; Calişkan, Eda Kazak; Erol, Haluk; Karaman, Can Zafer
2011-07-01
To assess the efficiency of the following imaging algorithm, comprising intravenous urography (IVU) or computed tomography urography (CTU) based on ultrasonographic (US) selection, in the radiological management of hematuria. One hundred and forty-one patients with hematuria were prospectively evaluated. Group 1 included 106 cases with normal or nearly normal US results, which were then examined with IVU. Group 2 comprised the remaining 35 cases with some urinary tract abnormality, which were directed to CTU. Radiological results were compared with the clinical diagnosis. Ultrasonography and IVU results were congruent in 97 cases in group 1. In the remaining 9 patients, eight simple cysts were detected with US and 1 non-obstructing ureter stone was detected with IVU. The only discordant case in the clinical comparison was found to have urinary bladder cancer on conventional cystoscopy. Ultrasonography and CTU results were congruent in 30 cases. In the remaining 5 patients, additional lesions were detected with CTU (3 ureter stones, 1 ureter TCC, 1 advanced RCC). The ultrasonography + CTU combination results were all concordant with the clinical diagnosis. Except for 1 case, radio-clinical agreement was achieved. Cross-sectional imaging modalities are preferred in the evaluation of hematuria. CTU is the method of choice; however, its limitations preclude using CTU as a first-line or screening test. Ultrasonography is now accepted as a first-line imaging modality, given its increased sensitivity in mass detection compared to IVU. The US-guided imaging algorithm can be used effectively in the radiological approach to hematuria. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about certain property of an object that can be present if and only if it adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases: present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), however parametric. The only requirement is sufficient computational power, which is controlled by the parameter α∈N. Nevertheless, here it is proved that the probability of requiring a value of α>k to obtain a solution for a random graph decreases exponentially: P(α>k) ≤ 2^(-(k+1)), making tractable almost all problem instances. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
An efficient point-to-plane registration algorithm for affine transformations
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly; Tihonkih, Dmitrii
2017-09-01
The problem of aligning 3D point data is the well-known registration task. The most popular registration algorithm is the Iterative Closest Point (ICP) algorithm. The traditional ICP algorithm is a fast and accurate approach for rigid registration between two point clouds, but it is unable to handle the affine case. Recently, an extension of the ICP algorithm to compositions of scaling, rotation, and translation was proposed. A generalized ICP version for an arbitrary affine transformation has also been suggested. In this paper, a new iterative algorithm for the registration of point clouds, based on the point-to-plane ICP algorithm with affine transformations, is proposed. At each iteration, a closed-form solution for the affine transformation is derived. This approach allows us to obtain a precise solution for transformations such as rotation, translation, and scaling. With the help of computer simulation, the proposed algorithm is compared with common registration algorithms.
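The existence of a closed-form step is easy to see in code: the point-to-plane residual is linear in the 12 affine unknowns, so each iteration reduces to one least-squares solve. The sketch below recovers a known affine map from exact correspondences; it omits the correspondence search of a full ICP loop and uses hypothetical data.

```python
import numpy as np

def affine_point_to_plane(src, dst, normals):
    """One closed-form step: find the affine map (A, t) minimizing the
    point-to-plane error sum(((A p + t - q) . n)^2).  The error is linear
    in the 12 unknowns (9 in A, 3 in t), so a single least-squares solve
    suffices -- the kind of per-iteration closed form the algorithm above
    relies on."""
    rows, rhs = [], []
    for p, q, n in zip(src, dst, normals):
        # coefficient of A[r, c] is n[r] * p[c]; coefficient of t is n
        rows.append(np.concatenate([np.outer(n, p).ravel(), n]))
        rhs.append(n @ q)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x[:9].reshape(3, 3), x[9:]

rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
A_true = np.array([[1.1, 0.1, 0.0], [0.0, 0.9, 0.05], [0.0, 0.0, 1.2]])
t_true = np.array([0.3, -0.2, 0.5])
dst = src @ A_true.T + t_true                       # exact correspondences
normals = rng.normal(size=(20, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
A_est, t_est = affine_point_to_plane(src, dst, normals)
```

With 20 correspondences and generic normals the 12-unknown system is overdetermined and consistent, so the planted transformation is recovered exactly up to numerical precision.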
Directory of Open Access Journals (Sweden)
R. Sagan
2011-11-01
Full Text Available This article considers different aspects that help determine the correct choice of a sorting algorithm. Some algorithms, needed for computational experiments on a certain class of programs, are also compared.
A New On-the-Fly Sampling Method for Incoherent Inelastic Thermal Neutron Scattering Data in MCNP6
Energy Technology Data Exchange (ETDEWEB)
Pavlou, Andrew Theodore [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ji, Wei [Rensselaer Polytechnic Inst., Troy, NY (United States)
2014-09-02
At thermal energies, the scattering of neutrons in a system is complicated by the comparable velocities of the neutron and the target, resulting in competing upscattering and downscattering events. The neutron wavelength is also similar in size to the target's interatomic spacing, making the scattering process a quantum mechanical problem. Because of the complicated nature of scattering at low energies, the thermal data files in ACE format used in continuous-energy Monte Carlo codes are quite large, on the order of megabytes for a single temperature and material. In this paper, a new storage and sampling method is introduced that is orders of magnitude smaller in size and is used to sample scattering parameters at any temperature on-the-fly. In addition to the reduction in storage, the need to pre-generate thermal scattering data tables at fine temperature intervals has been eliminated. This is advantageous for multiphysics simulations, which may involve temperatures not known in advance. A new module was written for MCNP6 that bypasses the current S(α,β) table lookup in favor of the new format. The new on-the-fly sampling method was tested for graphite in two benchmark problems at ten temperatures: 1) an eigenvalue test with a fuel compact of uranium oxycarbide fuel homogenized into a graphite matrix, and 2) a surface current test with a "broomstick" problem with a monoenergetic point source. The largest eigenvalue difference was 152 pcm for T = 1200 K. For the temperatures and incident energies chosen for the broomstick problem, the secondary neutron spectrum showed good agreement with the traditional S(α,β) sampling method. These preliminary results show that sampling thermal scattering data on-the-fly is a viable option to eliminate both the storage burden of keeping thermal data at discrete temperatures and the need to know temperatures before simulation runtime.
Comments on 'An efficient algorithm for computing free distance' by Bahl, L., et al
DEFF Research Database (Denmark)
Larsen, Knud J.
1973-01-01
In the above paper,¹ Bahl et al. described a bidirectional search algorithm for computing the free distance of convolutional codes. There are some flaws in that algorithm. This correspondence contains a corrected version of the algorithm together with a proof that the corrected version always...... computes the free distance for noncatastrophic codes....
Gao, Q.; Yao, W. A.; Wu, F.; Zhang, H. W.; Lin, J. H.; Zhong, W. X.; Howson, W. P.; Williams, F. W.
2013-09-01
This paper proposes an efficient algorithm for computing the dynamic responses of one-dimensional periodic structures, with or without defects. It uses the symmetry of the periodic structure and the energy-propagation feature of the dynamic system to analyze the algebraic structure of the matrix exponential corresponding to such structures. By exploiting this special algebraic structure together with the precise integration method, an efficient and accurate algorithm is obtained for computing the matrix exponential, and hence the dynamic responses, of one-dimensional periodic structures and periodic structures with defects. The method is accurate, efficient and saves memory.
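The precise integration idea referenced above can be sketched in its generic form: subdivide the interval into 2^N steps, Taylor-expand the increment of the matrix exponential over one tiny step, and square it back up while tracking exp(H dt) - I to avoid round-off. This is a minimal generic sketch, not the paper's periodic-structure-specific algorithm.

```python
def expm_precise(h, tau, n_doublings=20, taylor_terms=4):
    """Precise integration method for exp(h * tau), h a small dense matrix.

    Subdivide tau into 2**n_doublings steps, Taylor-expand the increment
    T = exp(h*dt) - I, then double: T <- 2T + T@T. Tracking T instead of
    exp(h*dt) itself avoids round-off when dt is tiny.
    """
    n = len(h)

    def mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    dt = tau / (2 ** n_doublings)
    hd = [[h[i][j] * dt for j in range(n)] for i in range(n)]
    t, term = [r[:] for r in hd], [r[:] for r in hd]
    for k in range(2, taylor_terms + 1):       # T = hd + hd^2/2! + ...
        term = [[v / k for v in row] for row in mul(term, hd)]
        t = [[t[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(n_doublings):               # (I + T)^2 = I + 2T + T^2
        t2 = mul(t, t)
        t = [[2 * t[i][j] + t2[i][j] for j in range(n)] for i in range(n)]
    return [[t[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
            for i in range(n)]
```

For the rotation generator h = [[0, 1], [-1, 0]], the result matches [[cos t, sin t], [-sin t, cos t]] to near machine precision.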
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher [Electronic and Electrical Engineering Department, Shiraz University of Technology, Shiraz (Iran)
2009-08-15
This paper introduces a robust hybrid evolutionary search algorithm to solve the multi-objective Distribution Feeder Reconfiguration (DFR) problem. The main objectives of the DFR are to minimize the real power loss, the deviation of the nodes' voltage and the number of switching operations, and to balance the loads on the feeders. Because the objectives are different and not commensurable, it is difficult to solve the problem by conventional approaches that optimize a single objective. This paper presents a new approach based on norm3 for the DFR problem. In the proposed method, the objective functions are considered as a vector, and the aim is to maximize the distance (norm2) between the objective function vector and the worst objective function vector while the constraints are met. Since the proposed DFR is a multi-objective and non-differentiable optimization problem, a new hybrid evolutionary algorithm (EA) based on the combination of Honey Bee Mating Optimization (HBMO) and Discrete Particle Swarm Optimization (DPSO), called DPSO-HBMO, is employed to solve it. The results of the proposed reconfiguration method are compared with the solutions obtained by other approaches, the original DPSO and HBMO, over different distribution test systems. (author)
Performance of an efficient image-registration algorithm in processing MR renography data.
Conlin, Christopher C; Zhang, Jeff L; Rousset, Florian; Vachet, Clement; Zhao, Yangyang; Morton, Kathryn A; Carlston, Kristi; Gerig, Guido; Lee, Vivian S
2016-02-01
To evaluate the performance of an edge-based registration technique in correcting for respiratory motion artifacts in magnetic resonance renographic (MRR) data, and to examine the efficiency of a semiautomatic software package in processing renographic data from a cohort of clinical patients. The developed software incorporates an image-registration algorithm based on the generalized Hough transform of edge maps. It was used to estimate glomerular filtration rate (GFR), renal plasma flow (RPF), and mean transit time (MTT) from 36 patients who underwent free-breathing MRR at 3T using saturation-recovery turbo-FLASH. The processing time required for each patient was recorded. Renal parameter estimates and model-fitting residues from the software were compared to those from a previously reported technique. Interreader variability in the software was quantified by the standard deviation of parameter estimates among three readers. GFR estimates from our software were also compared to a reference standard from nuclear medicine. The time taken to process one patient's data with the software averaged 12 ± 4 minutes. The applied image registration effectively reduced motion artifacts in dynamic images, providing renal tracer-retention curves with significantly smaller fitting residues than unregistered data or data registered by the previously reported technique. Interreader variability was less than 10% for all parameters. GFR estimates from the proposed method also showed greater concordance with reference values. The software processes MRR data efficiently and accurately; its registration technique based on the generalized Hough transform effectively reduces respiratory motion artifacts in free-breathing renographic acquisitions. © 2015 Wiley Periodicals, Inc.
Millman, Daniel Raul
Computational fluid dynamics (CFD) methods have been coupled with structural solvers to provide accurate predictions of limit cycle oscillations (LCO). There is, however, a growing interest in understanding how uncertainties in flight conditions and structural parameters affect the character of an LCO response, leading to failure of an aeroelastic system. Uncertainty quantification of a stochastic system (parametric uncertainty) with stochastic inputs (initial condition uncertainty) has traditionally been analyzed with Monte Carlo simulations (MCS). Probability density functions (PDF) of the LCO response are obtained from the MCS to estimate the probability of failure. A CFD solution, however, can take days to weeks to obtain a single response, making the MCS method intractable for large problems. A candidate approach to efficiently estimate the PDF of an LCO response is the stochastic projection method. The classical stochastic projection method is a polynomial chaos expansion (PCE). The PCE approximates the response in the stochastic domain through a Fourier type expansion of the Wiener-Hermite polynomials. An LCO response can be characterized as a subcritical or supercritical bifurcation, and bifurcations are shown to be discontinuities in the stochastic domain. The PCE method, then, would be too inefficient for estimating the LCO response surface. The objective of this research is to extend the stochastic projection method to include the construction of B-spline surfaces in the stochastic domain. The multivariate B-spline problem is solved to estimate the LCO response surface. An MCS is performed on this response surface to estimate the PDF of the LCO response. The probability of failure is then computed from the PDF. The stochastic projection method via B-splines is applied to the problem of estimating the PDF of a subcritical LCO response of a nonlinear airfoil in inviscid transonic flow. The stochastic algorithm provides a conservative estimate of the
Vilanova, Pedro
2016-01-07
In this work, we present an extension of the forward-reverse representation introduced in "Simulation of forward-reverse stochastic representations for conditional diffusions", a 2014 paper by Bayer and Schoenmakers, to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.
ReactionMap: an efficient atom-mapping algorithm for chemical reactions.
Fooshee, David; Andronico, Alessio; Baldi, Pierre
2013-11-25
Large databases of chemical reactions provide new data-mining opportunities and challenges. Key challenges result from the imperfect quality of the data and the fact that many of these reactions are not properly balanced or atom-mapped. Here, we describe ReactionMap, an efficient atom-mapping algorithm. Our approach uses a combination of maximum common chemical subgraph search and minimization of an assignment cost function derived empirically from training data. We use a set of over 259,000 balanced atom-mapped reactions from the SPRESI commercial database to train the system, and we validate it on random sets of 1000 and 17,996 reactions sampled from this pool. These large test sets represent a broad range of chemical reaction types, and ReactionMap correctly maps about 99% of the atoms and about 96% of the reactions, with a mean time per mapping of 2 s. Most correctly mapped reactions are mapped with high confidence. Mapping accuracy compares favorably with ChemAxon's AutoMapper, versions 5 and 6.1, and the DREAM Web tool. These approaches correctly map 60.7%, 86.5%, and 90.3% of the reactions, respectively, on the same data set. A ReactionMap server is available on the ChemDB Web portal at http://cdb.ics.uci.edu.
Directory of Open Access Journals (Sweden)
Vinh Ho-Huu
2017-11-01
Full Text Available In an effort to allow an increase in the number of aircraft and airport operations while mitigating their negative impacts (e.g., noise and pollutant emissions) on near-airport communities, the optimal design of new departure routes with less noise and fuel consumption becomes more important. In this paper, a multi-objective evolutionary algorithm based on decomposition (MOEA/D), which recently emerged as a potential method for solving multi-objective optimization problems (MOPs), is developed for this kind of problem. First, to minimize aircraft noise for departure routes while taking into account the interests of various stakeholders, bi-objective optimization problems involving noise and fuel consumption are formulated in which both the ground track and the vertical profile of a departure route are optimized simultaneously. Second, in order to keep the design space of vertical profiles feasible during the optimization process, a recently proposed trajectory parameterization technique is employed. Furthermore, some modifications to MOEA/D aimed at significantly reducing the computational cost are also introduced. Two different examples of departure routes at Schiphol Airport in the Netherlands are shown to demonstrate the applicability and reliability of the proposed method. The simulation results reveal that the proposed method is an effective and efficient approach for solving this kind of problem.
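The decomposition principle behind MOEA/D can be sketched on a toy bi-objective problem: each weight vector defines a scalar (weighted Tchebycheff) subproblem, and offspring update only their neighborhood. This is a minimal generic sketch, not the paper's departure-route formulation or its cost-reducing modifications.

```python
import random

def moead(f, n_sub=20, n_gen=150, t_nbrs=3, seed=0):
    """Minimal MOEA/D sketch for a bi-objective problem with a scalar
    decision variable in [0, 1]. Each subproblem minimizes a weighted
    Tchebycheff aggregation; offspring update only their neighborhood.
    """
    rng = random.Random(seed)
    w = [(i / (n_sub - 1), 1 - i / (n_sub - 1)) for i in range(n_sub)]
    nbrs = [sorted(range(n_sub), key=lambda j: abs(i - j))[:t_nbrs]
            for i in range(n_sub)]
    x = [rng.random() for _ in range(n_sub)]
    fx = [f(v) for v in x]
    z = [min(p[k] for p in fx) for k in (0, 1)]        # ideal point

    def g(fv, wi):                                      # Tchebycheff value
        return max(wi[k] * abs(fv[k] - z[k]) for k in (0, 1))

    for _ in range(n_gen):
        for i in range(n_sub):
            a, b = rng.sample(nbrs[i], 2)
            child = min(1.0, max(0.0, (x[a] + x[b]) / 2 + rng.gauss(0, 0.1)))
            fc = f(child)
            z[:] = [min(z[k], fc[k]) for k in (0, 1)]   # update ideal point
            for j in nbrs[i]:                           # neighborhood update
                if g(fc, w[j]) < g(fx[j], w[j]):
                    x[j], fx[j] = child, fc
    return x, fx

# Toy convex front: f1 = x, f2 = (1 - x)^2.
xs, front = moead(lambda v: (v, (1 - v) ** 2))
```

The returned population approximates the Pareto front, with the extreme weight vectors (1, 0) and (0, 1) anchoring the two single-objective optima.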
Bayer, Christian
2016-02-20
© 2016 Taylor & Francis Group, LLC. ABSTRACT: In this work, we present an extension of the forward–reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994–2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.
Development of Efficient Resource Allocation Algorithm in Chunk Based OFDMA System
Directory of Open Access Journals (Sweden)
Yadav Mukesh Kumar
2016-01-01
Full Text Available The emerging demand for diverse data applications in next-generation wireless networks calls for both high-data-rate wireless connections and intelligent multiuser scheduling designs. Orthogonal frequency division multiple access (OFDMA) based systems are capable of delivering high data rates and can operate in a multipath environment. An OFDMA-based system divides the entire channel into many orthogonal narrowband subcarriers, which helps eliminate inter-symbol interference, a limit on the total available data rate. In this paper, the resource allocation problem for chunk-based Orthogonal Frequency Division Multiple Access (OFDMA) wireless multicast systems is investigated. It is assumed that the Base Station (BS) has multiple antennas in a Distributed Antenna System (DAS). The allocation unit is a group of contiguous subcarriers (a chunk) in conventional OFDMA systems. The aim of this investigation is to develop an efficient resource allocation algorithm that maximizes the total throughput and minimizes the average outage probability over a chunk, with respect to the average Bit Error Rate (BER) and the total available power.
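A common baseline for this kind of allocation problem can be sketched as below: assign each chunk to the user with the highest achievable rate on it, which maximizes sum throughput when power is split evenly. This greedy rule is an illustrative assumption for contrast, not the algorithm developed in the abstract.

```python
def allocate_chunks(rates):
    """Greedy chunk allocation baseline.

    rates[u][c] is user u's achievable rate on chunk c. Each chunk goes to
    the user with the highest rate on it; returns the assignment and the
    resulting sum throughput.
    """
    n_users, n_chunks = len(rates), len(rates[0])
    assign = {c: max(range(n_users), key=lambda u: rates[u][c])
              for c in range(n_chunks)}
    throughput = sum(rates[assign[c]][c] for c in range(n_chunks))
    return assign, throughput
```

Fairness- or outage-aware schemes would modify the per-chunk score rather than the overall structure of this loop.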
Directory of Open Access Journals (Sweden)
David Simoncini
Full Text Available Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex energy landscape. In this paper we present EdaFold(AA), a fragment-based approach which shares information between the generated models and steers the search towards native-like regions. A distribution over fragments is estimated from a pool of low-energy all-atom models. This iteratively refined distribution is used to guide the selection of fragments during the building of models for subsequent rounds of structure prediction. The use of an estimation of distribution algorithm enables EdaFold(AA) to reach lower energy levels and to generate a higher percentage of near-native models. EdaFold(AA) uses an all-atom energy function and produces models with atomic resolution. We observed an improvement in energy-driven blind selection of models in a benchmark of EdaFold(AA) in comparison with the AbInitioRelax protocol.
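The estimation-of-distribution principle EdaFold applies to fragment choices can be illustrated on a toy bit-string problem: fit a distribution to the best samples of each round, then resample from it. This PBIL-style sketch is a generic illustration, not EdaFold's fragment-level machinery.

```python
import random

def eda_bitstring(fitness, n=30, pop=60, elite=15, gens=40, lr=0.3, seed=1):
    """PBIL-style estimation-of-distribution loop on bit strings.

    Each generation samples a population from per-position marginals,
    selects the fittest, and moves the marginals toward the elite
    frequencies, iteratively sharpening the sampling distribution.
    """
    rng = random.Random(seed)
    p = [0.5] * n                        # marginal P(bit i = 1)
    for _ in range(gens):
        popn = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                for _ in range(pop)]
        best = sorted(popn, key=fitness, reverse=True)[:elite]
        for i in range(n):               # move marginals toward elite freq.
            freq = sum(ind[i] for ind in best) / elite
            p[i] = (1 - lr) * p[i] + lr * freq
    return p

p = eda_bitstring(sum)                   # OneMax: fitness = number of ones
```

In EdaFold the analogue of `p` is a distribution over candidate fragments per position, estimated from low-energy all-atom models.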
Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A
2014-01-01
This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.
Directory of Open Access Journals (Sweden)
S. Salcedo-Sanz
2014-01-01
Full Text Available This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.
Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.
2014-01-01
This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860
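The reef metaphor above can be sketched compactly: corals occupy reef slots, larvae produced by broadcast spawning (crossover) and brooding (mutation) settle only by outcompeting a current occupant. This is a simplified sketch of the CRO scheme; the full operator set (e.g., depredation) is omitted.

```python
import random

def cro_minimize(f, dim=5, reef_size=20, gens=60, seed=2):
    """Compact coral-reefs-optimization-style loop minimizing f over a box.

    Reef slots may be empty; larvae settle in a random slot only if it is
    empty or they beat its occupant, so slot quality improves monotonically.
    """
    rng = random.Random(seed)
    reef = [[rng.uniform(-5, 5) for _ in range(dim)] if rng.random() < 0.6
            else None for _ in range(reef_size)]
    for _ in range(gens):
        corals = [c for c in reef if c is not None]
        if not corals:                                   # re-seed empty reef
            reef[0] = [rng.uniform(-5, 5) for _ in range(dim)]
            continue
        larvae = []
        for _ in range(len(corals)):
            if len(corals) >= 2 and rng.random() < 0.8:  # broadcast spawning
                a, b = rng.sample(corals, 2)
                larvae.append([x if rng.random() < 0.5 else y
                               for x, y in zip(a, b)])
            else:                                        # brooding (mutation)
                parent = rng.choice(corals)
                larvae.append([x + rng.gauss(0, 0.3) for x in parent])
        for larva in larvae:             # settlement: a few random attempts
            for _ in range(3):
                i = rng.randrange(reef_size)
                if reef[i] is None or f(larva) < f(reef[i]):
                    reef[i] = larva
                    break
    best = min((c for c in reef if c is not None), key=f)
    return best, f(best)

best, val = cro_minimize(lambda v: sum(x * x for x in v))  # sphere function
```

The only-if-better settlement rule is what gives the reef its implicit elitism.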
On-the-fly detection of images with gastritis aspects in magnetically guided capsule endoscopy
Mewes, P. W.; Neumann, D.; Juloski, A. L.; Angelopoulou, E.; Hornegger, J.
2011-03-01
Capsule Endoscopy (CE) was introduced in 2000 and has since become an established diagnostic procedure for the small bowel, colon and esophagus. For the CE examination the patient swallows the capsule, which then travels through the gastrointestinal tract under the influence of the peristaltic movements. CE is not indicated for stomach examination, as the capsule movements cannot be controlled from the outside and the entire surface of the stomach cannot be reliably covered. Magnetically-guided capsule endoscopy (MGCE) was introduced in 2010. For the MGCE procedure the stomach is filled with water and the capsule is navigated from the outside using an external magnetic field. During the examination the operator can control the motion of the capsule in order to obtain a sufficient number of stomach-surface images with diagnostic value. The quality of the examination depends on the skill of the operator and his ability to detect aspects of interest in real time. We present a novel computer-assisted diagnostic procedure (CADP) algorithm for indicating gastritis pathologies in the stomach during the examination. Our algorithm is based on pre-processing methods and feature vectors that are suitably chosen for the challenges of the MGCE imaging (suspended particles, bubbles, lighting). An image is classified using an AdaBoost-trained classifier. For the classifier training, a number of possible features were investigated. Statistical evaluation was conducted to identify relevant features with discriminative potential. The proposed algorithm was tested on 12 video sequences stemming from 6 volunteers. A mean detection rate of 91.17% was achieved during leave-one-out cross-validation.
Directory of Open Access Journals (Sweden)
Luman Zhao
2015-01-01
Full Text Available A thrust allocation method was proposed based on a hybrid optimization algorithm to efficiently and dynamically position a semisubmersible drilling rig. That is, the thrust allocation was optimized to produce the generalized forces and moment required while at the same time minimizing the total power consumption, under the premise that forbidden zones should be taken into account. An optimization problem was mathematically formulated to provide the optimal thrust allocation by introducing the corresponding design variables, objective function, and constraints. A hybrid optimization algorithm consisting of a genetic algorithm and a sequential quadratic programming (SQP) algorithm was selected and used to solve this problem. The proposed method was evaluated by applying it to a thrust allocation problem for a semisubmersible drilling rig. The results indicate that the proposed method can be used as part of a cost-effective strategy for thrust allocation of the rig.
On-the-Fly Machine Learning of Atomic Potential in Density Functional Theory Structure Optimization
Jacobsen, T. L.; Jørgensen, M. S.; Hammer, B.
2018-01-01
Machine learning (ML) is used to derive local stability information for density functional theory calculations of systems in relation to the recently discovered SnO2 (110)-(4×1) reconstruction. The ML model is trained on (structure, total energy) relations collected during global minimum energy search runs with an evolutionary algorithm (EA). While being built, the ML model is used to guide the EA, thereby increasing the overall rate at which the EA succeeds. Inspection of the local atomic potentials emerging from the model further shows chemically intuitive patterns.
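The guide-while-training pattern can be sketched generically: a cheap surrogate trained on all evaluated (structure, energy) pairs prescreens mutated candidates, so the expensive energy function is called only once per generation. The 1-nearest-neighbor surrogate and the toy energy landscape here are illustrative stand-ins, not the authors' ML model or DFT setup.

```python
import random

def surrogate_guided_search(energy, dim=4, gens=60, cands=20, seed=3):
    """Sketch of surrogate-accelerated evolutionary search.

    A 1-nearest-neighbor predictor over already-evaluated points ranks a
    pool of mutated candidates; only the top-ranked candidate receives an
    expensive energy evaluation each generation.
    """
    rng = random.Random(seed)
    x0 = [rng.uniform(-2, 2) for _ in range(dim)]
    data = [(x0, energy(x0))]            # (structure, total energy) pairs

    def surrogate(v):                    # predict via nearest evaluated point
        nearest = min(data, key=lambda de: sum((a - b) ** 2
                                               for a, b in zip(de[0], v)))
        return nearest[1]

    best = data[0]
    for _ in range(gens):
        pool = [[a + rng.gauss(0, 0.3) for a in best[0]] for _ in range(cands)]
        pick = min(pool, key=surrogate)  # ML model guides the search
        e = energy(pick)                 # single expensive evaluation
        data.append((pick, e))
        if e < best[1]:
            best = (pick, e)
    return best

best, e = surrogate_guided_search(lambda v: sum(x * x for x in v))
```

As `data` grows the surrogate becomes more informative, which is the mechanism by which the model speeds up the search while it is being built.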
Degeneralization Algorithm for Generation of Büchi Automata Based on Contented Situation
Directory of Open Access Journals (Sweden)
Laixiang Shan
2015-01-01
Full Text Available We present an on-the-fly degeneralization algorithm used to transform generalized Büchi automata (GBA) into Büchi automata (BA), different from the standard degeneralization algorithm. A contented situation, which records which acceptance conditions are satisfiable while expanding LTL formulae, is attached to the states and transitions in the BA. In order to obtain a deterministic BA, the Shannon expansion is used recursively when expanding LTL formulae by applying the tableau rules. The on-the-fly degeneralization algorithm is carried out in each step of the expansion of the LTL formulae. Ordered binary decision diagrams are used to represent the BA and simplify the LTL formulae. The temporary automata are stored as syntax-directed acyclic graphs in order to save storage space. These ideas are implemented in a conversion algorithm used to build a property automaton corresponding to the given LTL formulae. We compare our method to previous work and show that it is more efficient for four sets of random formulae generated by LBTT.
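For contrast with the contented-situation approach, the textbook counter-based degeneralization can be sketched as follows: BA states are (q, i) pairs where the counter i tracks how many acceptance sets have been satisfied since the last reset. This is a generic state-based variant for illustration, not the abstract's algorithm.

```python
def degeneralize(init, trans, acc_sets):
    """Counter-based GBA -> BA construction (textbook baseline).

    BA states are (q, i): i counts acceptance sets already satisfied; a
    state with i == len(acc_sets) is accepting and resets the counter.
    trans maps a GBA state to its successor list; acc_sets is a list of
    state sets (the generalized acceptance conditions).
    """
    k = len(acc_sets)

    def bump(q, i):           # advance past every satisfied acceptance set
        while i < k and q in acc_sets[i]:
            i += 1
        return i

    start = (init, bump(init, 0))
    ba, accepting, stack = {}, set(), [start]
    while stack:
        q, i = stack.pop()
        if (q, i) in ba:
            continue
        ba[(q, i)] = []
        if i == k:
            accepting.add((q, i))
        base = 0 if i == k else i          # accepting copy resets counter
        for q2 in trans.get(q, []):
            succ = (q2, bump(q2, base))
            ba[(q, i)].append(succ)
            stack.append(succ)
    return start, ba, accepting
```

A run visits an accepting (q, k) state infinitely often exactly when every acceptance set is hit infinitely often, which is the GBA condition being encoded.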
Link, W.A.; Barker, R.J.
2008-01-01
Judicious choice of candidate-generating distributions improves the efficiency of the Metropolis-Hastings algorithm. In Bayesian applications, it is sometimes possible to identify an approximation to the target posterior distribution; this approximate posterior distribution is a good choice for candidate generation. These observations are applied to analysis of the Cormack-Jolly-Seber model and its extensions. © Springer Science+Business Media, LLC 2007.
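The idea of drawing candidates from an approximate posterior is the independence sampler: the proposal is fixed, and the acceptance ratio corrects for its mismatch with the target. The Gaussian target and proposal below are assumed toy densities, not the Cormack-Jolly-Seber setting.

```python
import math
import random

def independence_mh(log_target, log_prop, sample_prop, n, seed=4):
    """Independence Metropolis-Hastings sampler.

    Candidates come from a fixed approximation of the target; the
    acceptance ratio (target ratio divided by proposal ratio) corrects
    for the discrepancy between the two densities.
    """
    rng = random.Random(seed)
    x = sample_prop(rng)
    chain = []
    for _ in range(n):
        y = sample_prop(rng)
        log_a = (log_target(y) - log_target(x)) - (log_prop(y) - log_prop(x))
        if rng.random() < math.exp(min(0.0, log_a)):
            x = y
        chain.append(x)
    return chain

# Toy setup: target N(1, 1), approximate posterior N(0, 2) as proposal.
log_target = lambda v: -0.5 * (v - 1.0) ** 2
log_prop = lambda v: -v * v / 8.0
chain = independence_mh(log_target, log_prop,
                        lambda r: r.gauss(0.0, 2.0), 20000)
```

The closer the approximation is to the target, the higher the acceptance rate and the lower the autocorrelation, which is exactly the efficiency gain the abstract describes.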
DEFF Research Database (Denmark)
Clausen, Johan Christian; Damkilde, Lars; Andersen, Lars
2007-01-01
An efficient return algorithm for stress update in numerical plasticity computations is presented. The yield criterion must be linear in principal stress space and can be composed of any number of yield planes. Each of these yield planes may have an associated or non-associated flow rule...... considerations. The method is exemplified on non-associated Mohr-Coulomb plasticity throughout the paper....
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto the graphics processing unit (GPU) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm incurs substantial branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) that are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42 to 2.67 times faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
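The conditional-to-arithmetic rewrite can be illustrated on the core DDA decision, picking which axis boundary a ray crosses first. Instead of nested if/else, comparison masks select the axis, the style of rewrite that avoids warp divergence on a GPU. This is an illustrative sketch in Python, not the paper's kernel code.

```python
def dda_step_branchless(tx, ty, tz):
    """Branch-reduced 3D-DDA step selection.

    tx, ty, tz are the parametric distances to the next voxel boundary on
    each axis. Comparison masks (exactly one of which is 1, with x winning
    ties, then y) replace the nested if/else; the crossing distance is then
    recovered by arithmetic instead of control flow.
    """
    sx = int(tx <= ty and tx <= tz)   # 1 iff the x-boundary is crossed first
    sy = int(ty < tx and ty <= tz)
    sz = int(tz < tx and tz < ty)
    t_min = tx * sx + ty * sy + tz * sz
    return (sx, sy, sz), t_min
```

In CUDA the same masks would be computed with predicated comparisons, so all threads of a warp execute the identical instruction stream regardless of which axis each ray steps along.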
Akhmedova, Sh; Semenkin, E.
2017-02-01
Previously, a meta-heuristic approach, called Co-Operation of Biology-Related Algorithms or COBRA, for solving real-parameter optimization problems was introduced and described. COBRA’s basic idea consists of a cooperative work of five well-known bionic algorithms such as Particle Swarm Optimization, the Wolf Pack Search, the Firefly Algorithm, the Cuckoo Search Algorithm and the Bat Algorithm, which were chosen due to the similarity of their schemes. The performance of this meta-heuristic was evaluated on a set of test functions and its workability was demonstrated. Thus it was established that the idea of the algorithms’ cooperative work is useful. However, it is unclear which bionic algorithms should be included in this cooperation and how many of them. Therefore, the five above-listed algorithms and additionally the Fish School Search algorithm were used for the development of five different modifications of COBRA by varying the number of component-algorithms. These modifications were tested on the same set of functions and the best of them was found. Ways of further improving the COBRA algorithm are then discussed.
SkyAlign: a portable, work-efficient skyline algorithm for multicore and GPU architectures
DEFF Research Database (Denmark)
Bøgh, Kenneth Sejdenfaden; Chester, Sean; Assent, Ira
2016-01-01
The skyline operator determines points in a multidimensional dataset that offer some optimal trade-off. State-of-the-art CPU skyline algorithms exploit quad-tree partitioning with complex branching to minimise the number of point-to-point comparisons. Branch-phobic GPU skyline algorithms rely...... on compute throughput rather than partitioning, but fail to match the performance of sequential algorithms. In this paper, we introduce a new skyline algorithm, SkyAlign, that is designed for the GPU, and a GPU-friendly, grid-based tree structure upon which the algorithm relies. The search tree allows us...... to dramatically reduce the amount of work done by the GPU algorithm by avoiding most point-to-point comparisons at the cost of some compute throughput. This trade-off allows SkyAlign to achieve orders of magnitude faster performance than its predecessors. Moreover, a NUMA-oblivious port of SkyAlign outperforms...
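The skyline operator the abstract builds on can be stated in a few lines; the quadratic all-pairs baseline below is the naive reference point that partitioned algorithms like SkyAlign avoid, shown here only to pin down the dominance definition.

```python
def skyline(points):
    """Naive all-pairs skyline: keep points not dominated by any other.

    For minimization, p dominates q when p <= q in every dimension and
    p < q in at least one. Quadratic in the number of points, in contrast
    to the partitioned comparison-avoiding approaches like SkyAlign.
    """
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q)) and
                any(a < b for a, b in zip(p, q)))

    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 9), (3, 3), (2, 8), (5, 1), (4, 4), (9, 9)]
print(skyline(pts))  # [(1, 9), (3, 3), (2, 8), (5, 1)]
```

Every point-to-point call to `dominates` here is a comparison that a grid- or quad-tree-partitioned algorithm tries to prove unnecessary in bulk.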
Power Analysis of Energy Efficient DES Algorithm and Implementation on 28nm FPGA
DEFF Research Database (Denmark)
Thind, Vandana; Pandey, Bishwajeet; Hussain, Dil muhammed Akbar
2016-01-01
In this work, we have done a power analysis of the Data Encryption Standard (DES) algorithm using the Xilinx ISE software development kit. We have analyzed the amount of power utilized by selected components on board, i.e., the FPGA Artix-7, where the DES algorithm is implemented. The components taken into consider......
van Lenthe, J. H.; Broer-Braam, H. B.; Rashid, Z.
2012-01-01
We comment on the paper [Song et al., J. Comput. Chem. 2009, 30, 399] and discuss the efficiency of the orbital optimization and gradient evaluation in the Valence Bond Self-Consistent Field (VBSCF) method. We note that Song et al. neglect to properly reference Broer et al., who published an
Directory of Open Access Journals (Sweden)
Imen Châari
2014-07-01
Full Text Available Path planning is a fundamental optimization problem that is crucial for the navigation of a mobile robot. Among the vast array of optimization approaches, we focus in this paper on Ant Colony Optimization (ACO) and Genetic Algorithms (GA) for solving the global path planning problem in a static environment, considering their effectiveness in solving such a problem. Our objective is to design an efficient hybrid algorithm that profits from the advantages of both the ACO and GA approaches so as to maximize the chance of finding the optimal path even under real-time constraints. In this paper, we present smartPATH, a new hybrid ACO-GA algorithm that relies on the combination of an improved ACO algorithm (IACO) for efficient and fast path selection, and a modified crossover operator to reduce the risk of falling into a local minimum. We demonstrate through extensive simulations that smartPATH outperforms the classical ACO (CACO) and GA algorithms. It also outperforms the exact Dijkstra method in solving the path planning problem for large graph environments. It improves the solution quality by up to 57% in comparison with CACO and reduces the execution time by up to 83% compared to Dijkstra for large and dense graphs. In addition, experimental results on a real robot show that smartPATH finds the optimal path with a probability of up to 80%, with a small gap not exceeding 1 m in 98% of cases.
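The ACO half of such a hybrid rests on a probabilistic transition rule; a minimal sketch (hypothetical names, not smartPATH's actual IACO) of the classic roulette-wheel step is:

```python
import random

def ant_next_node(current, candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Classic ACO transition rule: choose the next node with probability
    proportional to pheromone^alpha * heuristic^beta, where the heuristic
    is typically 1/distance. Roulette-wheel selection over the weights."""
    weights = [(pheromone[(current, n)] ** alpha) * (heuristic[(current, n)] ** beta)
               for n in candidates]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for n, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return n
    return candidates[-1]  # guard against floating-point round-off
```

GA-style crossover on whole paths, as in smartPATH, is then layered on top of the tours these ants build.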
naiveBayesCall: an efficient model-based base-calling algorithm for high-throughput sequencing.
Kao, Wei-Chun; Song, Yun S
2011-03-01
Immense amounts of raw instrument data (i.e., images of fluorescence) are currently being generated using ultra-high-throughput sequencing platforms. An important computational challenge associated with this rapid advancement is to develop efficient algorithms that can extract accurate sequence information from raw data. To address this challenge, we recently introduced a novel model-based base-calling algorithm that is fully parametric and has several advantages over previously proposed methods. Our original algorithm, called BayesCall, significantly reduced the error rate, particularly in the later cycles of a sequencing run, and also produced useful base-specific quality scores with a high discrimination ability. Unfortunately, however, BayesCall is too computationally expensive to be of broad practical use. In this article, we build on our previous model-based approach to devise an efficient base-calling algorithm that is orders of magnitude faster than BayesCall, while still maintaining a comparably high level of accuracy. Our new algorithm is called naiveBayesCall, and it utilizes approximation and optimization methods to achieve scalability. We describe the performance of naiveBayesCall and demonstrate how improved base-calling accuracy may facilitate de novo assembly and SNP detection when the sequence coverage depth is low to moderate.
Mustapha, Ibrahim; Mohd Ali, Borhanuddin; Rasid, Mohd Fadlee A; Sali, Aduwati; Mohamad, Hafizal
2015-08-13
It is well-known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption, and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show the convergence, learning, and adaptability of the algorithm to a dynamic environment in reaching an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption, and probability of detection indicate improved performance with the proposed approach. The results further reveal that an energy saving of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach.
Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal
2015-01-01
It is well-known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption, and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show the convergence, learning, and adaptability of the algorithm to a dynamic environment in reaching an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption, and probability of detection indicate improved performance with the proposed approach. The results further reveal that an energy saving of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191
Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm
Directory of Open Access Journals (Sweden)
Mengzhao Yang
2017-07-01
Full Text Available The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis, such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear with an increasing number of RS images, which proves that image retrieval using our method is efficient.
Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm
Song, Wei; Mei, Haibin
2017-01-01
The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis, such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear with an increasing number of RS images, which proves that image retrieval using our method is efficient. PMID:28737699
Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.
Yang, Mengzhao; Song, Wei; Mei, Haibin
2017-07-23
The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved not only in storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis, such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval method for massive ocean RS images via a Cloud-based mean-shift algorithm. A distributed construction method based on the pyramid model and the maximum hierarchical layer algorithm is proposed and used to realize an efficient storage structure for RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method achieves better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear with an increasing number of RS images, which proves that image retrieval using our method is efficient.
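The mean-shift iteration underlying the retrieval method is compact to state; a one-dimensional sketch with a flat kernel (hypothetical, far simpler than the paper's Hadoop/canopy fusion) is:

```python
def mean_shift_point(x, data, bandwidth, iters=50, tol=1e-6):
    """Shift x toward the mean of its neighbours within `bandwidth`
    (flat kernel) until it converges on a local density mode."""
    for _ in range(iters):
        neighbours = [p for p in data if abs(p - x) <= bandwidth]
        new_x = sum(neighbours) / len(neighbours)
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x

data = [1.0, 1.2, 0.9, 8.0, 8.1, 7.9]
modes = sorted({round(mean_shift_point(p, data, bandwidth=2.0), 1) for p in data})
print(modes)  # -> [1.0, 8.0]: one mode per cluster
```

Points that converge to the same mode form one cluster; the canopy pre-clustering in the paper serves to cut down the neighbour searches this loop performs.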
Computationally Efficient DOA Tracking Algorithm in Monostatic MIMO Radar with Automatic Association
Directory of Open Access Journals (Sweden)
Huaxin Yu
2014-01-01
Full Text Available We consider the problem of tracking the direction of arrival (DOA) of multiple moving targets in monostatic multiple-input multiple-output (MIMO) radar. A low-complexity DOA tracking algorithm in monostatic MIMO radar is proposed. The proposed algorithm obtains the DOA estimate via the difference between the previous and current covariance matrices of the reduced-dimension transformation signal, and it reduces the computational complexity and realizes automatic association in DOA tracking. An error analysis and the Cramér-Rao lower bound (CRLB) of DOA tracking are derived in the paper. The proposed algorithm can not only be regarded as an extension of the array-signal-processing DOA tracking algorithm of Zhang et al. (2008), but is also an improved version of that algorithm, with better DOA tracking performance. The simulation results demonstrate the effectiveness of the proposed algorithm. Our work provides technical support for the practical application of MIMO radar.
An efficient and robust algorithm for parallel groupwise registration of bone surfaces
van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.
2012-01-01
In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds, thereby minimizing the total deformation. The registration algorithm
Pan, Zhong-Liang; Chen, Ling; Zhang, Guang-Zhao
2016-04-01
The hybrid CMOS molecular (CMOL) circuit, which combines complementary metal-oxide-semiconductor (CMOS) components with nanoscale wires and switches, can exhibit significantly improved performance. In CMOL circuits, the nanodevices, which are called cells, should be placed appropriately and are connected by nanowires. The cells should be connected such that they follow the shortest path. This paper presents an efficient method of cell allocation in CMOL circuits with the hybrid CMOS/nanodevice structure; the method is based on a cultural algorithm with chaotic behavior. The optimal model of cell allocation is derived, and the coding of an individual representing a cell allocation is described. Then the cultural algorithm with chaotic behavior is designed to solve the optimal model. The cultural algorithm consists of a population space, a belief space, and a protocol that describes how knowledge is exchanged between the population and belief spaces. In this paper, the evolutionary processes of the population space employ a genetic algorithm in which three populations undergo parallel evolution. The evolutionary processes of the belief space use a chaotic ant colony algorithm. Extensive experiments on cell allocation in benchmark circuits showed that a low area usage can be obtained using the proposed method, and the computation time can be reduced greatly compared to that of a conventional genetic algorithm.
Zhou, Guiyun; Sun, Zhongxuan; Fu, Suhua
2016-05-01
Depressions are common features in raster digital elevation models (DEMs) and they are usually filled for the automatic extraction of drainage networks. Among existing algorithms for filling depressions, the Priority-Flood algorithm substantially outperforms other algorithms in terms of both time complexity and memory requirement. The Priority-Flood algorithm uses a priority queue to process cells. This study proposes an efficient variant of the Priority-Flood algorithm, which considerably reduces the number of cells processed by the priority queue by using region-growing procedures to process the majority of cells not within depressions or flat regions. We present three implementations of the proposed variant: two-pass implementation, one-pass implementation and direct implementation. Experiments are conducted on thirty DEMs with a resolution of 3 m. All three implementations run faster than existing variants of the algorithm for all tested DEMs. The one-pass implementation runs the fastest and the average speed-up over the fastest existing variant is 44.6%.
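The baseline Priority-Flood that the proposed variant accelerates can be sketched in a few lines (a minimal hypothetical implementation, without the authors' region-growing optimization): border cells seed a priority queue, cells are popped in ascending elevation order, and lower neighbours are raised as the flood moves inward.

```python
import heapq

def priority_flood_fill(dem):
    """Fill depressions in a small 2-D DEM (list of lists): seed the
    priority queue with the border cells, pop cells in ascending
    elevation order, and raise any unseen lower neighbour."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    pq = []
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(pq, (filled[r][c], r, c))
                seen[r][c] = True
    while pq:
        elev, r, c = heapq.heappop(pq)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                filled[nr][nc] = max(filled[nr][nc], elev)  # raise pit cells
                heapq.heappush(pq, (filled[nr][nc], nr, nc))
    return filled

dem = [[9, 9, 9],
       [8, 1, 9],
       [9, 9, 9]]
print(priority_flood_fill(dem))  # the pit (1) is raised to its pour point (8)
```

Every cell passes through the heap here; the paper's variant gains its speed by diverting most cells away from the priority queue into cheap region-growing passes.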
EFFICIENT BLOCK MATCHING ALGORITHMS FOR MOTION ESTIMATION IN H.264/AVC
Directory of Open Access Journals (Sweden)
P. Muralidhar
2015-02-01
Full Text Available In Scalable Video Coding (SVC), motion estimation and inter-layer prediction play an important role in the elimination of temporal and spatial redundancies between consecutive layers. This paper evaluates the performance of widely accepted block matching algorithms used in various video compression standards, with emphasis on the performance of the algorithms for a didactic scalable video codec. Many different implementations of fast motion estimation algorithms have been proposed to reduce motion estimation complexity. The block matching algorithms have been analyzed with emphasis on Peak Signal to Noise Ratio (PSNR) and computational cost using MATLAB. In addition to the above comparisons, a survey has been done on spiral search motion estimation algorithms for video coding. A New Modified Spiral Search (NMSS) motion estimation algorithm has been proposed with lower computational complexity. The proposed algorithm achieves a 72% reduction in computation with a minimal (<1 dB) reduction in PSNR. A brief introduction to the entire flow of H.264/SVC video compression is also presented in this paper.
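All block matching algorithms, spiral search included, approximate the exhaustive full search baseline; a small sketch of that baseline with a sum-of-absolute-differences (SAD) cost (hypothetical Python, frames as lists of lists) is:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur, bx, by, bs, radius):
    """Exhaustive block matching: find the motion vector minimising SAD
    within +/-radius pixels of the block at (bx, by) in the current frame.
    Assumes ref and cur are frames of the same size."""
    best, best_mv = float('inf'), (0, 0)
    h, w = len(ref), len(ref[0])
    cur_block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and x + bs <= w and 0 <= y and y + bs <= h:
                cand = [row[x:x + bs] for row in ref[y:y + bs]]
                cost = sad(cur_block, cand)
                if cost < best:
                    best, best_mv = cost, (dx, dy)
    return best_mv, best

ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
ref[1][1], ref[1][2], ref[2][1], ref[2][2] = 5, 6, 7, 8
cur[3][3], cur[3][4], cur[4][3], cur[4][4] = 5, 6, 7, 8
print(full_search(ref, cur, 3, 3, 2, 2))  # -> ((-2, -2), 0)
```

Fast algorithms such as spiral search visit only a fraction of these candidate offsets, ordered outward from the predicted vector, which is where the computation savings reported above come from.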
Directory of Open Access Journals (Sweden)
Peng Wang
2013-01-01
Full Text Available This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
Energy Technology Data Exchange (ETDEWEB)
Zhu, Di [Rensselaer Polytechnic Inst., Troy, NY (United States). Dept. of Electrical, Computer and Systems Engineering; Schubert, Martin F. [Rensselaer Polytechnic Inst., Troy, NY (United States). Dept. of Electrical, Computer and Systems Engineering; Cho, Jaehee [Rensselaer Polytechnic Inst., Troy, NY (United States). Dept. of Electrical, Computer and Systems Engineering; Schubert, E. Fred [Rensselaer Polytechnic Inst., Troy, NY (United States). Dept. of Electrical, Computer and Systems Engineering; Crawford, Mary H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Koleske, Daniel D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shim, Hyunwook [Samsung LED, R&D Inst., Suwon (Republic of Korea); Sone, Cheolsoo [Samsung LED, R&D Inst., Suwon (Republic of Korea)
2012-01-01
Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.
Pickett, B D; Karlinsey, S M; Penrod, C E; Cormier, M J; Ebbert, M T W; Shiozawa, D K; Whipple, C J; Ridge, P G
2016-09-01
Simple Sequence Repeats (SSRs) are used to address a variety of research questions in a variety of fields (e.g. population genetics, phylogenetics, forensics), due to their high mutability within and between species. Here, we present an innovative algorithm, SA-SSR, based on suffix and longest common prefix arrays, for efficiently detecting SSRs in large sets of sequences. Existing SSR detection applications are hampered by one or more limitations (e.g. speed, accuracy, ease of use). Our algorithm addresses these challenges while being the most comprehensive and correct SSR detection software available. SA-SSR is 100% accurate and detected >1000 more SSRs than the second best algorithm, while offering greater control to the user than any existing software. SA-SSR is freely available at http://github.com/ridgelab/SA-SSR. Contact: perry.ridge@byu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
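The SSR definition itself is easy to operationalise; a naive scan (nothing like SA-SSR's suffix-array machinery, and far slower on large inputs) that reports maximal runs of a short motif might look like:

```python
def find_ssrs(seq, max_motif=6, min_repeats=3):
    """Report (start, motif, repeat_count) for each maximal run of a
    motif of 1..max_motif bases repeated >= min_repeats times in a row."""
    found = []
    for size in range(1, max_motif + 1):
        i = 0
        while i + size <= len(seq):
            motif = seq[i:i + size]
            count = 1
            while seq[i + count * size: i + (count + 1) * size] == motif:
                count += 1
            if count >= min_repeats:
                found.append((i, motif, count))
                i += count * size  # skip past the run just reported
            else:
                i += 1
    return found

print(find_ssrs("GGATATATATCC"))  # -> [(2, 'AT', 4)]
```

Checking every motif length at every position is quadratic in the worst case; the suffix and longest common prefix arrays in SA-SSR exist precisely to avoid this rescanning.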
Rheological analysis of an effect of different deflocculants on the fly-ash slurry
Sarnecki, K.; Bartosik, A.
2014-08-01
During the combustion of coal in a combined heat and power plant (CHP), a very large amount of combustion waste, referred to hereafter as fly-ash, is produced. Fly-ash typically arises during the combustion of fine coal and is transported through a pipeline, with water as the carrier liquid, to a pond storage site where it is disposed of. The pond is usually located a few kilometers from the CHP, so reducing friction in such a pipeline can save both the electricity needed for pumping and the water needed as the carrier liquid. In this study an efficient method for reducing shear stress, and consequently viscosity, using a few deflocculants is demonstrated. The objective of the paper is to improve the efficiency of the hydrotransport of the fly-ash slurry by adding additives of our own design. During the experiments the solids concentration by weight was determined from procured raw material in order to reproduce the real value occurring in industrial conditions. In addition, an analysis of the particle size distribution was conducted. The Anton Paar MCR 302 electronic rheometer was used to measure the dependence of shear stress and viscosity on shear rate for the fly-ash present in the CHP. Another part of the analysis focused on the additives (deflocculants), to examine their influence on the reduction of the shear stress. The paper demonstrates the positive impact of the deflocculants on the rheological properties of the fly-ash slurry. The results of the measurements are presented as figures, together with conclusions.
On-The-Fly Query Translation Between i2b2 and Samply in the German Biobank Node (GBN) Prototypes.
Mate, Sebastian; Vormstein, Patric; Kadioglu, Dennis; Majeed, Raphael W; Lablans, Martin; Prokosch, Hans-Ulrich; Storf, Holger
2017-01-01
Information retrieval is a major challenge in medical informatics. Various research projects have worked on this task in recent years on an institutional level by developing tools to integrate and retrieve information. However, when it comes down to querying such data across institutions, the challenge persists due to the high heterogeneity of data and differences in software systems. The German Biobank Node (GBN) project faced this challenge when trying to interconnect four biobanks to enable distributed queries for biospecimens. All biobanks had already established integrated data repositories, and some of them were already part of research networks. Instead of developing another software platform, GBN decided to form a bridge between these. This paper describes and discusses a core component from the GBN project, the OmniQuery library, which was implemented to enable on-the-fly query translation between heterogeneous research infrastructures.
On-the-fly neural network construction for repairing F-16 flight control panel using thermal imaging
Allred, Lloyd G.; Howard, Tom R.; Serpen, Gursel
1996-03-01
When the card-level tester for the F-16 flight control panel (FLCP) had been dysfunctional for over 18 months, infrared thermography was investigated as an alternative for diagnosing and repairing the 7 cards in the FLCP box. Using thermal imaging alone, over 20 FLCP boxes were made serviceable, effectively bringing the FLCP out of awaiting parts (AWP) status. Through the incorporation of a novel on-the-fly neural network paradigm, the neural radiant energy detection system (NREDS) now has the capability to make correct fault classification from a large history of repair data. By surveying the historical data, the network makes assessments about relevant repair actions and probable component malfunctions. On one of the circuit cards, a repair accuracy of 11 out of 12 was achieved during the first repair attempt. By operating on the raw repair data and doing the network calculations on the fly, the network becomes virtual, thus eliminating the need to retain intermediate calculations in trained network files. Erroneous classifications are correctable via a text editor. Erroneous training of neural networks has been a chronic problem with prior implementations. In view of the current environment of downsizing, the likelihood of obtaining functionality at the card-level tester is remote. Success of the imager points to corresponding inadequacies of the automatic test equipment (ATE) to detect certain kinds of failure. In particular, we were informed that one particular relay had never been ordered in the life of the F-16 system, whereas some cards became functional when the relay was the sole component replaced.
A new efficient RLF-like algorithm for the vertex coloring problem
Directory of Open Access Journals (Sweden)
Adegbindin Mourchid
2016-01-01
Full Text Available The Recursive Largest First (RLF) algorithm is one of the most popular greedy heuristics for the vertex coloring problem. It sequentially builds color classes on the basis of greedy choices. In particular, the first vertex placed in a color class C is one with a maximum number of uncolored neighbors, and the next vertices placed in C are chosen so that they have as many uncolored neighbors as possible among the vertices that cannot be placed in C. These greedy choices can have a significant impact on the performance of the algorithm, which is why we propose alternative selection rules. Computational experiments on 63 difficult DIMACS instances show that the resulting new RLF-like algorithm, when compared with the standard RLF, reduces by more than 50% the gap between the number of colors used and the best known upper bound on the chromatic number. The new greedy algorithm even competes with basic metaheuristics for the vertex coloring problem.
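The two greedy rules the abstract describes can be written down directly; a compact sketch of standard RLF (hypothetical Python, not the authors' modified selection rules) is:

```python
def rlf_coloring(adj):
    """Recursive Largest First sketch: build one colour class at a time.
    `adj` maps each vertex to a set of neighbours; returns vertex -> colour."""
    uncoloured = set(adj)
    colour = {}
    c = 0
    while uncoloured:
        # first vertex of the class: maximum number of uncoloured neighbours
        v = max(uncoloured, key=lambda u: len(adj[u] & uncoloured))
        cls = {v}
        forbidden = adj[v] & uncoloured       # vertices that cannot join this class
        candidates = uncoloured - cls - forbidden
        while candidates:
            # next vertex: maximum number of neighbours inside the forbidden set
            u = max(candidates, key=lambda w: len(adj[w] & forbidden))
            cls.add(u)
            forbidden |= adj[u] & uncoloured
            candidates = (candidates - {u}) - adj[u]
        for u in cls:
            colour[u] = c
        uncoloured -= cls
        c += 1
    return colour

# a 4-cycle a-b-c-d is 2-colourable, and RLF finds that
adj = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
print(max(rlf_coloring(adj).values()) + 1)  # -> 2
```

The alternative rules studied in the paper replace the two `max(...)` selections; everything else in the loop stays the same.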
National Research Council Canada - National Science Library
Akbari, Mohsen; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Dimyati, Kaharudin; Riahi Manesh, Mohsen; Hindia, Mohammad Nour
2016-01-01
...). This paper focuses on optimality of analytical study on the common soft decision fusion (SDF) CSS based on different iterative algorithms which confirm low total probability of error and high probability of detection in detail...
Development of an operationally efficient PTC braking enforcement algorithm for freight trains.
2013-08-01
Software algorithms used in positive train control (PTC) systems designed to predict freight train stopping distance and enforce a penalty brake application have been shown to be overly conservative, which can lead to operational inefficiencies by in...
2012-12-01
Backcalculation of pavement moduli has been an intensively researched subject for more than four decades. Despite the existence of many backcalculation programs employing different backcalculation procedures and algorithms, accurate inverse of the la...
An Efficient and Accurate Genetic Algorithm for Backcalculation of Flexible Pavement Layer Moduli
2012-12-01
The importance of a backcalculation method in the analysis of elastic modulus in pavement engineering has been : known for decades. Despite many backcalculation programs employing different backcalculation procedures and : algorithms, accurate invers...
Computationally Efficient DOA Tracking Algorithm in Monostatic MIMO Radar with Automatic Association
Huaxin Yu; Xiaofei Zhang; Xueqiang Chen; Hailang Wu
2014-01-01
We consider the problem of tracking the direction of arrivals (DOA) of multiple moving targets in monostatic multiple-input multiple-output (MIMO) radar. A low-complexity DOA tracking algorithm in monostatic MIMO radar is proposed. The proposed algorithm obtains DOA estimation via the difference between previous and current covariance matrix of the reduced-dimension transformation signal, and it reduces the computational complexity and realizes automatic association in DOA tracking. Error ana...
An Efficient Tabu Search DSA Algorithm for Heterogeneous Traffic in Cellular Networks
Kamal, Hany; Coupechoux, Marceau; Godlewski, Philippe
2010-01-01
In this paper, we propose and analyze a TS (Tabu Search) algorithm for DSA (Dynamic Spectrum Access) in cellular networks. We consider a scenario where cellular operators share a common access band, and we focus on the strategy of one operator providing packet services to the end-users. We consider a soft interference requirement for the algorithm's design that suits the packet traffic context. The operator's objective is to maximize its reward while taking into accoun...
Energy-Efficient Scheduling Problem Using an Effective Hybrid Multi-Objective Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
Lvjiang Yin
2016-12-01
Full Text Available Nowadays, manufacturing enterprises face the challenge of just-in-time (JIT) production and energy saving. Therefore, the study of JIT production and energy consumption is necessary and important in manufacturing sectors. Moreover, energy saving can be attained by the operational method and by turning idle machines off and on, which also increases the complexity of problem solving. Thus, most researchers still focus on small-scale problems with one objective in a single machine environment. However, the scheduling problem is a multi-objective optimization problem in real applications. In this paper, a single machine scheduling model with controllable processing and sequence-dependent setup times is developed for minimizing the total earliness/tardiness (E/T), cost, and energy consumption simultaneously. An effective multi-objective evolutionary algorithm called the local multi-objective evolutionary algorithm (LMOEA) is presented to tackle this multi-objective scheduling problem. To accommodate the characteristics of the problem, a new solution representation is proposed, which can convert discrete combinatorial problems into continuous problems. Additionally, a multiple local search strategy with a self-adaptive mechanism is introduced into the proposed algorithm to enhance its exploitation ability. The performance of the proposed algorithm is evaluated on problem instances, with comparison to other multi-objective meta-heuristics such as the Nondominated Sorting Genetic Algorithm II (NSGA-II), Strength Pareto Evolutionary Algorithm 2 (SPEA2), Multiobjective Particle Swarm Optimization (OMOPSO), and Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D). Experimental results demonstrate that the proposed LMOEA algorithm outperforms its counterparts on this kind of scheduling problem.
Infinitely oscillating wavelets and an efficient implementation algorithm based on the FFT
Directory of Open Access Journals (Sweden)
Marcela Fabio
2015-01-01
Full Text Available In this work we present the design of an orthogonal wavelet that is infinitely oscillating, localized in time with decay 1/|t|^n, and band-limited. Its application leads to the decomposition of a signal into waves of well-defined instantaneous frequency. We also present the implementation algorithm for the analysis and synthesis based on the Fast Fourier Transform (FFT), with the same complexity as Mallat's algorithm.
Energy Technology Data Exchange (ETDEWEB)
Brandt, Christopher; Fieg, Georg [Hamburg University of Technology, Institute of Process and Plant Engineering, Hamburg (Germany); Luo, Xing [Helmut Schmidt University, Institute of Thermodynamics, Hamburg (Germany); University of Shanghai for Science and Technology, Institute of Thermal Engineering, Shanghai (China)
2011-08-15
In this work an innovative method for heat exchanger network (HEN) synthesis is introduced and examined. It combines a genetic algorithm (GA) with a heuristic-based optimization procedure. The novel algorithm removes emerging heat load loops from the HEN structures, when profitable, throughout the evolution. Two examples were examined with the new HEN synthesis method, and better results were obtained for both. Thus, a positive effect of heuristic-based optimization methods on HEN synthesis with a GA could be identified. (orig.)
EFFICIENT TIME REDUCTION USING PRINCIPAL COMPONENT ANALYSIS WITH BISECTING K MEANS ALGORITHM
R. Indhumathi; S. Sathiyabama
2013-01-01
Data Mining is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data stored in repositories. Clustering is considered one of the important techniques in data mining: a division of data into groups of similar objects. It is one of the most widely used data mining techniques. Many clustering algorithms have been developed; among them, partitioning algorithms develop a partition of the data su...
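The bisecting k-means idea named in the title can be sketched in a few lines: keep splitting the largest cluster with plain 2-means until the desired number of clusters is reached. This is a minimal illustration, not the paper's PCA-accelerated variant:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on 2-D points given as [x, y] lists."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        # Recompute each center as its cluster mean (keep old center if empty).
        centers = [[sum(q[0] for q in cl) / len(cl), sum(q[1] for q in cl) / len(cl)]
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

def bisecting_kmeans(points, k):
    """Bisecting k-means: start with one cluster and repeatedly split the
    largest cluster in two with plain 2-means until k clusters remain."""
    clusters = [list(points)]
    while len(clusters) < k:
        clusters.sort(key=len)
        clusters.extend(kmeans(clusters.pop(), 2))
    return clusters
```

Compared with direct k-means, each split only touches one cluster, which is where the time reduction comes from.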
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations, then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
Energy Technology Data Exchange (ETDEWEB)
Hurtado, S. [Servicio de Radioisotopos, Centro de Investigacion, Tecnologia e Innovacion (CITIUS), Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain)], E-mail: shurtado@us.es; Garcia-Leon, M. [Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Aptd. 1065, 41080 Sevilla (Spain); Garcia-Tenorio, R. [Departamento de Fisica Aplicada II, E.T.S.A. Universidad de Sevilla, Avda, Reina Mercedes 2, 41012 Sevilla (Spain)
2008-09-11
In this work several mathematical functions are compared in order to perform the full-energy peak efficiency calibration of HPGe detectors, using a 126 cm{sup 3} HPGe coaxial detector and gamma-ray energies ranging from 36 to 1460 keV. Statistical tests and Monte Carlo simulations were used to study the performance of the fitting curve equations. Furthermore, fitting these complex functional forms to experimental data is a non-linear multi-parameter minimization problem. In gamma-ray spectrometry, non-linear least-squares fitting algorithms (such as the Levenberg-Marquardt method) usually provide fast convergence while minimizing {chi}{sub R}{sup 2}; however, they sometimes reach only local minima. In order to overcome that shortcoming, a hybrid algorithm based on simulated annealing (HSA) techniques is proposed. Additionally, a new function is suggested that models the efficiency curve of germanium detectors in gamma-ray spectrometry.
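The appeal of a simulated-annealing component is precisely its ability to escape the local minima that trap pure least-squares descent. A bare-bones sketch of the annealing acceptance rule follows; it is not the paper's HSA hybrid, and the objective and parameters are illustrative:

```python
import math, random

def anneal(f, x0, t0=5.0, cooling=0.99, steps=3000, seed=1):
    """Minimal simulated annealing on one parameter: uphill moves are
    accepted with probability exp(-delta/T), letting the search climb
    out of local minima before the temperature T cools away."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)
        fc = f(cand)
        # Always accept improvements; accept worsening moves with prob. exp(-delta/T).
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best

# A double-well objective with two minima: a descent method started in the
# shallow well stays there, while annealing can cross the barrier.
double_well = lambda x: x ** 4 - 6 * x ** 2 + x
```

In the HSA setting the scalar `x` would be the vector of efficiency-curve parameters and `f` the {chi}{sub R}{sup 2} misfit.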
Lipinski, Doug; Mohseni, Kamran
2010-03-01
A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.
On-the-fly generation and rendering of infinite cities on the GPU
Steinberger, Markus
2014-05-01
In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of a specialized computer architecture for the real-time algorithmic execution of an avionics guidance and control problem is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
Directory of Open Access Journals (Sweden)
Kah Huo Leong
Full Text Available Railway and metro transport systems (RS) are becoming one of the popular choices of transportation, especially among people who live in urban cities. Urbanization and increasing population, due to the rapid development of the economy in many cities, are leading to a bigger demand for urban rail transit. Despite being a popular variant of the Traveling Salesman Problem (TSP), it appears that a universal formula or technique to solve the problem is yet to be found. This paper aims to develop an optimization algorithm for optimum route selection to multiple destinations in RS before returning to the starting point. Bee foraging behaviour is examined to generate a reliable algorithm for the railway TSP. The algorithm is then verified by comparing the results with the exact solutions in 10 test cases, and a numerical case study is designed to demonstrate the application with a large sample size. It is tested to be efficient and effective in railway route planning, as the tour can be completed within a certain period of time using minimal resources. The findings further support the reliability of the algorithm and its capability to solve problems of different complexity. This algorithm can be used as a method to assist business practitioners in making better decisions in route planning.
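The bee-foraging algorithm itself is not given in the abstract, but the kind of baseline such TSP heuristics are measured against is easy to sketch: a greedy nearest-neighbour tour plus a tour-length evaluator (function names are ours, purely illustrative):

```python
import math

def tour_length(cities, tour):
    """Total length of the closed tour visiting each city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour_tour(cities, start=0):
    """Greedy baseline: always move to the closest unvisited city,
    then return to the start."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: math.dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

square = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbour_tour(square)
print(tour_length(square, tour))  # 4.0 — the optimal loop around the unit square
```

A swarm heuristic would then try to improve on tours like this one while `tour_length` serves as the fitness function.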
A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique
Directory of Open Access Journals (Sweden)
Bo-Ao Xu
2016-01-01
Full Text Available This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD) method, based on a domain decomposition scheme. By using the domain decomposition finite difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are, respectively, used to eliminate the splitting error and to speed up the calculation in different subdomains. A joint calculation scheme is presented to reduce the amount of calculation. With this scheme, iteration is not essential to obtain accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.
Dama, James F; Rotskoff, Grant; Parrinello, Michele; Voth, Gregory A
2014-09-09
Well-tempered metadynamics has proven to be a practical and efficient adaptive enhanced sampling method for the computational study of biomolecular and materials systems. However, choosing its tunable parameter can be challenging and requires balancing a trade-off between fast escape from local metastable states and fast convergence of an overall free energy estimate. In this article, we present a new smoothly convergent variant of metadynamics, transition-tempered metadynamics, that removes that trade-off and is more robust to changes in its own single tunable parameter, resulting in substantial speed and accuracy improvements. The new method is specifically designed to study state-to-state transitions in which the states of greatest interest are known ahead of time, but transition mechanisms are not. The design is guided by a picture of adaptive enhanced sampling as a means to increase dynamical connectivity of a model's state space until percolation between all points of interest is reached, and it uses the degree of dynamical percolation to automatically tune the convergence rate. We apply the new method to Brownian dynamics on 48 random 1D surfaces, blocked alanine dipeptide in vacuo, and aqueous myoglobin, finding that transition-tempered metadynamics substantially and reproducibly improves upon well-tempered metadynamics in terms of first barrier crossing rate, convergence rate, and robustness to the choice of tuning parameter. Moreover, the trade-off between first barrier crossing rate and convergence rate is eliminated: the new method drives escape from an initial metastable state as fast as metadynamics without tempering, regardless of tuning.
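The tempering rule that transition-tempered metadynamics improves upon can be illustrated with the well-tempered baseline in one dimension: each deposited Gaussian hill is damped by the bias already accumulated at its center, so the bias converges instead of growing without bound. Units and parameters below are illustrative, and this is the well-tempered rule, not the paper's transition-tempered variant:

```python
import math

def well_tempered_bias(samples, w0=1.0, sigma=0.2, delta_t=5.0):
    """Accumulate a well-tempered metadynamics bias along a 1-D collective
    variable. delta_t plays the role of k_B * (T + DeltaT) with units
    absorbed; samples are the visited values of the collective variable."""
    hills = []  # (center, height) of each deposited Gaussian

    def bias(s):
        return sum(h * math.exp(-(s - c) ** 2 / (2 * sigma ** 2))
                   for c, h in hills)

    for s in samples:
        # Well-tempered damping: the more bias already present, the
        # smaller the next hill.
        hills.append((s, w0 * math.exp(-bias(s) / delta_t)))
    return bias
```

Repeatedly visiting the same state piles up ever-smaller hills there while leaving distant states unbiased; the transition-tempered variant instead modulates hill heights by how well the bias has connected the known end states.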
Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources
Directory of Open Access Journals (Sweden)
Dayan A. Guimarães
2014-10-01
Full Text Available We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of sources of signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. Such norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results reveal that the so-called norm-based improved MDL (iMDL) algorithm can achieve better performance than the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently-proposed estimator based on random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and the RMT-based estimators in some situations.
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
High-resolution satellite imagery comes with a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, the post-processing and image-enhancement steps applied after acquisition increase the file sizes even further, making the data harder to store and more time-consuming to transmit from one source to another; hence, compressing the raw and variously processed data is a necessity for archiving stations to save space. The lossless data compression algorithms examined in this study aim to reduce file size without any loss of the data holding spectral information. To this end, well-known open source programs supporting the relevant compression algorithms have been applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites, with 1.5 m GSD, acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested are Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain Algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe how much of the image data each algorithm can compress over the sample datasets while ensuring lossless compression.
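The kind of ratio comparison the study performs can be sketched with the codecs available in the Python standard library: zlib implements Deflate, the lzma module implements LZMA, and bz2 is BWT-based. The sample bytes below stand in for raster data; the study's actual GeoTIFF inputs and tool versions are not reproduced here:

```python
import bz2, lzma, zlib

def compression_ratios(data):
    """Compress the same byte string with three lossless codecs and report
    compressed/original size (lower is better)."""
    codecs = {"deflate": zlib.compress, "lzma": lzma.compress, "bwt": bz2.compress}
    return {name: len(fn(data)) / len(data) for name, fn in codecs.items()}

# Highly repetitive raster-like bytes compress to a tiny fraction losslessly.
sample = b"band-interleaved raster row " * 400
print({k: round(v, 4) for k, v in compression_ratios(sample).items()})
```

Real imagery is far less repetitive than this sample, which is why the relative ranking of codecs on representative datasets is worth measuring.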
Efficient Haplotype Block Partitioning and Tag SNP Selection Algorithms under Various Constraints
Chen, Wen-Pei; Lin, Yaw-Ling
2013-01-01
Patterns of linkage disequilibrium play a central role in genome-wide association studies aimed at identifying genetic variation responsible for common human diseases. These patterns in human chromosomes show a block-like structure, and regions of high linkage disequilibrium are called haplotype blocks. A small subset of SNPs, called tag SNPs, is sufficient to capture the haplotype patterns in each haplotype block. Previously developed algorithms completely partition a haplotype sample into blocks while attempting to minimize the number of tag SNPs. However, when resource limitations prevent genotyping all the tag SNPs, it is desirable to restrict their number. We propose two dynamic programming algorithms, incorporating many diversity evaluation functions, for haplotype block partitioning using a limited number of tag SNPs. We use the proposed algorithms to partition the chromosome 21 haplotype data. When the sample is fully partitioned into blocks by our algorithms, the resulting 2,266 blocks and 3,260 tag SNPs are fewer than those identified by previous studies. We also demonstrate that our algorithms find the optimal solution by exploiting the nonmonotonic property of a common haplotype-evaluation function. PMID:24319694
von Rudorff, Guido Falk; Wehmeyer, Christoph; Sebastiani, Daniel
2014-06-01
We adapt a swarm-intelligence-based optimization method (the artificial bee colony algorithm, ABC) to enhance its parallel scaling properties and to improve the escaping behavior from deep local minima. Specifically, we apply the approach to the geometry optimization of Lennard-Jones clusters. We illustrate the performance and the scaling properties of the parallelization scheme for several system sizes (5-20 particles). Our main findings are specific recommendations for ranges of the parameters of the ABC algorithm which yield maximal performance for Lennard-Jones clusters and Morse clusters. The suggested parameter ranges for these different interaction potentials turn out to be very similar; thus, we believe that our reported values are fairly general for the ABC algorithm applied to chemical optimization problems.
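The objective function an ABC-style optimizer minimizes for these benchmarks is the total Lennard-Jones energy of the cluster; a self-contained evaluation of it is easy to sketch (reduced units, a plain pairwise sum rather than any optimized kernel):

```python
import itertools, math

def lj_energy(coords, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a cluster of 3-D points; this is the
    landscape, riddled with local minima, that the bee-colony search
    explores over particle coordinates."""
    e = 0.0
    for a, b in itertools.combinations(coords, 2):
        r = math.dist(a, b)
        e += 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e

# Dimer at the known LJ minimum separation r = 2**(1/6): energy = -epsilon.
print(round(lj_energy([(0.0, 0.0, 0.0), (2 ** (1 / 6), 0.0, 0.0)]), 6))  # -1.0
```

For n particles the sum has n(n-1)/2 terms, so energy evaluations dominate the cost and are the natural target of the parallelization studied in the paper.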
A Pareto Algorithm for Efficient De Novo Design of Multi-functional Molecules.
Daeyaert, Frits; Deem, Micheal W
2017-01-01
We have introduced a Pareto sorting algorithm into Synopsis, a de novo design program that generates synthesizable molecules with desirable properties. We give a detailed description of the algorithm and illustrate its working in two different de novo design settings: the design of putative dual and selective FGFR and VEGFR inhibitors, and the successful design of organic structure determining agents (OSDAs) for the synthesis of zeolites. We show that the introduction of Pareto sorting not only enables the simultaneous optimization of multiple properties but also greatly improves the performance of the algorithm to generate molecules with hard-to-meet constraints. This in turn allows us to suggest approaches to address the problem of false positive hits in de novo structure-based drug design by introducing structural and physicochemical constraints in the designed molecules, and by forcing essential interactions between these molecules and their target receptor. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
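The core operation behind any Pareto sorting scheme is the dominance test and the extraction of the nondominated front; a minimal sketch follows (the objective values are invented for illustration, and this is the generic definition, not Synopsis's internal ranking):

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse in every objective
    (minimization here) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    """Keep only the nondominated score vectors."""
    return [s for s in scores if not any(dominates(t, s) for t in scores)]

# e.g. (score against target 1, score against target 2), lower is better.
mols = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1), (0.6, 0.6)]
print(pareto_front(mols))  # (0.6, 0.6) is dominated by (0.5, 0.5) and drops out
```

Ranking candidates by front membership, rather than by a single weighted sum, is what lets multiple properties be optimized simultaneously without choosing weights in advance.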
The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents
Directory of Open Access Journals (Sweden)
Ziad Salem
2014-12-01
Full Text Available Learning is the act of obtaining new or modifying existing knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some sort of observations or data, such as examples, direct experience or instruction. This paper presents a classification algorithm to learn the density of agents in an arena based on the measurements of six proximity sensors of combined actuator-sensor units (CASUs). Rules are presented that were induced by the learning algorithm, which was trained with datasets based on the CASU’s sensor data streams collected during a number of experiments with “Bristlebots” (agents) in the arena (environment). It was found that a set of rules generated by the learning algorithm is able to predict the number of bristlebots in the arena based on the CASU’s sensor readings with satisfactory accuracy.
A memory-efficient staining algorithm in 3D seismic modelling and imaging
Jia, Xiaofeng; Yang, Lu
2017-08-01
The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.
An efficient protein complex mining algorithm based on Multistage Kernel Extension.
Shen, Xianjun; Zhao, Yanli; Li, Yanan; He, Tingting; Yang, Jincai; Hu, Xiaohua
2014-01-01
In recent years, many protein complex mining algorithms, such as the classical clique percolation method (CPM) and the Markov clustering (MCL) algorithm, have been developed for protein-protein interaction networks. However, most of the available algorithms primarily concentrate on mining dense protein subgraphs as protein complexes, failing to take into account the inherent organizational structure within protein complexes. Thus, there is a critical need to study the possibility of mining protein complexes using the topological information hidden in edges. Moreover, recent massive experimental analyses reveal that protein complexes have their own intrinsic organization. Inspired by the formation process of cliques in complex social networks and the centrality-lethality rule, we propose a new protein complex mining algorithm called the Multistage Kernel Extension (MKE) algorithm, integrating the idea of critical protein recognition in the Protein-Protein Interaction (PPI) network. MKE first recognizes the nodes with high degree as the first-level kernel of a protein complex, and then adds the weighted best neighbour node of the first-level kernel into the current kernel to form the second-level kernel of the protein complex. This process is repeated, extending the current kernel to form the protein complex. In the end, overlapping protein complexes are merged to form the final protein complex set. MKE has better accuracy compared with the classical clique percolation method and the Markov clustering algorithm. MKE also performs better than the classical clique percolation method on both Gene Ontology semantic similarity and co-localization enrichment, and can effectively identify protein complexes with biological significance in the PPI network.
Pap-smear Classification Using Efficient Second Order Neural Network Training Algorithms
DEFF Research Database (Denmark)
Ampazis, Nikolaos; Dounias, George; Jantzen, Jan
2004-01-01
. The algorithms are methodologically similar, and are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for non-linear least squares problems with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization...... problem. The classification results obtained from the application of the algorithms on a standard benchmark pap-smear data set reveal the power of the two methods to obtain excellent solutions in difficult classification problems whereas other standard computational intelligence techniques achieve...
An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms
Hudec, Ján; Gramatová, Elena
2015-07-01
The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to software-oriented testing. The quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. Rules, parameters and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
Zhou, Hong; Zhou, Michael; Li, Daisy; Manthey, Joseph; Lioutikova, Ekaterina; Wang, Hong; Zeng, Xiao
2017-11-17
The beauty and power of the genome editing mechanism, the CRISPR Cas9 endonuclease system, lies in the fact that it is RNA-programmable, such that Cas9 can be guided to any genomic locus complementary to a 20-nt RNA, the single guide RNA (sgRNA), to cleave double stranded DNA, allowing the introduction of wanted mutations. Unfortunately, it has been reported repeatedly that the sgRNA can also guide Cas9 to off-target sites where the DNA sequence is homologous to the sgRNA. Using the human genome and Streptococcus pyogenes Cas9 (SpCas9) as an example, this article mathematically analyzed the probabilities of off-target homologies of sgRNAs and discovered that for a genome as large as the human genome, potential off-target homologies are inevitable in sgRNA selection. A highly efficient computational algorithm was developed for whole-genome sgRNA design and off-target homology searches. By means of a dynamically constructed sequence-indexed database and a simplified sequence alignment method, this algorithm achieves very high efficiency while guaranteeing the identification of all existing potential off-target homologies. Via this algorithm, 1,876,775 sgRNAs were designed for the 19,153 human mRNA genes and only two sgRNAs were found to be free of off-target homology. By means of the novel and efficient sgRNA homology search algorithm introduced in this article, genome-wide sgRNA design and off-target analysis were conducted, and the results confirmed the mathematical analysis that for an sgRNA sequence, it is almost impossible to escape potential off-target homologies. Future innovations in CRISPR Cas9 gene editing technology need to focus on how to eliminate the Cas9 off-target activity.
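The sequence-indexed idea can be sketched: index every k-mer of the genome by position, look up candidate sites that share a seed k-mer with the guide, then verify each candidate by counting mismatches. The seed length, the mismatch cap, and the assumption that the seed sits at the PAM-proximal end are ours for illustration, not the paper's parameters:

```python
def build_index(genome, k=8):
    """Index every k-mer of the genome by start position (a much
    simplified 'sequence-indexed database')."""
    idx = {}
    for i in range(len(genome) - k + 1):
        idx.setdefault(genome[i:i + k], []).append(i)
    return idx

def homology_sites(sgrna, genome, idx, k=8, max_mm=3):
    """Find genomic sites sharing the guide's seed k-mer and differing
    from the full guide by at most max_mm mismatches. Returns on- and
    off-target matches alike."""
    seed = sgrna[-k:]  # PAM-proximal seed (an assumption in this sketch)
    hits = set()
    for pos in idx.get(seed, []):
        start = pos - (len(sgrna) - k)
        if start < 0:
            continue
        site = genome[start:start + len(sgrna)]
        if len(site) < len(sgrna):
            continue
        mismatches = sum(1 for x, y in zip(sgrna, site) if x != y)
        if mismatches <= max_mm:
            hits.add(start)
    return sorted(hits)
```

Because only positions sharing an exact seed are verified, the expensive mismatch count runs on a tiny fraction of the genome, which is where the speedup comes from.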
On the Efficiency of Algorithms for Solving Hartree–Fock and Kohn–Sham Response Equations
DEFF Research Database (Denmark)
Kauczor, Joanna; Jørgensen, Poul; Norman, Patrick
2011-01-01
solved using the preconditioned conjugate gradient or conjugate residual algorithms where trial vectors are split into symmetric and antisymmetric components. For larger frequencies in the standard response equation as well as in the damped response equation in general, the preconditioned iterative...
Berends, Constantijn J.; Van De Wal, Roderik S W
2016-01-01
Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a
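A flood-fill algorithm of the kind the abstract refers to can be sketched as a breadth-first traversal over a grid mask; the ice-sheet application would run this on the model's land/water mask, which is not reproduced here:

```python
from collections import deque

def flood_fill(grid, start):
    """Return the connected region of equal-valued cells reachable from
    start under 4-connectivity, e.g. a proglacial lake on a land mask."""
    rows, cols = len(grid), len(grid[0])
    target = grid[start[0]][start[1]]
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and grid[nr][nc] == target):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

grid = [[0, 0, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(len(flood_fill(grid, (0, 0))))  # 3: the three connected 0-cells
```

The optimizations the paper evaluates matter because a naive fill like this revisits the whole mask every time step, which dominates the cost at continental resolution.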
DEFF Research Database (Denmark)
Brodal, G. S.; Fagerberg, R.; Mailund, T.
2013-01-01
degree of any node in the two trees. Within the same time bounds, our framework also allows us to compute the parameterized triplet and quartet distances, where a parameter is introduced to weight resolved (binary) topologies against unresolved (non-binary) topologies. The previous best algorithm...
Efficient Algorithms for the Discrete Gabor Transform with a Long Fir Window
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2012-01-01
The Discrete Gabor Transform (DGT) is the most commonly used signal transform for signal analysis and synthesis using a linear frequency scale. The development of the Linear Time-Frequency Analysis Toolbox (LTFAT) has been based on a detailed study of many variants of the relevant algorithms. As ...
An efficient algorithm for computing attractors of synchronous and asynchronous Boolean networks.
Zheng, Desheng; Yang, Guowu; Li, Xiaoyu; Wang, Zhicai; Liu, Feng; He, Lei
2013-01-01
Biological networks, such as genetic regulatory networks, often contain positive and negative feedback loops that settle down to dynamically stable patterns. Identifying these patterns, the so-called attractors, can provide important insights for biologists into the molecular mechanisms underlying many coordinated cellular processes such as cellular division, differentiation, and homeostasis. Both synchronous and asynchronous Boolean networks have been used to simulate genetic regulatory networks and identify their attractors. The common methods of computing attractors start with a randomly selected initial state and finish with an exhaustive search of the state space of a network. However, the time complexity of these methods grows exponentially with respect to the number and length of attractors. Here, we present two algorithms for computing attractors in synchronous and asynchronous Boolean networks. For the synchronous scenario, combining iterative methods with reduced ordered binary decision diagrams (ROBDDs), we propose an improved algorithm to compute attractors. In the second algorithm, the attractors of synchronous Boolean networks are used with asynchronous Boolean translation functions to derive the attractors of the asynchronous scenario. The proposed algorithms are implemented in a procedure called geneFAtt. Compared to existing tools such as genYsis, geneFAtt is significantly [Formula: see text] faster in computing attractors for empirical experimental systems. The software package is available at https://sites.google.com/site/desheng619/download.
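The exhaustive baseline that geneFAtt's ROBDD machinery is designed to avoid can be sketched at toy scale: from every state of a synchronous Boolean network, iterate the update map until a state repeats; the repeating cycle is an attractor. This enumerates all 2^n states, so it is only feasible for very small n:

```python
from itertools import product

def attractors(update, n):
    """Exhaustively find all attractors of a synchronous Boolean network
    with n nodes. Each attractor is reported as a canonical sorted tuple
    of its states."""
    found = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = update(state)  # synchronous: all nodes update at once
        cycle_start = seen[state]
        cycle = sorted(s for s, i in seen.items() if i >= cycle_start)
        found.add(tuple(cycle))
    return found

# Toy 2-gene network in which each gene represses the other (a toggle switch):
toggle = lambda s: (1 - s[1], 1 - s[0])
print(attractors(toggle, 2))
```

The toggle switch yields the two fixed points (0,1) and (1,0) plus one 2-cycle through (0,0) and (1,1); symbolic BDD-based methods find the same attractors without ever enumerating the state space.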
Hartjes, S.; Visser, H.G.
2016-01-01
In this study, a genetic optimization algorithm is applied to the design of environmentally friendly aircraft departure trajectories. The environmental optimization has been primarily focused on noise abatement and local NOx emissions, whilst taking fuel burn into account as an economical criterion.
An efficient grid layout algorithm for biological networks utilizing various biological attributes
Directory of Open Access Journals (Sweden)
Kato Mitsuru
2007-03-01
Full Text Available Abstract Background Clearly visualized biopathways provide a great help in understanding biological systems. However, manual drawing of large-scale biopathways is time consuming. We proposed a grid layout algorithm that can handle gene-regulatory networks and signal transduction pathways by considering edge-edge crossing, node-edge crossing, distance measure between nodes, and subcellular localization information from Gene Ontology. Consequently, the layout algorithm succeeded in drastically reducing these crossings in the apoptosis model. However, for larger-scale networks, we encountered three problems: (i) the initial layout is often very far from any local optimum because nodes are initially placed at random; (ii) from a biological viewpoint, human layouts still exceed automatic layouts in comprehensibility because, except for subcellular localization, the algorithm does not fully utilize the biological information of pathways; and (iii) it employs a local search strategy in which the neighborhood is obtained by moving one node at each step, and automatic layouts suggest that simultaneous movements of multiple nodes are necessary for better layouts, while such an extension may worsen the time complexity. Results We propose a new grid layout algorithm. To address problem (i), we devised a new force-directed algorithm whose output is suitable as the initial layout. For (ii), we considered that an appropriate alignment of nodes having the same biological attribute is one of the most important factors for comprehension, and we defined a new score function that gives an advantage to such configurations. For solving problem (iii), we developed a search strategy that considers swapping nodes as well as moving a node, while keeping the order of the time complexity. Though a naïve implementation increases the time complexity by one order, we solved this difficulty by devising a method that caches differences between scores of a layout and its possible updates
Resource efficient data compression algorithms for demanding, WSN based biomedical applications.
Antonopoulos, Christos P; Voros, Nikolaos S
2016-02-01
During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, increasingly utilize wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms constitute a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as newly proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world Electroencephalography (EEG) and Electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, thus the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the relative negative effect on compression latency as opposed to the increased compression rate. It is noted that the proposed schemes managed to offer a considerable advantage, especially in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm managed to combine a highly competitive level of compression while ensuring minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computation of probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. Response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM, is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a presented two-step updating rule for the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a presented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the presented rules are shown.
Directory of Open Access Journals (Sweden)
Shanen Yu
2016-12-01
Most existing deployment algorithms for event coverage in underwater wireless sensor networks (UWSNs) usually do not consider that network communication has non-uniform characteristics in three-dimensional underwater environments. Such deployment algorithms ignore that the nodes are distributed at different depths and have different probabilities for data acquisition, thereby leading to imbalances in the overall network energy consumption, decreasing the network performance, and resulting in poor and unreliable late network operation. Therefore, in this study, we propose an uneven cluster deployment algorithm based on network layering for event coverage. First, according to the energy consumption requirements of the communication load at different depths of the underwater network, we obtained the expected value of deployed nodes and the distribution density of each network layer after theoretical analysis and deduction. Afterward, the network is divided into multiple layers based on uneven clusters, and the heterogeneous communication radii of nodes can improve the network connectivity rate. A recovery strategy is used to balance the energy consumption of nodes in the cluster and can efficiently reconstruct the network topology, which ensures that the network has a high network coverage and connectivity rate over a long period of data acquisition. Simulation results show that the proposed algorithm improves network reliability and prolongs network lifetime by significantly reducing the blind movement of overall network nodes while maintaining a high network coverage and connectivity rate.
Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang
2016-01-01
The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
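The coarse alignment step described above — a spatial 2-D cross-correlation that recovers the integer-pixel offset between the in-focus and out-of-focus channels — can be sketched as a minimal illustration with NumPy; the function and variable names are ours, not from the paper.

```python
import numpy as np

def coarse_align_offset(ref, img):
    # Cross-correlate via the FFT; the correlation peak gives the
    # integer-pixel shift that re-aligns img with ref
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed shifts (wrap-around convention)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

Applying `np.roll(img, shift, axis=(0, 1))` with the returned shift re-registers the image to within a pixel or two; the paper then absorbs the residual sub-pixel offset into additional tip-tilt terms of the out-of-focus OTF rather than performing sub-pixel fine alignment.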
Directory of Open Access Journals (Sweden)
Weizhen Rao
2016-01-01
The classical model of the vehicle routing problem (VRP) generally minimizes either the total vehicle travelling distance or the total number of dispatched vehicles. Due to the increased importance of environmental sustainability, one variant of VRPs that minimizes the total vehicle fuel consumption has gained much attention. The resulting fuel consumption VRP (FCVRP) becomes increasingly important yet difficult. We present a mixed integer programming model for the FCVRP, in which fuel consumption is measured through the degree of road gradient. A complexity analysis of the FCVRP is presented through analogy with the capacitated VRP. To tackle the FCVRP's computational intractability, we propose an efficient two-objective hybrid local search algorithm (TOHLS). TOHLS is based on a hybrid local search algorithm (HLS) that is also used to solve the FCVRP. Based on the Golden CVRP benchmarks, 60 FCVRP instances are generated and tested. Finally, the computational results show that the proposed TOHLS significantly outperforms the HLS.
Kotani, Naoki; Taniguchi, Kenji
An efficient learning method using Fuzzy ART with a Genetic Algorithm is proposed. The proposed method reduces the number of trials by reusing a policy acquired in other tasks, because reinforcement learning requires many trials before an agent acquires appropriate actions. Fuzzy ART is an incremental unsupervised learning algorithm that responds to arbitrary sequences of analog or binary input vectors. Our proposed method generates a policy by crossover or mutation when an agent observes unknown states. Selection controls the category proliferation problem of Fuzzy ART. The effectiveness of the proposed method was verified in a simulation of the reaching problem for a two-link robot arm. The proposed method achieves a reduction in both the number of trials and the number of states.
Ortiz P., D.; Villa, Luisa F.; Salazar, Carlos; Quintero, O. L.
2016-04-01
A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented, to be used in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech in the presence of non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results over a database built from political speeches. The tests were performed by adding artificial noise as well as natural noises to the audio signals, and several algorithms are compared. The results will be extrapolated to the field of adaptive filtering of monophonic signals and the analysis of speech pathologies in future works.
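A minimal sketch of the Hilbert-envelope-plus-dynamic-threshold idea follows, using a convex-combination threshold update. The frame length, initial threshold, and smoothing factor are our assumptions, not values from the paper.

```python
import numpy as np

def analytic_signal(x):
    # Hilbert transform via the FFT: zero the negative frequencies,
    # double the positive ones
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def vad(x, frame=160, alpha=0.9):
    # Frame-wise activity decision against a dynamically updated threshold
    env = np.abs(analytic_signal(x))       # Hilbert envelope
    thr = env.mean()                       # initial threshold (assumption)
    flags = []
    for i in range(0, len(x) - frame + 1, frame):
        e = env[i:i + frame].mean()
        flags.append(bool(e > thr))
        # convex combination: the threshold slowly tracks the envelope level
        thr = alpha * thr + (1 - alpha) * e
    return flags
```

Because the threshold adapts to the recent envelope level, the detector keeps separating speech from silence even when the noise floor drifts, which is the point of the dynamic (rather than fixed) threshold.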
An Efficient and Energy-Aware Cloud Consolidation Algorithm for Multimedia Big Data Applications
Directory of Open Access Journals (Sweden)
JongBeom Lim
2017-09-01
It is well known that cloud computing has many potential advantages over traditional distributed systems. Many enterprises can build their own private cloud with open-source infrastructure-as-a-service (IaaS) frameworks. Since enterprise applications and data are migrating to private clouds, the performance of cloud computing environments is of utmost importance for both cloud providers and users. To improve the performance, previous studies on cloud consolidation have focused on live migration of virtual machines based on resource utilization. However, these approaches are not suitable for multimedia big data applications. In this paper, we reveal the performance bottleneck of multimedia big data applications in cloud computing environments and propose a cloud consolidation algorithm that considers application types. We show that our consolidation algorithm outperforms previous approaches.
DEFF Research Database (Denmark)
Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R
2007-01-01
MOTIVATION: Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. RESULTS: We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst case... our pair HMM based cDNA-to-genome aligner. AVAILABILITY: The TWINSCAN/N-SCAN/PAIRAGON open source software package is available from http://genes.cse.wustl.edu.
Energy Technology Data Exchange (ETDEWEB)
He, Hongxing; Fang, Hengrui [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States); Miller, Mitchell D. [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Phillips, George N. Jr [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Department of Biochemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Su, Wu-Pei, E-mail: wpsu@uh.edu [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States)
2016-07-15
An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.
Efficiency of Unicast and Broadcast Gossip Algorithms for Wireless Sensor Networks
Zanaj, Elma; Baldi, Marco; Chiaraluce, Franco
2008-01-01
Gossip is a well-known technique for distributed computing in an arbitrarily connected network, that can be adopted effectively in wireless sensor networks. Gossip algorithms have been widely studied in previous literature, but mostly from a theoretical point of view. The aim of this paper is to verify the behavior of the gossip approach in practical scenarios, through the analysis and interpretation of simulated results. So, we investigate the impact of optimizing the neighbor selection prob...
Ranganadh Narayanam
2013-01-01
Voice Activity Detection (VAD) considers the problem of detecting the presence of speech in a noisy signal. The speech/non-speech classification task is not as trivial as it appears, and most VAD algorithms fail when the level of background noise increases. In this research we present a new technique for Voice Activity Detection (VAD) in EEG-collected brain stem speech evoked potentials data [7, 8, 9]. It is a spectral subtraction method in which we have developed ou...
Juarez-Salazar, Rigoberto; Guerrero-Sanchez, Fermin; Robledo-Sanchez, Carlos
2015-06-10
Some advances in fringe analysis technology for phase computing are presented. A full scheme for phase evaluation, applicable to automatic applications, is proposed. The proposal consists of: a fringe-pattern normalization method, Fourier fringe-normalized analysis, generalized phase-shifting processing for inhomogeneous nonlinear phase shifts and spatiotemporal visibility, and a phase-unwrapping method by a rounding-least-squares approach. The theoretical principles of each algorithm are given. Numerical examples and an experimental evaluation are presented.
CSIR Research Space (South Africa)
Du Plessis, WP
2011-09-01
Full text available of: W. P. du Plessis, "Efficient Synthesis of large-scale thinned arrays using a density-taper initialised genetic algorithm," International Conference on Electromagnetics in Advanced Applications (ICEAA), 12-16 September 2011, pp. 363... other techniques [16] limit the usefulness of DSs in thinned array synthesis. In an attempt to overcome these problems, hybrid techniques that use DSs to initialise a GA (DS-GA) [2, 17] and a PSO (DS-PSO) [18] have been developed...
QuickLexSort: An efficient algorithm for lexicographically sorting nested restrictions of a database
Haws, David
2013-01-01
Lexicographical sorting is a fundamental problem with applications to contingency tables, databases, Bayesian networks, and more. A standard method to lexicographically sort general data is to iteratively use a stable sort -- a sort which preserves existing orders. Here we present a new method of lexicographical sorting called QuickLexSort. Whereas a stable-sort-based lexicographical sorting algorithm operates from the least important to the most important features, QuickLexSort sort...
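The stable-sort baseline that QuickLexSort is contrasted with looks like this (a generic sketch, not the paper's code):

```python
def stable_lex_sort(rows):
    # Classic approach: stable-sort by the least important column first
    # and finish with the most important one; stability preserves the
    # order established by the earlier passes over less important keys.
    order = list(range(len(rows)))
    for col in reversed(range(len(rows[0]))):
        order.sort(key=lambda i: rows[i][col])  # Python's sort is stable
    return [rows[i] for i in order]
```

After the final pass on the most important column, ties in that column remain ordered by the later columns, which is exactly lexicographic order.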
An Efficient Algorithm for Congestion Control in Highly Loaded DiffServ/MPLS Networks
Directory of Open Access Journals (Sweden)
Srecko Krile
2009-06-01
The optimal QoS path provisioning of coexisting and aggregated traffic in networks is still a demanding problem. All traffic flows in a domain are distributed among LSPs (Label Switching Paths) related to N service classes, but the congestion problem of concurrent flows can appear. As we know, the IGP (Interior Gateway Protocol) uses simple on-line routing algorithms (e.g., OSPF, IS-IS) based on a shortest-path methodology. In QoS end-to-end provisioning, where some links may be reserved for certain traffic classes (for a particular set of users), this becomes an insufficient technique. On the other hand, constraint-based explicit routing (CR) based on the IGP metric ensures traffic engineering (TE) capabilities. The algorithm proposed in this paper may find a longer but lightly loaded path, better than the heavily loaded shortest path. An LSP can be pre-computed much earlier, possibly during the SLA (Service Level Agreement) negotiation process. As we need firm correlation with bandwidth management and traffic engineering (TE), the initial (pro-active) routing can be pre-computed in the context of all priority traffic flows (formerly contracted SLAs) traversing the network simultaneously. It could be a very good solution for congestion avoidance and for better load balancing where links are running close to capacity. Also, such a technique could be useful in inter-domain end-to-end provisioning, where bandwidth reservation has to be negotiated with neighboring ASes (Autonomous Systems). To be acceptable for real applications, such a complicated routing algorithm must be significantly improved. The algorithm was tested on a network of M core routers on the path (between edge routers), and results are given for N=3 service classes. Further improvements through a heuristic approach are made and the results are discussed.
Yurtkuran, Alkın; Emel, Erdal
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particle's boundary constraints associated with the corresponding time windows of customers, is introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.
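The penalty part of the constraint handling above can be illustrated with a small route-cost function. This is a sketch only: the penalty weight and the wait-until-ready-time convention are our assumptions, not values from the paper.

```python
def tsptw_cost(route, dist, windows, penalty=1000.0):
    # Travel cost plus a penalty proportional to the total time-window
    # violation, as in penalty-based constraint handling
    t = cost = violation = 0.0
    prev = 0                          # depot index
    for c in route:
        t += dist[prev][c]
        cost += dist[prev][c]
        ready, due = windows[c]
        t = max(t, ready)             # wait if arriving early
        violation += max(0.0, t - due)
        prev = c
    cost += dist[prev][0]             # return to the depot
    return cost + penalty * violation
```

With a large penalty weight, infeasible routes are dominated by feasible ones, so the attraction-repulsion search is steered toward time-window-feasible tours; the paper additionally bounds the real-coded particles by the customers' time windows.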
An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network
Directory of Open Access Journals (Sweden)
Choi Jeonghee
2008-01-01
So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to covering less than hundreds of sensor nodes. This is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to wasteful use of the available address space. As there is a growing need for large-scale WSNs, it will be extremely challenging to support more than thousands of nodes using the existing standards. Moreover, it is highly unlikely that the existing standards will change, primarily due to backward compatibility issues. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and requires no additional memory storage during routing. We also present an adaptive routing algorithm for location-aware applications using our addressing scheme. Through a series of simulations, we show that our approach can achieve two times shorter routing time than the existing standard in a ZigBee network.
Energy Technology Data Exchange (ETDEWEB)
Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel
2009-02-15
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
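For reference, the sequential skeleton that such parallel implementations build on is Brandes' algorithm: one BFS per source followed by a reverse accumulation of pair dependencies. The following is a plain-Python sketch for unweighted undirected graphs, without the lock-free and cache-locality optimizations the paper contributes.

```python
from collections import deque

def betweenness(adj):
    # Brandes' algorithm: for each source, a BFS computes shortest-path
    # counts (sigma); a reverse sweep accumulates dependencies (delta)
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        pred = {v: [] for v in adj}
        sigma = {v: 0.0 for v in adj}
        sigma[s] = 1.0
        dist = {v: -1 for v in adj}
        dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # farthest vertices first
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:
        bc[v] /= 2.0                      # undirected: each pair counted twice
    return bc
```

The per-source loop is what the paper parallelizes; the lock-free aspect concerns concurrent updates to the shared sigma/pred structures across threads.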
An Efficient Algorithm for the Detection of Exposed and Hidden Wormhole Attack
Directory of Open Access Journals (Sweden)
ZUBAIR AHMED KHAN
2016-07-01
MANETs (Mobile Ad Hoc Networks) are slowly integrating into our everyday lives; their most prominent uses are visible in disaster- and war-struck areas where physical infrastructure is almost impossible or very hard to build. MANETs, like other networks, are facing the threat of malicious users and their activities. A number of attacks have been identified, but the most severe of them is the wormhole attack, which has the ability to succeed even in the case of encrypted traffic and secure networks. Once a wormhole is launched successfully, the severity increases by the fact that attackers can launch other attacks too. This paper presents a comprehensive algorithm for the detection of exposed as well as hidden wormhole attacks, keeping the detection rate at a maximum while at the same time reducing false alarms. The algorithm does not require any extra hardware, time synchronization, or any special type of nodes. The architecture consists of the combination of the Routing Table, RTT (Round Trip Time), and RSSI (Received Signal Strength Indicator) for comprehensive detection of the wormhole attack. The proposed technique is robust, lightweight, has low resource requirements, and provides real-time detection against the wormhole attack. Simulation results show that the algorithm is able to provide a higher detection rate, higher packet delivery ratio, and negligible false alarms, and is also better in terms of ease of implementation, detection accuracy/speed, and processing overhead.
Efficiently Hiding Sensitive Itemsets with Transaction Deletion Based on Genetic Algorithms
Directory of Open Access Journals (Sweden)
Chun-Wei Lin
2014-01-01
Data mining is used to mine meaningful and useful information or knowledge from very large databases. Some secure or private information can be discovered by data mining techniques, resulting in an inherent risk of threats to privacy. Privacy-preserving data mining (PPDM) has thus arisen in recent years to sanitize the original database for hiding sensitive information, which can be considered an NP-hard problem in the sanitization process. In this paper, a compact prelarge GA-based algorithm (cpGA2DT) to delete transactions for hiding sensitive itemsets is thus proposed. It overcomes the limitations of the evolutionary process by adopting both the compact GA-based (cGA) mechanism and the prelarge concept. A flexible fitness function with three adjustable weights is designed to find the appropriate transactions to be deleted in order to hide sensitive itemsets with minimal side effects of hiding failure, missing cost, and artificial cost. Experiments are conducted to show the performance of the proposed cpGA2DT algorithm compared to the simple GA-based (sGA2DT) algorithm and the greedy approach in terms of execution time and the three side effects.
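The three-weight fitness idea can be sketched directly from the definitions of the side effects. This is a brute-force illustration for a toy database: the weights, the minimum support, and the exhaustive itemset enumeration are our assumptions, not the paper's GA machinery.

```python
from itertools import chain, combinations

def frequent_itemsets(db, minsup):
    # Exhaustive enumeration (toy-sized databases only)
    items = set(chain.from_iterable(db))
    cands = [frozenset(c) for r in range(1, len(items) + 1)
             for c in combinations(sorted(items), r)]
    return {c for c in cands
            if sum(c <= set(t) for t in db) >= minsup * len(db)}

def fitness(delete_idx, db, sensitive, w=(0.6, 0.3, 0.1), minsup=0.5):
    # Weighted sum of the three side effects named in the paper:
    # hiding failure, missing cost, artificial cost (weights assumed)
    before = frequent_itemsets(db, minsup)
    after_db = [t for i, t in enumerate(db) if i not in delete_idx]
    after = frequent_itemsets(after_db, minsup)
    sens = {frozenset(s) for s in sensitive}
    fail = len(sens & after)            # sensitive itemsets still frequent
    missing = len((before - sens) - after)   # lost non-sensitive itemsets
    artificial = len(after - before)         # newly frequent itemsets
    return w[0] * fail + w[1] * missing + w[2] * artificial
```

A GA such as cpGA2DT evolves candidate deletion sets (`delete_idx`) and uses a fitness of this shape to trade the three side effects against each other.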
Saa, Pedro A; Nielsen, Lars K
2016-12-15
Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using 'loopless constraints'. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm, Fast sparse null-space pursuit (Fast-SNP), inspired by recent results on SNP. By finding a reduced feasible 'loop-law' matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Bahaz, Mohamed; Benzid, Redha
2018-02-05
Electrocardiogram (ECG) signals are often contaminated with artefacts and noises which can lead to incorrect diagnosis when they are visually inspected by cardiologists. In this paper, the well-known discrete Fourier series (DFS) is re-explored and an efficient DFS-based method is proposed to reduce the contribution of both baseline wander (BW) and powerline interference (PLI) noises in ECG records. In the first step, the exact number of low-frequency harmonics contributing to BW is determined. Next, the baseline drift is estimated by the sum of all associated Fourier sinusoid components. Then, the baseline shift is discarded efficiently by subtracting its approximated version from the original biased ECG signal. Concerning the PLI, subtraction of the contributing harmonics, calculated in the same manner, efficiently reduces this type of noise. In addition to visual quality results, the proposed algorithm shows superior performance in terms of higher signal-to-noise ratio and smaller mean square error when compared to the DCT-based algorithm.
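The baseline-wander part of the method can be sketched with a discrete Fourier series: estimate the drift as the sum of the harmonics below a cutoff and subtract that sum from the record. The 0.5 Hz cutoff here is a common choice for BW, not necessarily the paper's value.

```python
import numpy as np

def remove_baseline(ecg, fs, cutoff=0.5):
    # Baseline wander = sum of the low-frequency Fourier harmonics;
    # subtracting the reconstructed sum de-trends the record
    n = len(ecg)
    spec = np.fft.rfft(ecg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    baseline = np.fft.irfft(np.where(freqs < cutoff, spec, 0), n)
    return ecg - baseline
```

PLI would be handled the same way, by reconstructing and subtracting the harmonics around the 50/60 Hz line frequency instead of the low-frequency ones.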
Du, Likai; Lan, Zhenggang
2015-04-14
Nonadiabatic dynamics simulations have rapidly become an indispensable tool for understanding ultrafast photochemical processes in complex systems. Here, we present our recently developed on-the-fly nonadiabatic dynamics package, JADE, which allows researchers to perform nonadiabatic excited-state dynamics simulations of polyatomic systems at an all-atomic level. The nonadiabatic dynamics is based on Tully's surface-hopping approach. Currently, several electronic structure methods (CIS, TDHF, TDDFT(RPA/TDA), and ADC(2)) are supported, especially TDDFT, aiming at performing nonadiabatic dynamics on medium- to large-sized molecules. The JADE package has been interfaced with several quantum chemistry codes, including Turbomole, Gaussian, and Gamess (US). To consider environmental effects, the Langevin dynamics was introduced as an easy-to-use scheme into the standard surface-hopping dynamics. The JADE package is mainly written in Fortran for greater numerical performance and Python for flexible interface construction, with the intent of providing open-source, easy-to-use, well-modularized, and intuitive software in the field of simulations of photochemical and photophysical processes. To illustrate the possible applications of the JADE package, we present a few applications of excited-state dynamics for various polyatomic systems, such as the methaniminium cation, fullerene (C20), p-dimethylaminobenzonitrile (DMABN) and its primary amino derivative aminobenzonitrile (ABN), and 10-hydroxybenzo[h]quinoline (10-HBQ).
Directory of Open Access Journals (Sweden)
Tonny J. Oyana
2012-01-01
Full Text Available The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are updated based on the proportional-integral-derivative (PID) controller. In a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the need for algorithms that converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, improved updating procedure and performance, good robustness, and a faster runtime than Kohonen's SOM.
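The abstract does not specify how the PID controller enters the weight update; the sketch below is one plausible reading, treating the SOM error x − w of the winning unit as the PID error signal. The gains kp, ki, and kd are illustrative assumptions, not the MIL-SOM parameters.

```python
import numpy as np

def pid_update(w, x, state, kp=0.5, ki=0.01, kd=0.1):
    """PID-style update of a winning unit's weight vector toward input x.
    state holds the running integral ("i") and previous error ("e")."""
    err = x - w
    state["i"] = state["i"] + err   # integral term accumulates the error
    d = err - state["e"]            # derivative term: change in error
    state["e"] = err
    return w + kp * err + ki * state["i"] + kd * d
```

Repeatedly applying this update to a constant input drives the weight to that input faster than the proportional (standard SOM) term alone would with the same gain.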
Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits
Directory of Open Access Journals (Sweden)
Leandro eWatanabe
2014-11-01
Full Text Available This paper describes a hierarchical stochastic simulation algorithm which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method.
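iBioSim's hierarchical simulator builds on the stochastic simulation algorithm; as background, a minimal Gillespie direct-method loop (without the paper's hierarchy handling) can be sketched as follows. The birth-death example in the test is an assumption for illustration.

```python
import numpy as np

def gillespie(x0, propensities, stoich, t_end, rng):
    """Gillespie direct method: sample the next reaction time from an
    exponential with rate a0, then pick reaction r with probability a_r/a0."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:                      # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)     # time to next reaction
        r = rng.choice(len(a), p=a / a0)   # which reaction fires
        x = x + stoich[r]
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)
```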
A simple and efficient algorithm operating with linear time for MCEEG data compression.
Titus, Geevarghese; Sudhakar, M S
2017-09-01
Popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantification of system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC) to achieve lossy to near-lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has good recovery performance, with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n), with average encoding and decoding times per sample of 0.3 ms and 0.04 ms, respectively. The performance of the algorithm is comparable with recent methods such as the fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validate the feasibility of the proposed compression scheme for practical MCEEG recording, archiving, and brain-computer interfacing systems.
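The two parallel streams of the scheme can be illustrated with a toy split of a normalized signal into integer and fractional parts; the offset normalization used here is an assumed simplification for illustration, not the published codec.

```python
import numpy as np

def spc_split(x):
    """Shift the signal to a non-negative range, then split it into the
    integer and fractional streams that are coded separately."""
    offset = x.min()
    y = x - offset
    ipart = np.floor(y).astype(int)   # integer stream (spatial encoder input)
    fpart = y - ipart                 # fractional stream, coded as pseudo integers
    return ipart, fpart, offset

def spc_merge(ipart, fpart, offset):
    """Lossless recombination of the two streams."""
    return ipart + fpart + offset
```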
Zhang, Y.; Chatterjea, Supriyo; Havinga, Paul J.M.
2007-01-01
We report our experiences with implementing a distributed and self-organizing scheduling algorithm designed for energy-efficient data gathering on a 25-node multihop wireless sensor network (WSN). The algorithm takes advantage of spatial correlations that exist in readings of adjacent sensor nodes
Directory of Open Access Journals (Sweden)
Paul G. Spirakis
2009-02-01
Full Text Available In this work we focus on the energy efficiency challenge in wireless sensor networks, from both an on-line perspective (related to routing) and a network design perspective (related to tracking). We investigate a few representative, important aspects of energy efficiency: (a) robust and fast data propagation, (b) the problem of balancing the energy dissipation among all sensors in the network, and (c) the problem of efficiently tracking moving entities in sensor networks. Our work here is a methodological survey of selected results that have already appeared in the related literature. In particular, we investigate important issues of energy optimization, like minimizing the total energy dissipation, minimizing the number of transmissions, and balancing the energy load to prolong the system's lifetime. We review characteristic protocols and techniques in the recent literature, including probabilistic forwarding and local optimization methods. We study the problem of localizing and tracking multiple moving targets from a network design perspective, i.e., towards estimating the least possible number of sensors, their positions, and the operation characteristics needed to efficiently perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem as a covering combinatorial problem and then design and analyze an efficient approximate method for sensor placement and operation.
Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N
2015-04-28
Genetic regulatory networks are the key to understanding biochemical systems. The behaviour of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or on the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their time complexity increases exponentially with the number and length of the attractors. This study uses bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing the fixed-length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. After comparison using the tool BooleNet, empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
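For intuition, the attractors of a small synchronous Boolean network can be found by exhaustive enumeration; this brute force is feasible only for tiny networks, unlike the paper's SAT-based bounded model checking, and is shown purely to make the notion of fixed-length attractors concrete.

```python
from itertools import product

def find_attractors(update, n):
    """Exhaustively find all attractors (cycles) of a synchronous Boolean
    network with n nodes; `update` maps a state tuple to the next state."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        # iterate until a state repeats; the repeated state lies on a cycle
        seen = set()
        s = state
        while s not in seen:
            seen.add(s)
            s = update(s)
        cycle = [s]
        t = update(s)
        while t != s:
            cycle.append(t)
            t = update(t)
        attractors.add(frozenset(cycle))
    return attractors
```

Fixed-length attractors are then just the members with a given cycle size, e.g. `[a for a in attractors if len(a) == 2]`.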
Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering
Fonarev, Alexander
2017-02-07
The cold start problem in collaborative filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on the Maxvol algorithm. Unfortunately, this approach has an important limitation: a seed set of a particular size requires a rating matrix factorization of fixed rank that must coincide with that size, which is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of the factorization to be lower than the required size of the seed set. Moreover, the paper includes a theoretical analysis of the method's error, a complexity analysis of the existing methods, and a comparison to state-of-the-art approaches.
An Energy Efficient Simultaneous-Node Repositioning Algorithm for Mobile Sensor Networks
Directory of Open Access Journals (Sweden)
Muhammad Amir Khan
2014-01-01
Full Text Available Recently, wireless sensor network (WSN) applications have seen an increase in interest. In applications such as search and rescue and battlefield reconnaissance, a set of mobile nodes is deployed so that a survey of the area of interest can be made collectively. Keeping the network nodes connected is vital for WSNs to be effective. Connectivity can be provided at startup and maintained by carefully coordinating the nodes when they move. However, if a node suddenly fails, the network could become partitioned, causing communication problems. Recently, several methods that use the relocation of nodes for connectivity restoration have been proposed. However, these methods tend not to consider the potential coverage loss in some locations. This paper addresses the concerns of both connectivity and coverage in an integrated way so that this gap can be filled. A novel algorithm for simultaneous-node repositioning is introduced, in which each neighbour of the failed node, one by one, moves in for a certain amount of time to take the place of the failed node, after which it returns to its original location in the network. The effectiveness of this algorithm has been verified by simulation results.
Schack, Tim; Safi Harb, Yosef; Muma, Michael; Zoubir, Abdelhak M
2017-07-01
Atrial fibrillation (AF) is the most common type of arrhythmia and one of the major causes of stroke, heart failure, sudden death, and cardiovascular morbidity. Its diagnosis and the initiation of treatment, however, currently require electrocardiogram (ECG)-based heart rhythm monitoring. The photoplethysmogram (PPG) offers an alternative method, which is convenient to record and allows for self-monitoring, thus relieving clinical staff and enabling early AF diagnosis. We introduce a PPG-based AF detection algorithm for smartphones that has a low computational cost and low memory requirements. In particular, we propose a modified PPG signal acquisition, explore new statistical discriminating features, and derive simple classification equations by using sequential forward selection (SFS) and support vector machines (SVM). The algorithm is applied to clinical data and evaluated in terms of the receiver operating characteristic (ROC) curve and statistical measures. The combination of Shannon entropy and the median of the peak rise height achieves perfect detection of AF on the recorded data, highlighting the potential of PPG for reliable AF detection.
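The two winning features named in the abstract, Shannon entropy and median peak rise height, can be sketched as below. The histogram bin count and the exact feature definitions (entropy of inter-peak intervals; rise height as peak minus preceding trough) are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Histogram-based Shannon entropy (in bits) of a 1-D sample."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def af_features(peak_times, peak_heights, trough_heights):
    """Entropy of the inter-peak intervals (high for the irregular rhythm
    of AF) and the median peak rise height."""
    intervals = np.diff(peak_times)
    return shannon_entropy(intervals), float(np.median(peak_heights - trough_heights))
```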
Hui, Tin-Yu J; Burt, Austin
2015-05-01
The effective population size Ne is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating Ne have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator of contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying Ne is large. This article works around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of Ne to several million, hence allowing the estimation of larger Ne. Finally, we demonstrate how this algorithm can cope with nonconstant Ne scenarios and be used as a likelihood-ratio test to test for the equality of Ne throughout the sampling horizon. An R package "NB" is now available for download to implement the method described in this article. Copyright © 2015 by the Genetics Society of America.
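For contrast with the article's likelihood/HMM estimator, the classical moment-based temporal estimator (Ne ≈ t / 2F, where F is a standardized variance of the allele-frequency change over t generations) is easy to sketch. This is the simpler baseline the article improves on, shown here without sampling-error corrections; that omission is an assumption for illustration.

```python
import numpy as np

def temporal_ne(p1, p2, t):
    """Moment-based temporal Ne estimate from census allele frequencies
    p1, p2 (arrays over loci) sampled t generations apart."""
    z = (p1 + p2) / 2 - p1 * p2       # standardizing denominator per locus
    f = np.mean((p1 - p2) ** 2 / z)   # mean standardized frequency change
    return t / (2 * f)
```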
An energy efficient simultaneous-node repositioning algorithm for mobile sensor networks.
Khan, Muhammad Amir; Hasbullah, Halabi; Nazir, Babar; Khan, Imran Ali
2014-01-01
Recently, wireless sensor network (WSN) applications have seen an increase in interest. In applications such as search and rescue and battlefield reconnaissance, a set of mobile nodes is deployed so that a survey of the area of interest can be made collectively. Keeping the network nodes connected is vital for WSNs to be effective. Connectivity can be provided at startup and maintained by carefully coordinating the nodes when they move. However, if a node suddenly fails, the network could become partitioned, causing communication problems. Recently, several methods that use the relocation of nodes for connectivity restoration have been proposed. However, these methods tend not to consider the potential coverage loss in some locations. This paper addresses the concerns of both connectivity and coverage in an integrated way so that this gap can be filled. A novel algorithm for simultaneous-node repositioning is introduced, in which each neighbour of the failed node, one by one, moves in for a certain amount of time to take the place of the failed node, after which it returns to its original location in the network. The effectiveness of this algorithm has been verified by simulation results.
Efficient algorithms for the dynamics of large and infinite classical central spin models
Fauseweh, Benedikt; Schering, Philipp; Hüdepohl, Jan; Uhrig, Götz S.
2017-08-01
We investigate the time dependence of correlation functions in the central spin model, which describes the electron or hole spin confined in a quantum dot, interacting with a bath of nuclear spins forming the Overhauser field. For large baths, a classical description of the model yields quantitatively correct results. We develop and apply various algorithms in order to capture the long-time limit of the central spin for bath sizes from 1000 to infinitely many bath spins. Representing the Overhauser field in terms of orthogonal polynomials, we show that a carefully reduced set of differential equations is sufficient to compute the spin correlations of the full problem up to very long times, for instance up to 10^5 ℏ/J_Q, where J_Q is the natural energy unit of the system. This technical progress renders an analysis of the model with experimentally relevant parameters possible. We benchmark the results of the algorithms with exact data for a small number of bath spins and we predict how the long-time correlations behave for different effective numbers of bath spins.
Energy Technology Data Exchange (ETDEWEB)
Frolov, Vladimir [Moscow Inst. of Physics and Technology (MIPT), Moscow (Russian Federation); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-01-14
In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS devices that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting-plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.
Directory of Open Access Journals (Sweden)
C. Ozkan
2012-07-01
Full Text Available Marine oil spills due to releases of crude oil from tankers, offshore platforms, drilling rigs, wells, etc. seriously affect the fragile marine and coastal ecosystem and cause political and environmental concern. A catastrophic explosion and subsequent fire on the Deepwater Horizon oil platform caused the platform to burn and sink, and oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m3 of crude oil into the Gulf of Mexico. Today, space-borne SAR sensors are extensively used for the detection of oil spills in the marine environment, as they are independent of sunlight, not affected by cloudiness, and more cost-effective than air patrolling due to their large-area coverage. In this study, the generalization ability of an object-based classification algorithm was tested for oil spill detection using multiple SAR imagery data. Among many geometrical, physical, and textural features, some of the more distinctive ones were selected to distinguish oil slicks from look-alike objects. The tested classifier was constructed from a Multilayer Perceptron Artificial Neural Network trained by the ABC, LM, and BP optimization algorithms. The training data were derived from SAR imagery of the oil spill that originated off Lebanon in 2007. The classifier was then applied to the Deepwater Horizon oil spill data in the Gulf of Mexico, on RADARSAT-2 and ALOS PALSAR images, to demonstrate the generalization efficiency of the oil slick classification algorithm.
DEFF Research Database (Denmark)
Meng, Lexuan; Dragicevic, Tomislav; Guerrero, Josep M.
2014-01-01
In a DC microgrid, several paralleled conversion systems are installed in distributed substations for transferring power from external grid to a DC microgrid. Droop control is used for the distributed load sharing among all the DC/DC converters. Considering the typical efficiency feature of power...
An Efficient Voting Algorithm for Finding Additive Biclusters with Random Background
Xiao, Jing; Wang, Lusheng; Liu, Xiaowen
2008-01-01
The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L − 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L − 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 − 9/n². We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several
An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes
Vincenti, H; Sasanka, R; Vay, J-L
2016-01-01
In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (10 pJ/word on-die to 10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use more and more cores on each compute node ("fat nodes") that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering...
James, Jason; Dagli, Cihan H.
1995-04-01
In this study, an attempt is made to encode the architecture of a neural network in a chromosome string for evolving robust, fast-learning, minimal neural network architectures through genetic algorithms. Various attributes affecting the learning of the network are represented as genes. The performance of the networks is used as the fitness value. Neural network architecture design concepts are initially demonstrated using a backpropagation architecture with the standard data set of Rosenberg and Sejnowski for text-to-speech conversion on Adaptive Solutions Inc.'s CNAPS Neuro-Computer. The architectures obtained are compared with the one reported in the literature for the standard data set used. The study concludes by providing some insights regarding architecture encoding for other artificial neural network paradigms.
Two Efficient Generalized Laguerre Spectral Algorithms for Fractional Initial Value Problems
Directory of Open Access Journals (Sweden)
D. Baleanu
2013-01-01
Full Text Available We present a direct solution technique for approximating linear multiterm fractional differential equations (FDEs) on the semi-infinite interval, using generalized Laguerre polynomials. We derive the operational matrix of the Caputo fractional derivative of the generalized Laguerre polynomials, which is applied together with the generalized Laguerre tau approximation to implement a spectral solution of linear multiterm FDEs on the semi-infinite interval subject to initial conditions. The generalized Laguerre pseudo-spectral approximation based on the generalized Laguerre operational matrix is investigated to reduce nonlinear multiterm FDEs and their initial conditions to a nonlinear algebraic system, thus greatly simplifying the problem. Through several numerical examples, we confirm the accuracy and performance of the proposed spectral algorithms. Indeed, the methods yield accurate results, and the exact solutions are achieved for some tested problems.
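The operational matrix mentioned in the abstract is built around the Caputo fractional derivative; for reference, its standard definition for order ν with n − 1 < ν ≤ n is

```latex
D^{\nu} f(t) \;=\; \frac{1}{\Gamma(n-\nu)} \int_{0}^{t}
\frac{f^{(n)}(\tau)}{(t-\tau)^{\nu-n+1}} \, d\tau ,
\qquad n-1 < \nu \le n .
```

Applying this operator to each generalized Laguerre basis polynomial and re-expanding in the same basis yields the operational matrix that turns the FDE into an algebraic system.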
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency-selective fading because of the increased symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of sub-carriers and severely degrades system performance. To alleviate the detrimental effect of ICI, there is a need for ICI mitigation within one OFDM symbol. We propose an iterative ICI estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN); the effect of ICI and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En
2015-08-13
A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
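The NEO detector named in the abstract is simple to state in software: psi[n] = x[n]² − x[n−1]·x[n+1], thresholded to flag spike samples. The threshold value below is an assumption for illustration; the paper's hardware realization shares one such computation core across channels.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    Emphasizes samples that are both large and rapidly changing."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, threshold):
    """Indices where the NEO output exceeds a fixed threshold."""
    return np.where(neo(x) > threshold)[0]
```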
An Efficient Surface Algorithm for Random-Particle Simulation of Vorticity and Heat Transport
Smith, P. A.; Stansby, P. K.
1989-04-01
A new surface algorithm has been incorporated into the random-vortex method for the simulation of 2-dimensional laminar flow, in which vortex particles are deleted rather than reflected as they cross a solid surface. This involves a modification to the strength and random walk of newly created vortex particles. Computations of the early stages of symmetric, impulsively started flow around a circular cylinder for a wide range of Reynolds numbers demonstrate that the number of vortices required for convergence is substantially reduced. The method has been further extended to accommodate forced convective heat transfer where temperature particles are created at a surface to satisfy the condition of constant surface temperature. Vortex and temperature particles are handled together throughout each time step. For long runs, in which a steady state is reached, comparison is made with some time-averaged experimental heat transfer data for Reynolds numbers up to a few hundred. A Karman vortex street occurs at the higher Reynolds numbers.
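The random-walk half of the random-vortex method models viscous diffusion by giving every particle a Gaussian displacement of variance 2νΔt per coordinate each time step. A minimal sketch, without the paper's surface creation and deletion logic, is:

```python
import numpy as np

def diffuse(positions, nu, dt, rng):
    """One viscous-diffusion step of the random-vortex method: each particle
    takes a Gaussian random walk with per-coordinate variance 2*nu*dt."""
    return positions + rng.normal(0.0, np.sqrt(2 * nu * dt), positions.shape)
```

In a full simulation this step alternates with a convection step that moves particles with the velocity field they collectively induce.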
Energy efficient sensor nodes placement using Territorial Predator Scent Marking Algorithm (TPSMA)
Abidin, H. Z.; Din, N. M.
2013-06-01
The positions of sensor nodes in a Wireless Sensor Network (WSN) must provide maximum coverage with a longer lifetime. This paper proposes a sensor node placement technique that utilizes a new biologically inspired optimization technique, the Territorial Predator Scent Marking Algorithm (TPSMA), which imitates the behavior of territorial predators in marking their territories with their odors. The TPSMA deployed in this paper uses the maximum coverage ratio as the objective function. The performance of the proposed technique is then compared with two other schemes in terms of uniformity and average energy consumption. Simulation results show that the WSN deployed with the proposed sensor node placement scheme consumes less energy than the other two schemes and is expected to provide a longer lifetime.
Proposing an efficient algorithm for designing universal nanoelectronic molecular logic gates
Khamforoosh, Keyhan
2015-06-01
In today's world, there are still demands for minimising the dimensions of electronic circuits, the result of which is the design of nanoelectronic circuits and very small molecular gates and switches. What causes trouble in this design is the high impact of different parameters on the performance of the circuit. Despite the suggestion of simple electronic circuits and different gates, the impact of parameters such as the length of the molecule, the angles between different atoms, the coupling of the electrodes to the molecule, the type of atoms used in the molecule's structure, and other factors has made their development almost impossible. In this paper, we study previous works in order to, first, describe the effects of different conditions on circuit performance and, second, present an algorithm for designing gates that minimises the effects of these parameters on circuit performance.
An efficient algorithm for numerical computations of continuous densities of states
Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.
2016-06-01
In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the algorithm parameter dependence of the results is performed and compared with the analytically expected behaviour. We obtain high precision values for the critical coupling for the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results perfectly agree with the reference values reported in the literature, which covers lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which, due to strong metastabilities developed at the pseudo-critical coupling of the system, so far has been out of reach even on supercomputers with importance sampling approaches, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed out.
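The LLR approach is histogram-free; for contrast, the classic Wang-Landau flat-histogram iteration it improves on can be sketched on a toy system with an exactly known density of states (n independent spins, energy E = number of up spins, g(E) = C(n, E)). The flatness criterion (0.8) and the halving schedule for the modification factor are conventional choices, not the paper's parameters.

```python
import numpy as np

def wang_landau_dos(n=10, f_min=1e-3, flat=0.8, seed=1):
    """Wang-Landau estimate of ln g(E) for n independent spins,
    where E = number of up spins (exact g(E) = C(n, E))."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(0, 2, n)
    ln_g = np.zeros(n + 1)        # running estimate of ln g(E)
    hist = np.zeros(n + 1)        # visit histogram over energies
    f = 1.0                       # modification factor, halved when flat
    e = int(spins.sum())
    while f > f_min:
        for _ in range(10000):
            i = int(rng.integers(n))
            e_new = e + 1 - 2 * int(spins[i])          # energy after flipping spin i
            delta = ln_g[e] - ln_g[e_new]
            if delta >= 0 or rng.random() < np.exp(delta):   # accept with min(1, g(E)/g(E'))
                spins[i] ^= 1
                e = e_new
            ln_g[e] += f                               # refine estimate at current energy
            hist[e] += 1
        if hist.min() > flat * hist.mean():            # histogram flat: tighten f
            hist[:] = 0.0
            f /= 2.0
    return ln_g - ln_g[0]                              # normalise so ln g(0) = 0
```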
A contention-based efficient-information perception algorithm (CEiPA) for vehicular ad hoc networks
Institute of Scientific and Technical Information of China (English)
Chen Lijia; Jiang Hao; Yan Puliu
2009-01-01
The problem of information dissemination in vehicular ad hoc networks (VANET) is studied in this paper, and a contention-based efficient-information perception algorithm (CEiPA) is proposed. The idea of CEiPA is that beacons are delivered over the VANET with limited lifetime and efficient information. CEiPA consists of two phases. The first is an initialization phase, during which the count timers Tcycle and Tlocal are set to start beacon delivery, while Tcycle is also used to monitor and restart beaconing. The second is the beacon delivery phase, in which an elaborate distance function is employed to set the contention delay for the beacons of each vehicle. In this way beacons are sent in order, which decreases beacon collisions. Simulation results show that CEiPA enables each beacon to carry more efficient information and spread it over more vehicles with lower network overhead than the periodic beacon scheme. CEiPA is also flexible and scalable because the efficient-information threshold it employs balances the freshness of information, network overhead, and the perception area of each vehicle.
Directory of Open Access Journals (Sweden)
Peng Xie
2017-05-01
Full Text Available The Earth’s surface is uneven, and conventional area calculation methods are based on the assumption that the projection plane area can be obtained without considering the actual undulation of the Earth’s surface and by simplifying the Earth’s shape to a standard ellipsoid. However, the true surface area is important for investigating and evaluating land resources. In this study, the authors propose a new method based on an efficient vector-raster overlay algorithm (the VROA-based method) to calculate the surface areas of irregularly shaped land use patches. In this method, a surface area raster file is first generated from the raster-based digital elevation model (raster-based DEM). Then, a vector-raster overlay algorithm (VROA) is used that considers the precise clipping of raster cells by the vector polygon boundary. Xiantao City, Luotian County, and the Shennongjia Forestry District, which are representative of a plain landform, a hilly topography, and a mountain landscape, respectively, are selected for calculating the surface area. Compared with a traditional method based on triangulated irregular networks (the TIN-based method), our method significantly reduces the processing time. In addition, our method effectively improves the accuracy compared with another traditional method based on the raster-based DEM (the raster-based method). Therefore, the method satisfies the requirements of large-scale engineering applications.
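The first step the abstract describes, turning a raster DEM into a surface-area raster, can be sketched by scaling each cell's planimetric area by the secant of its local slope. This is only the generic raster-based idea, not the authors' VROA clipping of cells by vector boundaries:

```python
import math

def surface_area(dem, cell):
    """dem: 2-D list of elevations (m), cell: grid spacing (m).
    Each cell's planimetric area cell*cell is scaled by
    sqrt(1 + gx^2 + gy^2), the secant of the local slope angle,
    with gradients from central (interior) or one-sided (edge)
    finite differences."""
    rows, cols = len(dem), len(dem[0])

    def grad(i, j, axis):
        if axis == 0:                          # d(elevation)/dy
            lo, hi = max(i - 1, 0), min(i + 1, rows - 1)
            return (dem[hi][j] - dem[lo][j]) / ((hi - lo) * cell)
        lo, hi = max(j - 1, 0), min(j + 1, cols - 1)
        return (dem[i][hi] - dem[i][lo]) / ((hi - lo) * cell)

    total = 0.0
    for i in range(rows):
        for j in range(cols):
            gx = grad(i, j, 1)
            gy = grad(i, j, 0)
            total += cell * cell * math.sqrt(1.0 + gx * gx + gy * gy)
    return total

flat = [[10.0] * 4 for _ in range(4)]
tilted = [[0.5 * j for j in range(4)] for _ in range(4)]  # plane z = 0.5*x
```

For a flat DEM the result equals the planimetric area; for the tilted plane every cell is stretched by sqrt(1.25).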
Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan
2017-03-01
We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell’s equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise-linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulations show that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi-Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method achieves convergence efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.
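For readers unfamiliar with QEPs: a quadratic eigenvalue problem can always be rewritten as a linear (generalized) eigenproblem of twice the size via a companion linearization. The form below is the standard first companion form, not necessarily the formulation used by the authors:

```latex
\left(\lambda^{2} M + \lambda C + K\right) x = 0
\quad\Longleftrightarrow\quad
\begin{bmatrix} -C & -K \\ I & 0 \end{bmatrix}
\begin{bmatrix} \lambda x \\ x \end{bmatrix}
= \lambda
\begin{bmatrix} M & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} \lambda x \\ x \end{bmatrix}
```

The first block row reproduces $\lambda^{2}Mx + \lambda Cx + Kx = 0$ and the second is the identity $\lambda x = \lambda x$; methods such as Jacobi-Davidson instead work on the quadratic form directly to avoid doubling the problem size.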
An Efficient I-MINE Algorithm for Materialized Views in a data Warehouse Environment
Nalini, T.; Kumaravel, A.; Rangarajan, K.
2011-01-01
The ability to provide decision makers with both accurate and timely consolidated information as well as rapid query response times is the fundamental requirement for the success of a data warehouse. Selecting views to materialize for the purpose of supporting decision making efficiently is one of the most significant decisions in designing a data warehouse. Selecting a set of derived views to materialize that minimizes the sum of the total query response time and the maintenance cost of the selected views ...
An Efficient Inverse Kinematic Algorithm for a PUMA560-Structured Robot Manipulator
Huashan Liu; Wuneng Zhou; Xiaobo Lai; Shiqiang Zhu
2013-01-01
This paper presents an efficient inverse kinematics (IK) approach featuring fast computing performance for a PUMA560-structured robot manipulator. Using properties of the orthogonal matrix and block matrices, the complex IK matrix equations are transformed into eight purely algebraic equations containing the six unknown joint angle variables, which makes the solving compact without computing the inverses of the 4×4 homogeneous transformation matrices. Moreover, the appropriate combin...
Shukla, K K
2013-01-01
Due to its inherent time-scale locality characteristics, the discrete wavelet transform (DWT) has received considerable attention in signal/image processing. Wavelet transforms have excellent energy compaction characteristics and can provide perfect reconstruction. The shifting (translation) and scaling (dilation) are unique to wavelets. Orthogonality of wavelets with respect to dilations leads to multigrid representation. As the computation of DWT involves filtering, an efficient filtering process is essential in DWT hardware implementation. In the multistage DWT, coefficients are calculated
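The filtering view of the DWT mentioned above, and its perfect-reconstruction property, can be illustrated with the simplest wavelet. This is a plain single-level Haar transform in software, not the hardware filter architecture the text discusses:

```python
import math

def haar_dwt(x):
    """One level of the orthonormal Haar DWT (len(x) must be even):
    low-pass (approximation) and high-pass (detail) filtering with
    downsampling by 2."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfect reconstruction from the two bands."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x

x = [4.0, 6.0, 10.0, 12.0, 1.0, 1.0, 3.0, 7.0]
a, d = haar_dwt(x)
rec = haar_idwt(a, d)
```

Orthonormality means the energy of the signal is preserved across the two bands (Parseval), which is the energy-compaction property the text refers to: most energy concentrates in the approximation band for smooth signals.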
Zhu, Yanwei; Yi, Fajun; Meng, Songhe; Zhuo, Lijun; Pan, Weizhen
2017-11-01
Improving the surface heat load measurement technique for vehicles in aerodynamic heating environments is imperative, in terms of both apparatus design and identification efficiency. A simple novel apparatus is designed for heat load identification, taking into account the lessons learned from several aerodynamic heating measurement devices. An inverse finite difference scheme (invFDM) for the apparatus is studied to identify its surface heat flux from interior temperature measurements with high efficiency. A weighted piecewise regression filter is also proposed for prefiltering the temperature measurements. Preliminary verification of the invFDM scheme and the filter is accomplished via numerical simulation experiments. Three specific pieces of apparatus were designed and fabricated using different sensing materials. The aerodynamic heating process is simulated in an inductively coupled plasma wind tunnel facility. The identification of surface temperature and heat flux from the temperature measurements is performed by invFDM. The results validate the high efficiency, reliability and feasibility of heat load measurements at different heat flux levels using the designed apparatus and the proposed method.
An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image
Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan
2017-10-01
It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for the business. Currently, the main methods for avoiding server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features and a modified entropy feature are extracted from the segmented regions. These characteristics are used to analyze and classify thermal faults and then make efficient energy-saving thermal management decisions such as job migration. For the larger feature space, principal component analysis is employed to reduce the feature dimensions and guarantee a high processing speed without losing the fault feature information. Finally, the different feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method provides suggestions for optimizing data center management; it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
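One of the region features mentioned, an entropy feature, can be sketched as the Shannon entropy of a region's grey-level histogram. The abstract speaks of a "modified" entropy feature whose exact definition is not given; this is the plain textbook version:

```python
import math
from collections import Counter

def entropy_feature(region):
    """Shannon entropy (bits) of the grey-level histogram of a region,
    given as a flat list of 8-bit pixel values.  A uniform (flat)
    region scores 0; more varied thermal texture scores higher."""
    n = len(region)
    counts = Counter(region)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A flat region of a server panel yields 0 bits, while a region split evenly between two grey levels yields exactly 1 bit; such scalars would then be stacked with the texture and Hu-moment features before PCA and SVM classification.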
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-01-01
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 Mpixels/s and makes processing of a Full HD video stream (1920 × 1080 @ 60 fps) possible. The structure of the optical flow module as well as the pre- and post-filtering blocks and a flow reliability computation unit is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID: 24526303
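The iteration that the pipeline implements can be stated in a few lines of software. This is a minimal single-scale Horn-Schunck reference (clamped borders, 4-neighbour averaging), not the article's fixed-point hardware architecture:

```python
def horn_schunck(I1, I2, alpha=0.1, iters=100):
    """Minimal single-scale Horn-Schunck on 2-D lists; returns (u, v).
    Each iteration relaxes flow toward the neighbourhood average while
    enforcing the brightness-constancy constraint Ix*u + Iy*v + It = 0."""
    H, W = len(I1), len(I1[0])

    def px(F, y, x):                      # clamped pixel access
        return F[min(max(y, 0), H - 1)][min(max(x, 0), W - 1)]

    Ix = [[(px(I1, y, x + 1) - px(I1, y, x - 1)) / 2.0 for x in range(W)] for y in range(H)]
    Iy = [[(px(I1, y + 1, x) - px(I1, y - 1, x)) / 2.0 for x in range(W)] for y in range(H)]
    It = [[I2[y][x] - I1[y][x] for x in range(W)] for y in range(H)]
    u = [[0.0] * W for _ in range(H)]
    v = [[0.0] * W for _ in range(H)]

    def avg(F, y, x):
        return (px(F, y - 1, x) + px(F, y + 1, x) + px(F, y, x - 1) + px(F, y, x + 1)) / 4.0

    for _ in range(iters):
        nu = [[0.0] * W for _ in range(H)]
        nv = [[0.0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                ua, va = avg(u, y, x), avg(v, y, x)
                t = (Ix[y][x] * ua + Iy[y][x] * va + It[y][x]) / \
                    (alpha ** 2 + Ix[y][x] ** 2 + Iy[y][x] ** 2)
                nu[y][x] = ua - Ix[y][x] * t
                nv[y][x] = va - Iy[y][x] * t
        u, v = nu, nv
    return u, v

# Horizontal intensity ramp shifted right by one pixel: true flow is u = 1, v = 0.
I1 = [[float(x) for x in range(6)] for _ in range(6)]
I2 = [[float(x) - 1.0 for x in range(6)] for _ in range(6)]
u, v = horn_schunck(I1, I2)
```

The hardware version pipelines exactly this per-pixel update, which is why the iteration count and numerical precision dominate its resource/accuracy trade-off.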
Efficient parallel implementations of approximation algorithms for guarding 1.5D terrains
Directory of Open Access Journals (Sweden)
Goran Martinović
2015-03-01
Full Text Available In the 1.5D terrain guarding problem, an x-monotone polygonal line is defined by k vertices, together with a set G of terrain points (guards) and a set N of terrain points that the guards are to observe (guard). We consider a weighted version of the guarding problem in which the guards in G have weights. The goal is to determine a minimum-weight subset of G that covers all the points in N, including a version where the points in N have demands. Furthermore, another goal is to determine the smallest subset of G such that every point in N is observed by the required number of guards. Both problems are NP-hard and admit factor-5 approximations [3, 4]. This paper shows that if a (1+ϵ)-approximate solver is used for the corresponding linear program, for any ϵ > 0, an extra factor of 1+ϵ appears in the final approximation factor for both problems. A comparison of the parallel implementations based on GPU and CPU threads with the Gurobi solver leads to the conclusion that the respective algorithm outperforms the Gurobi solver on large and dense inputs, typically by one order of magnitude.
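To make the covering formulation concrete, here is the classic greedy cost-effectiveness heuristic for weighted set cover (an ln n approximation). It is a simple stand-in to show the problem structure, not the LP-based factor-5 algorithms the paper parallelizes:

```python
def greedy_weighted_cover(points, guards):
    """points: set of terrain points to observe.
    guards: dict name -> (weight, set of points that guard can see).
    Repeatedly pick the guard with the lowest weight per newly
    covered point (greedy weighted set cover)."""
    uncovered = set(points)
    chosen, total = [], 0.0
    while uncovered:
        best = min(
            (g for g in guards if guards[g][1] & uncovered),
            key=lambda g: guards[g][0] / len(guards[g][1] & uncovered),
        )
        chosen.append(best)
        total += guards[best][0]
        uncovered -= guards[best][1]
    return chosen, total

guards = {
    "g1": (1.0, {1, 2}),          # cheap, covers the left points
    "g2": (1.0, {3, 4}),          # cheap, covers the right points
    "g3": (2.5, {1, 2, 3, 4}),    # covers everything but costs more
}
chosen, cost = greedy_weighted_cover({1, 2, 3, 4}, guards)
```

Here the greedy rule prefers the two cheap guards (cost-effectiveness 0.5 each) over the single expensive one (0.625), giving total weight 2.0 instead of 2.5.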
PEAT: an intelligent and efficient paired-end sequencing adapter trimming algorithm.
Li, Yun-Lung; Weng, Jui-Cheng; Hsiao, Chiung-Chih; Chou, Min-Te; Tseng, Chin-Wen; Hung, Jui-Hung
2015-01-01
In modern paired-end sequencing protocols, short DNA fragments lead to adapter-appended reads. Current paired-end adapter removal approaches trim adapters by scanning for adapter fragments at the 3' ends of the reads, which is not adequate in some applications. Here, we propose a fast and highly accurate adapter-trimming algorithm, PEAT, designed specifically for paired-end sequencing. PEAT requires no a priori adapter sequence, which is convenient for large-scale meta-analyses. We assessed the performance of PEAT against many adapter trimmers on both simulated and real-life paired-end sequencing libraries. The importance of adapter trimming is exemplified by its influence on the downstream analyses of RNA-seq, ChIP-seq and MNase-seq. Several useful guidelines for applying adapter trimmers with aligners are suggested. PEAT can easily be included in routine paired-end sequencing pipelines. The executable binaries and the standalone C++ source code package of PEAT are freely available online.
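The reason no a priori adapter sequence is needed can be sketched as read-through detection: if the fragment is shorter than the read length, read 1 and the reverse complement of read 2 share the fragment as a common region, and everything beyond it is adapter. This toy version uses exact matching only (PEAT itself tolerates mismatches; the reads and adapters below are made up):

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    return s.translate(COMP)[::-1]

def trim_pair(read1, read2, min_overlap=10):
    """If the DNA fragment is shorter than the read length, read1 runs
    into adapter 1 and read2 into adapter 2.  The fragment length is
    then the largest k with read1[:k] == revcomp(read2)[-k:], and both
    reads are trimmed to k bases.  Exact-match sketch only."""
    rc2 = revcomp(read2)
    for k in range(min(len(read1), len(read2)), min_overlap - 1, -1):
        if read1[:k] == rc2[-k:]:
            return read1[:k], read2[:k]
    return read1, read2   # no read-through detected; leave untouched

fragment = "ACGTTGCAGGTCAAGT"               # 16 bp insert (made up)
read1 = fragment + "AGATCG"                 # runs into adapter 1
read2 = revcomp(fragment) + "TTTTTT"        # runs into adapter 2
t1, t2 = trim_pair(read1, read2)
```

Because the fragment is recovered from the mate-pair geometry itself, the adapter sequences never need to be known, which is the property the abstract highlights.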
Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai
2017-07-01
Drinking water distribution networks are part of critical infrastructure and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and on early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following, the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP7 of the European Commission.
qPMS9: An Efficient Algorithm for Quorum Planted Motif Search
Nicolae, Marius; Rajasekaran, Sanguthevar
2015-01-01
Discovering patterns in biological sequences is a crucial problem. For example, the identification of patterns in DNA sequences has resulted in the determination of open reading frames, identification of gene promoter elements, intron/exon splicing sites and shRNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have led to domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, discovery of short functional motifs, etc. In this paper we focus on the identification of an important class of patterns, namely, motifs. We study the (l, d) motif search problem, or Planted Motif Search (PMS). PMS receives as input n strings and two integers l and d. It returns all sequences M of length l that occur in each input string, where each occurrence differs from M in at most d positions. Another formulation is quorum PMS (qPMS), where the motif appears in at least q% of the strings. We introduce qPMS9, a parallel exact qPMS algorithm that offers significant runtime improvements on DNA and protein datasets. qPMS9 solves the challenging DNA (l, d)-instances (28, 12) and (30, 13). The source code is available at https://code.google.com/p/qpms9/.
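The (l, d) qPMS problem definition above can be made concrete with a brute-force solver for tiny inputs. Enumerating all 4^l candidates is obviously nothing like qPMS9's pruned search (which handles l = 28-30), but it states the problem exactly:

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def planted_motif_search(strings, l, d, q=100):
    """Brute-force (l, d) quorum PMS for tiny inputs: return every
    l-mer over ACGT that occurs, within d mismatches, in at least
    q% of the input strings.  q = 100 gives the plain PMS problem."""
    candidates = {"".join(p) for p in product("ACGT", repeat=l)}
    need = -(-q * len(strings) // 100)          # ceil(q * n / 100)

    def occurs(m, s):
        return any(hamming(m, s[i:i + l]) <= d
                   for i in range(len(s) - l + 1))

    return {m for m in candidates
            if sum(occurs(m, s) for s in strings) >= need}

# "ACGT" is planted exactly in strings 1 and 3 and with one mismatch
# ("ACGA") in string 2, so it is a valid (4, 1) motif.
motifs = planted_motif_search(["AAACGTAA", "TTACGATT", "GGACGTGG"], 4, 1)
```

Exact algorithms such as qPMS9 get their speed from never materializing this 4^l candidate space, pruning via neighbourhoods of observed l-mers instead.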
A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs
Directory of Open Access Journals (Sweden)
Chunhui Zhao
2017-02-01
Full Text Available The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain better detection performance. However, it still has two limitations that can be addressed. On the one hand, reasonable integration of spatial-spectral information can further improve its detection accuracy. On the other hand, parallel computing can reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes spatial neighborhood resources to reconstruct the test pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in the KRX detector to implement anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.
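The baseline that KRX and WSSKRX build on is the classic (linear) RX detector: the Mahalanobis distance of each pixel from the global background statistics. This two-band sketch with hand-written 2×2 matrix inversion illustrates that baseline only, not the kernelized or spatially weighted variants:

```python
def rx_scores(pixels):
    """Global RX anomaly detector for 2-band pixels: the Mahalanobis
    distance of each pixel from the scene mean under the scene
    covariance.  Anomalies score high."""
    n = len(pixels)
    m0 = sum(p[0] for p in pixels) / n
    m1 = sum(p[1] for p in pixels) / n
    c00 = sum((p[0] - m0) ** 2 for p in pixels) / n
    c11 = sum((p[1] - m1) ** 2 for p in pixels) / n
    c01 = sum((p[0] - m0) * (p[1] - m1) for p in pixels) / n
    det = c00 * c11 - c01 * c01
    i00, i01, i11 = c11 / det, -c01 / det, c00 / det   # 2x2 inverse
    scores = []
    for p in pixels:
        d0, d1 = p[0] - m0, p[1] - m1
        scores.append(d0 * (i00 * d0 + i01 * d1) + d1 * (i01 * d0 + i11 * d1))
    return scores

# Eight background pixels clustered near (1, 1) plus one anomaly.
pixels = [(1.2, 1.0), (0.8, 1.0), (1.0, 1.2), (1.0, 0.8),
          (1.15, 1.15), (0.85, 0.85), (1.15, 0.85), (0.85, 1.15),
          (3.0, -1.0)]
scores = rx_scores(pixels)
```

KRX replaces the inner products here with kernel evaluations, and WSSKRX additionally reconstructs each test pixel from its spatial window before scoring.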
Ayub, Kamran; Khan, M. Yaqub; Mahmood-Ul-Hassan, Qazi; Ahmad, Jamshad
2017-09-01
Nonlinear mathematical problems and their solutions attract much attention in the study of solitary waves. In soliton theory, an efficient tool for obtaining various types of soliton solutions is the exp(−φ(ζ))-expansion technique. This article is devoted to finding exact travelling wave solutions of the Drinfeld-Sokolov equation via this reliable mathematical technique. Using the proposed technique, we obtain soliton wave solutions of various types. It is observed that the technique under discussion is user-friendly with minimal computational work, and can be extended to physical problems of a different nature in mathematical physics.
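For orientation, the exp(−φ(ζ))-expansion method seeks travelling wave solutions as a finite power series in exp(−φ(ζ)), where φ satisfies an auxiliary first-order ODE. The standard form of the ansatz (as commonly stated in the literature on this method, not quoted from this article) is:

```latex
u(\zeta) = \sum_{i=0}^{N} a_i \left[\exp\!\left(-\varphi(\zeta)\right)\right]^{i},
\qquad
\varphi'(\zeta) = \exp\!\left(-\varphi(\zeta)\right) + \mu \exp\!\left(\varphi(\zeta)\right) + \lambda,
```

where $N$ is fixed by balancing the highest-order derivative against the strongest nonlinear term, and the known solution families of the auxiliary ODE (hyperbolic, trigonometric, rational) generate the different soliton types mentioned in the abstract.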
An efficient algorithm for solving fractional differential equations with boundary conditions
Directory of Open Access Journals (Sweden)
Alkan Sertan
2016-01-01
Full Text Available In this paper, a sinc-collocation method is described to determine the approximate solution of a fractional-order boundary value problem (FBVP). The results obtained are presented as two new theorems. The fractional derivatives are defined in the Caputo sense, which is often used in fractional calculus. In order to demonstrate the efficiency and capacity of the present method, it is applied to some FBVPs with variable coefficients. The obtained results are compared to exact solutions as well as cubic spline solutions. The comparisons show that the sinc-collocation method is a powerful and promising method for determining the approximate solutions of FBVPs in different types of scenarios.
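For reference, the Caputo fractional derivative the abstract relies on is the standard definition:

```latex
{}^{C}D^{\alpha} f(t)
= \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
  \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau,
\qquad n-1 < \alpha < n,\; n \in \mathbb{N}.
```

It is preferred in boundary value problems because, unlike the Riemann-Liouville form, it allows boundary conditions to be stated in terms of ordinary integer-order derivatives of the unknown function.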
An efficient Hardware implementation of the Peak Cancellation Crest Factor Reduction Algorithm
Bernini, Matteo
2016-01-01
An important component of the cost of a radio base station comes from the power amplifier driving the array of antennas. The cost can be split into capital and operational expenditure, due to the high design and realization costs and the low energy efficiency of the power amplifier, respectively. Both of these cost components are related to the crest factor of the input signal. In order to reduce both costs, it would be possible to lower the average power level of the transmitted signal, whereas in...
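The peak cancellation idea named in the title can be sketched as subtracting a scaled, smooth pulse at every sample exceeding a threshold, which lowers the crest factor with less spectral damage than hard clipping. The raised-cosine pulse, real-valued test signal and single-pass structure below are illustrative simplifications, not the thesis's hardware design:

```python
import math

def crest_factor_db(x):
    """Peak-to-RMS ratio of a real signal, in dB."""
    peak = max(abs(s) for s in x)
    rms = math.sqrt(sum(s * s for s in x) / len(x))
    return 20.0 * math.log10(peak / rms)

def peak_cancellation(x, threshold, half=2):
    """One pass of peak cancellation: subtract a raised-cosine pulse,
    scaled by the excess over the threshold, centred on every sample
    of x whose magnitude exceeds the threshold."""
    pulse = [0.5 * (1.0 + math.cos(math.pi * k / (half + 1)))
             for k in range(-half, half + 1)]        # peak value 1 at centre
    y = list(x)
    for i, s in enumerate(x):
        if abs(s) > threshold:
            excess = s - math.copysign(threshold, s)
            for k in range(-half, half + 1):
                if 0 <= i + k < len(y):
                    y[i + k] -= excess * pulse[k + half]
    return y

x = [0.2] * 32
x[10] = 1.5                      # a single large peak
y = peak_cancellation(x, threshold=0.5)
```

In a real transmitter the pulse is spectrally shaped to keep the cancellation energy inside the allocated band, and several passes are applied because cancelling one peak can regrow its neighbours.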
Efficient Irregular Wavefront Propagation Algorithms on Hybrid CPU-GPU Machines.
Teodoro, George; Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel
2013-04-01
We address the problem of efficient execution of a computation pattern, referred to here as the irregular wavefront propagation pattern (IWPP), on hybrid systems with multiple CPUs and GPUs. The IWPP is common in several image processing operations. In the IWPP, data elements in the wavefront propagate waves to their neighboring elements on a grid if a propagation condition is satisfied. Elements receiving the propagated waves become part of the wavefront. This pattern results in irregular data accesses and computations. We develop and evaluate strategies for efficient computation and propagation of wavefronts using a multi-level queue structure. This queue structure improves the utilization of fast memories in a GPU and reduces synchronization overheads. We also develop a tile-based parallelization strategy to support execution on multiple CPUs and GPUs. We evaluate our approaches on a state-of-the-art GPU accelerated machine (equipped with 3 GPUs and 2 multicore CPUs) using the IWPP implementations of two widely used image processing operations: morphological reconstruction and Euclidean distance transform. Our results show significant performance improvements on GPUs. The use of multiple CPUs and GPUs cooperatively attains speedups of 50× and 85× with respect to single core CPU executions for morphological reconstruction and Euclidean distance transform, respectively.
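The IWPP can be illustrated with queue-based grey-scale morphological reconstruction, one of the two operations evaluated. This sequential 1-D sketch shows the pattern (pop an element, try to propagate to neighbours, re-enqueue anything that changed); the paper's contribution is the multi-level queue and tiling that parallelize exactly this loop:

```python
from collections import deque

def reconstruct(marker, mask):
    """Grey-scale morphological reconstruction by dilation on a 1-D
    signal via FIFO wavefront propagation.  Requires marker <= mask
    pointwise; values spread from the marker but are capped by the
    mask, exactly the IWPP propagation condition."""
    n = len(mask)
    out = [min(m, k) for m, k in zip(marker, mask)]
    q = deque(range(n))                  # initial wavefront: every element
    while q:
        i = q.popleft()
        for j in (i - 1, i + 1):         # grid neighbours
            if 0 <= j < n:
                val = min(out[i], mask[j])
                if val > out[j]:         # propagation condition satisfied
                    out[j] = val
                    q.append(j)          # j joins the wavefront
    return out

marker = [0, 0, 2, 0, 0, 4, 0]
mask   = [0, 3, 3, 1, 5, 5, 0]
result = reconstruct(marker, mask)       # -> [0, 2, 2, 1, 4, 4, 0]
```

The irregularity is visible here: which elements are active, and for how long, depends entirely on the data, which is what makes an efficient GPU mapping non-trivial.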
Feature-based fast coding unit partition algorithm for high efficiency video coding
Directory of Open Access Journals (Sweden)
Yih-Chuan Lin
2015-04-01
Full Text Available High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra-high-definition videos. One of the important features in HEVC is the adoption of a quad-tree-based video coding structure, in which each incoming frame is represented as a set of non-overlapping coding tree blocks (CTBs) through variable-block-size prediction and coding. To do this, each CTB must be recursively partitioned into coding units (CUs), prediction units (PUs) and transform units (TUs) during the coding process, leading to a huge computational load in the coding of each video frame. This paper proposes to extract visual features in a CTB and use them to simplify the coding procedure by reducing the depth of the quad-tree partition for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined with simple Sobel edge detection, is used to constrain the possible maximum depth of the quad-tree partition of the CTB. With the constrained partition depth, the proposed method saves considerable encoding time. Experimental results with HM10.1 show that the average time saving is about 13.4% for an increase in encoded BD-rate of only 0.02%, less performance degradation than other similar methods.
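The edge-strength-to-depth idea can be sketched directly: compute a mean Sobel gradient magnitude over the block and cap the quad-tree depth accordingly. The thresholds and the depth mapping below are illustrative placeholders; the paper's actual measure and thresholds are not reproduced here:

```python
def sobel_strength(block):
    """Mean Sobel gradient magnitude (|gx| + |gy| approximation) over
    the interior pixels of a 2-D block of luma samples."""
    H, W = len(block), len(block[0])
    total, cnt = 0, 0
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            gx = (block[y-1][x+1] + 2*block[y][x+1] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y][x-1] - block[y+1][x-1])
            gy = (block[y+1][x-1] + 2*block[y+1][x] + block[y+1][x+1]
                  - block[y-1][x-1] - 2*block[y-1][x] - block[y-1][x+1])
            total += abs(gx) + abs(gy)
            cnt += 1
    return total / cnt

def max_partition_depth(block, t1=4.0, t2=32.0):
    """Cap the CU quad-tree depth by edge strength: smooth CTBs get a
    shallow cap so the encoder skips fine partitions entirely.
    Thresholds t1, t2 and the 3-level mapping are hypothetical."""
    s = sobel_strength(block)
    return 0 if s < t1 else (1 if s < t2 else 2)

flat = [[100] * 8 for _ in range(8)]                 # smooth CTB
edge = [[0] * 4 + [255] * 4 for _ in range(8)]       # strong vertical edge
```

The encoder would then evaluate CU sizes only down to the returned depth, which is where the reported time saving comes from: smooth blocks skip the expensive deep-partition search.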
Tryggvason, Ari; Melchiorre, Caterina; Johansson, Kerstin
2015-01-01
We present an algorithm developed for GIS applications in order to produce maps of landslide susceptibility in postglacial and glacial sediments in Sweden. The algorithm operates on detailed topographic and Quaternary deposit data. We compare our algorithm to two similar computational schemes based on a global visibility operator and a shadow-casting algorithm. We find that our algorithm produces more reliable results in the vicinity of stable material than the global visibility algorithm. We ...
Adaptive GDDA-BLAST: fast and efficient algorithm for protein sequence embedding.
Directory of Open Access Journals (Sweden)
Yoojin Hong
2010-10-01
Full Text Available A major computational challenge in the genomic era is annotating structure/function to the vast quantities of sequence information that are now available. This problem is illustrated by the fact that most proteins lack comprehensive annotations, even when experimental evidence exists. We previously theorized that embedded-alignment profiles (simply "alignment profiles" hereafter) provide a quantitative method that is capable of relating the structural and functional properties of proteins, as well as their evolutionary relationships. A key feature of alignment profiles lies in the interoperability of the data format (e.g., alignment information, physico-chemical information, genomic information, etc.). Indeed, we have demonstrated that the Position Specific Scoring Matrices (PSSMs) are an informative M-dimension that is scored by quantitatively measuring the embedded or unmodified sequence alignments. Moreover, the information obtained from these alignments is informative, and remains so even in the "twilight zone" of sequence similarity (<25% identity). Although our previous embedding strategy was powerful, it suffered from contaminating alignments (embedded AND unmodified) and high computational costs. Herein, we describe the logic and algorithmic process for a heuristic embedding strategy named "Adaptive GDDA-BLAST." Adaptive GDDA-BLAST is, on average, up to 19 times faster than, but has similar sensitivity to, our previous method. Further, data are provided to demonstrate the benefits of embedded-alignment measurements in terms of detecting structural homology in highly divergent protein sequences and isolating secondary structural elements of transmembrane and ankyrin-repeat domains. Together, these advances allow further exploration of the embedded-alignment data space within sufficiently large data sets to eventually induce relevant statistical inferences. We show that sequence embedding could serve as one of the vehicles for measurement of low