AN SVAD ALGORITHM BASED ON FNNKD METHOD
Chen Dong; Zhang Yan; Kuang Jingming
2002-01-01
The capacity of mobile communication systems can be improved by using Voice Activity Detection (VAD) technology. In this letter, a novel VAD algorithm, the SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with the traditional algorithm recommended by ITU G.729B in different situations. The simulation results show that the SVAD algorithm performs better.
Kernel method-based fuzzy clustering algorithm
Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping
2005-01-01
The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on data with diverse structures, such as non-hyperspherical data, noisy data, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm united with the kernel method. Experiments with synthetic and real data show that the FKCM clustering algorithm is more universal and can effectively perform unsupervised analysis of datasets with varied structures, in contrast to the FCM algorithm. It can be expected that kernel-based clustering is one of the important research directions of fuzzy clustering analysis.
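The kernelized update the abstract describes can be sketched as follows. This is a minimal 1-D illustration under stated assumptions: a Gaussian kernel, the kernel-induced distance d²(x, v) = 2(1 − K(x, v)), and prototypes kept in input space; the paper's exact FKCM derivation may differ.

```python
import math

def gauss_k(x, v, sigma=1.0):
    # Gaussian (Mercer) kernel on scalar points
    return math.exp(-((x - v) ** 2) / (2 * sigma ** 2))

def kfcm(data, c=2, m=2.0, iters=40, sigma=1.0):
    # Kernel fuzzy C-means sketch: standard FCM updates with the
    # Euclidean distance replaced by the kernel-induced distance.
    s = sorted(data)
    centers = [s[round(i * (len(s) - 1) / (c - 1))] for i in range(c)]
    for _ in range(iters):
        # membership update (usual FCM form, with squared kernel distance)
        U = []
        for x in data:
            d = [max(2 * (1 - gauss_k(x, v, sigma)), 1e-12) for v in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (1 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # prototype update, weighted by fuzzy membership and kernel value
        for i in range(c):
            num = sum((U[n][i] ** m) * gauss_k(x, centers[i], sigma) * x
                      for n, x in enumerate(data))
            den = sum((U[n][i] ** m) * gauss_k(x, centers[i], sigma)
                      for n, x in enumerate(data))
            centers[i] = num / den
    return sorted(centers)

centers = kfcm([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # two well-separated groups
```

On this toy dataset the two prototypes settle near the centers of the two groups, which is the behavior the kernel distance is meant to preserve for non-hyperspherical or noisy data.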
ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD
SONG Kaichen; NIE Xili
2006-01-01
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least square method. A weighted fusion algorithm, in which the relationship between the weight coefficients and the measurement noise is established, is proposed by giving attention to the correlation of the measurement noise. Then a simplified weighted fusion algorithm is deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm which can adjust the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements is presented. It is proved by simulation and experiment that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
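The uncorrelated-noise case mentioned in the abstract reduces to a standard weighted-least-squares result: weights proportional to the inverse noise variances. A minimal sketch (the correlated case in the paper would need the full noise covariance matrix instead):

```python
def fuse(measurements, variances):
    # Minimum-variance fusion for uncorrelated sensor noise:
    # weight each sensor by the inverse of its noise variance.
    inv = [1.0 / v for v in variances]
    w = [x / sum(inv) for x in inv]
    fused = sum(wi * mi for wi, mi in zip(w, measurements))
    fused_var = 1.0 / sum(inv)   # always <= min(variances)
    return fused, fused_var

# three sensors observing the same quantity with different noise levels
val, var = fuse([10.2, 9.8, 10.1], [0.04, 0.01, 0.02])
```

The fused variance 1/Σ(1/σᵢ²) is never larger than the best single sensor's variance, which is the precision gain the abstract claims for the multi-sensor system.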
New Iris Localization Method Based on Chaos Genetic Algorithm
Jia Dongli; Muhammad Khurram Khan; Zhang Jiashu
2005-01-01
This paper presents a new method based on the Chaos Genetic Algorithm (CGA) to localize the human iris in a given image. First, the iris image is preprocessed to estimate the range of the iris localization, and then CGA is used to extract the boundary of the iris. Simulation results show that the proposed algorithm is efficient and robust, and can achieve sub-pixel precision. Because Genetic Algorithms (GAs) can search in a large space, the algorithm does not need an accurate estimation of the iris center for subsequent localization, and hence lowers the requirement on the original iris image processing. On this point, the present localization algorithm is superior to Daugman's algorithm.
A NUMERICAL METHOD BASED ENCRYPTION ALGORITHM WITH STEGANOGRAPHY
Amartya Ghosh
2013-02-01
Nowadays many encryption algorithms have been proposed for network security. In this paper, a new cryptographic algorithm is proposed to improve the effectiveness of network security. Here a symmetric key concept, instead of a public key, is used to develop the encryption-decryption algorithm. Also, to add more security, the idea of a one-way function along with Newton's method is applied as a secret key, and Digital Signature Standard (DSS) technology is used to send the key. Moreover, steganography is used to hide the cipher within a picture in the encryption algorithm. In brief, a numerical method based secret key encryption-decryption algorithm is developed using steganography to enhance network security.
A network diagnostics method based on pattern recognition algorithms
Olizarovich, E. V.; Rodchenko, V. G.
2009-01-01
This report deals with the problem of designing and building a computer network diagnostic system. Diagnostic content problems are reviewed, as well as ways of solving them based on mathematical and computer modelling methods. A traffic-analysis-based diagnostic method is suggested for process statuses in a computer network. The method is based on algorithms from mathematical pattern recognition theory. To build a diagnostic system, a multi-level model building and verification arrange...
Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm
Hasal, Martin; Pospisil, Lukas; Nowakova, Jana
2016-06-01
An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend toward aesthetically pleasing, crossing-free layouts for planar graphs. The minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the minimized node. A disadvantage of the Kamada-Kawai embedder algorithm is its computational requirements, caused by searching for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until a local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
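The gradient-only step the abstract refers to can be sketched in isolation. This is a generic BB1 iteration on an arbitrary differentiable function, not the graph-energy objective of the embedder; the test objective below is an assumed toy quadratic.

```python
def bb_minimize(grad, x0, iters=100):
    # Barzilai-Borwein gradient descent: the step length comes from the
    # last two iterates, alpha = (s.s) / (s.y), so only gradients are
    # needed (no Hessian, unlike Newton-Raphson).
    x = list(x0)
    g = grad(x)
    alpha = 1e-3                      # small conservative first step
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]   # iterate difference
        y = [a - b for a, b in zip(g_new, g)]   # gradient difference
        sy = sum(a * b for a, b in zip(s, y))
        if abs(sy) < 1e-12:
            break                     # converged: steps have stalled
        alpha = sum(a * a for a in s) / sy      # BB1 step length
        x, g = x_new, g_new
    return x

# minimize f(x, y) = (x - 1)^2 + 2*(y + 2)^2, gradient given analytically
sol = bb_minimize(lambda p: [2 * (p[0] - 1), 4 * (p[1] + 2)], [0.0, 0.0])
```

For a node-by-node layout energy, `grad` would be the partial derivative of the Kamada-Kawai potential with respect to the single node being moved.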
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile under noise, because the proper poles are not easy to obtain at a low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm which aims to minimize the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.
A Novel Assembly Line Balancing Method Based on PSO Algorithm
Xiaomei Hu
2014-01-01
Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is a crucial question during the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. The model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data on the assembly line balancing problem are confirmed, and the precedence relations diagram is described. A double-objective optimization model based on takt time and the smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Through simulation experiments on examples, the feasibility and validity of the assembly line balancing method based on the PSO algorithm are proved.
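The PSO core of such a balancing scheme can be sketched generically. This is a hedged illustration: the particle encoding, its decoding into task assignments, and the combined takt-time/smoothness objective are specific to the paper, and are replaced here by an arbitrary real-valued objective.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Plain global-best particle swarm optimization over a box.
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    pval = [f(x) for x in X]
    gval = min(pval)
    g = list(pbest[pval.index(gval)])
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, list(X[i])
                if fx < gval:
                    gval, g = fx, list(X[i])
    return g, gval

random.seed(1)
best, val = pso(lambda x: sum((xi - 3) ** 2 for xi in x), dim=2, bounds=(-10, 10))
```

In the balancing setting, `f` would decode a particle into a station assignment respecting the precedence diagram and return the weighted sum of takt time and smoothness index.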
Research on Palmprint Identification Method Based on Quantum Algorithms
Hui Li
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm obtains a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.
An Improved Image Segmentation Algorithm Based on MET Method
Z. A. Abo-Eleneen
2012-09-01
Image segmentation is a basic component of many computer vision and pattern recognition systems. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, Kittler and Illingworth's minimum error thresholding (MET), clearly improves the image segmentation effect and is simple and easy to implement. However, it fails in the presence of skew and heavy-tailed class-conditional distributions, or if the histogram is unimodal or close to unimodal. The Fisher information (FI) measure is an important concept in statistical estimation theory and information theory. Employing the FI measure, an improved threshold image segmentation algorithm, an FI-based extension of MET, is developed. Compared with the MET method, the improved method can in general achieve more robust performance when the data for either class are skew and heavy-tailed.
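The baseline MET criterion that the paper extends can be sketched directly on a gray-level histogram. This is the classic Kittler-Illingworth form (two Gaussian class models per candidate threshold), not the FI-based extension, and the toy histogram is an assumption for illustration.

```python
import math

def met_threshold(hist):
    # Kittler-Illingworth minimum error thresholding: for each candidate
    # threshold T, fit a Gaussian to each side of the histogram and pick
    # the T minimizing
    #   J(T) = 1 + 2*(P1*ln s1 + P2*ln s2) - 2*(P1*ln P1 + P2*ln P2)
    total = sum(hist)
    best_t, best_j = None, float("inf")
    for t in range(1, len(hist) - 1):
        lo, hi = hist[:t], hist[t:]
        p1, p2 = sum(lo) / total, sum(hi) / total
        if p1 == 0 or p2 == 0:
            continue
        m1 = sum(i * h for i, h in enumerate(lo)) / sum(lo)
        m2 = sum((t + i) * h for i, h in enumerate(hi)) / sum(hi)
        v1 = sum(h * (i - m1) ** 2 for i, h in enumerate(lo)) / sum(lo)
        v2 = sum(h * (t + i - m2) ** 2 for i, h in enumerate(hi)) / sum(hi)
        if v1 <= 0 or v2 <= 0:
            continue  # degenerate class model, skip this split
        j = (1 + 2 * (p1 * math.log(math.sqrt(v1)) + p2 * math.log(math.sqrt(v2)))
               - 2 * (p1 * math.log(p1) + p2 * math.log(p2)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# bimodal toy histogram: background around bin 2, object around bin 7
t = met_threshold([1, 6, 10, 6, 1, 1, 6, 10, 6, 1])
```

On heavy-tailed or skewed class distributions the Gaussian fit above misestimates the class overlap, which is exactly the failure mode the FI-based extension targets.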
A constrained optimization algorithm based on the simplex search method
Mehta, Vivek Kumar; Dasgupta, Bhaskar
2012-05-01
In this article, a robust method is presented for handling constraints with the Nelder and Mead simplex search method, which is a direct search algorithm for multidimensional unconstrained optimization. The proposed method is free from the limitations of previous attempts that demand the initial simplex to be feasible or a projection of infeasible points to the nonlinear constraint boundaries. The method is tested on several benchmark problems and the results are compared with various evolutionary algorithms available in the literature. The proposed method is found to be competitive with respect to the existing algorithms in terms of effectiveness and efficiency.
An Emotion-Based Method to Perform Algorithmic Composition
Huang, Chih-Fang; Lin, En-Ju
2013-01-01
Generative music using algorithmic composition techniques has been developed over many years. However, it usually lacks an emotion-based mechanism to generate music with specific affective features. In this article, an automated music algorithm is performed based on Prof. Phil Winsor's "MusicSculptor" software, with proper emotion parameter mapping to drive the music content in a specific context, using various music parameter distributions with different probability controls, in order to...
Distortion Parameters Analysis Method Based on Improved Filtering Algorithm
ZHANG Shutuan
2013-10-01
In order to realize an accurate distortion parameter test of aircraft power supply systems, and satisfy the requirements of the corresponding equipment in the aircraft, a novel power parameter test system based on an improved filtering algorithm is introduced in this paper. The hardware of the test system is characterized by portable, high-speed data acquisition and processing; the software uses LabWindows/CVI as the development environment and adopts a pre-processing technique together with the filtering algorithm. Compared with the traditional filtering algorithm, the improved filtering algorithm helps to increase the test accuracy. The application shows that the test system with the improved filtering algorithm can realize accurate test results and reach the design requirements.
Color Image Segmentation Method Based on Improved Spectral Clustering Algorithm
Dong Qin
2014-08-01
Considering the high sparsity of image data and the problem of determining the number of clusters, we put forward a color image segmentation algorithm that combines semi-supervised machine learning and spectral graph theory. Through the study of related theories and methods of spectral clustering algorithms, we introduce the concept of information entropy to design a method which can automatically optimize the scale parameter value, thereby avoiding the instability of clustering results when the scale parameter is input manually. In addition, we mine the prior information available in large amounts of non-generic data and apply a semi-supervised algorithm to improve the clustering performance for rare classes. We also use the added labeled data to compute the similarity matrix and perform clustering through the FKCM algorithm. Through simulations on standard datasets and image segmentation, the experiments demonstrate that our algorithm overcomes the defects of traditional spectral clustering methods, which are sensitive to outliers, easily fall into local optima, and have a poor convergence rate.
Applied RCM2 Algorithms Based on Statistical Methods
Fausto Pedro García Márquez; Diego J. Pedregal
2007-01-01
The main purpose of this paper is to implement a system capable of detecting faults in railway point mechanisms. This is achieved by developing an algorithm that takes advantage of three empirical criteria simultaneously capable of detecting faults from records of measurements of force against time. The system is dynamic in several respects: the base reference data is computed using all the curves free from faults as they are encountered in the experimental data; the algorithm that uses the three criteria simultaneously may be applied in on-line situations as each new data point becomes available; and recursive algorithms are applied to filter noise from the raw data in an automatic way. Encouraging results are found in practice when the system is applied to a number of experiments carried out by an industrial sponsor.
Visual tracking method based on cuckoo search algorithm
Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei
2015-07-01
Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood parasitism of some cuckoo species, combined with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely the particle filter, mean-shift, PSO, ensemble, fragments, and compressive trackers, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
GPU-based parallel algorithm for blind image restoration using midfrequency-based methods
Xie, Lang; Luo, Yi-han; Bao, Qi-liang
2013-08-01
GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to solve the problems of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data intensiveness and data-parallel character with the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of the data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data, to ensure the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.
A New Genetic Algorithm Based on Niche Technique and Local Search Method
Anonymous
2001-01-01
The genetic algorithm has been widely used in many fields as an easy and robust global search and optimization method. In this paper, a new genetic algorithm based on the niche technique and a local search method is presented, in consideration of the inadequacies of the simple genetic algorithm. In order to prove the adaptability and validity of the improved genetic algorithm, optimization problems of multimodal functions with equal peaks, unequal peaks and complicated peak distributions are discussed. The simulation results show that, compared to other niching methods, this improved genetic algorithm has obvious advantages in many respects, such as convergence speed, solution accuracy, and ability of global optimization.
On the importance of graph search algorithms for DRGEP-based mechanism reduction methods
Niemeyer, Kyle E
2016-01-01
The importance of graph search algorithm choice to the directed relation graph with error propagation (DRGEP) method is studied by comparing basic and modified depth-first search, basic and R-value-based breadth-first search (RBFS), and Dijkstra's algorithm. By using each algorithm with DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only Dijkstra's algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal mechanisms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and equivalence ratio. Dijkstra's algorithm combined with a coefficient scaling approach is demonstrated to produce the most compact skeletal mechanism with a similar performance compared to larger skeletal mechanisms resulting from the other algorithms. The computational efficiency of each algorithm is also compared by applying the DRG...
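The role of Dijkstra's algorithm in DRGEP can be sketched concretely: R-values are maximum path products of direct interaction coefficients, and because each coefficient is in [0, 1], path products never increase, so Dijkstra's greedy expansion applies with max-product in place of min-sum. The small graph below is an assumed illustration, not a chemistry mechanism.

```python
import heapq

def r_values(graph, source):
    # Dijkstra-style search for DRGEP R-values: the R-value of each
    # species is the maximum over paths from `source` of the product of
    # edge coefficients. `graph` maps species -> {neighbor: coefficient}.
    R = {source: 1.0}
    heap = [(-1.0, source)]           # max-heap via negated values
    while heap:
        negr, u = heapq.heappop(heap)
        r = -negr
        if r < R.get(u, 0.0):
            continue                  # stale heap entry, skip
        for v, c in graph.get(u, {}).items():
            cand = r * c              # extend the best path to u by edge (u, v)
            if cand > R.get(v, 0.0):
                R[v] = cand
                heapq.heappush(heap, (-cand, v))
    return R

graph = {
    "A": {"B": 0.9, "C": 0.2},
    "B": {"C": 0.5, "D": 0.1},
    "C": {"D": 0.8},
}
R = r_values(graph, "A")   # e.g. R["C"] comes via A->B->C, not A->C
```

Because each node's final R-value is fixed the first time it is popped, the result is independent of the input ordering of species, which is the property the paper shows basic depth-first and breadth-first searches lack.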
A Steganographic Method Based on Integer Wavelet Transform & Genetic Algorithm
Preeti Arora
2014-05-01
The proposed system presents a novel approach to building a secure data hiding technique for steganography, using the integer wavelet transform along with a genetic algorithm. The prominent focus of the proposed work is to develop a design that resists RS analysis with the highest imperceptibility. An optimal pixel adjustment process is also adopted to minimize the difference error between the input cover image and the embedded image, and to maximize the hiding capacity with low distortion. The analysis is done for the mapping function, PSNR, image histogram, and the parameters of RS analysis. The simulation results highlight that the proposed security measure gives better results in comparison to prior research work conducted using wavelets and genetic algorithms.
Method of fault diagnosis in nuclear power plants based on genetic algorithm and knowledge base
Using a knowledge base, combining the genetic algorithm with classical probability, and addressing the characteristics of fault diagnosis in a nuclear power plant (NPP), the authors put forward a fault diagnosis method. In the process of fault diagnosis, this method associates the state of the NPP with the population in the GA and evolves the population to obtain the individual that fits the condition. Experiments on the 950 MW full-scope simulator at the Beijing NPP simulation training center show that the method is comparatively robust to imperfect expert knowledge, illusive signals and other problems.
Asymptotically Optimal Algorithm for Short-Term Trading Based on the Method of Calibration
V'yugin, Vladimir
2012-01-01
A trading strategy based on a natural learning process, which asymptotically outperforms any trading strategy from an RKHS (Reproducing Kernel Hilbert Space), is presented. In this process, the trader rationally chooses his gambles using predictions made by a randomized well-calibrated algorithm. Our strategy is based on Dawid's notion of calibration with more general changing checking rules and on a modification of Kakade and Foster's randomized algorithm for computing calibrated forecasts. We also use Vovk's method of defensive forecasting in RKHS.
A Novel Super Resolution Algorithm Using Interpolation and LWT Based Denoising Method
Sapan Naik, Asst. Professor; Viral Borisagar, Asst. Professor
2012-01-01
Image capturing techniques have some limitations, and because of them we often get low resolution (LR) images. Super resolution (SR) is a process by which we can generate a high resolution (HR) image from one or more LR images. Here we propose an SR algorithm which takes three shifted and noisy LR images and generates an HR image using a Lifting Wavelet Transform (LWT) based denoising method and a directional filtering and data fusion based edge-guided interpolation algorithm.
Space-borne antenna adaptive anti-jamming method based on gradient-genetic algorithm
Anonymous
2007-01-01
A novel space-borne antenna adaptive anti-jamming method based on the genetic algorithm (GA) combined with gradient-like reproduction operators is presented, to search for the best weights for pattern synthesis in radio frequency (RF). The method combines the GA's global search capability, which is not limited by the selection of the initial parameters, with the gradient algorithm's advantage of fast searching. The proposed method requires a smaller initial population and lower computational complexity; therefore, it is feasible to implement this method in real-time systems. Using the proposed algorithm, the designer can efficiently control both the main-lobe shape and the side-lobe level. Simulation results based on spot survey data show that the proposed algorithm is efficient and feasible.
A novel method to design S-box based on chaotic map and genetic algorithm
Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang
2012-01-01
The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing S-box is transformed to a Traveling Salesman Problem and a method for designing S-box based on chaos and genetic algorithm is proposed. Since the proposed method makes full use of the traits of chaotic map and evolution process, stronger S-box is obtained. The results of performance test show that the presented S-box has good cryptographic properties, which justify that the proposed algorithm is effective in generating strong S-boxes.
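The chaos part of such a construction can be sketched in isolation: iterating a chaotic map and ranking the trajectory values yields a bijective byte substitution. This shows only the candidate-generation step; the TSP formulation and the genetic refinement described in the abstract are not reproduced here, and the logistic-map parameters are assumptions.

```python
def chaotic_sbox(x0=0.37, mu=3.99):
    # Generate a candidate 8-bit S-box from a chaotic logistic map:
    # iterate x <- mu * x * (1 - x), then rank the 256 trajectory values.
    xs = []
    x = x0
    for _ in range(256):
        x = mu * x * (1 - x)
        xs.append(x)
    # ranking distinct chaotic values gives a permutation of 0..255,
    # i.e. a bijective substitution box
    order = sorted(range(256), key=lambda i: xs[i])
    sbox = [0] * 256
    for rank, idx in enumerate(order):
        sbox[idx] = rank
    return sbox

sbox = chaotic_sbox()
```

Bijectivity is the minimal structural requirement for an S-box; cryptographic quality (nonlinearity, differential uniformity) is what the genetic search in the paper is then used to improve.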
Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy
Tian, Yuling; Zhang, Hongxian
2016-01-01
For the purposes of information retrieval, users must find highly relevant documents from within a system (often quite a large one, comprised of many individual documents) based on an input query. Ranking the documents according to their relevance to user needs is a challenging endeavor and a hot research topic: there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning, which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others in accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions.
Song, Jiancai; Xue, Guixiang; Kang, Yanan
2016-01-01
In this paper, a novel method for selecting a navigation satellite subset for a global positioning system (GPS) based on a genetic algorithm is presented. This approach is based on minimizing the factors in the geometric dilution of precision (GDOP) using a modified genetic algorithm (MGA) with an elite conservation strategy, adaptive selection, adaptive mutation, and a hybrid genetic algorithm that can select a subset of the satellites represented by specific numbers in the interval (4 ∼ n) while maintaining position accuracy. A comprehensive simulation demonstrates that the MGA-based satellite selection method effectively selects the correct number of optimal satellite subsets using receiver autonomous integrity monitoring (RAIM) or fault detection and exclusion (FDE). This method is more adaptable and flexible for GPS receivers, particularly for those used in handset equipment and mobile phones.
Fast Matrix Computation Algorithms Based on Rough Attribute Vector Tree Method in RDSS
Anonymous
2005-01-01
The concepts of a Rough Decision Support System (RDSS) and an equivalence matrix are introduced in this paper. Based on a rough attribute vector tree (RAVT) method, two kinds of matrix computation algorithms, Recursive Matrix Computation (RMC) and Parallel Matrix Computation (PMC), are proposed, in which rule extraction, attribute reduction and data cleaning are finished synchronously. The algorithms emphasize the practicability and efficiency of rule generation. A case study of PMC is analyzed, and a comparison experiment with the RMC algorithm shows that it is feasible and efficient for data mining and knowledge discovery in RDSS.
Penalty Algorithm Based on Conjugate Gradient Method for Solving Portfolio Management Problem
Wang YaLin
2009-01-01
A new approach is proposed to reformulate the bi-objective optimization model of portfolio management into an unconstrained minimization problem, where the objective function is a piecewise quadratic polynomial. We present some properties of such an objective function. Then, a class of penalty algorithms based on the well-known conjugate gradient methods is developed to find the solution of the portfolio management problem. By implementing the proposed algorithm to solve real problems from the stock market in China, it is shown that this algorithm is promising.
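The conjugate gradient core such penalty algorithms build on can be sketched on its natural test case, a strictly convex quadratic. This shows only the plain CG iteration, under the assumption of a symmetric positive definite matrix; the paper's piecewise quadratic penalty objective would wrap additional logic around it.

```python
def conjugate_gradient(A, b, x0, iters=50, tol=1e-10):
    # Plain conjugate gradient for min 0.5*x'Ax - b'x, i.e. solving
    # Ax = b, with A symmetric positive definite.
    n = len(b)
    x = list(x0)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - y for i, y in enumerate(mv(x))]   # residual b - Ax
    p = list(r)                                   # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        if rs < tol:
            break
        Ap = mv(p)
        alpha = rs / sum(pi * a for pi, a in zip(p, Ap))   # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * a for ri, a in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # conjugate dir
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])   # exact after n = 2 iterations
```

In exact arithmetic CG terminates in at most n iterations, which is what makes it attractive as the inner solver of a penalty method.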
Cost Analysis of Algorithm Based Billboard Manger Based Handover Method in LEO satellite Networks
Suman Kumar Sikdar
2012-12-01
Nowadays LEO satellites play an important role in the global communication system. They have some advantages over GEO and MEO satellites, such as low power requirements, low end-to-end delay, and more efficient frequency spectrum utilization between satellites and spot beams, so in the future they can be used as a replacement for modern terrestrial wireless networks. But handovers occur more often due to the speed of LEO satellites. Different protocols have been proposed for successful handover, among which BMBHO is the most efficient, but it had a problem with the selection of the mobile node during handover. In our previous work we proposed an algorithm so that the connection can be established easily with the appropriate satellite. In this paper we evaluate the mobility management cost of the algorithm-based Billboard Manager Based Handover method (BMBHO). Simulation results show that the cost is lower than the Mobile IP cost of SeaHO-LEO and PatHO-LEO.
NONLINEAR FILTER METHOD OF GPS DYNAMIC POSITIONING BASED ON BANCROFT ALGORITHM
ZHANG Qin; TAO Ben-zao; ZHAO Chao-ying; WANG Li
2005-01-01
Because of the terms ignored after linearization, the extended Kalman filter (EKF) becomes a form of suboptimal gradient descent algorithm. A divergent tendency exists in the GPS solution when the filter equations are ill-posed, and the deviation in the estimation cannot be avoided. Furthermore, the true solution may be lost in pseudorange positioning because the linearized pseudorange equations are partial solutions. To solve the above problems in GPS dynamic positioning using the EKF, a closed-form Kalman filter method called the two-stage algorithm is presented for the nonlinear algebraic solution of GPS dynamic positioning, based on the global nonlinear least squares closed algorithm, Bancroft's numerical algorithm. The method separates the spatial parts from the temporal parts when processing the GPS filter problems, and solves the nonlinear GPS dynamic positioning, thus obtaining stable and reliable dynamic positioning solutions.
Numerical algorithm based on the PDE method for solution of the Fokker-Planck equation
This paper discusses a fast and accurate algorithm for the numerical solution of the Fokker-Planck equation (FPE) based on the PDE (Partial Differential Equation) method. PDE concepts and methods are largely used in computer simulation of fluid-dynamical systems. This method can be used for studying stochastic beam dynamics in one-dimensional phase space in a storage ring. The performance of the PDE method is calculated using the stochastic cooling process in the CR storage ring (FAIR, Germany).
Improved Algorithm for Weak GPS Signal Acquisition Based on Delay-accumulation Method
LI Yuanming
2016-01-01
A new improved algorithm is proposed to solve the problem that traditional algorithms are unable to capture GPS signals in a weak-signal environment. The algorithm is based on an analysis of the double block zero padding (DBZP) algorithm and adopts the delay-accumulation method to temporarily retain the operation results that are discarded in the DBZP algorithm. After a delay of 1 ms, the corresponding correlation results are obtained and superimposed on the temporarily retained results, and the coherent accumulation results are compared with the threshold value. The improved algorithm increases the data measurements by improving the utilization of the correlation results, at the cost of only a small increase in computation. Simulation results show that the improved algorithm raises the processing gain of the acquisition algorithm and is able to capture signals whose carrier-to-noise ratio (C/N0) is 17 dB-Hz, with a detection probability of 91%.
2D-3D Face Recognition Method Based on a Modified CCA-PCA Algorithm
Patrik Kamencay
2014-03-01
This paper presents a methodology for face recognition based on an information-theoretic approach to coding and decoding face images. We propose a 2D-3D face-matching method based on a principal component analysis (PCA) algorithm that uses canonical correlation analysis (CCA) to learn the mapping between a 2D face image and 3D face data. This method makes it possible to match a 2D face image with enrolled 3D face data. Our proposed fusion algorithm is based on the PCA method, which is applied to extract base features. PCA feature-level fusion requires the extraction of different features from the source data before the features are merged together. Experimental results on the TEXAS face image database have shown that the classification and recognition results based on the modified CCA-PCA method are superior to those based on the CCA method. Testing the 2D-3D face-match results gave a quite poor recognition rate of 55% for the CCA method, while the modified CCA method based on PCA-level fusion achieved a very good recognition score of 85%.
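The PCA feature-extraction step that the fusion scheme builds on can be sketched generically (a numpy sketch with random stand-in data, not the authors' pipeline; feature-level fusion would concatenate the vectors produced this way for the 2D and 3D modalities):

```python
import numpy as np

def pca_fit(X, k):
    """Learn a k-dimensional PCA basis from the rows of X
    (one flattened face image per row)."""
    mean = X.mean(axis=0)
    # SVD of the centered data: right singular vectors = principal axes
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]              # (mean, k x d projection matrix)

def pca_features(X, mean, W):
    """Project images onto the learned PCA basis."""
    return (X - mean) @ W.T

rng = np.random.default_rng(1)
faces = rng.normal(size=(40, 64))    # stand-in for 40 flattened images
mean, W = pca_fit(faces, k=8)
feats = pca_features(faces, mean, W)
print(feats.shape)                   # one 8-dimensional feature row per image
```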
A new Initial Centroid finding Method based on Dissimilarity Tree for K-means Algorithm
Kumar, Abhishek; Gupta, Suresh Chandra
2015-01-01
Cluster analysis is one of the primary data analysis techniques in data mining, and K-means is one of the most commonly used partitioning clustering algorithms. In the K-means algorithm, the resulting set of clusters depends on the choice of initial centroids. If we can find initial centroids that are coherent with the arrangement of the data, a better set of clusters can be obtained. This paper proposes a method based on a dissimilarity tree to find better initial centroids as well as more acc...
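The dependence on initial centroids is easy to isolate: a Lloyd-iteration loop that accepts the seeds as an argument lets any seeding strategy (dissimilarity-tree-based or otherwise) be plugged in. A minimal sketch with hypothetical toy data, not the paper's implementation:

```python
import numpy as np

def kmeans(X, centroids, iters=100):
    """Plain Lloyd iterations starting from the supplied centroids,
    so different seeding strategies can be compared."""
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep an empty cluster's centroid unchanged
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(len(centroids))])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# two well-separated blobs, seeding one centroid near each blob
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, C = kmeans(X, centroids=X[[0, 99]].copy())
print(C.round(2))
```

With seeds coherent with the data layout, the loop converges in a couple of iterations to one cluster per blob; seeding both centroids inside the same blob would split it instead.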
Prof. Vikas Gupta
2014-01-01
Due to the exponential increase in noise pollution, the demand for noise-control systems is also increasing. Two basic types of techniques are used for noise cancellation: active and passive. Passive techniques are ineffective for low-frequency noise, hence the growing demand for research and development work on active noise cancellation techniques. In this paper we introduce a new method for active noise cancellation: a transfer-function-based method that uses genetic and particle swarm optimization (PSO) algorithms. This method is simple and efficient for low-frequency noise cancellation. We analyze the performance of this method in the presence of white Gaussian noise and compare the results of the PSO and genetic algorithms. The two algorithms are suited to different environments, so we observe their performance in different fields. This paper presents a comparative study of the genetic and PSO algorithms with supporting results, explaining what the transfer-function method is, how it works, and its advantages over neural-network-based methods.
Arablouei, Reza; Doğançay, Kutluyıl; Werner, Stefan
2014-01-01
We develop a recursive total least-squares (RTLS) algorithm for errors-in-variables system identification utilizing the inverse power method and the dichotomous coordinate-descent (DCD) iterations. The proposed algorithm, called DCD-RTLS, outperforms the previously-proposed RTLS algorithms, which are based on the line-search method, with reduced computational complexity. We perform a comprehensive analysis of the DCD-RTLS algorithm and show that it is asymptotically unbiased as well as being ...
An effective trust-based recommendation method using a novel graph clustering algorithm
Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin
2015-10-01
Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues, such as the data sparsity and cold start problems caused by having too few ratings relative to the unknowns that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolving these problems. In this paper, we present a model-based collaborative filtering method that applies a novel graph clustering algorithm and also considers trust statements. In the proposed method, the problem space is first represented as a graph, and a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate user/item clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.
Urinary stone size estimation: a new segmentation algorithm-based CT method
Liden, Mats; Geijer, Haakan [Oerebro University, School of Health and Medical Sciences, Oerebro (Sweden); Oerebro University Hospital, Department of Radiology, Oerebro (Sweden); Andersson, Torbjoern [Oerebro University, School of Health and Medical Sciences, Oerebro (Sweden); Broxvall, Mathias [Oerebro University, Centre for Modelling and Simulation, Oerebro (Sweden); Thunberg, Per [Oerebro University, School of Health and Medical Sciences, Oerebro (Sweden); Oerebro University Hospital, Department of Medical Physics, Oerebro (Sweden)
2012-04-15
The size estimation in CT images of an obstructing ureteral calculus is important for the clinical management of a patient presenting with renal colic. The objective of the present study was to develop a reader-independent urinary calculus segmentation algorithm using well-known digital image processing steps and to validate the method against size estimations by several readers. Fifty clinical CT examinations demonstrating urinary calculi were included. Each calculus was measured independently by 11 readers. The mean value of their size estimations was used as validation data for each calculus. The segmentation algorithm consisted of interpolated zoom, binary thresholding and morphological operations. Ten examinations were used for algorithm optimisation and 40 for validation. Based on the optimisation results three segmentation method candidates were identified. Between the primary segmentation algorithm using cubic spline interpolation and the mean estimation by 11 readers, the bias was 0.0 mm, the standard deviation of the difference 0.26 mm and the Bland-Altman limits of agreement 0.0±0.5 mm. The validation showed good agreement between the suggested algorithm and the mean estimation by a large number of readers. The limit of agreement was narrower than the inter-reader limit of agreement previously reported for the same data. (orig.)
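The thresholding-plus-morphology core of such a segmentation pipeline can be sketched with plain numpy (the interpolated zoom is omitted, and the 2-D toy image and the 300 HU threshold are our assumptions, not the published parameters):

```python
import numpy as np

def shift_stack(mask):
    """All nine 3x3-neighbourhood shifts of a binary mask (edges padded with 0)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    return np.stack([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

def segment(img_hu, threshold=300.0):
    """Binary thresholding followed by a morphological opening
    (erosion then dilation) to suppress isolated bright voxels."""
    mask = img_hu >= threshold
    eroded = shift_stack(mask).all(axis=0)   # 3x3 erosion
    opened = shift_stack(eroded).any(axis=0) # 3x3 dilation
    return opened

img = np.zeros((20, 20))
img[5:10, 5:10] = 900.0        # a 5x5 "calculus"
img[15, 15] = 900.0            # an isolated bright voxel
seg = segment(img)
print(int(seg.sum()))
```

The opening removes the single bright voxel while leaving the connected calculus region, from which size estimates (e.g. the maximum diameter) can then be computed reader-independently.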
Blind restoration method of three-dimensional microscope image based on RL algorithm
Yao, Jin-li; Tian, Si; Wang, Xiang-rong; Wang, Jing-li
2013-08-01
Thin specimens of biological tissue appear three-dimensionally transparent under a microscope. Optical slice images can be captured by moving the focal plane to different locations in the specimen. The captured images have low resolution due to out-of-focus information from the planes adjacent to the focal plane. Traditional methods can remove the blur in the images to a certain degree, but they need to know the point spread function (PSF) of the imaging system accurately, and the accuracy of the PSF greatly influences the restoration result. In practice, it is difficult to obtain an accurate PSF of the imaging system. In order to restore the original appearance of the specimen when the imaging system parameters are unknown or when there is noise and spherical aberration in the system, a blind restoration method for three-dimensional microscopy based on the Richardson-Lucy (R-L) algorithm is proposed in this paper. On the basis of an exhaustive study of the two-dimensional R-L algorithm, and drawing on the theory of microscopy imaging and wavelet-transform denoising pretreatment, we extend the R-L algorithm to three-dimensional space. It is a nonlinear restoration method with a maximum-entropy constraint, and it does not need to know the PSF of the microscopy imaging system precisely to recover the blurred image. The image and the PSF converge to the optimum solutions through many alternating iterations and corrections. MATLAB simulations and experimental results show that the extended algorithm is better in visual indicators, peak signal-to-noise ratio and improved signal-to-noise ratio when compared with the PML algorithm, and the proposed algorithm can suppress noise, restore more details of the target, and increase image resolution.
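For reference, the classical (non-blind) Richardson-Lucy update that the paper extends looks like this in 1-D (a numpy sketch on synthetic data; the blind 3-D variant alternates an analogous multiplicative update for the PSF as well):

```python
import numpy as np

def conv_same(x, k):
    return np.convolve(x, k, mode="same")

def richardson_lucy(observed, psf, iters=200):
    """Classical 1-D R-L deconvolution: a multiplicative, positivity-
    preserving update est *= conv(observed / conv(est, psf), psf_flipped)."""
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(iters):
        blurred = conv_same(est, psf) + 1e-12   # avoid division by zero
        est = est * conv_same(observed / blurred, psf_flip)
    return est

# a sparse "specimen" blurred by a known Gaussian PSF
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
t = np.arange(-3, 4)
psf = np.exp(-t**2 / 2.0); psf /= psf.sum()
y = conv_same(x, psf)
rec = richardson_lucy(y, psf)
print(int(rec.argmax()), round(float(rec[20]), 3))
```

The iterations progressively re-concentrate the blurred intensity at the true spike locations while keeping the estimate non-negative.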
Because it is difficult to accurately determine the extraction steam enthalpy and the exhaust enthalpy of a steam turbine, the result calculated by the conventional equivalent enthalpy drop method for a PWR nuclear steam turbine is not accurate. This paper presents an improved algorithm for the equivalent enthalpy drop method of the PWR nuclear steam turbine to solve this problem, taking the secondary-circuit thermal system calculation of a 1000 MW PWR as an example. The results show that, compared with the design value, the error of the actual thermal efficiency of the steam turbine cycle obtained by the improved algorithm is within the allowable range. Since the improved method is based on the isentropic expansion process, the extraction steam enthalpy and the exhaust enthalpy can be determined accurately, which is more reasonable and accurate compared with the traditional equivalent enthalpy drop method. (authors)
Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases
郑建军; 刘玉树; 陈立潮
2003-01-01
The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others; moreover, the learning of BN structures by general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. This method uses a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two, provides mutual competition between them, and even adjusts the number of parents in recombination (crossover/recomposition) schemes. With the K2 algorithm, this method also optimizes the genetic operators and makes adequate use of domain knowledge. As a result, the method is able to find a global optimum of the BN topology, avoiding premature convergence to a local extremum. The experimental results demonstrated the effectiveness of the method, and the convergence of the SGA was discussed.
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
Method and application of wavelet shrinkage denoising based on genetic algorithm
无
2006-01-01
A genetic algorithm (GA) is introduced into noise-reduction methods based on wavelet-transform threshold shrinkage (WTS) and translation-invariant threshold shrinkage (TIS), so that the parameters used in WTS and TIS, such as the wavelet function, the number of decomposition levels, the choice of hard or soft thresholding, and the threshold value, can be selected automatically. The paper ends by comparing the two noise-reduction methods on the basis of their denoising performance, computation time, etc. The effectiveness of the methods introduced in this paper is validated by analysis results on simulated and real signals.
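The WTS building block being tuned by the GA can be sketched with a hand-rolled Haar transform (our own minimal illustration; the GA would search over the wavelet, the level count and the threshold, instead of the fixed universal threshold used here):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar transform: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, levels=3, thr=None):
    """Soft-threshold the detail coefficients; `thr` defaults to the
    universal threshold sigma*sqrt(2*log(n)) that a GA could instead tune."""
    n = len(x)
    approx, details = x, []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    if thr is None:
        sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(n))
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.3 * rng.normal(size=256)
out = denoise(noisy)
print(round(float(np.mean((out - clean) ** 2)), 4))
```

Translation-invariant shrinkage (TIS) would average this estimator over circular shifts of the input to suppress the blocking artifacts of plain WTS.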
A scalable method for parallelizing sampling-based motion planning algorithms
Jacobs, Sam Ade
2012-05-01
This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
An adaptive turbo-shaft engine modeling method based on PS and MRR-LSSVR algorithms
Wang Jiankang; Zhang Haibo; Yan Changkai; Duan Shujing; Huang Xianghua
2013-01-01
In order to establish an adaptive turbo-shaft engine model with high accuracy, a new modeling method based on a parameter selection (PS) algorithm and a multi-input multi-output recursive reduced least squares support vector regression (MRR-LSSVR) machine is proposed. Firstly, the PS algorithm is designed to choose the most reasonable inputs of the adaptive module. During this process, a wrapper criterion based on a least squares support vector regression (LSSVR) machine is adopted, which can not only reduce computational complexity but also enhance generalization performance. Secondly, with the input variables determined by the PS algorithm, a mapping model of engine parameter estimation is trained off-line using MRR-LSSVR, which has a satisfying accuracy within 5‰. Finally, based on a numerical simulation platform of an integrated helicopter/turbo-shaft engine system, an adaptive turbo-shaft engine model is developed and tested in a certain flight envelope. Under the condition of single or multiple engine components being degraded, many simulation experiments are carried out, and the simulation results show the effectiveness and validity of the proposed adaptive modeling method.
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
There are many emotion features. If all of these features are employed to recognize emotions, redundant features may be present; furthermore, the recognition results are unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution-analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features using the contribution-analysis algorithm of the NN. Cluster analysis is applied to assess the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
State Generation Method for Humanoid Motion Planning Based on Genetic Algorithm
Xuyang Wang
2008-11-01
A new approach to generating the original motion data for humanoid motion planning is presented in this paper, and a state generator is developed based on the genetic algorithm, which enables users to generate various motion states without using any reference motion data. By specifying various types of constraints, such as configuration constraints and contact constraints, the state generator can generate stable states that satisfy the constraint conditions for humanoid robots. To deal with the multiple constraints and inverse kinematics, the state generation is finally simplified as an optimization and search problem. In our method, we introduce a convenient mathematical representation for the constraints involved in the state generator, and solve the optimization problem with the genetic algorithm to acquire a desired state. To demonstrate the effectiveness and advantages of the method, a number of motion states are generated according to the requirements of the motion.
Two Novel On-policy Reinforcement Learning Algorithms based on TD(lambda)-methods
Wiering, M.A.; Hasselt, H. van
2007-01-01
This paper describes two novel on-policy reinforcement learning algorithms, named QV(lambda)-learning and the actor critic learning automaton (ACLA). Both algorithms learn a state value-function using TD(lambda)-methods. The difference between the algorithms is that QV-learning uses the learned valu
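The TD(lambda) value-function update shared by both algorithms can be sketched in tabular form (a generic sketch with accumulating eligibility traces on a toy three-state chain; not the authors' code):

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) state-value learning with accumulating
    eligibility traces; QV(lambda)-learning and ACLA both learn
    their V-function with an update of this form."""
    V = np.zeros(n_states)
    for episode in episodes:
        e = np.zeros(n_states)                  # eligibility traces
        for s, r, s_next in episode:            # (state, reward, next state)
            v_next = 0.0 if s_next is None else V[s_next]
            delta = r + gamma * v_next - V[s]   # TD error
            e[s] += 1.0                         # accumulating trace
            V += alpha * delta * e              # credit all visited states
            e *= gamma * lam                    # decay the traces
    return V

# deterministic chain 0 -> 1 -> 2 -> terminal, reward 1 on the last step
chain = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]
V = td_lambda([chain] * 100, n_states=3)
print(V.round(3))
```

For this chain the values converge to the discounted returns 0.81, 0.9 and 1.0; the trace decay gamma*lam controls how far each TD error propagates back along the trajectory.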
Aliasghar Baziar
2015-03-01
In order to handle large-scale problems, this study uses the shuffled frog leaping algorithm, an optimization method based on natural memetics, with a new two-phase modification that gives a better search of the problem space. The suggested algorithm is evaluated by comparison with some well-known algorithms on several benchmark optimization problems. The simulation results clearly show the superiority of this algorithm over other well-known methods in the area.
Method of transient identification based on a possibilistic approach, optimized by genetic algorithm
This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that optimizes the number of centroids of the classes representing the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets within the classes of a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. The interpretation of the subclasses as fuzzy sets and the possibilistic approach provide a heuristic for establishing influence zones of the centroids, allowing a 'don't know' answer for unknown transients, that is, those outside the training set. (author)
Convergence Analysis of Contrastive Divergence Algorithm Based on Gradient Method with Errors
Xuesi Ma; Xiaojie Wang
2015-01-01
Contrastive Divergence has become a common way to train Restricted Boltzmann Machines; however, its convergence has not been made clear yet. This paper studies the convergence of Contrastive Divergence algorithm. We relate Contrastive Divergence algorithm to gradient method with errors and derive convergence conditions of Contrastive Divergence algorithm using the convergence theorem of gradient method with errors. We give specific convergence conditions of Contrastive Divergence ...
A Method for Crude Oil Selection and Blending Optimization Based on Improved Cuckoo Search Algorithm
Yang Huihua; Ma Wei; Zhang Xiaofeng; Li Hu; Tian Songbai
2014-01-01
Refineries often need to find similar crude oils to replace a scarce crude oil in order to stabilize the feedstock properties. We first introduce the method for calculating blended crude properties, and then create a crude oil selection and blending optimization model based on crude oil property data. The model is a mixed-integer nonlinear program (MINLP) with constraints, and the objective is to maximize the similarity between the blended crude oil and the target crude oil. Furthermore, the model takes into account the selection of crude oils and their blending ratios simultaneously, transforming the search for similar crude oil into a crude oil selection and blending optimization problem. We applied the Improved Cuckoo Search (ICS) algorithm to solve the model. In simulations, ICS was compared with the genetic algorithm, the particle swarm optimization algorithm and the CPLEX solver. The results show that ICS has very good optimization efficiency. The blending solution can provide a reference for refineries to find similar crude oil, and the proposed method can also serve as a reference for the selection and blending optimization of other materials.
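A bare-bones cuckoo search with Levy flights (the standard algorithm that ICS improves upon) can be sketched as follows; the test function, bounds and parameters are our own illustration, not the paper's MINLP model:

```python
import math
import numpy as np

def cuckoo_search(f, dim, n_nests=15, iters=1000, pa=0.25, seed=7):
    """Minimal cuckoo search: Levy-flight moves around the current best,
    greedy replacement, and abandonment of a fraction pa of the worst nests."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(n) for n in nests])
    beta = 1.5
    # Mantegna's algorithm for Levy-stable step lengths
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    best = nests[fit.argmin()].copy()
    for _ in range(iters):
        u = rng.normal(0, sigma, (n_nests, dim))
        v = rng.normal(size=(n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)          # heavy-tailed Levy steps
        new = nests + 0.01 * step * (nests - best)
        new_fit = np.array([f(n) for n in new])
        better = new_fit < fit                      # keep only improvements
        nests[better], fit[better] = new[better], new_fit[better]
        # abandon the worst nests and build new random ones (discovery)
        worst = fit.argsort()[-max(1, int(pa * n_nests)):]
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fit[worst] = np.array([f(n) for n in nests[worst]])
        best = nests[fit.argmin()].copy()
    return best, float(fit.min())

best, val = cuckoo_search(lambda z: float(np.sum(z ** 2)), dim=4)
print(round(val, 4))
```

An "improved" variant in the spirit of ICS would adapt the step scaling and discovery probability over the iterations; handling the integer selection variables of the MINLP would additionally require rounding or a discrete encoding.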
Predicting Modeling Method of Ship Radiated Noise Based on Genetic Algorithm
Guohui Li
2016-01-01
Because the forming mechanism of underwater acoustic signals is complex, it is difficult to establish an accurate prediction model. In this paper, we propose a nonlinear predictive modeling method for ship-radiated noise based on a genetic algorithm. Three types of ship-radiated noise are taken as real underwater acoustic signals. First, a basic model framework is chosen. Second, each possible model is genetically encoded. Third, a model evaluation standard is established. Fourth, genetic-algorithm operations such as crossover, reproduction and mutation are designed. Finally, a prediction model of a real underwater acoustic signal is established by the genetic algorithm. Calculating the root mean square error and signal-to-error ratio of the underwater acoustic signal prediction model yields satisfactory results. The results show that the proposed method can establish an accurate prediction model with high prediction accuracy and may play an important role in the further processing of underwater acoustic signals, such as noise reduction, feature extraction and classification.
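The encode-evaluate-crossover-mutate loop described above can be illustrated by fitting a simple AR(2) "noise model" with a real-coded GA (entirely our own toy setup; the paper's model framework and coding scheme are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(8)

# synthetic "radiated noise": an AR(2) process with known coefficients
true_c = np.array([1.2, -0.5])
x = np.zeros(500)
for i in range(2, 500):
    x[i] = true_c[0] * x[i-1] + true_c[1] * x[i-2] + 0.1 * rng.normal()

def rmse(c):
    """Model evaluation standard: one-step prediction RMSE."""
    pred = c[0] * x[1:-1] + c[1] * x[:-2]
    return float(np.sqrt(np.mean((x[2:] - pred) ** 2)))

# a bare-bones real-coded GA: tournament selection, blend crossover, mutation
pop = rng.uniform(-2, 2, (40, 2))
for _ in range(60):
    fit = np.array([rmse(c) for c in pop])
    idx = [min(rng.integers(0, 40, size=2), key=lambda i: fit[i])
           for _ in range(40)]                        # size-2 tournaments
    parents = pop[idx]
    mates = parents[rng.permutation(40)]
    w = rng.random((40, 1))
    children = w * parents + (1 - w) * mates          # blend crossover
    children += 0.05 * rng.normal(size=children.shape)  # mutation
    child_fit = np.array([rmse(c) for c in children])
    parent_fit = np.array([rmse(c) for c in parents])
    pop = np.where((child_fit < parent_fit)[:, None], children, parents)
best = min(pop, key=rmse)
print(np.round(best, 2), round(rmse(best), 3))
```

The evolved coefficients approach the generating ones, and the RMSE approaches the 0.1 innovation noise floor.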
Research and Simulation of FECG Signal Blind Separation Algorithm Based on Gradient Method
Yu Chen
2012-08-01
Independent component analysis (ICA) is a signal separation and digital analysis technology developed in recent years. ICA is widely used because it does not need prior information about the signal, which has made it a hot research topic in the signal processing field. In this study, we first introduce the principle and meaning of blind source separation algorithms based on the gradient. Using the traditional natural gradient algorithm and the Equivariant Adaptive Separation via Independence (EASI) blind separation algorithm, ECG signals mixed with noise were effectively separated into the maternal electrocardiograph (MECG) signal, the fetal electrocardiograph (FECG) signal and the noise signal. The separation tests showed that the EASI algorithm can better separate the fetal ECG signal, and because the gradient algorithm is an online algorithm, it can be used for real-time clinical detection of the fetal ECG signal, which has important practical value and research significance.
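The natural-gradient update at the heart of such gradient-based blind separation can be sketched as follows (a numpy sketch separating two synthetic Laplacian sources; the mixing matrix, step size and tanh nonlinearity are our illustrative choices, not the ECG setup of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples = 2000
S = rng.laplace(size=(2, n_samples))     # super-Gaussian "sources"
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S                                # observed mixtures

W = np.eye(2)
mu = 0.01
for _ in range(3000):
    Y = W @ X
    # natural-gradient ICA update: W += mu * (I - g(y) y^T) W, g = tanh
    W += mu * (np.eye(2) - np.tanh(Y) @ Y.T / n_samples) @ W
Y = W @ X

def abs_corr(p, q):
    return abs(np.corrcoef(p, q)[0, 1])

# each recovered component should match one source (up to scale and order)
match = [[abs_corr(Y[i], S[j]) for j in range(2)] for i in range(2)]
print(np.round(match, 2))
```

An online (sample-by-sample) version of the same rule is what makes real-time separation feasible; EASI adds a whitening term to the bracket so the update is equivariant with respect to the mixing matrix.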
A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.
de Brito, Daniel M; Maracaja-Coutinho, Vinicius; de Farias, Savio T; Batista, Leonardo V; do Rêgo, Thaís G
2016-01-01
Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Nevertheless, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic islands prediction have been proposed. However, none of them are capable of predicting precisely the complete repertory of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distribution in different species. In this paper, we present a novel method to predict GIs that is built upon mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP--Mean Shift Genomic Island Predictor. Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also different novel unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions for this new methodology are available at http://msgip.integrativebioinformatics.me. PMID:26731657
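The mean shift procedure underlying MSGIP can be sketched generically (a numpy illustration on toy 2-D points; MSGIP derives its bandwidth from its own heuristic rather than the fixed value used here):

```python
import numpy as np

def mean_shift(X, bandwidth, iters=50):
    """Gaussian-kernel mean shift: every point climbs to a density mode;
    points that settle on the same mode form one cluster, so the number
    of clusters is never specified in advance."""
    modes = X.copy()
    for _ in range(iters):
        # kernel weights of all data points for each current mode position
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        modes = (w @ X) / w.sum(axis=1, keepdims=True)
    # group modes that coincide (within bandwidth/2) into cluster labels
    labels, centers = np.full(len(X), -1), []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
labels, centers = mean_shift(X, bandwidth=0.6)
print(len(centers))
```

Not needing the cluster count is exactly the property the abstract highlights; only the bandwidth remains, which MSGIP sets automatically.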
Adaptive Initialization Method Based on Spatial Local Information for k-Means Algorithm
Honghong Liao
2014-01-01
The k-means algorithm is a widely used clustering algorithm in the data mining and machine learning community. However, the initial guess of the cluster centers seriously affects the clustering result, meaning that improper initialization may not lead to a desirable clustering result. How to choose suitable initial centers is an important research issue for the k-means algorithm. In this paper, we propose an adaptive initialization framework based on spatial local information (AIF-SLI), which takes advantage of the local density of the data distribution. As it is difficult to estimate density correctly, we develop two approximate estimations: density by t-nearest neighborhoods (t-NN) and density by ϵ-neighborhoods (ϵ-Ball), leading to two implementations of the proposed framework. Our empirical study on more than 20 datasets shows promising performance of the proposed framework and indicates that it has several advantages: (1) it can find reasonable candidates for the initial centers effectively; (2) it can reduce the iterations of k-means methods significantly; (3) it is robust to outliers; and (4) it is easy to implement.
Xie, Li; Li, Guangyao; Xiao, Mang; Peng, Lei
2016-04-01
Various kinds of remote sensing image classification algorithms have been developed to adapt to the rapid growth of remote sensing data. Conventional methods typically have restrictions in either classification accuracy or computational efficiency. Aiming to overcome the difficulties, a new solution for remote sensing image classification is presented in this study. A discretization algorithm based on information entropy is applied to extract features from the data set and a vector space model (VSM) method is employed as the feature representation algorithm. Because of the simple structure of the feature space, the training rate is accelerated. The performance of the proposed method is compared with two other algorithms: back propagation neural networks (BPNN) method and ant colony optimization (ACO) method. Experimental results confirm that the proposed method is superior to the other algorithms in terms of classification accuracy and computational efficiency.
Jing Xu
2016-07-01
Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound directly collected from the industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho were discontinuous, many modified functions with continuous first and second order derivatives were presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced to the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.
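The wavelet-threshold core of the method (without the FOA threshold search) can be sketched with a one-level Haar transform and Donoho's soft threshold; the threshold value here is fixed by hand, whereas the paper tunes it with the improved FOA.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation (a) and detail (d) bands."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, thr):
    """Donoho's soft threshold: shrink coefficients toward zero."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def denoise(x, thr):
    a, d = haar_dwt(x)               # threshold only the detail band
    return haar_idwt(a, soft_threshold(d, thr))
```

A multi-level decomposition, as in the paper, simply repeats `haar_dwt` on the approximation band and thresholds each detail band with its own threshold.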
Matsui, Futoshi; Goriki, Shin'ichi; Shimizu, Yukio; Tomizawa, Hiromitsu; Kawato, Sakae; Kobayashi, Takao
2008-05-01
Arbitrary spatial beam shaping was demonstrated with a membrane electrostatic-actuator-type deformable mirror (DM). An automatic closed-loop system is needed to optimize beam shapes such as a flattop. A well-characterized short-pulse laser beam is widely required, for example for a photocathode RF gun or for microscopic processing. We propose a new optimization method based on a genetic algorithm (GA) for spatial shaping. A membrane-type DM is driven by electrostatic attraction, and the displacement of the membrane surface is a quadratic function of the applied electrode voltages. We prepare discrete electrode voltages that linearly change the displacement as the genes of the initial population in the GA. Using uniform crossover without mutation in this method, we can shape an arbitrary spatial beam into a quasi-flattop.
Li Honglin
2009-03-01
Full Text Available Abstract Background Conformation generation is a ubiquitous problem in molecular modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and systematic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolutionary algorithm (MOEA). Results The conformational perturbation is subjected to evolutionary operations on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto-optimal conformers energy-favoured as well as evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations, which have been observed to impact molecular bioactivity (J Comput-Aided Mol Des 2002, 16: 105–112). Testing the performance of Cyndi against a test set consisting of 329 small molecules reveals an average minimum RMSD of 0.864 Å to the corresponding bioactive conformations, indicating Cyndi is highly competitive against other conformation generation methods. Meanwhile, the high-speed performance (0.49 ± 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A copy of the precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. Conclusion On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms
Sheta, B.; Elhabiby, M.; Sheimy, N.
2012-07-01
A robust scale- and rotation-invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and the real-time captured images are used to georeference (i.e. estimate six transformation parameters - three rotations and three translations) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used to aid the INS integration Kalman filter as a Coordinate UPdaTe (CUPT). It is critical for the collinearity equations to use a proper optimization algorithm to ensure accurate and fast convergence of the georeferencing parameters with the minimum number of conjugate points necessary for convergence. Fast convergence to a global minimum requires a non-linear approach to overcome the high degree of non-linearity that will exist in the case of large oblique images (i.e. large rotation angles). The main objective of this paper is to investigate the estimation of the georeferencing parameters necessary for VBN of aerial vehicles in the case of large rotation angles, which lead to non-linearity of the estimation model. In this case, traditional least squares approaches will fail to estimate the georeferencing parameters because of the expected non-linearity of the mathematical model. Five different nonlinear least squares methods are presented for estimating the transformation parameters. Four gradient-based nonlinear least squares methods (trust region, trust region dogleg, Levenberg-Marquardt, and quasi-Newton line search) and one non-gradient method (Nelder-Mead simplex direct search) are employed for the estimation of the six transformation parameters. The research was done on simulated data and the results showed that the Nelder-Mead method failed because of its dependency on the objective function without any derivative information. Although, the tested gradient methods
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed-point equation solved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristic and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and in quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Chang Liu
2015-01-01
Full Text Available Path planning is a classic optimization problem which can be solved by many optimization algorithms. The complexity of three-dimensional (3D) path planning for autonomous underwater vehicles (AUVs) requires the optimization algorithm to have a fast convergence speed. This work provides a new 3D path planning method for AUVs using a modified firefly algorithm. In order to solve the problem of slow convergence of the basic firefly algorithm, an improved method was proposed. In the modified firefly algorithm, the parameters of the algorithm and the random movement steps can be adjusted during the operating process. At the same time, an autonomous flight strategy is introduced to avoid instances of invalid flight. An excluding operator was used to improve the effect of obstacle avoidance, and a contracting operator was used to enhance the convergence speed and the smoothness of the path. The performance of the modified firefly algorithm and the effectiveness of the 3D path planning method were demonstrated through a varied set of experiments.
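A minimal version of the basic firefly algorithm that the paper modifies might look as follows; the parameter values and the shrinking random step are illustrative choices, and none of the paper's operators (excluding, contracting, autonomous flight) are included.

```python
import numpy as np

def firefly_minimize(f, bounds, n=20, iters=100, beta0=1.0, gamma=0.05,
                     alpha=0.2, seed=0):
    """Basic firefly algorithm: every firefly moves toward each brighter
    (lower-cost) one with attractiveness beta0*exp(-gamma*r^2), plus a
    random step whose size alpha shrinks over the iterations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    cost = np.array([f(p) for p in x])
    for t in range(iters):
        a = alpha * 0.97 ** t                  # shrinking random step
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:          # j is brighter: move i toward j
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] = np.clip(x[i] + beta * (x[j] - x[i])
                                   + a * rng.normal(size=len(lo)), lo, hi)
                    cost[i] = f(x[i])
    best = int(np.argmin(cost))
    return x[best], cost[best]
```

The slow convergence the paper addresses shows up here as the fixed `alpha`/`gamma` trade-off; its modification adjusts these during the run.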
Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method
Nemec, Marian; Aftosmis, Michael J.
2004-01-01
Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, which results in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented
A User Differential Range Error Calculating Algorithm Based on Analytic Method
SHAO Bo; LIU Jiansheng; ZHAO Ruibin; HUANG Zhigang; LI Rui
2011-01-01
To enhance the integrity, an analytic method (AM) which has less execution time is proposed to calculate the user differential range error (UDRE) used by the user to detect the potential risk. An ephemeris and clock correction calculation method is introduced first. It shows that the most important part of computing the UDRE is to find the worst user location (WUL) in the service volume. Then, a UDRE algorithm using the AM is described to solve this problem. By using the covariance matrix of the error vector, the search for the WUL is converted into an analytic geometry problem. The location of the WUL can be obtained directly by mathematical derivation. Experiments are conducted to compare the performance of the proposed AM algorithm with the exhaustive grid search (EGS) method used at the master station. The results show that the correctness of the AM algorithm can be proved by the EGS method and that the AM algorithm can reduce the calculation time by more than 90%. The computational complexity of the proposed algorithm is lower than that of EGS. Thereby this algorithm is more suitable for computing the UDRE at the master station.
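The analytic idea, replacing a grid search over candidate directions by a direct computation on the error covariance, can be illustrated on a toy covariance matrix. This shows only the linear-algebra core (worst projected error over unit directions), not the paper's full WUL search over a service volume.

```python
import numpy as np

def worst_direction(cov):
    """Largest 1-sigma projected error over all unit directions u:
    max_u sqrt(u' C u) equals the square root of the largest eigenvalue
    of C, attained along the corresponding eigenvector - an analytic
    answer that needs no grid search over candidate directions."""
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return float(np.sqrt(w[-1])), v[:, -1]
```

A grid search over many unit vectors can only approach this value from below, which is the sense in which the analytic method both bounds and speeds up the exhaustive approach.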
Two Novel On-policy Reinforcement Learning Algorithms based on TD(lambda)-methods
Wiering, M.A.; Hasselt, H. van
2007-01-01
This paper describes two novel on-policy reinforcement learning algorithms, named QV(lambda)-learning and the actor critic learning automaton (ACLA). Both algorithms learn a state value-function using TD(lambda)-methods. The difference between the algorithms is that QV-learning uses the learned value function and a form of Q-learning to learn Q-values, whereas ACLA uses the value function and a learning automaton-like update rule to update the actor. We describe several possible advantages of...
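The TD(lambda) value-function learning shared by both algorithms can be sketched on a classic random-walk chain. This shows only plain state-value TD(lambda) with accumulating eligibility traces, not the QV(lambda) or ACLA updates themselves.

```python
import random

def td_lambda(episodes, n_states=5, lam=0.8, alpha=0.05, gamma=1.0, seed=0):
    """Tabular TD(lambda) with accumulating eligibility traces on a
    random-walk chain: states 0..n-1, start in the middle, terminate at
    either end, reward 1 only at the right end."""
    rng = random.Random(seed)
    V = [0.0] * n_states
    for _ in range(episodes):
        e = [0.0] * n_states                   # eligibility traces
        s = n_states // 2
        while True:
            s2 = s + rng.choice((-1, 1))
            done = s2 == 0 or s2 == n_states - 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            delta = r + (0.0 if done else gamma * V[s2]) - V[s]
            e[s] += 1.0
            for i in range(n_states):          # credit all recently visited states
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam
            if done:
                break
            s = s2
    return V
```

QV(lambda)-learning would add a Q-table updated toward this V; ACLA would instead update an actor from the sign of `delta`.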
New Tabu Search based global optimization methods outline of algorithms and study of efficiency.
Stepanenko, Svetlana; Engels, Bernd
2008-04-15
The study presents two new nonlinear global optimization routines: the Gradient Only Tabu Search (GOTS) and the Tabu Search with Powell's Algorithm (TSPA). They are based on the Tabu Search strategy, which tries to determine the global minimum of a function by the steepest descent-mildest ascent strategy. The new algorithms are explained and their efficiency is compared with that of other approaches by determining the global minima of various well-known test functions of varying dimensionality. These tests show that for most test functions the GOTS possesses much faster convergence than global optimizers taken from the literature. The efficiency of the TSPA is comparable to that of genetic algorithms. PMID:17910004
A Semi-Supervised WLAN Indoor Localization Method Based on ℓ1-Graph Algorithm
Liye Zhang; Lin Ma; Yubin Xu
2015-01-01
For indoor location estimation based on received signal strength (RSS) in wireless local area networks (WLAN), a large number of RSS samples should be collected in the offline phase in order to reduce the influence of noise on positioning accuracy. Therefore, collecting training data with positioning information is time-consuming, which becomes the bottleneck of WLAN indoor localization. In this paper, the traditional semi-supervised learning methods based on k-NN and ε-NN graphs for reducing the collection workload of the offline phase are analyzed, and the results show that the k-NN and ε-NN graphs are sensitive to data noise, which limits the performance of semi-supervised learning WLAN indoor localization systems. Aiming at the above problem, this paper proposes an ℓ1-graph-algorithm-based semi-supervised learning (LG-SSL) indoor localization method in which the graph is built by an ℓ1-norm algorithm. In our system, the unlabeled data are first labeled using LG-SSL and the labeled data to build the radio map in the offline training phase, and then LG-SSL is used to estimate the user's location in the online phase. Extensive experimental results show that, benefiting from its robustness to noise and the sparsity of the ℓ1-graph, LG-SSL exhibits superior performance by effectively reducing the collection workload in the offline phase and improving localization accuracy in the online phase.
Automatic PET-CT Image Registration Method Based on Mutual Information and Genetic Algorithms
Martina Marinelli; Vincenzo Positano; Francesco Tucci; Danilo Neglia; Luigi Landini
2012-01-01
Hybrid PET/CT scanners can simultaneously visualize coronary artery disease as revealed by computed tomography (CT) and myocardial perfusion as measured by positron emission tomography (PET). Manual registration is usually required in clinical practice to compensate for spatial mismatch between datasets. In this paper, we present a registration algorithm that is able to automatically align PET/CT cardiac images. The algorithm is based on mutual information (MI) as the registration metric and on genetic ...
Jie-Sheng Wang; Chen-Xu Ning
2015-01-01
In order to improve the accuracy and timeliness of all kinds of information in the cash business, and to solve the problem of low accuracy and stability in the data linkage between cash inventory forecasting and cash management information in commercial banks, a hybrid learning algorithm is proposed based on the adaptive population activity particle swarm optimization (APAPSO) algorithm combined with the least squares method (LMS) to optimize the adaptive network-based fuzzy inference s...
LI Zicheng; SUN Yukun
2006-01-01
Considering the detection principle that "when load current is periodic current, the integral in a cycle for absolute value of load current subtracting fundamental active current is the least", harmonic current real-time detection methods for power active filter are proposed based on direct computation, simple iterative algorithm and optimal iterative algorithm. According to the direct computation method, the amplitude of the fundamental active current can be accurately calculated when load current is placed in stable state. The simple iterative algorithm and the optimal iterative algorithm provide an idea about judging the state of load current. On the basis of the direct computation method, the simple iterative algorithm, the optimal iterative algorithm and precise definition of the basic concepts such as the true amplitude of the fundamental active current when load current is placed in varying state, etc., the double linear construction idea is proposed in which the amplitude of the fundamental active current at the moment of the sample is accurately calculated by using the first linear construction and the condition which disposes the next sample is created by using the second linear construction. On the basis of the double linear construction idea, a harmonic current real-time detection method for power active filter is proposed based on the double linear construction algorithm. This method has the characteristics of small computing quantity, fine real-time performance, being capable of accurately calculating the amplitude of the fundamental active current and so on.
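The direct computation method for the fundamental active current amplitude amounts to projecting the periodic load current onto the in-phase fundamental over whole cycles; the harmonic (plus reactive) current is the remainder. A numeric sketch with a hypothetical load current (waveform values chosen for illustration):

```python
import numpy as np

N = 2000                              # samples over one full cycle
f0 = 50.0                             # fundamental frequency (Hz)
T = 1.0 / f0
t = np.arange(N) * T / N              # one period, endpoint excluded
omega = 2.0 * np.pi * f0

def fundamental_active(i_load):
    """Direct computation: project the periodic load current onto the
    in-phase fundamental sin(wt) over a whole cycle. For a uniformly
    sampled periodic signal the rectangle rule is exact here."""
    Ip = (2.0 / N) * np.sum(i_load * np.sin(omega * t))
    return Ip, i_load - Ip * np.sin(omega * t)

# hypothetical load: 10 A active fundamental + 1 A reactive + 2 A 3rd harmonic
i = 10 * np.sin(omega * t) + 1 * np.cos(omega * t) + 2 * np.sin(3 * omega * t)
Ip, residue = fundamental_active(i)
```

This is only the stable-state direct computation; the paper's iterative and double-linear-construction schemes address the varying-load case, which this sketch does not.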
Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding of the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable a robot to move actively even if the relative positions between the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender-bar placement task.
Alexander B. Bakulev; Marina A. Bakuleva; Svetlana B. Avilkina
2012-11-01
Full Text Available This article deals with mathematical models and algorithms that provide portability of the parallel representation of sequential programs in a high-level language. It presents a formal model of operating-environment process management, based on the proposed model of parallel program representation, which describes the computation process on multi-core processors.
Highlights: • The BBO algorithm is capable of finding a suitably optimized loading pattern. • BBO seems to reach a better final parameter value than PSO. • PSO exhibits faster convergence characteristics than BBO. • Even with the same initial random patterns, BBO is found to outperform PSO. - Abstract: In this investigation, we developed a new optimization method, i.e., biogeography based optimization (BBO), for the loading pattern optimization problem of pressurized water reactors. BBO is a novel stochastic method based on the science of biogeography, the study of the geographical distribution of biological organisms. BBO makes use of a migration operator to share information between the problem solutions: the solutions are called habitats and the sharing of features is called migration. For the evaluation of the proposed method, we applied a multi-objective fitness function, i.e., the maximization of reactivity at BOC and the flattening of the power distribution, which are achieved efficiently and simultaneously. The neutronic calculation is done by the CITATION and WIMS codes
A New Tool Wear Monitoring Method Based on Ant Colony Algorithm
Qianjian Guo
2013-06-01
Full Text Available Tool wear is a major contributor to the dimensional errors of a workpiece in precision machining, and it plays an important role in industry for higher productivity and product quality. Tool wear monitoring is an effective way to predict tool wear loss in the milling process. In this paper, a new bionic prediction model is presented based on the generation mechanism of tool wear loss. Different milling conditions are taken as the input variables and tool wear loss as the output variable; a neural network method is proposed to establish the mapping relation, and an ant colony algorithm is used to train the weights of the BP neural network during tool wear modeling. Finally, a real-time tool wear loss estimator is developed based on the ant colony algorithm, and experiments measuring tool wear with the estimator have been conducted on a milling machine. The experimental and estimated results are found to be in satisfactory agreement, with an average error lower than 6%.
Unsupervised classification algorithm based on EM method for polarimetric SAR images
Fernández-Michelli, J. I.; Hurtado, M.; Areta, J. A.; Muravchik, C. H.
2016-07-01
In this work we develop an iterative classification algorithm using complex Gaussian mixture models for polarimetric complex SAR data. It is an unsupervised algorithm which does not require training data or an initial set of classes. Additionally, it determines the model order from the data, which allows representing the data structure with minimum complexity. The algorithm consists of four steps: initialization, model selection, refinement and smoothing. After a simple initialization stage, the EM algorithm is iteratively applied in the model selection step to compute the model order and an initial classification for the refinement step. The refinement step uses Classification EM (CEM) to reach the final classification, and the smoothing stage improves the results by means of non-linear filtering. The algorithm is applied to both simulated and real Single Look Complex data of the EMISAR mission and compared with the Wishart classification method. We use the confusion matrix and the kappa statistic to make the comparison for simulated data whose ground truth is known. We apply the Davies-Bouldin index to compare both classifications for real data. The results obtained for both types of data validate our algorithm and show that its performance is comparable to Wishart's in terms of classification quality.
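The EM step at the heart of the model-selection and refinement stages can be sketched for the simplest case, a one-dimensional Gaussian mixture; the paper's complex Gaussian models for polarimetric SAR and the CEM variant are not reproduced here.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """EM for a 1-D Gaussian mixture: the E-step computes soft
    responsibilities, the M-step re-estimates weights, means, and
    variances from them."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        p = pi / sigma * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma, r.argmax(axis=1)
```

CEM, used in the paper's refinement step, differs only in hardening `r` to 0/1 assignments before the M-step.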
Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors
Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don
2016-01-01
Full Text Available Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because they are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern and then traces the next contour pixel using the previous pixel's type. It can therefore classify contour pixels as straight line, inner corner, outer corner, or inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the locally minimal path from the contour case. In addition, the proposed algorithm can compress the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from these data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632
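A classic pixel-following scheme of the kind this paper improves on is Moore-neighbor tracing with backtracking. The sketch below returns the ordered outer contour of a single binary object; it uses a simplified stop criterion and omits the paper's pixel-type classification and compression.

```python
def trace_contour(grid):
    """Moore-neighbor contour tracing with backtracking on a binary grid
    (1 = object). Starts from the first object pixel in raster order
    (entered from the west) and walks clockwise until the start pixel
    is revisited. Simplified stop criterion; single-object grids only."""
    h, w = len(grid), len(grid[0])
    # Moore neighborhood offsets in clockwise order, starting at west
    ring = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    start = next((r, c) for r in range(h) for c in range(w) if grid[r][c])
    contour, cur = [start], start
    back = (start[0], start[1] - 1)            # pixel we "came from"
    for _ in range(4 * h * w):                 # safety bound
        k = ring.index((back[0] - cur[0], back[1] - cur[1]))
        for step in range(1, 9):               # scan clockwise from backtrack
            dr, dc = ring[(k + step) % 8]
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < h and 0 <= c < w and grid[r][c]:
                pr, pc = ring[(k + step - 1) % 8]
                back = (cur[0] + pr, cur[1] + pc)
                cur = (r, c)
                break
        else:
            return contour                     # isolated pixel
        if cur == start:
            return contour
        contour.append(cur)
    return contour
```

The paper's speedup comes from replacing this full clockwise scan with a lookup keyed on the previous pixel's type.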
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor keff was obtained by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
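The sampling-based propagation loop itself is simple; a sketch with a hypothetical linear model response (the model, input distribution, and numbers below are illustrative, not the MCNPX setup):

```python
import numpy as np

def propagate(model, mean, sd, n, seed=0):
    """Sampling-based uncertainty propagation: draw n samples of the
    uncertain input, evaluate the model on each, and report the output
    mean, the propagated standard deviation, and the standard error of
    that standard deviation (normal approximation)."""
    rng = np.random.default_rng(seed)
    y = np.array([model(v) for v in rng.normal(mean, sd, n)])
    out_sd = y.std(ddof=1)
    return y.mean(), out_sd, out_sd / np.sqrt(2.0 * (n - 1))

# hypothetical response: keff rises linearly with the poison radius (cm)
keff_mean, keff_sd, sd_err = propagate(
    lambda r: 1.0 + 0.02 * (r - 1.0), mean=1.0, sd=0.05, n=1000)
```

The sample-size versus particle-count trade-off the abstract reports corresponds here to `n` versus the per-sample noise that a real transport `model` would add.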
Luman Zhao
2015-01-01
Full Text Available A thrust allocation method was proposed based on a hybrid optimization algorithm to efficiently and dynamically position a semisubmersible drilling rig. That is, the thrust allocation was optimized to produce the generalized forces and moment required while at the same time minimizing the total power consumption, under the premise that forbidden zones should be taken into account. An optimization problem was mathematically formulated to provide the optimal thrust allocation by introducing the corresponding design variables, objective function, and constraints. A hybrid optimization algorithm consisting of a genetic algorithm and a sequential quadratic programming (SQP) algorithm was selected and used to solve this problem. The proposed method was evaluated by applying it to a thrust allocation problem for a semisubmersible drilling rig. The results indicate that the proposed method can be used as part of a cost-effective strategy for thrust allocation of the rig.
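When forbidden zones and power nonlinearity are ignored, the core allocation subproblem has a closed-form minimum-norm solution via the pseudoinverse, a common starting point before GA/SQP-style refinement. A sketch with hypothetical thruster positions:

```python
import numpy as np

def allocation_matrix(pos):
    """B maps stacked per-thruster forces (Fx_i, Fy_i) to the total
    surge force, sway force, and yaw moment (Fx, Fy, Mz)."""
    B = np.zeros((3, 2 * len(pos)))
    for i, (x, y) in enumerate(pos):
        B[0, 2 * i] = 1.0            # Fx_i contributes to surge
        B[1, 2 * i + 1] = 1.0        # Fy_i contributes to sway
        B[2, 2 * i] = -y             # Mz = x*Fy_i - y*Fx_i
        B[2, 2 * i + 1] = x
    return B

def allocate(pos, tau):
    """Minimum-norm allocation: u = pinv(B) @ tau satisfies B u = tau
    with the smallest total squared thrust (a quadratic power proxy)."""
    return np.linalg.pinv(allocation_matrix(pos)) @ np.asarray(tau, float)

# hypothetical azimuth-thruster positions (m) and demanded (Fx, Fy, Mz)
pos = [(-20.0, -10.0), (-20.0, 10.0), (20.0, -10.0), (20.0, 10.0)]
u = allocate(pos, [100.0, 50.0, 200.0])
```

The paper's hybrid GA+SQP handles what this sketch cannot: forbidden azimuth zones and a realistic, nonquadratic power model.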
Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method
Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen
2008-01-01
In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…
NUMERICAL METHOD BASED ON HAMILTON SYSTEM AND SYMPLECTIC ALGORITHM TO DIFFERENTIAL GAMES
Anonymous
2006-01-01
The resolution of differential games often involves the difficult two-point boundary value (TPBV) problem, so the linear quadratic differential game is recast as a Hamilton system. For a Hamilton system, a symplectic-geometry algorithm has the merit of reproducing the dynamic structure of the Hamilton system and preserving the measure of the phase plane. From the viewpoint of Hamilton systems, the symplectic characteristics of the linear quadratic differential game were investigated; as a first attempt, a symplectic Runge-Kutta algorithm was presented for the resolution of the infinite-horizon linear quadratic differential game. A numerical example was given, and the result illustrates the feasibility of this method. At the same time, it embodies the fine energy-conservation characteristics of the symplectic algorithm.
An infrared small target detection algorithm based on high-speed local contrast method
Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao
2016-05-01
Small-target detection in infrared imagery with a complex background is always an important task in remote sensing fields. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed. However, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the Human Visual System (HVS), is proposed to balance these detection capabilities. The first layer uses a high-speed simplified local contrast method to select significant information, and the second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate, and speed simultaneously.
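The first-layer idea can be illustrated with a toy local-contrast map. This is a minimal sketch, not the authors' exact measure: each interior pixel is scored by its value relative to the mean of its 8-neighbourhood, so an isolated bright target stands out against a smooth background.

```python
def local_contrast(img):
    """Simplified local-contrast map (illustrative): each interior pixel's
    value divided by the mean of its 8-neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = [img[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if not (di == 0 and dj == 0)]
            m = sum(neigh) / 8.0
            out[i][j] = img[i][j] / m if m > 0 else 0.0
    return out
```

For a 5x5 frame with background level 10 and a single bright pixel of 100 at the centre, the centre score is 10.0 while background pixels score near 1, so a simple threshold separates the candidate target.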
A genetic algorithm approach for assessing soil liquefaction potential based on reliability method
M H Bagheripour; I Shooshpasha; M Afzalirad
2012-02-01
Deterministic approaches are unable to account for variations in soil strength properties, earthquake loads, and sources of error in evaluations of liquefaction potential in sandy soils, which makes them questionable compared with reliability-based concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction to the factor of safety (FS). Therefore, the use of probabilistic approaches, and especially reliability analysis, is considered, since a complementary solution is needed to reach better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, combined with a genetic algorithm (GA) and its corresponding optimization techniques, has been used to calculate the reliability index and the probability of liquefaction. The use of the GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed here, by which the liquefaction potential can be directly calculated based on the estimated probability of liquefaction, the cyclic stress ratio (CSR), and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% from the observational data. The validity of the proposed concept is examined through comparison of the results obtained by the new relation and those predicted by other investigators. A further advantage of the proposed relation is that it relates the probability of liquefaction to FS, and hence it makes it possible to base decisions on the liquefaction risk while still using deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for evaluation of liquefaction. As an application, the city of Babolsar, which is located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests in which the results of SPT are analysed.
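The reliability-index idea the abstract builds on can be shown with the plain first-order second-moment (FOSM) form, which is a simpler ancestor of the AFOSM technique used in the paper. This is an illustrative sketch assuming a linear limit state g = C - D with independent normal capacity and demand; the paper's AFOSM+GA machinery handles the general nonlinear case.

```python
import math

def fosm_reliability(mu_c, sigma_c, mu_d, sigma_d):
    """First-order second-moment reliability index for the limit state
    g = C - D (capacity minus demand), independent normal variables.
    Returns (beta, probability of failure)."""
    beta = (mu_c - mu_d) / math.sqrt(sigma_c**2 + sigma_d**2)
    p_f = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))  # Phi(-beta)
    return beta, p_f
```

For example, mu_c = 1.5, sigma_c = 0.3 against mu_d = 1.0, sigma_d = 0.4 gives beta = 1.0 and a failure probability of about 0.159, i.e. Phi(-1).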
Jamadi, Mohammad; Merrikh-Bayat, Farshad
2014-01-01
This paper proposes an effective method for estimating the parameters of double-cage induction motors by using the Artificial Bee Colony (ABC) algorithm. For this purpose the unknown parameters in the electrical model of the asynchronous machine are calculated such that the sum of the squared differences between the full load torques, starting torques, maximum torques, starting currents, full load currents, and nominal power factors obtained from the model and those provided by the manufacturer is minimized. In orde...
Histogram-Based Estimation of Distribution Algorithm: A Competent Method for Continuous Optimization
Nan Ding; Shu-De Zhou; Zeng-Qi Sun
2008-01-01
Designing efficient estimation of distribution algorithms for optimizing complex continuous problems is still a challenging task. This paper utilizes a histogram probabilistic model to describe the distribution of the population and to generate promising solutions. The advantage of the histogram model, its intrinsic multimodality, makes it suitable for describing the solution distribution of complex and multimodal continuous problems. To make the histogram model explore and exploit the search space more efficiently, several strategies are introduced into the algorithm: the surrounding effect reduces the population size needed to estimate the model with a given number of bins, and the shrinking strategy guarantees the accuracy of the optimal solutions. Furthermore, this paper shows that the histogram-based EDA (estimation of distribution algorithm) can give comparable or even much better performance than the predominant EDAs based on Gaussian models.
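The core loop of a histogram-based EDA can be sketched in one dimension: select the best individuals, build a histogram of their positions, and resample the next population from that histogram. This is a minimal illustration of the general technique, not the paper's algorithm (it omits the surrounding effect and shrinking strategies).

```python
import random

def histogram_eda(f, lo, hi, bins=20, pop=200, elite=50, gens=30, seed=1):
    """Minimal 1-D histogram-based EDA for minimizing f on [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    width = (hi - lo) / bins
    for _ in range(gens):
        xs.sort(key=f)
        sel = xs[:elite]                       # truncation selection
        counts = [0] * bins                    # histogram of the elite
        for x in sel:
            b = min(int((x - lo) / width), bins - 1)
            counts[b] += 1
        total = float(sum(counts))
        xs = []                                # resample the population:
        for _ in range(pop):                   # pick a bin ~ its count,
            r = rng.uniform(0, total)          # then a uniform point in it
            acc, b = 0.0, 0
            for i, c in enumerate(counts):
                acc += c
                if r <= acc:
                    b = i
                    break
            xs.append(lo + (b + rng.random()) * width)
    return min(xs, key=f)
```

On a simple unimodal test function the histogram mass concentrates around the optimum within a few generations.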
Sheta, B.; Elhabiby, M.; Sheimy, N.
2012-01-01
A robust scale and rotation invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and the real-time captured images are used to georeference (i.e. determine six transformation parameters - three rotation and three translation) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter a...
A New Algorithm Based on the Homotopy Perturbation Method For a Class of Singularly Perturbed Boundary Value Problems
2013-12-01
In this paper, a new algorithm is presented to approximate the solution of a singularly perturbed boundary value problem with a left layer, based on the homotopy perturbation technique combined with the Laplace transformation. The convergence theorem and the error bound of the proposed method are proved. The method is examined by solving two examples. The results demonstrate the reliability and efficiency of the proposed method.
Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei
2016-01-01
Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Currently most of the localization algorithms in this field do not pay enough attention to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and then its location can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; nevertheless, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes is succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and better localization coverage rate compared with some other widely used localization methods in this field. PMID:26861348
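The range-based PSO step can be sketched as a standard particle swarm minimizing the mismatch between measured beacon distances and the distances implied by a candidate position. This is an illustrative 2-D sketch with generic PSO coefficients, not the MP-PSO algorithm itself (which adds mobility prediction on top).

```python
import math, random

def pso_locate(beacons, ranges, iters=100, particles=30, seed=7):
    """Range-based PSO localization sketch: find (x, y) minimizing the
    squared mismatch between measured and implied beacon distances."""
    rng = random.Random(seed)

    def cost(p):
        return sum((math.dist(p, b) - r) ** 2 for b, r in zip(beacons, ranges))

    pos = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(particles)]
    vel = [(0.0, 0.0)] * particles
    pbest = list(pos)
    gbest = min(pos, key=cost)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = rng.random(), rng.random()
            vx = w * vel[i][0] + c1 * r1 * (pbest[i][0] - pos[i][0]) \
                               + c2 * r2 * (gbest[0] - pos[i][0])
            vy = w * vel[i][1] + c1 * r1 * (pbest[i][1] - pos[i][1]) \
                               + c2 * r2 * (gbest[1] - pos[i][1])
            vel[i] = (vx, vy)
            pos[i] = (pos[i][0] + vx, pos[i][1] + vy)
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
                if cost(pos[i]) < cost(gbest):
                    gbest = pos[i]
    return gbest
```

With three beacons and noise-free ranges the swarm converges to the true position.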
M. A. Demir
2012-04-01
Blind equalization is a technique for adaptive equalization of a communication channel without the use of a training sequence. Although the constant modulus algorithm (CMA) is one of the most popular adaptive blind equalization algorithms, its fixed step size causes a slow convergence rate. In this paper, a novel enhanced variable step size CMA algorithm (VSS-CMA) based on the autocorrelation of the error signal is proposed to remedy this weakness of CMA for blind equalization. The new algorithm resolves the conflict between the convergence rate and precision of the fixed-step-size conventional CMA algorithm. Computer simulations have been performed to illustrate the performance of the proposed method on simulated frequency-selective Rayleigh fading channels and experimental real communication channels. The simulation results obtained using the single carrier (SC) IEEE 802.16-2004 protocol demonstrate that the proposed VSS-CMA algorithm has considerably better performance than conventional CMA, normalized CMA (N-CMA), and the other VSS-CMA algorithms.
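The CMA update the paper builds on can be shown for a single-tap real equalizer: the weight is nudged by the gradient of the constant-modulus cost, with no training sequence. This is an illustrative sketch with a fixed step size; the paper's VSS-CMA additionally adapts the step size from the error autocorrelation.

```python
def cma_equalizer(received, desired_modulus=1.0, mu=0.5, w0=1.0):
    """Plain constant modulus algorithm (CMA) for a single-tap real
    equalizer (illustrative; the paper's VSS-CMA varies mu)."""
    w = w0
    for x in received:
        y = w * x                       # equalizer output
        e = desired_modulus - y * y     # constant-modulus error
        w += mu * e * y * x             # stochastic-gradient CMA update
    return w
```

For BPSK symbols sent through a pure-gain channel of 0.5, the tap converges to magnitude 2, restoring unit-modulus outputs, without ever seeing the transmitted symbols.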
Nadernejad, Ehsan; Sharifzadeh, Sara
2013-01-01
In this paper, a new pixon-based method is presented for image segmentation. In the proposed algorithm, bilateral filtering is used as a kernel function to form a pixonal image. Using this filter reduces the noise and smoothes the image slightly. By using this pixon-based method, image over-segmentation can be avoided. Indeed, the bilateral filtering, as a preprocessing step, eliminates unnecessary details of the image and results in a small number of pixons, faster performance, and more robustness against unwanted environmental noise. Then, the obtained pixonal image is segmented using a hierarchical clustering method (the fuzzy C-means algorithm). The experimental results show that the proposed pixon-based approach has a reduced computational load and better accuracy compared with other existing pixon-based image segmentation techniques.
Jie-Sheng Wang
2015-06-01
In order to improve the accuracy and timeliness of all kinds of information in the cash business, and to solve the problem that the accuracy and stability of the data linkage between cash inventory forecasting and cash management information in commercial banks is not high, a hybrid learning algorithm is proposed based on the adaptive population activity particle swarm optimization (APAPSO) algorithm combined with the least squares method (LMS) to optimize the parameters of the adaptive network-based fuzzy inference system (ANFIS) model. Through the introduction of a metric function of population diversity to ensure the diversity of the population, and adaptive changes in the inertia weight and learning factors, the optimization ability of the particle swarm optimization (PSO) algorithm is improved, which avoids the premature convergence problem of the PSO algorithm. Simulation comparison experiments are carried out against the BP-LMS algorithm and standard PSO-LMS, using real commercial banks' cash flow data, to verify the effectiveness of the proposed time series prediction of bank cash flow based on the improved PSO-ANFIS optimization method. Simulation results show that the optimization speed is faster and the prediction accuracy is higher.
Ruan, Cong; Sun, Xiao-Min; Song, Yi-Xu
In this paper, we propose a method to optimize etching yield parameters. By defining a fitness function between the actual etching profile and the simulated profile, the etching yield parameter estimation problem is transformed into an optimization problem. The problem is nonlinear and high-dimensional, and each simulation is computationally expensive, so a better solution must be searched for in a multidimensional space. An ordinal optimization and tabu search hybrid algorithm is introduced to solve this complex problem. This method ensures obtaining a good enough solution in an acceptable time. The experimental results illustrate that the simulated profile obtained by this method is very similar to the actual etching profile in surface topography, which demonstrates the feasibility and validity of our proposed method.
Ashkan Emami Ale Agha
2013-06-01
One of the most important concepts in multiprogramming operating systems is scheduling, which helps in choosing the processes for execution. Round robin is one of the most important scheduling algorithms. It is popular due to its fairness and starvation-free nature towards processes, which is achieved by using a proper quantum time. The main challenge in this algorithm is the selection of the quantum time. This parameter affects the average waiting time and the average turnaround time in the execution queue. When the quantum time is static, a high quantum time causes less context switching and a low quantum time causes more context switching. Increased context switching leads to high average waiting time and high average turnaround time, which is an overhead and degrades system performance. With respect to these points, an algorithm should calculate a proper value for the quantum time. The two main classes of algorithms proposed to calculate the quantum time are static and dynamic methods. In static methods the quantum time is fixed during scheduling. Dynamic algorithms change the value of the quantum time in each cycle: for example, in one method the quantum time in each cycle equals the median of the burst times of the processes in the ready queue, and in another method it equals the arithmetic mean of the burst times of the ready processes. In this paper we propose a new method of obtaining the quantum time in each cycle based on the arithmetic-harmonic mean (HARM). The harmonic mean is calculated by dividing the number of observations by the sum of the reciprocals of the numbers in the series. With examples we show that in some cases it provides better scheduling criteria and improves the average turnaround time and average waiting time.
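The dynamic-quantum idea can be sketched as a round-robin simulation in which each cycle's quantum is the harmonic mean of the remaining burst times. This is a minimal sketch assuming all jobs arrive at time 0; the helper name and the exact tie-handling are ours, not the paper's.

```python
def rr_harmonic(bursts):
    """Round-robin scheduling with a per-cycle quantum equal to the harmonic
    mean of the remaining burst times. Returns (avg waiting, avg turnaround);
    all jobs are assumed to arrive at time 0."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0.0] * n
    t = 0.0
    while any(r > 0 for r in remaining):
        active = [r for r in remaining if r > 0]
        q = len(active) / sum(1.0 / r for r in active)   # harmonic mean
        for i in range(n):
            if remaining[i] > 0:
                run = min(q, remaining[i])
                t += run
                remaining[i] -= run
                if remaining[i] <= 1e-12:                # job finished
                    remaining[i] = 0
                    finish[i] = t
    turnaround = list(finish)                            # arrival time is 0
    waiting = [finish[i] - bursts[i] for i in range(n)]
    return sum(waiting) / n, sum(turnaround) / n
```

For bursts of 3 and 6, the first-cycle quantum is 4 (harmonic mean of 3 and 6), so the short job finishes in its first slice; the averages come out to 1.5 (waiting) and 6.0 (turnaround).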
Yukai Yao
2015-01-01
We propose an optimized Support Vector Machine classifier, named PMSVM, in which System Normalization, PCA, and Multilevel Grid Search methods are comprehensively considered for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, specificity, precision, ROC curves, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has relatively better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
Trobec, Roman
2015-01-01
This book concentrates on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes
Survey on Parameters of Fingerprint Classification Methods Based On Algorithmic Flow
Dimple Parekh
2011-09-01
Classification refers to assigning a given fingerprint to one of the existing classes already recognized in the literature. A search over all the records in the database takes a long time, so the goal is to reduce the size of the search space by choosing an appropriate subset of the database for the search. Classifying fingerprint images is a very difficult pattern recognition problem, due to the minimal interclass variability and maximal intraclass variability. This paper presents a sequence flow diagram which will help in developing clarity on designing algorithms for classification based on various parameters extracted from the fingerprint image. It discusses in brief the ways in which the parameters are extracted from the image. Existing fingerprint classification approaches are based on these parameters as input for classifying the image. Parameters like the orientation map, singular points, spurious singular points, ridge flow, transforms, and hybrid features are discussed in the paper.
A brain-region-based meta-analysis method utilizing the Apriori algorithm
Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao
2016-01-01
Background Brain network connectivity modeling is a crucial method for studying the brain’s cognitive functions. Meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs) which showed that meta-analyses could be used to discover brain network connectivity. Results In this paper, we propose a new meta-analysis method that can be used to find network connectivity models based on t...
Improved Power Flow Algorithm for VSC-HVDC System Based on High-Order Newton-Type Method
Yanfang Wei
2013-01-01
Voltage source converter (VSC) based high-voltage direct-current (HVDC) transmission is a new technique with highly promising applications in the fields of power systems and power electronics. Considering the importance of power flow analysis of the VSC-HVDC system for its utilization and exploitation, improved power flow algorithms for the VSC-HVDC system based on third-order and sixth-order Newton-type methods are presented. The steady-state power model of the VSC-HVDC system is introduced first. Then the derivation and solution formats of the multivariable matrix for the third-order and sixth-order Newton-type power flow methods of the VSC-HVDC system are given. These formats exhibit third-order and sixth-order convergence based on the Newton method. Further, based on automatic differentiation technology and the third-order Newton method, a new improved algorithm is given, which helps improve the program development, computational efficiency, maintainability, and flexibility of the power flow. Simulations of AC/DC power systems with two-terminal, multi-terminal, and multi-infeed DC VSC-HVDC links are carried out for the modified IEEE bus systems, which show the effectiveness and practicality of the presented algorithms for the VSC-HVDC system.
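A scalar analogue conveys what "third-order Newton-type" means: Halley's method uses second-derivative information to achieve cubic convergence on f(x) = 0. This is an illustrative sketch of the iteration class only, not the paper's multivariable VSC-HVDC power-flow formulation.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: a third-order Newton-type iteration for f(x) = 0
    (scalar analogue of high-order power-flow iterations)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2 * fx * dfx / (2 * dfx * dfx - fx * d2fx)
        x -= step
        if abs(step) < tol:
            return x
    return x
```

Solving x^2 - 2 = 0 from x0 = 1 reaches machine precision in a handful of iterations, versus roughly twice as many for plain Newton.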
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228
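The RANSAC baseline the paper modifies can be sketched for the simplest model, a 2-D line: sample minimal subsets, fit, and keep the model with the most inliers. This sketch keeps classic uniform sampling for clarity; the paper replaces that sampling with harmony-search-guided candidate generation.

```python
import random

def ransac_line(points, iters=200, thresh=0.1, seed=3):
    """Plain RANSAC for fitting y = a*x + b to 2-D points with outliers.
    Returns ((a, b), inlier_count)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                 # degenerate pair
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points
                      if abs(y - (a * x + b)) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

With ten points on y = 2x + 1 plus two gross outliers, the recovered model matches the inlier line exactly, since any all-inlier sample reproduces it.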
Anonymous
2006-01-01
A new method for power quality (PQ) disturbance identification is brought forward, based on combining a neural network with a least squares (LS) weighted fusion algorithm. The characteristic components of PQ disturbances are first extracted through an improved phase-locked loop (PLL) system, and then five child BP ANNs with different structures are trained and adopted to identify the PQ disturbances respectively. The combining neural network fuses the identification results of these child ANNs with the LS weighted fusion algorithm, and finally identifies PQ disturbances from the fused result. Compared with a single neural network, the combination with the LS weighted fusion algorithm can identify PQ disturbances correctly when noise is strong, whereas a single neural network may fail in this case. Furthermore, the combining neural network is more reliable than a single neural network. The simulation results prove the conclusions above.
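The least-squares weighted fusion step has a simple closed form in the classic independent-estimate setting: weights are proportional to inverse variances, which minimizes the fused variance. This is an illustrative sketch of that general principle (also the basis of the weighted fusion algorithms in the adaptive-fusion abstract above), not the paper's exact ANN-output fusion rule.

```python
def ls_weighted_fusion(estimates, variances):
    """Least-squares (inverse-variance) fusion of independent estimates:
    weights w_i proportional to 1/sigma_i^2, normalized to sum to one.
    Returns (fused estimate, fused variance)."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    weights = [w / s for w in inv]
    fused = sum(w * e for w, e in zip(weights, estimates))
    fused_var = 1.0 / s        # always <= the smallest input variance
    return fused, fused_var
```

Fusing estimates 10 and 12 with variances 1 and 3 gives 10.5 with variance 0.75: the noisier source is down-weighted, and the fused variance beats either input alone.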
Numerical methods and inversion algorithms in reservoir simulation based on front tracking
Haugse, Vidar
1999-04-01
This thesis uses front tracking to analyse laboratory experiments on multiphase flow in porous media. New methods for parameter estimation for two- and three-phase relative permeability experiments have been developed. Upscaling of heterogeneous and stochastic porous media is analysed. Numerical methods based on front tracking are developed and analysed. Such methods are efficient for problems involving steep changes in the physical quantities. Multi-dimensional problems are solved by combining front tracking with dimensional splitting. A method for adaptive grid refinement is developed.
Ebrahim BARATI
2013-03-01
In this paper the optimization of kinematics, which has a great influence on the performance of flapping-foil propulsion, is investigated. The purpose of the optimization is to design a flapping-wing micro aircraft with appropriate kinematic and aerodynamic features, making the micro aircraft suitable for transportation over large distances with minimum energy consumption. For the optimal design, the pitch amplitude, wing reduced frequency, and phase difference between plunging and pitching are taken as the design parameters, and the consumed energy, the thrust generated by the wings, and the lost power are computed using a 2D quasi-steady aerodynamic model and a multi-objective genetic algorithm. Based on the thrust optimization, increasing the pitch amplitude reduces the power consumption; in this case the lost power increases, and the maximum thrust coefficient is computed as 2.43. Based on the power optimization, the results show that increasing the pitch amplitude increases the power consumption. Additionally, the minimum lost power obtained in this case is 23% at a pitch amplitude of 25°, a wing reduced frequency of 0.42, and a phase angle difference between plunging and pitching of 77°. Furthermore, the wing reduced frequency can be estimated by regression with respect to the pitch amplitude, because the reduced frequency varies approximately linearly with the pitch amplitude.
Yang, Yun; Kimura, Shinji
This paper proposes an efficient design algorithm for power/ground (P/G) network synthesis with dynamic signal consideration, where the dynamic signal is mainly caused by Ldi/dt noise and Cdv/dt decoupling capacitance (DECAP) current in the distribution network. To deal with the nonlinear global optimization under synthesis constraints directly, a genetic algorithm (GA) is introduced. The proposed GA-based synthesis method avoids the linear-transformation loss and the constraint complexity of current SLP, SQP, ICG, and random-walk methods. In the proposed hybrid grid synthesis algorithm, the dynamic signal is simulated in the gene disturbance process, and the trapezoidal modified Euler (TME) method is introduced to realize a precise dynamic time-step process. We also use a hybrid SLP method to reduce the genetic execution time and increase the network synthesis efficiency. Experimental results on a given power distribution network show reductions in layout area and execution time compared with current P/G network synthesis methods.
An Aircraft Navigation System Fault Diagnosis Method Based on Optimized Neural Network Algorithm
Jean-dedieu Weyepe
2014-01-01
The air data and inertial reference system (ADIRS) is one of the complex sub-systems in the aircraft navigation system, and it plays an important role in the flight safety of the aircraft. This paper proposes an optimized neural network algorithm, a combination of a neural network and the ant colony algorithm, to improve the efficiency of maintenance engineers' job tasks.
Projection Classification Based Iterative Algorithm
Zhang, Ruiqiu; Li, Chen; Gao, Wenhua
2015-05-01
Iterative algorithms perform well in 3D image reconstruction because they do not need complete projection data. They can be applied to the inspection of BGA solder joints with x-ray laminography, which yields worse reconstruction images than conventional tomography, but they suffer from low convergence speed. This paper explores a projection classification based method which separates the object into three parts, i.e. solute, solution, and air, and assumes that the reconstruction speed decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiment results with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method. The fewer the projection images, the greater the observed advantage.
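The additive update underlying SART-style iterative reconstruction is the Kaczmarz (ART) row projection: sweep the measurement rows and project the current estimate onto each row's hyperplane. This is an illustrative sketch of that family on a tiny linear system, not the paper's classified-speed variant.

```python
def kaczmarz(A, b, iters=100):
    """Kaczmarz (ART) iteration for A x = b: project the current estimate
    onto each row's hyperplane in turn (the additive update behind
    SART-style iterative reconstruction)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            lam = (bi - dot) / norm2       # signed distance to the hyperplane
            x = [xi + lam * r for xi, r in zip(x, row)]
    return x
```

For the orthogonal system x + y = 3, x - y = 1 a single sweep already lands on the exact solution (2, 1); non-orthogonal rows converge geometrically instead.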
He, Hongyang; Xu, Jiangning; Qin, Fangjun; Li, Feng
2015-11-01
In order to shorten the alignment time and eliminate the small-initial-misalignment limit for compass alignment of a strap-down inertial navigation system (SINS), which is sometimes not easy to satisfy when the ship is moored or anchored, an optimal-model-based time-varying parameter compass alignment algorithm is proposed in this paper. The contributions of the work presented here are twofold. First, the optimization of the compass alignment parameters, which traditionally involves a lot of trial and error, is achieved with a genetic algorithm. On this basis, second, the optimal parameter-varying model is established by least-squares polynomial fitting. Experiments are performed with a navigation-grade fiber optic gyroscope SINS, which validate the efficiency of the proposed method.
Li, Chen; Pan, Zengxin; Mao, Feiyue; Gong, Wei; Chen, Shihua; Min, Qilong
2015-10-01
The signal-to-noise ratio (SNR) of an atmospheric lidar decreases rapidly as range increases, so maintaining high accuracy when retrieving lidar data at the far end is difficult. To avoid this problem, many de-noising algorithms have been developed; in particular, an effective de-noising algorithm has been proposed to simultaneously retrieve lidar data and obtain a de-noised signal by combining the ensemble Kalman filter (EnKF) and the Fernald method. This algorithm enhances the retrieval accuracy and effective measurement range of a lidar based on the Fernald method, but sometimes leads to a shift (bias) in the near range as a result of over-smoothing caused by the EnKF. This study proposes a new scheme that avoids this phenomenon by using a particle filter (PF) instead of the EnKF in the de-noising algorithm. Synthetic experiments show that the PF performs better than the EnKF and Fernald methods. The root mean square errors of the PF are 52.55% and 38.14% of those of the Fernald and EnKF methods, and the PF increases the SNR by 44.36% and 11.57% relative to the Fernald and EnKF methods, respectively. For experiments with real signals, the relative bias of the EnKF is 5.72%, which is reduced to 2.15% by the PF in the near range. Furthermore, the suppression of random noise in the far range is also significant with the PF. An extensive application of the PF method can be useful in determining the local and global properties of aerosols. PMID:26480164
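The particle-filter machinery the paper substitutes for the EnKF can be shown in its simplest bootstrap form: predict with a process model, weight particles by the observation likelihood, estimate, and resample. This is a minimal scalar stand-in with a random-walk model and Gaussian noise assumptions of our choosing, not the authors' lidar signal model.

```python
import math, random

def particle_filter(observations, n_particles=500, obs_sigma=1.0,
                    proc_sigma=0.05, seed=11):
    """Minimal bootstrap particle filter for a slowly varying scalar state
    observed in Gaussian noise. Returns the per-step posterior-mean estimates."""
    rng = random.Random(seed)
    particles = [rng.uniform(-10, 10) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: random-walk process model
        particles = [p + rng.gauss(0.0, proc_sigma) for p in particles]
        # weight by the Gaussian likelihood of the observation
        weights = [math.exp(-0.5 * ((z - p) / obs_sigma) ** 2)
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # systematic resampling
        step = 1.0 / n_particles
        u = rng.uniform(0, step)
        new, acc, i = [], weights[0], 0
        for _ in range(n_particles):
            while u > acc and i < n_particles - 1:
                i += 1
                acc += weights[i]
            new.append(particles[i])
            u += step
        particles = new
    return estimates
```

Fed a constant noiseless observation of 2.0, the particle cloud collapses around the true state within a few dozen steps.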
General moving objects recognition method based on graph embedding dimension reduction algorithm
Yi ZHANG; Jie YANG; Kun LIU
2009-01-01
Effective and robust recognition and tracking of objects are the key problems in visual surveillance systems. Most existing object recognition methods were designed with particular objects in mind. This study presents a general moving object recognition method using global features of targets. Targets are extracted with an adaptive Gaussian mixture model, and their silhouette images are captured and unified. A new object silhouette database is built to provide abundant samples for training the subspace feature; this database is more convincing than previous ones. A more effective dimension reduction method based on graph embedding is used to obtain the projection eigenvector. In our experiments, we show the effective performance of our method in addressing the moving object recognition problem and its superiority over previous methods.
Highlights: ► The performance of GA, HNN, and their combination for BPP optimization in a PWR core is adequate. ► HNN + GA appears to arrive at a better final parameter value than the other two methods. ► The computation time for HNN + GA is higher than for GA or HNN, so a trade-off is necessary. - Abstract: In the last decades the genetic algorithm (GA) and the Hopfield Neural Network (HNN) have attracted considerable attention for the solution of optimization problems. In this paper, a hybrid optimization method based on the combination of the GA and HNN is introduced and applied to the burnable poison placement (BPP) problem to increase the quality of the results. BPP in a nuclear reactor core is a combinatorial and complicated problem: the arrangement and worth of the burnable poisons (BPs) have a strong effect on the main control parameters of a nuclear reactor, and improper design and arrangement of the BPs can endanger nuclear reactor safety. In this paper, increasing BP worth along with minimizing the radial power peaking are considered as objective functions. Three optimization algorithms, the genetic algorithm, Hopfield neural network optimization, and the hybrid method, are applied to the BPP problem and their efficiencies are compared. The hybrid optimization method performs best, finding a better BP arrangement.
Improved Power Flow Algorithm for VSC-HVDC System Based on High-Order Newton-Type Method
Yanfang Wei; Qiang He; Yonghui Sun; Yanzhou Sun; Cong Ji
2013-01-01
The voltage source converter (VSC) based high-voltage direct-current (HVDC) system is a new transmission technique with highly promising applications in the fields of power systems and power electronics. Considering the importance of power flow analysis of the VSC-HVDC system for its utilization and exploitation, improved power flow algorithms for the VSC-HVDC system based on third-order and sixth-order Newton-type methods are presented. The steady power model of the VSC-HVDC system is introduced...
An Improved Robot Path Planning Algorithm Based on Genetic Algorithm
Hammin Liu
2012-12-01
Robot path planning is an NP-hard problem that traditional optimization methods struggle to solve, and the traditional genetic algorithm is easily trapped in local minima. Therefore, starting from a simple genetic algorithm, we incorporate the basic idea of orthogonal design into population initialization, adopt an intergenerational elitism mechanism, and introduce an adaptive local search operator to avoid local minima and improve convergence speed, forming a new genetic algorithm. A series of numerical experiments demonstrates the efficiency of the new algorithm. We also apply the proposed algorithm to the robot path planning problem; the experimental results indicate that the new algorithm solves robot path planning problems efficiently and usually finds the best path.
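A minimal sketch of a genetic algorithm with the two ingredients emphasized above, intergenerational elitism and a local search operator, is shown below on a generic bit-string problem. The orthogonal-design initialization of the paper is replaced here by plain random initialization, and all parameter values are illustrative assumptions.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, gens=60, elite=2,
                      p_cross=0.9, p_mut=0.02, seed=0):
    """Maximize `fitness` over bit strings with elitism plus a greedy
    bit-flip local search applied to the elite individual each generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def local_search(ind):
        # Local search operator: accept any single-bit flip that improves fitness.
        best, best_f = ind[:], fitness(ind)
        for i in range(n_bits):
            trial = best[:]
            trial[i] ^= 1
            if fitness(trial) > best_f:
                best, best_f = trial, fitness(trial)
        return best

    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = [ind[:] for ind in pop[:elite]]        # intergenerational elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)     # truncation selection
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]                 # one-point crossover
            else:
                child = a[:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            next_pop.append(child)
        next_pop[0] = local_search(next_pop[0])           # refine the elite
        pop = next_pop
    return max(pop, key=fitness)
```

On the OneMax toy problem (fitness = number of ones) the local search makes the elite converge almost immediately; on a real path-planning encoding the same loop applies with a path-length fitness.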
Zhao, Peng; Zhang, Yan; Qian, Weiping
2015-10-01
Diffuse reflection laser ranging is one of the feasible ways to realize high-precision measurement of space debris. However, the weak echo of diffuse reflection results in a poor signal-to-noise ratio. Thus, it is difficult to realize real-time signal extraction for diffuse reflection laser ranging when echo signal photons are obscured by a large number of noise photons. The Genetic Algorithm, inspired by the process of natural selection, is a heuristic search algorithm known for its adaptive optimization and global search ability. To the best of our knowledge, this paper is the first to propose a method of real-time signal extraction for diffuse reflection laser ranging based on the Genetic Algorithm. The extraction results are regarded as individuals in the population, and short-term linear fitting degree and data correlation level are used as selection criteria to search for an optimal solution. A fine search over the real-time data quickly yields suitable new data during real-time signal extraction, and a coarse search over both historical and real-time data is performed after the fine search. The co-evolution of both parts increases the search accuracy of the real-time data as well as the precision of the historical data. Simulation experiments show that our method has good signal extraction capability in poor signal-to-noise ratio circumstances, especially for data with high correlation.
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has important scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images, implemented with OpenCV. Firstly, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. With the background updating mechanism continuously updating each pixel, we can detect the infrared moving target more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking when transplanting the OpenCV algorithms to a DSP platform. Afterwards, we use optimal thresholding to segment the image, transforming the gray images into binary images to provide a better condition for detection in the image sequences. Finally, using the relevance of moving objects between different frames and mathematical morphology processing, we eliminate noise, decrease the area, and smooth region boundaries. Experimental results prove that our algorithm achieves rapid detection of small infrared targets.
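The combined frame-difference/background-subtraction step can be illustrated without OpenCV on plain 2-D grayscale arrays; the running-average background update sketched here plays the role that `cv2.accumulateWeighted` would in an OpenCV pipeline. The thresholds and the update rate `alpha` are illustrative assumptions.

```python
def detect_moving(frames, alpha=0.1, diff_thr=20, bg_thr=20):
    """Combined detector: a pixel is foreground only if it differs from BOTH the
    previous frame (temporal difference) and the running-average background.
    `frames` is a list of 2-D grayscale arrays (lists of lists)."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [row[:] for row in frames[0]]        # background initialised to first frame
    prev = frames[0]
    masks = []
    for frame in frames[1:]:
        mask = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                moving = abs(frame[i][j] - prev[i][j]) > diff_thr   # frame difference
                fg = abs(frame[i][j] - bg[i][j]) > bg_thr           # background subtraction
                if moving and fg:
                    mask[i][j] = 1
                else:
                    # Adaptive background update only where no motion is detected.
                    bg[i][j] = (1 - alpha) * bg[i][j] + alpha * frame[i][j]
        masks.append(mask)
        prev = frame
    return masks
```

Selective updating (skipping foreground pixels) is what keeps a slowly moving target from being absorbed into the background model.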
A method for classification of network traffic based on C5.0 Machine Learning Algorithm
Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup
2012-01-01
and classification, an algorithm for recognizing flow direction and the C5.0 itself. Classified applications include Skype, FTP, torrent, web browser traffic, web radio, interactive gaming and SSH. We performed subsequent tries using different sets of parameters and both training and classification options...
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
FEATURE SELECTION METHODS AND ALGORITHMS
L.Ladha,
2011-05-01
Feature selection is an important topic in data mining, especially for high-dimensional datasets. Feature selection (also known as subset selection) is a process commonly used in machine learning, wherein subsets of the features available in the data are selected for application of a learning algorithm. The best subset contains the least number of dimensions that contribute most to accuracy; we discard the remaining, unimportant dimensions. This is an important stage of preprocessing and is one of two ways of avoiding the curse of dimensionality (the other is feature extraction). There are two approaches to feature selection, known as forward selection and backward selection. Feature selection has been an active research area in the pattern recognition, statistics, and data mining communities. The main idea of feature selection is to choose a subset of input variables by eliminating features with little or no predictive information. Feature selection methods can be decomposed into three broad classes: filter methods, wrapper methods, and embedded methods. This paper presents an empirical comparison of feature selection methods and their algorithms. In view of the substantial number of existing feature selection algorithms, the need arises for criteria that enable one to decide adequately which algorithm to use in certain situations. This work reviews several fundamental algorithms found in the literature and assesses their performance in a controlled scenario.
Diversity-Based Boosting Algorithm
Jafar A. Alzubi
2016-05-01
Boosting is a well-known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of the training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to the Boosting algorithm called the DivBoosting algorithm is proposed and studied. Experiments on several data sets are conducted on both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe that it has many advantages over the traditional boosting method because its mechanism is based not solely on selecting the most accurate base classifiers but also on selecting the most diverse set of classifiers.
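For reference, the plain boosting mechanism that DivBoosting builds on (reweighting examples so later learners focus on mistakes) looks roughly like the AdaBoost sketch below, with decision stumps as base classifiers. The diversity-based selection that is DivBoosting's actual contribution is not reproduced here.

```python
import math

def train_stump(X, y, w):
    """Exhaustively pick the best weighted single-feature threshold stump
    (labels are in {-1, +1})."""
    best_err, best_stump = float("inf"), None
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for sign in (1, -1):
                err = sum(wi for wi, x, yi in zip(w, X, y)
                          if (sign if x[f] <= thr else -sign) != yi)
                if err < best_err:
                    best_err, best_stump = err, (f, thr, sign)
    return best_err, best_stump

def adaboost(X, y, rounds=10):
    """Classic AdaBoost: after each round, boost the weight of misclassified
    examples so the next stump concentrates on them."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, (f, thr, sign) = train_stump(X, y, w)
        err = max(err, 1e-10)                 # guard against log(0)
        if err >= 0.5:                        # weak learner no better than chance
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, thr, sign))
        preds = [sign if x[f] <= thr else -sign for x in X]
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        z = sum(w)
        w = [wi / z for wi in w]              # renormalize the distribution
    return ensemble

def predict(ensemble, x):
    s = sum(a * (sign if x[f] <= thr else -sign) for a, f, thr, sign in ensemble)
    return 1 if s >= 0 else -1
```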
Tao, Wang; Dongying, Wang; Yu, Pei; Wei, Fan
2015-09-01
To address the difficulty that current ultrasonic gas leak detection and localization systems have in resolving the measured target position to determine and locate a leak, this paper presents an improved multi-array ultrasonic gas leak TDOA (time difference of arrival) localization and detection method. This method involves arranging ultrasonic transducers at equal intervals in a high-sensitivity detector array, using small differences in ultrasonic sound intensity to determine the scope of the leak and generate a rough localization, and then using an array TDOA localization algorithm to determine the precise leak location. This method is then implemented in an ultrasonic leak detection and localization system. Experimental results showed that the TDOA localization method, using auxiliary sound intensity factors to avoid dependence on a single sound intensity to determine the leak size and location, achieved a localization error of less than 2 mm. The validity and correctness of this approach were thus verified.
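The TDOA principle, finding the point whose predicted arrival-time differences best match the measured ones, can be sketched as a brute-force grid search. The sensor layout, the speed of sound, and the grid resolution below are illustrative assumptions; a real system would use a closed-form or iterative least-squares solver rather than a grid.

```python
import math

SOUND_SPEED = 343.0  # m/s, assumed propagation speed

def tdoa_locate(sensors, tdoas, xmin=0.0, xmax=1.0, step=0.05):
    """Grid search for the 2-D point whose predicted TDOAs (arrival time at each
    sensor minus arrival time at sensors[0]) best match the measured `tdoas`
    in the least-squares sense."""
    n_steps = int(round((xmax - xmin) / step))
    best, best_err = None, float("inf")
    for ix in range(n_steps + 1):
        for iy in range(n_steps + 1):
            p = (xmin + ix * step, xmin + iy * step)
            d0 = math.dist(p, sensors[0])
            err = sum(((math.dist(p, s) - d0) / SOUND_SPEED - t) ** 2
                      for s, t in zip(sensors[1:], tdoas[1:]))
            if err < best_err:
                best, best_err = p, err
    return best
```

This is exactly the hyperbolic-positioning residual: each TDOA constrains the source to a hyperbola, and the grid point minimizing the summed squared residual approximates their intersection.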
Universal Algorithm for Online Trading Based on the Method of Calibration
Vladimir V'yugin; Vladimir Trunov
2012-01-01
We present a universal algorithm for online trading in the stock market which performs asymptotically at least as well as any stationary trading strategy that computes the investment at each step using a fixed function of the side information belonging to a given RKHS (Reproducing Kernel Hilbert Space). Using a universal kernel, we extend this result to any continuous stationary strategy. In this learning process, a trader rationally chooses his gambles using predictions made by a randomized ...
Fast Density Based Clustering Algorithm
Priyanka Trikha; Singh Vijendra
2013-01-01
Clustering is an unsupervised learning problem: a procedure that partitions data objects into matching clusters, such that data objects in the same cluster are quite similar to each other and dissimilar to those in other clusters. Traditional algorithms do not meet the latest multiple requirements simultaneously. Density-based clustering algorithms find clusters based on the density of data points in a region; the DBSCAN algorithm is one such density-based clustering algorithm. I...
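For concreteness, a compact version of the DBSCAN procedure described above (core points, cluster expansion, noise labelling) might look like this; `eps` and `min_pts` are the usual neighbourhood radius and density threshold.

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Density-based clustering; returns one cluster label per point (-1 = noise).
    A point is a core point if its eps-neighbourhood (including itself) holds at
    least min_pts points; clusters grow by expanding from core points."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisional noise; may later become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # former noise point becomes a border point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # j is itself a core point: keep expanding
                queue.extend(k for k in jn if labels[k] is None)
    return labels
```

The naive all-pairs neighbour query makes this O(n²); production implementations use a spatial index (k-d tree, R*-tree) to bring the query cost down.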
ZHANG Shi-hai; LIU Shu-jun; LIU Xiao-yan; OU Jin-ping
2006-01-01
First, the high-rise building structure design process is divided into three related steps: scheme generation and creation, performance evaluation, and scheme optimization. Then, with the application of a relational database, a case database of high-rise structures is constructed; structure form-selection design methods, such as the smart algorithm based on CBR, DM, FINS, NN and GA, are presented; and the overall framework of this method and its general structure are given. CBR and DM are used to generate scheme candidates; FINS and NN to evaluate and optimize scheme performance; and GA to create new structure forms. Finally, application cases are presented, whose results fit the real projects. By combining expert intelligence, algorithmic intelligence and machine intelligence, this method makes good use not only of engineering project knowledge and expertise but also of the deeper knowledge contained in various engineering cases. In other words, because the form selection is supported by a large base of real cases, its results prove more reliable and more acceptable. The introduction of this method thus provides an effective approach to improving the quality, efficiency, automation and intelligence of high-rise structure form-selection design.
Jun Wang
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimal function expression that describes the behavior of the time series. To address the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, with its strong local search ability, into the genetic algorithm to enhance optimization performance; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision in the presence of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
Qu Li
2014-01-01
Online friend recommendation is a fast-developing topic in web mining. In this paper, we use SVD matrix factorization to model user and item feature vectors and stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold start problem and data sparsity, we use a KNN model to influence the user feature vectors. At the same time, we use graph theory to partition communities with fairly low time and space complexity. Moreover, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
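The SVD-style matrix factorization trained with stochastic gradient descent can be sketched as follows. The latent dimension, learning rate, and regularization constant are illustrative assumptions, and the KNN and community-detection components of the hybrid system are not included.

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.01, reg=0.005, epochs=2000, seed=0):
    """Latent-factor model: approximate each observed rating r_ui by the dot
    product P[u] . Q[i], trained by stochastic gradient descent with L2
    regularization. `ratings` is a list of (user, item, value) triples."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)   # gradient step on user factors
                Q[i][f] += lr * (e * pu - reg * qi)   # gradient step on item factors
    return P, Q
```

For friend recommendation, "items" would themselves be users, and the score P[u] . Q[v] ranks candidate friends v for user u.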
贺建军; 喻寿益; 钟掘
2003-01-01
A new search algorithm named the annealing-genetic algorithm (AGA) was proposed by skillfully merging GA with SAA. It draws on the merits of both GA and SAA and offsets their shortcomings. Unlike GA, AGA takes the objective function directly as the fitness function, cutting down unnecessary time spent on the floating-point calculation of function conversion. Unlike SAA, AGA need not execute a very long Markov chain iteration at each temperature point, so it speeds up the convergence of the solution; it also makes no assumption about the search space, so it is simple, easy to implement, and applicable to a wide class of problems. The optimizing principle and implementation steps of AGA were expounded. The example of parameter optimization of a typical complex electromechanical system, a temper mill, shows that AGA is effective and superior to conventional GA and SAA. The control system of the temper mill optimized by AGA has optimal performance within the adjustable ranges of its parameters.
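For readers unfamiliar with the SAA half of the hybrid, a bare-bones simulated annealing loop with the Metropolis acceptance rule looks like this. The AGA embeds this acceptance idea inside a GA; the sketch below is plain SAA with illustrative parameters.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Minimize f over the reals: always accept improvements, and accept worse
    moves with Metropolis probability exp(-delta / T) at temperature T."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-300)):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling                       # geometric cooling schedule
    return best, best_f
```

The "long Markov chain at each temperature" that the abstract mentions corresponds to running many inner iterations before each cooling step; the sketch cools after every move, which is the shortcut AGA also exploits.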
Adaptive de-noising method based on wavelet and adaptive learning algorithm in on-line PD monitoring
王立欣; 诸定秋; 蔡惟铮
2002-01-01
Extracting PD pulses from various background noises is an important step in the online monitoring of partial discharge (PD). An adaptive de-noising method is introduced for adaptive noise reduction during the detection of PD pulses. This method is based on the Wavelet Transform (WT); in the wavelet domain, the noise decomposed at each level is reduced by an independent threshold. Instead of the standard hard thresholding function, a new type of hard thresholding function with a continuous derivative is employed. For the selection of thresholds, an unsupervised learning algorithm based on the gradient of the mean square error (MSE) is presented to search for the optimal noise-reduction threshold, which is selected when the minimum MSE is obtained. Processing simulated signals and on-site experimental data with this method shows that background noises such as narrowband noise can be reduced efficiently. Furthermore, in comparison with the conventional wavelet de-noising method, the adaptive method performs better at preserving the pulses and is more adaptive when suppressing the background noises of PD signals.
In this paper, a new type of narrow-band function is proposed for the artificial time history simulation method based on narrow-band superposition, which aims to meet the needs of both fitting the target response spectrum and enveloping the power spectral density. The new narrow-band function is based on the normal distribution function and trigonometric functions; its bandwidth can be controlled, and it decays rapidly on both sides. While the target response spectrum is fitted by superimposing the new narrow-band time histories, the power spectral density is enveloped by modifying the Fourier amplitude spectrum locally. The numerical example demonstrates that the artificial time history generated by this algorithm not only matches the target response spectrum with high precision, but its calculated power spectrum also envelopes the target power spectrum. (authors)
Qing Guo
2015-04-01
A gait identification method for a lower extremity exoskeleton is presented in order to identify the gait sub-phases in human-machine coordinated motion. First, a sensor layout for the exoskeleton is introduced. Taking the difference between human lower limb motion and human-machine coordinated motion into account, the walking gait is divided into five sub-phases, which are ‘double standing’, ‘right leg swing and left leg stance’, ‘double stance with right leg front and left leg back’, ‘right leg stance and left leg swing’, and ‘double stance with left leg front and right leg back’. The sensors include shoe pressure sensors, knee encoders, and thigh and calf gyroscopes, and are used to measure the contact force of the foot, and the knee joint angle and its angular velocity. Then, five sub-phases of walking gait are identified by a C4.5 decision tree algorithm according to the data fusion of the sensors’ information. Based on the simulation results for the gait division, identification accuracy can be guaranteed by the proposed algorithm. Through the exoskeleton control experiment, a division of five sub-phases for the human-machine coordinated walk is proposed. The experimental results verify this gait division and identification method. They can make hydraulic cylinders retract ahead of time and improve the maximal walking velocity when the exoskeleton follows the person’s motion.
Fantin, Yuri S.; Neverov, Alexey D; Favorov, Alexander V.; Alvarez-Figueroa, Maria V.; Braslavskaya, Svetlana I.; Gordukova, Maria A.; Karandashova, Inga V.; Kuleshov, Konstantin V.; Myznikova, Anna I; Polishchuk, Maya S.; Reshetov, Denis A.; Voiciehovskaya, Yana A.; Mironov, Andrei A.; Chulanov, Vladimir P.
2013-01-01
Sanger sequencing is a common method of reading DNA sequences. It is less expensive than high-throughput methods, and it is appropriate for numerous applications including molecular diagnostics. However, sequencing mixtures of similar DNA of pathogens with this method is challenging. This is important because most clinical samples contain such mixtures, rather than pure single strains. The traditional solution is to sequence selected clones of PCR products, a complicated, time-consuming, and ...
3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm
Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia
2015-01-01
Positron emission tomographs (PET) do not measure an image directly. Instead, they measure at the boundary of the field-of-view (FOV) of the PET tomograph a sinogram that consists of measurements of the sums of all the counts along the lines connecting pairs of detectors. As there is a multitude of detectors built into a typical PET tomograph structure, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called image reconstruction). Decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques; this stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
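The MLEM iteration itself is compact: project the current image estimate along the LORs, compare with the measured counts, and back-project the ratio. The sketch below uses a toy dense system matrix; real PET code works with sparse, precomputed or on-the-fly LOR models.

```python
def mlem(A, counts, iters=500):
    """MLEM update: x_j <- (x_j / s_j) * sum_i A[i][j] * counts[i] / (A x)_i,
    where s_j = sum_i A[i][j] is the sensitivity of voxel j.
    A: system matrix (rows = LORs, columns = image voxels)."""
    n_rows, n_cols = len(A), len(A[0])
    x = [1.0] * n_cols                                      # uniform initial image
    sens = [sum(A[i][j] for i in range(n_rows)) for j in range(n_cols)]
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n_cols)) for i in range(n_rows)]
        ratio = [counts[i] / proj[i] if proj[i] > 0.0 else 0.0 for i in range(n_rows)]
        back = [sum(A[i][j] * ratio[i] for i in range(n_rows)) for j in range(n_cols)]
        x = [x[j] * back[j] / sens[j] for j in range(n_cols)]
    return x
```

The multiplicative form automatically preserves non-negativity of the image, one reason MLEM displaced filtered back-projection in emission tomography.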
Zhuo, Li; Zheng, Jing; Li, Xia; Wang, Fang; Ai, Bin; Qian, Junping
2008-10-01
The high-dimensional feature vectors of hyperspectral data often impose a high computational cost as well as the risk of "overfitting" when classification is performed. Therefore it is necessary to reduce the dimensionality through ways like feature selection. Currently, there are two kinds of feature selection methods: filter methods and wrapper methods. The former requires no feedback from classifiers and estimates the classification performance indirectly; the latter evaluates the "goodness" of the selected feature subset directly, based on classification accuracy. Many experimental results have shown that wrapper methods yield better performance, although they have the disadvantage of high computational cost. In this paper, we present a Genetic Algorithm (GA) based wrapper method for classification of hyperspectral data using a Support Vector Machine (SVM), a state-of-the-art classifier that has found success in a variety of areas. The genetic algorithm, which seeks to solve optimization problems using the methods of evolution, specifically survival of the fittest, was used to optimize both the feature subset (i.e. band subset) of the hyperspectral data and the SVM kernel parameters simultaneously. A special strategy was adopted in the design of the feature-subset part of the chromosome to reduce the computation cost caused by the high-dimensional feature vectors of hyperspectral data. The GA-SVM method was implemented in the ENVI/IDL language and then tested on a HYPERION hyperspectral image. Comparison of the optimized and un-optimized results showed that the GA-SVM method could significantly reduce the computation cost while improving the classification accuracy. The number of bands used for classification was reduced from 198 to 13, while the classification accuracy increased from 88.81% to 92.51%. The optimized values of the two SVM kernel parameters were 95.0297 and 0.2021, respectively, which were different from the
A new method by steering kernel-based Richardson–Lucy algorithm for neutron imaging restoration
Motivated by industrial applications, neutron radiography has become a powerful tool for non-destructive investigation. However, as a result of the combined effects of neutron flux, beam collimation, the limited spatial resolution of the detector, scattering, etc., images made with neutrons are severely degraded by blur and noise. To deal with this, by integrating steering kernel regression into the Richardson–Lucy approach, we present a novel restoration method in this paper, which is capable of suppressing noise while efficiently restoring details of the blurred imaging result. Experimental results show that, compared with other methods, the proposed method can improve the restoration quality both visually and quantitatively.
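The plain Richardson–Lucy iteration that the proposed method builds on alternates a forward blur, a ratio against the observed image, and a correlation with the PSF. The 1-D sketch below omits the steering-kernel regression step that is the paper's actual contribution; the PSF and the zero-padded boundary handling are illustrative assumptions.

```python
def richardson_lucy(blurred, psf, iters=300):
    """1-D Richardson-Lucy deconvolution:
    estimate <- estimate * correlate(psf, observed / convolve(psf, estimate))."""
    n = len(blurred)
    half = len(psf) // 2

    def convolve(sig, kern):
        # Same-size convolution, zero outside the signal.
        out = []
        for i in range(n):
            s = 0.0
            for k, kv in enumerate(kern):
                j = i + k - half
                if 0 <= j < n:
                    s += kv * sig[j]
            out.append(s)
        return out

    estimate = [1.0] * n                   # flat non-negative initial guess
    for _ in range(iters):
        forward = convolve(estimate, psf)
        ratio = [b / f if f > 1e-12 else 0.0 for b, f in zip(blurred, forward)]
        correction = convolve(ratio, psf[::-1])   # adjoint = flipped-kernel convolution
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Like MLEM (of which it is the imaging-deconvolution twin), the multiplicative update keeps the estimate non-negative; regularization such as the steering kernel is what tames its noise amplification at high iteration counts.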
Tim Holmes
2013-12-01
Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioural measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been used as a tool to identify aesthetic preferences (Holmes & Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of colour and shape promoted in the Bauhaus arts school. We used the same 3 shapes (square, circle, triangle) used by Kandinsky (1923), with the 3-colour palette from the original experiment (A), an extended 7-colour palette (B), and 8 different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with 8 stimuli of different shapes, colours and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested 6 participants extensively on the different conditions and found consistent preferences for individuals, but little evidence at the group level for preferences consistent with Kandinsky’s claims, apart from a weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of colour and shapes, but also that these associations are robust within a single individual. These individual differences go some way towards challenging the claims of universal preference for colour/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics.
HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM
Narendran Rajagopalan
2012-01-01
The performance of a Wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The Media Access Control layer is responsible for key functions such as access control and flow control. During contention, a backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying different variations of backoff algorithms that have been proposed, a new variant called the History Based Probabilistic Backoff algorithm is proposed. Mathematical analysis and simulation results using NS-2 show that the proposed History Based Probabilistic Backoff algorithm performs better than the Binary Exponential Backoff algorithm.
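One way such a history-based backoff could work, purely as an illustration (the class name, window bounds, and the linear window-scaling rule below are assumptions for the sketch, not the paper's exact scheme), is to scale the contention window with the collision rate observed over a sliding history instead of doubling it blindly as Binary Exponential Backoff does:

```python
import random

class HistoryBackoff:
    """Backoff whose contention window adapts to recent collision history."""

    def __init__(self, cw_min=16, cw_max=1024, history=16, seed=0):
        self.cw_min, self.cw_max = cw_min, cw_max
        self.history = history
        self.events = []                  # 1 = collision, 0 = success
        self.rng = random.Random(seed)

    def record(self, collided):
        """Log the outcome of a transmission attempt, keeping a sliding window."""
        self.events.append(1 if collided else 0)
        self.events = self.events[-self.history:]

    def collision_prob(self):
        """Empirical collision probability over the recorded history."""
        return sum(self.events) / len(self.events) if self.events else 0.0

    def next_backoff(self):
        # Window scales between cw_min and cw_max with the estimated
        # collision probability, rather than doubling on every collision.
        cw = self.cw_min + (self.cw_max - self.cw_min) * self.collision_prob()
        return self.rng.randrange(int(cw) + 1)
```

After a collision-free stretch the window shrinks back toward `cw_min`, avoiding the wasted idle slots that a large residual window would cause.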
A sport scene images segmentation method based on edge detection algorithm
Chen, Biqing
2011-12-01
This paper proposes a simple, fast segmentation method for sports scene images. Much work so far has sought ways to reduce the effect of varying shading in smooth regions, and a novel preprocessing step is proposed here to eliminate these shading variations. An internal filling mechanism is then used to relabel pixels enclosed by pixels of interest as pixels of interest themselves. Tests on sports scene images have confirmed the effectiveness of the method.
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
Numerical Method based on SIMPLE Algorithm for a Two-Phase Flow with Non-condensable Gas
In this study, a numerical method based on the SIMPLE algorithm for two-phase flow with non-condensable gas has been developed in order to simulate the thermal hydraulics in the containment of a nuclear power plant. As governing equations, it adopts a two-fluid three-field model for two-phase flows, the three fields being gas, drops, and continuous liquid. The gas field can contain vapor and non-condensable gases such as air and hydrogen. In order to resolve mixing phenomena of gas species, gas transport equations for each species, based on the gas mass fractions, are solved together with the gas-phase governing equations for mass, momentum and energy. Methods to evaluate the properties of the gas species were implemented in the code: constant values or polynomial functions based on user input, and a property library from Chemkin and the JANAF tables for gas specific heat. Properties of the gas mixture that depend on the mole fractions of the gas species were evaluated by a mixing rule.
Quasigroup based crypto-algorithms
Shcherbacov, Victor
2012-01-01
Modifications of the Markovski quasigroup-based crypto-algorithm are proposed. Some of these modifications are based on systems of orthogonal n-ary groupoids. T-quasigroup-based stream ciphers are also constructed.
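The underlying Markovski scheme is short enough to sketch: with a quasigroup operation * and a leader l, encryption chains b_i = b_{i-1} * a_i (with b_0 = l), and decryption inverts each step by left division. The particular Latin square below is an arbitrary illustrative choice, not one from the paper.

```python
def make_quasigroup(n=5):
    """Cayley table of the quasigroup x * y = (x + 2y + 1) mod n. For odd n the
    table is a Latin square, hence a quasigroup; any Latin square would do."""
    return [[(x + 2 * y + 1) % n for y in range(n)] for x in range(n)]

def markovski_encrypt(table, leader, msg):
    out, prev = [], leader
    for a in msg:
        prev = table[prev][a]              # b_i = b_{i-1} * a_i
        out.append(prev)
    return out

def markovski_decrypt(table, leader, cipher):
    n = len(table)
    # Left division: for each row r, invert the bijection x -> r * x.
    left_div = [{table[r][x]: x for x in range(n)} for r in range(n)]
    out, prev = [], leader
    for b in cipher:
        out.append(left_div[prev][b])      # a_i = b_{i-1} \ b_i
        prev = b
    return out
```

Because each ciphertext symbol feeds the next encryption step, a single-symbol change propagates through the rest of the stream, which is the error-propagation property the quasigroup literature highlights.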
Edge Crossing Minimization Algorithm for Hierarchical Graphs Based on Genetic Algorithms
(no author listed)
2001-01-01
We present an edge crossing minimization algorithm for hierarchical graphs based on genetic algorithms, and compare it with some heuristic algorithms. The proposed algorithm is more efficient and has the following advantages: the frame of the algorithms is unified, the method is simple, and its implementation and revision are easy.
U. Jeong
2015-06-01
An online version of the OMI (Ozone Monitoring Instrument) near-ultraviolet (UV) aerosol retrieval algorithm was developed to retrieve aerosol optical thickness (AOT) and single scattering albedo (SSA) based on the optimal estimation (OE) method. Instead of using the traditional look-up tables for radiative transfer calculations, it performs online radiative transfer calculations with the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model to eliminate interpolation errors and improve stability. The OE-based algorithm has the merit of providing useful estimates of uncertainties simultaneously with the inversion products. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in Northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved AOT and SSA. The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The estimated retrieval noise and smoothing error perform well in representing the envelope curve of actual biases of AOT at 388 nm between the retrieved AOT and AERONET measurements. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface albedo at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for future studies.
Genetic algorithms as global random search methods
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
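The proportional selection and recombination operators characterized above can be put together in a minimal genetic algorithm. The sketch below is illustrative only: the OneMax fitness function and all parameter values are our assumptions, not taken from the paper.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal GA: proportional (roulette-wheel) selection, one-point
    crossover, and bit-flip mutation over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        total = sum(fits) or 1.0

        def select():
            # Proportional selection: probability of being picked ~ fitness.
            r, acc = rng.uniform(0, total), 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return ind
            return pop[-1]

        children = []
        while len(children) < pop_size:
            a, b = select()[:], select()[:]
            if rng.random() < crossover_rate:      # one-point crossover
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for i in range(n_bits):            # bit-flip mutation
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                children.append(child)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)      # track best-ever solution
    return best

# OneMax: fitness is the number of ones; the optimum is the all-ones string.
best = genetic_algorithm(sum)
```

Note that selection here samples candidates in proportion to raw fitness, matching the characterization of proportional selection as a global search operator, while crossover exploits similarities between the selected parents.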
An Improved DDoS Detection Method with EAQPSO-SVM Algorithm Based on Data Center Network
Peng Yu
2014-01-01
This study presents a novel approach for the prediction and detection of distributed denial of service (DDoS) attacks using the EAQPSO-SVM algorithm, which combines an improved Quantum-behaved Particle Swarm Optimization (QPSO) algorithm with Support Vector Machine (SVM) theory. In order to improve the global searching performance of the classical QPSO algorithm and avoid falling into local extreme points, the Evolution Speed Factor (ESF) and Aggregation Degree Factor (ADF) are introduced in the EAQPSO-SVM algorithm to establish a binary relation function that corrects the self-adaptive expansion-contraction coefficient. Furthermore, a hybrid entropy strategy is proposed to identify potential DDoS attacks by comparing the mean value entropy with an average alarm threshold. Simulation results demonstrate that the proposed method remarkably improves the prediction and detection of DDoS attacks. Meanwhile, a novel performance evaluation framework further shows that the proposed algorithm has better generalization ability and superior performance in terms of algorithm execution time, average iterations, average relative variance, and root mean square error.
HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM
Narendran Rajagopalan; C.Mala
2012-01-01
The performance of a wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The Media Access Control layer is responsible for key functions such as access control and flow control. During contention, a backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying different variations of backoff algorithms that have been proposed, a new variant called the History Based Probabilistic Backoff algorithm is proposed....
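The paper's history-based variant is only summarized above; as a reference point, the standard binary exponential backoff that such variants modify can be sketched as follows. The window sizes and slot semantics are illustrative assumptions, not the paper's parameters.

```python
import random

def binary_exponential_backoff(n_collisions, cw_min=16, cw_max=1024,
                               rng=random):
    """Baseline contention backoff: after each collision the contention
    window doubles (capped at cw_max) and the station defers a uniformly
    random number of slots drawn from that window."""
    cw = min(cw_min * (2 ** n_collisions), cw_max)
    return rng.randrange(cw)  # number of slots to wait before retrying
```

A history-based variant would replace the uniform draw with a distribution shaped by the station's past collision history.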
Genetic algorithm and particle swarm optimization combined with Powell method
Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui
2013-10-01
In recent years, population-based algorithms have become increasingly robust and easy to use. Inspired by Darwin's theory of evolution, they search for the best solution using a population that progresses over several generations. This paper presents variants of a hybrid genetic algorithm (Genetic Algorithm) and a bio-inspired hybrid algorithm (Particle Swarm Optimization), both combined with a local method, the Powell method. The developed methods were tested with twelve test functions from the unconstrained optimization context.
Koo, Keunhwi; Kim, Soo-Yong; Jeong, Jae Jin; Kim, Sang Woo
2014-09-01
This study introduces a two-dimensional (2D) partial response maximum likelihood (PRML) method to reconstruct a degraded data page having 2D inter-symbol interference for holographic data storage. The proposed 2D PRML method consists of a 2D partial response (PR) target, a 2D equalizer using the least mean square algorithm, and a 2D soft output Viterbi algorithm (SOVA) having just two one-dimensional (1D) SOVAs in the horizontal and vertical directions. To accurately organize a trellis diagram of the 1D SOVA in structural accordance with the 2D PR target, this study proposes a self-reference process for the extrinsic information in the 1D SOVA. Finally, simulation results show that the proposed method has bit error rate performance similar to that of a modified 2D SOVA having four 1D SOVAs, despite the relatively low computational complexity. Moreover, parallel processing is possible in the two 1D SOVAs through the self-reference process.
Eigenvalue Decomposition-Based Modified Newton Algorithm
Wen-jun Wang
2013-01-01
When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method named the eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
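The eigenvalue-replacement step described in this abstract can be sketched with NumPy. This is a minimal illustration of the idea (the variable names and the toy Hessian are ours, not from the paper):

```python
import numpy as np

def modified_newton_step(grad, hess):
    """Replace negative Hessian eigenvalues by their absolute values so the
    rebuilt Hessian is positive definite and the step is a descent direction."""
    w, V = np.linalg.eigh(hess)            # eigendecomposition (symmetric H)
    w_mod = np.abs(w)                      # flip negative eigenvalues
    h_mod = V @ np.diag(w_mod) @ V.T       # reconstruct the Hessian
    return -np.linalg.solve(h_mod, grad)   # modified Newton direction

# Indefinite Hessian: the plain Newton step need not descend here.
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])
step = modified_newton_step(g, H)
# step = [-1/2, -1]; g @ step < 0, so it is a descent direction.
```

Note the method assumes the rebuilt Hessian is nonsingular; a zero eigenvalue would still need regularization, which the abstract does not address.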
Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein
2010-01-01
In this paper, degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and blur function using the Newton optimization method, and then the estimation procedure is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimation using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files. The MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high load of information in images and the data processed in the "for" loops of the relevant code make MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops. These eight loops utilize 60% of the total execution time of the entire program and so the runtime should be
Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the matching function's ability to distinguish between good and bad models is recommended before launching exhaustive computations. Different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets (when the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the Simplex method, and the least-squares linear Taylor differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods reset the search direction when the search is likely stuck in a presumed local minimum. Deterministic methods based on
Lüdtke Rainer; Willich Stefan N; Ostermann Thomas
2008-01-01
Background: Regression to the mean (RTM) occurs in situations of repeated measurements when extreme values are followed by measurements in the same subjects that are closer to the mean of the basic population. In uncontrolled studies such changes are likely to be interpreted as a real treatment effect. Methods: Several statistical approaches have been developed to analyse such situations, including the algorithm of Mee and Chua, which assumes a known population mean μ. We extend this ap...
Evolutionary algorithm based configuration interaction approach
Chakraborty, Rahul
2016-01-01
A stochastic configuration interaction method based on an evolutionary algorithm is designed as an affordable approximation to full configuration interaction (FCI). The algorithm comprises initiation, propagation, and termination steps, where the propagation step is performed with cloning, mutation, and cross-over, taking inspiration from genetic algorithms. We have tested its accuracy on the 1D Hubbard problem and a molecular system (symmetric bond breaking of the water molecule). We have tested two different fitness functions, based on the energy of the determinants and on the CI coefficients of determinants. We find that the absolute value of the CI coefficients is a more suitable fitness function when combined with a fixed selection scheme.
An Experimental Method for the Active Learning of Greedy Algorithms
Velazquez-Iturbide, J. Angel
2013-01-01
Greedy algorithms constitute an apparently simple algorithm design technique, but its learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…
An image segmentation method based on accelerated Dijkstra algorithm
戴虹
2011-01-01
An optimal path searching algorithm, Dijkstra's algorithm, is used for image segmentation. An accelerated Dijkstra algorithm is presented to reduce the computation of the classical Dijkstra algorithm and to increase its speed. A Live-Wire image segmentation method based on the accelerated Dijkstra algorithm is presented to sketch the contour of the object of interest in an image, and an area filling method is used to segment the object. The experimental results show that the algorithm performs image segmentation successfully and has good anti-noise ability; in addition, the algorithm requires fewer interactions than the manual segmentation method and runs faster than the original Live-Wire algorithm.
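The classical Dijkstra algorithm at the core of the Live-Wire method can be sketched with a binary heap. This is the textbook shortest-path algorithm, not the paper's accelerated variant; the toy graph is ours.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph given as
    {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
# shortest a -> c path goes via b with total cost 3
```

In Live-Wire segmentation the nodes are pixels and the edge weights are gradient-based local costs, so the shortest path hugs object boundaries.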
We propose a new method for selecting importance factors (for regions of interest like organs at risk) used to plan conformal radiotherapy. Importance factors, also known as weighting factors or penalty factors, are essential in determining the relative importance of multiple objectives or the penalty ratios of constraints incorporated into cost functions, especially in dealing with dose optimization in radiotherapy treatment planning. Researchers usually choose importance factors on the basis of a trial-and-error process to reach a balance between all the objectives. In this study, we used a genetic algorithm and adopted a real-number encoding method to represent both beam weights and importance factors in each chromosome. The algorithm starts by optimizing the beam weights for a fixed number of iterations then modifying the importance factors for another fixed number of iterations. During the first phase, the genetic operators, such as crossover and mutation, are carried out only on beam weights, and importance factors for each chromosome are not changed or 'frozen'. In the second phase, the situation is reversed: the beam weights are 'frozen' and the importance factors are changed after crossover and mutation. Through alternation of these two phases, both beam weights and importance factors are adjusted according to a fitness function that describes the conformity of dose distribution in planning target volume and dose-tolerance constraints in organs at risk. Those chromosomes with better fitness are passed into the next generation, showing that they have a better combination of beam weights and importance factors. Although the ranges of the importance factors should be set in advance by using this algorithm, it is much more convenient than selecting specific numbers for importance factors. Three clinical examples are presented and compared with manual plans to verify this method. Three-dimensional standard displays and dose-volume histograms are shown to
Research of the Kernel Operator Library Based on Cryptographic Algorithm
王以刚; 钱力; 黄素梅
2001-01-01
The encryption mechanisms and algorithms conventionally used have some limitations. A kernel operator library based on cryptographic algorithms is put forward. Owing to the impenetrability of the algorithms, a data transfer system using the cryptographic algorithm library has remarkable advantages over traditional approaches in algorithm rebuilding and optimization, easy addition and deletion of algorithms, and improved security. Because the cryptographic algorithm library is extensible, the user can choose any of the algorithms as a countermeasure against attack.
Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)
2015-10-11
The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by taking the ratio of the net peak areas between the two detectors, referencing the detection spectrum of the LaBr3 detector to the accuracy of the detection spectrum of the HPGe detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2 = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.
Ramos, A; Talaia, P; Queirós de Melo, F J
2016-01-01
The main goal of this work was to develop an approximate model to study the dynamic behavior and predict the stress distribution in an in vitro Charnley cemented hip arthroplasty. An alternative version of the described pseudo-dynamic procedure is proposed using the Newmark time integration algorithm. An internal restoring force vector is numerically calculated from the displacement, velocity, and acceleration vectors. A numerical model of hip replacement was developed to analyze the deformation of a dynamically stressed structure for all time steps. The experimental measurement of the resulting internal forces generated in the structure (the internal restoring force vector) is the second fundamental step of the pseudo-dynamic procedure. These data are used as feedback by the time integration algorithm, which updates the structure's shape for the next displacement, velocity, and acceleration vectors. In the field of biomechanics, this method contributes to the determination of a dynamically equivalent in vitro stress field of a cemented hip prosthesis, an implant fitted in patients with normal mobility or who practice sports. Consequences of the stress distribution in the implant zone under cyclic fatigue loads were also discussed using a finite element model. Application of this method in biomechanics appears to be a useful tool for the approximate characterization of the peak stress state. Results show a peak value around two times that of the static situation, making possible the prediction of future damage and programmed clinical examination of patients using hip prostheses. PMID:25483822
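The Newmark time integration driving the pseudo-dynamic procedure can be sketched for a single degree of freedom. A minimal sketch under assumed parameters: the average-acceleration choice β = 1/4, γ = 1/2 and the test oscillator are ours, not the paper's model.

```python
def newmark_step(m, c, k, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One step of Newmark time integration for m*u'' + c*u' + k*u = f(t).
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration
    scheme."""
    # effective stiffness and effective load at t + dt
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    f_eff = (f_next
             + m * (u / (beta * dt**2) + v / (beta * dt)
                    + (1.0 / (2 * beta) - 1.0) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2 * beta) - 1.0) * a))
    u_next = f_eff / k_eff
    a_next = ((u_next - u) / (beta * dt**2) - v / (beta * dt)
              - (1.0 / (2 * beta) - 1.0) * a)
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Undamped oscillator u'' + u = 0 with u(0)=1, v(0)=0: exact solution cos(t).
u, v, a = 1.0, 0.0, -1.0
dt = 0.01
for _ in range(628):                      # integrate to t ≈ 2*pi
    u, v, a = newmark_step(1.0, 0.0, 1.0, 0.0, u, v, a, dt)
```

In the pseudo-dynamic test, the term k*u is not computed from a stiffness model but replaced by the measured internal restoring force fed back from the specimen.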
Novel biomedical tetrahedral mesh methods: algorithms and applications
Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu
2007-12-01
Tetrahedral mesh generation algorithms, as a prerequisite of many soft tissue simulation methods, become very important in virtual surgery programs because of the real-time requirement. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance between quality of tetrahedra, boundary preservation, and time complexity, with many improved methods. Another mesh algorithm named Space-Disassembling is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm, and the revised Delaunay algorithm is performed on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.
A New Method of Detecting Pulmonary Nodules with PET/CT Based on an Improved Watershed Algorithm
Zhao, Juanjuan; Ji, Guohua; Qiang, Yan; Han, Xiaohong; Pei, Bo; Shi, Zhenghao
2015-01-01
Background: Integrated 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) is widely performed for staging solitary pulmonary nodules (SPNs). However, the diagnostic efficacy for SPNs based on PET/CT is not optimal. Here, we propose a detection method based on PET/CT that can differentiate malignant and benign SPNs with few false-positives. Method: Our proposed method combines the features of positron-emission tomography (PET) and computed tomography (CT)....
A novel algorithmic method for piezoresistance calculation
A novel algorithmic method, based on the different stress distribution on the surface of a thin film in an SOI microstructure, is put forward to calculate the value of the silicon piezoresistance on the sensitive film. In the proposed method, we take the Ritz method as an initial theoretical model to calculate the piezoresistance ratio ΔR/R through an integral (the closed area Ω where the surface piezoresistance of the film lies as the integral area, and the product of the stress σ and the piezoresistive coefficient π as the integrand) and compare the theoretical values with the experimental results. Compared with the traditional method, this novel calculation method is more accurate when applied to calculating the value of the silicon piezoresistance on the sensitive film of an SOI piezoresistive pressure sensor. (semiconductor devices)
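In the notation of the abstract, the integral it describes can be written as follows; the area normalization is our assumption, since the abstract does not state the prefactor explicitly:

```latex
\frac{\Delta R}{R} \;=\; \frac{1}{|\Omega|} \iint_{\Omega} \pi(x,y)\,\sigma(x,y)\,\mathrm{d}\Omega
```

where σ(x, y) is the surface stress, π(x, y) the piezoresistive coefficient, and Ω the closed film area carrying the piezoresistor.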
Seizure detection algorithms based on EMG signals
Conradsen, Isa
Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was … patients, while the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizure detection system. However, different patients might require different types of algorithms/approaches.
SOM-based algorithms for qualitative variables.
Cottrell, Marie; Ibbou, Smaïl; Letrémy, Patrick
2004-01-01
It is well known that the SOM algorithm achieves a clustering of data which can be interpreted as an extension of Principal Component Analysis, because of its topology-preserving property. But the SOM algorithm can only process real-valued data. In previous papers, we have proposed several methods based on the SOM algorithm to analyze categorical data, which is the case in survey data. In this paper, we present these methods in a unified manner. The first one (Kohonen Multiple Correspondence Analysis, KMCA) deals only with the modalities, while the two others (Kohonen Multiple Correspondence Analysis with individuals, KMCA_ind; Kohonen algorithm on DISJonctive table, KDISJ) can take into account the individuals and the modalities simultaneously. PMID:15555858
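The standard real-valued SOM that KMCA and KDISJ extend can be sketched in one dimension. A minimal sketch with assumed parameters (map size, decay schedules, and the toy data are ours):

```python
import numpy as np

def train_som(data, n_units=5, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D Kohonen SOM: each presented sample pulls its
    best-matching unit (BMU) and the BMU's map neighbours towards it,
    with learning rate and neighbourhood width decaying over time."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_units, data.shape[1]))
    idx = np.arange(n_units)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                 # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.1     # decaying neighbourhood width
        for x in rng.permutation(data):
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma**2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Two well-separated clusters in the plane; units should settle on them.
data = np.vstack([np.zeros((20, 2)), np.ones((20, 2))])
som = train_som(data)
```

The neighbourhood function h is what gives the map its topology-preserving property; the categorical variants in the paper replace the Euclidean input space with encodings derived from the disjunctive table.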
Zare Hosseinzadeh, Ali; Bagheri, Abdollah; Ghodrati Amiri, Gholamreza; Koo, Ki-Young
2014-04-01
In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the existence of some limitations in the number of attached sensors on structures. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors.
Bechet, P.; Mitran, R.; Munteanu, M.
2013-08-01
Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate in simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate was measured using a classic measurement system through direct contact.
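A minimal MUSIC pseudospectrum for a one-dimensional signal can be sketched as follows. The snapshot length, the test sinusoid, and the grid are illustrative assumptions; the paper's optimization of the signal-subspace dimension is not reproduced here.

```python
import numpy as np

def music_spectrum(x, n_sources, freqs, m=20):
    """MUSIC pseudospectrum: estimate the autocorrelation matrix from
    length-m snapshots, keep the m - n_sources eigenvectors spanning the
    noise subspace, and score each candidate frequency f (cycles/sample)
    by 1 / ||E_n^H a(f)||^2."""
    snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    r = snaps.conj().T @ snaps / snaps.shape[0]   # autocorrelation matrix
    w, v = np.linalg.eigh(r)                      # eigenvalues ascending
    e_noise = v[:, : m - n_sources]               # noise-subspace eigenvectors
    k = np.arange(m)
    spec = [1.0 / np.linalg.norm(e_noise.conj().T
                                 @ np.exp(-2j * np.pi * f * k)) ** 2
            for f in freqs]
    return np.array(spec)

# One real sinusoid at 0.1 cycles/sample = two complex exponentials,
# so the signal subspace has dimension 2.
n = np.arange(400)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.1 * n) + 0.01 * rng.standard_normal(400)
freqs = np.linspace(0.01, 0.49, 97)
spec = music_spectrum(x, n_sources=2, freqs=freqs)
```

The pseudospectrum peaks sharply where the steering vector falls in the signal subspace, which is why MUSIC resolves the heart-rate line from short observation windows.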
Knowledge-based tracking algorithm
Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.
1990-10-01
This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing are performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real time delay of less than one second between illumination and display.
Ying-Chih Lai
2016-05-01
The demand for pedestrian navigation has increased along with the rapid progress in mobile and wearable devices. This study develops an accurate and usable Step Length Estimation (SLE) method for a Pedestrian Dead Reckoning (PDR) system with features including a wide range of step lengths, a self-contained system, and real-time computing, based on multi-sensor fusion and Fuzzy Logic (FL) algorithms. The wide-range SLE developed in this study was achieved by using a knowledge-based method to model the walking patterns of the user. The input variables of the FL are step strength and frequency, and the output is the estimated step length. Moreover, a waist-mounted sensor module has been developed using low-cost inertial sensors. Since low-cost sensors suffer from various errors, a calibration procedure has been utilized to improve accuracy. The proposed PDR scheme demonstrates its ability to be implemented on waist-mounted devices in real time and is suitable for the indoor and outdoor environments considered in this study without the need for map information or any pre-installed infrastructure. The experiment results show that the maximum distance error was within 1.2% of 116.51 m in an indoor environment and 1.78% of 385.2 m in an outdoor environment.
Generalized Evolutionary Algorithm based on Tsallis Statistics
Dukkipati, Ambedkar; Murty, M. Narasimha; Bhatnagar, Shalabh
2004-01-01
A generalized evolutionary algorithm based on the Tsallis canonical distribution is proposed. The algorithm uses the Tsallis generalized canonical distribution to weigh configurations for 'selection' instead of the Gibbs-Boltzmann distribution. Our simulation results show that, for an appropriate choice of the non-extensive index offered by Tsallis statistics, evolutionary algorithms based on this generalization outperform algorithms based on the Gibbs-Boltzmann distribution.
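The selection weighting described above can be sketched with the standard Tsallis q-exponential form, which reduces to the Gibbs-Boltzmann weights as q approaches 1; the function name `tsallis_weights` and the toy cost values are illustrative assumptions, not the authors' code.

```python
import math

def tsallis_weights(energies, q, beta):
    """Selection weights from the Tsallis generalized canonical
    distribution; q -> 1 recovers the Gibbs-Boltzmann weights."""
    if abs(q - 1.0) < 1e-12:
        w = [math.exp(-beta * e) for e in energies]
    else:
        # q-exponential: [1 - (1-q)*beta*E]^(1/(1-q)), clipped at zero.
        w = [max(0.0, 1.0 - (1.0 - q) * beta * e) ** (1.0 / (1.0 - q))
             for e in energies]
    z = sum(w)
    return [x / z for x in w]

# Lower energy = fitter individual. With q > 1 the weights decay as a
# power law rather than exponentially, keeping more selection pressure
# on less-fit (more diverse) individuals.
costs = [0.1, 0.5, 1.0, 2.0]
p_boltzmann = tsallis_weights(costs, q=1.0, beta=1.0)
p_tsallis = tsallis_weights(costs, q=1.5, beta=1.0)
```

The heavier tail for q > 1 is what lets the non-extensive index trade off exploitation against diversity during selection.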
Application of detecting algorithm based on network
张凤斌; 杨永田; 江子扬; 孙冰心
2004-01-01
Because current intrusion detection systems cannot detect undefined intrusion behavior effectively, and given the robustness and adaptability of genetic algorithms, this paper integrates genetic algorithms into an intrusion detection system, and a detection algorithm based on network traffic is proposed. This algorithm is a real-time, self-learning algorithm that can detect undefined intrusion behaviors effectively.
Swarm Intelligence Based Algorithms: A Critical Analysis
Yang, Xin-She
2014-01-01
Many optimization algorithms have been developed by drawing inspiration from swarm intelligence (SI). These SI-based algorithms can have some advantages over traditional algorithms. In this paper, we carry out a critical analysis of these SI-based algorithms by analyzing the ways they mimic evolutionary operators. We also analyze the ways of achieving exploration and exploitation in algorithms by using mutation, crossover and selection. In addition, we also look at algorithms using dynamic sy...
A Hybrid Algorithm for Satellite Data Transmission Schedule Based on Genetic Algorithm
LI Yun-feng; WU Xiao-yue
2008-01-01
A hybrid scheduling algorithm based on a genetic algorithm is proposed in this paper for reconnaissance satellite data transmission. First, based on a description of the satellite data transmission request, the satellite data transmission task model and the satellite data transmission scheduling problem model are established. Secondly, the conflicts in scheduling are discussed; according to the meaning of a possible conflict, a method to divide the possible conflict task set is given. Thirdly, a hybrid algorithm combining a genetic algorithm with heuristic information is presented. The heuristic information comes from two concepts: conflict degree and conflict number. Finally, an example shows the algorithm's feasibility and its better performance compared with other traditional algorithms.
Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2015-10-01
The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by taking ratios of the net peak areas between the two detectors, correcting the detection spectrum of the LaBr3 detector against the more accurate detection spectrum of the HPGe detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2=0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible.
Bisheng He
2014-01-01
A time-space network based optimization method is designed for the high-speed rail train timetabling problem to improve the service level of high-speed rail. A general time-space path cost is presented which considers both the train travel time and the high-speed rail operation requirements: (1) service frequency requirement; (2) stopping plan adjustment; and (3) priority of train types. The train timetabling problem based on time-space paths aims to minimize the total general time-space path cost of all trains. An improved branch-and-price algorithm is applied to solve the large scale integer programming problem. Within the algorithm, rapid branching and node selection for the branch-and-price tree and a heuristic train time-space path generation for column generation are adopted to speed up computation. The computational results of a set of experiments on China's high-speed rail system are presented with discussions about the model validation, the effectiveness of the general time-space path cost, and the improved branch-and-price algorithm.
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned by the worst-case scenario tuning method. This is based on trial and error, and is affected by driving and programming experience, making it the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues, and a simulator that fails to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching its physical limitations.
XIA Yuan-yuan; SHAO He-song; LI Shi-xiong; LU Jing-yu
2012-01-01
Fast and accurate calculation of the seismic wave source location is essential for microseismic monitoring. Most traditional microseismic monitoring processes in mines use the TDOA location method in two-dimensional space to position microseismic events; the two-dimensional model and the simplicity of the method may reduce positioning accuracy, and the ill-conditioned equations produced by the TDOA location method increase the positioning error. Based on inversion theory, this article studies the mathematical models of the TDOA location method, the polarization analysis location method, and a comprehensive difference location method that adds an angle factor to the traditional TDOA location method. The feasibility of the three methods is verified by numerical simulation and analysis of their positioning errors. The results show that the comprehensive location method with the added angle difference has strong positioning stability and high positioning accuracy, and may effectively reduce the impact of ill-conditioned equations on positioning results. The comprehensive location method applied to actual measurement data may yield better positioning results.
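For orientation, the core TDOA idea can be illustrated with a brute-force residual grid search in 2-D; this sidesteps the ill-conditioned linearized equations the paper discusses, and the sensor layout, wave speed, and source position below are hypothetical.

```python
import math

def tdoa_residual(src, sensors, tdoas, v):
    """Sum of squared residuals between measured TDOAs (relative to
    sensor 0) and those predicted for a candidate source position."""
    d = [math.dist(src, s) for s in sensors]
    return sum((tdoas[i] - (d[i] - d[0]) / v) ** 2
               for i in range(1, len(sensors)))

def locate_grid(sensors, tdoas, v, xr, yr, step):
    """Brute-force 2-D grid search for the minimum-residual location."""
    best, best_r = None, float("inf")
    x = xr[0]
    while x <= xr[1]:
        y = yr[0]
        while y <= yr[1]:
            r = tdoa_residual((x, y), sensors, tdoas, v)
            if r < best_r:
                best, best_r = (x, y), r
            y += step
        x += step
    return best

# Hypothetical 4-sensor array (meters), P-wave speed 5000 m/s,
# true source at (30, 40); TDOAs are generated noise-free.
sensors = [(0, 0), (100, 0), (0, 100), (100, 100)]
v = 5000.0
true = (30.0, 40.0)
d = [math.dist(true, s) for s in sensors]
tdoas = [(di - d[0]) / v for di in d]
est = locate_grid(sensors, tdoas, v, (0, 100), (0, 100), 1.0)
```

A grid search is robust but slow; the inversion-based and angle-augmented formulations studied in the paper aim at the same minimum with far less computation and better conditioning.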
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-01-01
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The false matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibrating amplitudes of the video become increasingly large.
Differential Search Algorithm Based Edge Detection
Gunen, M. A.; Civicioglu, P.; Beşdok, E.
2016-06-01
In this paper, a new method is presented for the extraction of edge information using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.
Structure-Based Algorithms for Microvessel Classification
Smith, Amy F.
2015-02-01
Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.
Duality based optical flow algorithms with applications
Rakêt, Lars Lau
We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X... chromatograms, where registration only has to be done in one of the two dimensions, resulting in a vector valued registration problem with values having several hundred dimensions. We propose a novel method for solving this problem, where instead of a vector valued data term, the different channels are coupled...
Graphical model construction based on evolutionary algorithms
Youlong YANG; Yan WU; Sanyang LIU
2006-01-01
Using Bayesian networks to model promising solutions from the current population of an evolutionary algorithm can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem and consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by researching the local metric relationship of networks matching the dataset. This paper presents an algorithm to construct a tree model from a set of potential solutions using the above approach. This method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.
DOA estimation method based on blind source separation algorithm
徐先峰; 刘义艳; 段晨东
2012-01-01
A new DOA (direction-of-arrival) estimation method based on an algorithm for fast blind source separation (FBSS-DOA) is proposed in this paper. A group of correlation matrices possessing diagonal structure is generated, and a joint diagonalization cost function for blind source separation is introduced. For solving this cost function, a fast multiplicative iterative algorithm in the complex-valued domain is utilized. The demixing matrix is then estimated and the DOA estimation is realized. Compared with familiar algorithms, this algorithm has greater generality and better estimation performance. The simulation results verify its fast convergence and superior estimation performance.
Gaubert, Stephane; Qu, Zheng
2011-01-01
Max-plus based methods have been recently developed to approximate the value function of possibly high dimensional optimal control problems. A critical step of these methods consists in approximating a function by a supremum of a small number of functions (max-plus "basis functions") taken from a prescribed dictionary. We study several variants of this approximation problem, which we show to be continuous versions of the facility location and $k$-center combinatorial optimization problems, in which the connection costs arise from a Bregman distance. We give theoretical error estimates, quantifying the number of basis functions needed to reach a prescribed accuracy. We derive from our approach a refinement of the curse of dimensionality free method introduced previously by McEneaney, with a higher accuracy for a comparable computational cost.
Menghan Wang∗,Zongmin Yue; Lie Meng
2016-01-01
In order to prevent cracking of the workpiece during the hot stamping operation, this paper proposes a hybrid optimization method based on Hammersley sequence sampling (HSS), finite element analysis, a back-propagation (BP) neural network and a genetic algorithm (GA). The mechanical properties of high strength boron steel are characterized on the basis of uniaxial tensile tests at elevated temperatures. The samples of process parameters are chosen via HSS, which encourages exploration throughout the design space and hence achieves better discovery of the possible global optimum in the solution space. Meanwhile, numerical simulation is carried out to predict the forming quality of the optimized design. A BP neural network model is developed to obtain the mathematical relationship between the optimization goal and the design variables, and a genetic algorithm is used to optimize the process parameters. Finally, the results of numerical simulation are compared with those of a production experiment to demonstrate that the proposed optimization strategy is feasible.
Li Ma
2015-01-01
Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the popular clustering algorithms for medical image segmentation. However, FCM has the problems of depending on initial clustering centers, falling into local optimal solutions easily, and sensitivity to noise disturbance. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA). The proposed algorithm combines the artificial fish swarm algorithm (AFSA) with FCM, utilizing the global optimization searching and parallel computing ability of AFSA to find a superior result. Meanwhile, the Metropolis criterion and a noise reduction mechanism are introduced to AFSA to enhance the convergence rate and antinoise ability. An artificial grid graph and Magnetic Resonance Imaging (MRI) are used in the experiments, and the experimental results show that the proposed algorithm has stronger antinoise ability and higher precision. A number of evaluation indicators also demonstrate that the effect of HAFSA is better than that of FCM and suppressed FCM (SFCM).
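For reference, the plain FCM baseline that HAFSA builds on alternates membership and center updates; the sketch below is a generic textbook implementation, not the authors' code, and the fuzzifier m = 2 and the synthetic two-blob data are assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate membership and center updates
    for (at most) `iters` iterations. Returns centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        um = U ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        # Distance of every point to every center (eps avoids divide-by-zero).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=0))
    return centers, U

# Two well-separated blobs; FCM should place one center near each mean.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centers, U = fcm(X, c=2)
```

The random initialization on the first line of `fcm` is exactly the weakness the abstract points at: a bad start can trap the alternating updates in a poor local optimum, which is what the AFSA hybridization is meant to escape.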
Kostrzewa, Daniel; Josiński, Henryk
2016-06-01
The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for multidimensional numerical function optimization. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.
A Simplification Algorithm Based On Appearance Maintenance
Fang Wan
2010-12-01
This paper presents a new simplification algorithm named EQEM, based on the QEM algorithm, to simplify geometry models with texture. The algorithm uses a new framework to integrate geometry and texture factors into the simplification process. Within the framework, an error metric descriptor is described in detail. Firstly, the Gauss curvature of each vertex is calculated to ensure good geometric similarity. Then, the descriptor takes visual importance into account in the simplification. To this end, we introduce an edge detection method from image processing, which uses the Mallat wavelet method to quickly extract the distinct edges from a texture image. We also compute the color distance of the vertices that do not belong to any edge as a supplement to the metric descriptor. Experiments prove that the simplification keeps the sharply simplified model similar in appearance. Because the descriptor is extended from the QEM error metric and can also be calculated by matrix multiplication, the algorithm is highly efficient. Finally, since all the parameters in the error metric formula are adjustable, the algorithm can fit different types of mesh models and simplification scales.
ALGORITHM AND METHODS OF HUMAN RESOURCES EVALUATION
Gontiuk, Viktoriia
2014-01-01
The paper deals with the scientific position and methodical approaches of human resources evaluation and indicates its importance in organization management. This study provides an algorithm for human resources evaluation. The author argues that human resources evaluation manifests most fully in the combined use of different methods (qualitative, quantitative and combined).
FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM
VIPINKUMAR TIWARI
2012-01-01
Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy and redundant data, thus leading to more accurate recognition of faces from the database. The Cuckoo algorithm is one of the recent optimization algorithms in the league of nature-based algorithms. Its optimization results are better than those of the PSO and ACO optimization algorithms. The proposal of applying the Cuckoo algorithm for feature selection in the process of face ...
The Improved Cuckoo Search Algorithm Based on Quadratic Interpolation Method
刘佳; 冯震; 徐越群
2015-01-01
The Cuckoo Search algorithm (CS) was studied, and in order to remedy the shortcomings of the basic CS algorithm, such as low optimization precision, slow convergence and poor local search ability in late evolution, an improved CS algorithm (QI_CS) based on the quadratic interpolation method was proposed in this paper. The new algorithm makes full use of local information from individual nests, enhances the local search ability of the algorithm, and speeds up convergence to the global optimal solution. The feasibility and effectiveness of the new approach were verified through function tests. The experimental results show that the QI_CS algorithm is significantly superior to the original CS, greatly improving the ability to seek the global optimum as well as the convergence property and accuracy, and is an effective method for solving multimodal function optimization problems.
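The quadratic interpolation step that QI_CS adds can be sketched with the standard three-point parabola vertex formula; this is a generic illustration of the local refinement idea, not the published algorithm, and the test function is an assumption.

```python
def quad_interp_min(x1, x2, x3, f):
    """Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3);
    usable as a local refinement step around the current best nest."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / den

# For f(x) = (x - 1.7)^2 + 3 the parabola fit is exact, so a single
# interpolation step lands on the true minimizer.
f = lambda x: (x - 1.7) ** 2 + 3
xs = quad_interp_min(0.0, 1.0, 3.0, f)
```

Near a smooth minimum any objective is locally quadratic, which is why this cheap step accelerates late-stage convergence where the basic CS random walk is slow.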
Yusran
2013-01-01
This research discussed the optimization of multi-type Distributed Generation (DG) capacity and location. Multi-type DG was used in this research to investigate the effect of DG type on network losses and voltage profile. The three types of DG were: injecting active power only; injecting both active and reactive power; and injecting active power while absorbing reactive power. This research used a combination of a Binary Encoding Genetic Algorithm (GA) and Newton Raphson (NR...
A New Page Ranking Algorithm Based On WPRVOL Algorithm
Roja Javadian Kootenae
2013-03-01
The amount of information on the web is always growing, so powerful search tools are needed to search such a large collection. Search engines help users find their desired information among this massive volume of information more easily. But what is important in search engines, and causes a distinction between them, is the page ranking algorithm they use. In this paper a new page ranking algorithm for search engines, based on the Weighted Page Ranking based on Visits of Links (WPRVOL) algorithm, is proposed, called WPR'VOL for short. The proposed algorithm considers the number of visits of first- and second-level in-links. The original WPRVOL algorithm takes into account the number of visits of first-level in-links of the pages and distributes rank scores based on the popularity of the pages, whereas the proposed algorithm considers both the in-links of the page (first-level in-links) and the in-links of the pages that point to it (second-level in-links) in calculating the rank of the page, hence more related pages are displayed at the top of the search result list. In summary, the proposed algorithm assigns a higher rank to pages for which both the page itself and the pages that point to it are important.
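A simplified, hypothetical rank update in the spirit of WPRVOL is sketched below: each page splits its score among out-links in proportion to visit counts. This is not the exact published formula (in particular, the second-level in-link weighting is omitted); the toy graph and visit counts are assumptions.

```python
def visit_weighted_rank(links, visits, d=0.85, iters=50):
    """Visits-weighted rank iteration: a page distributes its score to
    out-links in proportion to how often each link is visited."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            total = sum(visits[(p, q)] for q in outs)
            for q in outs:
                new[q] += d * rank[p] * visits[(p, q)] / total
        rank = new
    return rank

# Toy web: A links to B and C; the A->B link is visited 3x as often,
# so B receives 3/4 of A's distributed score and C only 1/4.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
visits = {("A", "B"): 3, ("A", "C"): 1, ("B", "C"): 1, ("C", "A"): 1}
rank = visit_weighted_rank(links, visits)
```

Replacing the uniform 1/out-degree split of classic PageRank with visit proportions is the essential change that lets user behavior, not just link structure, shape the ranking.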
DNA Coding Based Knowledge Discovery Algorithm
LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang
2002-01-01
A novel DNA coding based knowledge discovery algorithm is proposed, and an example verifying its validity is given. It is proved that this algorithm can efficiently discover new simplified rules from the original rule set.
DE and NLP Based QPLS Algorithm
Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo
As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems, and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. The simulation results, based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit, demonstrate the superiority of the proposed algorithm over linear PLS and QPLS based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational cost.
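For context, the classic DE/rand/1/bin scheme that this line of work builds on can be sketched as follows; this is a generic implementation on a toy sphere function, not the authors' DE-QPLS code, and the control parameters (F = 0.5, CR = 0.9) are conventional defaults.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                           gens=200, seed=0):
    """Classic DE/rand/1/bin on a box-constrained objective f."""
    rnd = random.Random(seed)
    dim = len(bounds)
    pop = [[rnd.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: three distinct individuals other than i.
            a, b, c = rnd.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rnd.randrange(dim)      # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rnd.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Sphere function: global minimum 0 at the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               [(-5, 5)] * 3)
```

In the paper's setting, the decision vector would hold the QPLS input weights and inner-relationship parameters, and f would be the NLP objective.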
Opposition-Based Adaptive Fireworks Algorithm
Chibing Gong
2016-07-01
A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) adds adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms in both solution accuracy and runtime cost.
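Opposition-based learning rests on one construction: the opposite of a point x in [a, b] is a + b - x, and evaluating both the point and its opposite roughly doubles the chance of starting near the optimum. The sketch below is a generic OBL-style initialization, not the paper's OAFWA code; the sphere objective and bounds are assumptions.

```python
import random

def opposite(x, lows, highs):
    """Opposition-based learning: the opposite of x_i in [a_i, b_i]
    is a_i + b_i - x_i."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lows, highs)]

def obl_init(f, lows, highs, n, seed=0):
    """Draw n random points, evaluate each together with its opposite,
    and keep the better of the two - the usual OBL bootstrapping step."""
    rnd = random.Random(seed)
    pop = []
    for _ in range(n):
        x = [rnd.uniform(lo, hi) for lo, hi in zip(lows, highs)]
        xo = opposite(x, lows, highs)
        pop.append(min((x, xo), key=f))
    return pop

f = lambda v: sum(t * t for t in v)          # sphere, minimum at origin
pop = obl_init(f, [-5, -5], [5, 5], n=10)
```

In OAFWA the same point/opposite comparison is also applied during the search, which is what gives the reported accuracy gains over plain AFWA.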
RED Algorithm based Iris Recognition
Mayuri M. Memane
2012-07-01
The human iris is one of the most reliable biometrics because of its uniqueness, stability and non-invasive nature. Biometrics-based human authentication systems are becoming more important as governments and corporations worldwide deploy them in schemes such as access and border control, time and attendance recording, driving license registration and national ID card schemes. Various preprocessing steps are carried out on the iris image, including segmentation. Normalization deals with polar-to-rectangular conversion. The edges are detected using a Canny edge detector. Features are extracted using the ridge energy direction algorithm, which uses two directional filters, one horizontally and one vertically oriented. The final template is generated by comparing the two templates and keeping the predominant edge. This final template is matched against the stored one using Hamming distance, and the matching ID is displayed.
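The final matching step can be sketched with the standard fractional Hamming distance over masked binary iris templates; the toy 16-bit templates and the 0.35 acceptance threshold below are illustrative assumptions, not values from the paper.

```python
def hamming_distance(t1, t2, mask1, mask2):
    """Fractional Hamming distance between two binary iris templates,
    counting only bit positions valid in both occlusion masks."""
    valid = [m1 and m2 for m1, m2 in zip(mask1, mask2)]
    n = sum(valid)
    diff = sum(1 for b1, b2, v in zip(t1, t2, valid) if v and b1 != b2)
    return diff / n

# Toy 16-bit templates differing in 2 of 16 bits; a distance below a
# chosen threshold (e.g. 0.35) would be declared a match.
a    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
b    = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
mask = [1] * 16
hd = hamming_distance(a, b, mask, mask)
```

Because genuine comparisons of the same iris yield small fractional distances and impostor comparisons cluster near 0.5, a single threshold separates the two populations well.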
Distance Concentration-Based Artificial Immune Algorithm
LIU Tao; WANG Yao-cai; WANG Zhi-jie; MENG Jiang
2005-01-01
The diversity, adaptation and memory of biological immune systems attract much attention from researchers, and several optimization algorithms based on the immune system have been proposed. The distance concentration-based artificial immune algorithm (DCAIA) is proposed in this paper to overcome defects of the classical artificial immune algorithm (CAIA). Compared with the genetic algorithm (GA) and CAIA, DCAIA is effective at avoiding premature convergence, preserving antibody diversity, and improving the convergence rate.
Zhang, Yanjun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong
2016-02-01
Given that traditional signal processing methods cannot effectively distinguish different vibration intrusion signals, a feature extraction and recognition method for vibration information is proposed based on EMD-AWPP and HOSA-SVM, for high-precision signal recognition in distributed fiber-optic intrusion detection systems. When dealing with different types of vibration, the method first uses an adaptive wavelet processing algorithm based on empirical mode decomposition to reduce the influence of abnormal values in the sensing signal and improve the accuracy of feature extraction. Not only is the low-frequency part of the signal decomposed, but the details in the high-frequency part are also handled better through time-frequency localization. Second, it uses the bispectrum and bicoherence spectrum to accurately extract feature vectors for the different types of intrusion vibration. Finally, compared against a BPNN reference model, an SVM whose recognition parameters are tuned by particle swarm optimization can distinguish the different intrusion vibration signals, giving the identification model stronger adaptivity and self-learning ability and overcoming shortcomings such as easily falling into local optima. Simulation results showed that the new method can effectively extract feature vectors from the sensing information, eliminate the influence of random noise, and reduce the effect of outliers for different types of intrusion source. The predicted category agrees with the output category, and the vibration identification accuracy can reach above 95%, better than the BPNN recognition algorithm, effectively improving the accuracy of the information analysis. PMID:27209772
A graph spectrum based geometric biclustering algorithm.
Wang, Doris Z; Yan, Hong
2013-01-21
Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285
A generalized GPU-based connected component labeling algorithm
Komura, Yukihiro
2016-01-01
We propose a generalized GPU-based connected component labeling (CCL) algorithm that can be applied to both various lattices and to non-lattice environments in a uniform fashion. We extend our recent GPU-based CCL algorithm without the use of conventional iteration to the generalized method. As an application of this algorithm, we deal with the bond percolation problem. We investigate bond percolation on the honeycomb and triangle lattices to confirm the correctness of this algorithm. Moreover, we deal with bond percolation on the Bethe lattice as a substitute for a network structure, and demonstrate the performance of this algorithm on those lattices.
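A lattice-agnostic CCL for bond percolation can be sketched with union-find, which works on any edge list (honeycomb, triangular, Bethe, or an arbitrary network); the paper's GPU algorithm is considerably more involved, and the names here are ours.

```python
import random

def find(parent, i):
    """Find the cluster root with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def bond_percolation_labels(n_sites, bonds, p, rng=random.random):
    """Occupy each bond with probability p and merge its endpoints (union-find CCL).
    `bonds` is any list of (site_a, site_b) pairs, so the lattice type is arbitrary."""
    parent = list(range(n_sites))
    for a, b in bonds:
        if rng() < p:
            ra, rb = find(parent, a), find(parent, b)
            if ra != rb:
                parent[rb] = ra
    return [find(parent, i) for i in range(n_sites)]
```

Because only the edge list changes between lattices, the same routine covers both lattice and non-lattice environments, which is the uniformity the abstract emphasizes.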
Li, Chuan; Li, Lin; Jie ZHANG; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient than other iterative methods such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptio...
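A minimal serial Gauss-Seidel sweep illustrates the in-place use of updated neighboring values that complicates parallelization:

```python
def gauss_seidel(A, b, iters=100, tol=1e-10):
    """Solve Ax = b iteratively. Each unknown is updated in place, so later
    rows in the same sweep already see the current-iteration values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new  # immediately visible to subsequent rows
        if delta < tol:
            break
    return x
```

Convergence is guaranteed for diagonally dominant or symmetric positive-definite systems; the in-sweep data dependency shown in the inner loop is exactly what a parallel variant must break.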
Research of MD5 shadow table method based on MD5 algorithm
袁满; 康峰峰; 黄刚
2013-01-01
To rapidly extract source data, identify changed records, and perform incremental extraction, this paper analyzes the working principle of the traditional shadow table (ST) and proposes an improved linear algorithm that merges the MD5 algorithm into the ST method. It scans the comparison tables linearly, eliminating unnecessary re-scanning. During scanning, it uses the MD5 algorithm to compute a 'fingerprint' of each whole record, which reduces the number and duration of string comparisons and allows changed records to be recognized quickly. The algorithm was verified in practice, and the results show that it improves the efficiency of data extraction from the database. Incremental extraction based on the shadow table is a general incremental capture method that can be implemented on any database; applications can easily be ported across platforms, so it is well suited to heterogeneous database replication.
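The whole-record fingerprint idea can be sketched as follows; the field separator, key handling, and function names are illustrative assumptions, not the paper's exact scheme.

```python
import hashlib

def row_fingerprint(row, sep="\x1f"):
    """MD5 digest of a whole record: one digest compare replaces
    field-by-field string matching."""
    joined = sep.join("" if v is None else str(v) for v in row)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

def changed_rows(old, new, key=0):
    """Return keys of records that are new or changed between two snapshots
    (e.g., the shadow table vs. the source table)."""
    old_fp = {r[key]: row_fingerprint(r) for r in old}
    return [r[key] for r in new if old_fp.get(r[key]) != row_fingerprint(r)]
```

A single linear pass over both tables, sorted by key, then suffices to emit the incremental change set.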
Honey Bees Inspired Optimization Method: The Bees Algorithm.
Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo
2013-01-01
Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, satisfying one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex, emerging from the behaviors of the individuals of the swarm. Researchers have developed biologically inspired optimization methods such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees. The algorithm combines an exploitative neighborhood search with a random explorative search. After an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and implemented to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers some advantages over other optimization methods, depending on the nature of the problem. PMID:26462528
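A compact sketch of the basic Bees Algorithm described above: elite and selected sites receive recruits for neighborhood search while the remaining bees scout randomly. Parameter names follow common usage for this algorithm, and the patch-size and recruitment details are simplified assumptions.

```python
import random

def bees_algorithm(f, lows, highs, n=30, m=5, e=2, nep=7, nsp=3, ngh=0.1, iters=100):
    """Minimize f over box bounds. n bees total, m selected sites, e elite sites;
    nep/nsp recruits per elite/selected site; ngh is the patch size (fraction of range)."""
    rand_bee = lambda: [random.uniform(lo, hi) for lo, hi in zip(lows, highs)]
    pop = sorted((rand_bee() for _ in range(n)), key=f)
    for _ in range(iters):
        new_pop = []
        for i, site in enumerate(pop[:m]):
            recruits = nep if i < e else nsp  # more bees for elite sites
            patch = [[min(hi, max(lo, x + random.uniform(-ngh, ngh) * (hi - lo)))
                      for x, lo, hi in zip(site, lows, highs)]
                     for _ in range(recruits)]
            new_pop.append(min(patch + [site], key=f))  # best bee found in the patch
        new_pop += [rand_bee() for _ in range(n - m)]   # scouts explore randomly
        pop = sorted(new_pop, key=f)
    return pop[0]
```

Improved versions typically shrink ngh over time and abandon stagnant sites; both refinements drop naturally into this loop.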
Numerical Methods, Algorithms and Tools in C#
Dos Passos, Waldemar
2009-01-01
Along with providing the C# source code online, this book presents practical, ready-to-use mathematical routines employing the C# programming language from Microsoft. It shows how to write mathematically intense object-oriented computer programs. It covers a spectrum of computational tools, including sorting algorithms and optimization methods.
Sangeetha S; S Jeevananthan
2015-12-01
Genetic Algorithms (GA) have long served the art of optimization. One such endeavor is made here, employing GA to determine the switching moments of a cascaded H-bridge seven-level inverter with equal DC sources; evolutionary techniques have proved efficient at solving such problems through biological mimicry. The crossover property is exploited using Random 3-Point Neighbourhood Crossover (RPNC) and Multi Midpoint Selective Bit Neighbourhood Crossover (MMSBNC). This paper solves the selective harmonic elimination (SHE) equations using a binary-coded GA with a knowledge-based neighbourhood multipoint crossover technique, which directly yields the switching moments of the multilevel inverter under consideration. Although earlier root-finding techniques such as Newton-Raphson or resultant-based methods attempt the same, the GA approach offers faster convergence, better program reliability and a wider range of solutions. With the algorithm developed in Turbo C, the switching moments are calculated offline. The simulation results closely agree with the hardware results.
Zhang, Mei; Gu, Xiaoyun; Chen, Zhenjie; Li, Xue; Su, Mo
2007-06-01
The optical cable network now covers many cities of China, but many small and medium-sized cities are still not connected to the country's grid backbone optical cable network, and connecting them is urgent. Little work has been done, however, on finding a better way to choose routes for main optical cables, including GIS-based methods. This paper proposes a new method for route choice of main optical cables based on the Dijkstra algorithm. A model for route choice of main optical cables is built, the influencing factors are chosen and quantified according to the specific situation of Guanyun County, and the route of the main optical cables of Guanyun is chosen and drawn on the map. The result shows that the proposed method has more potential than the traditional method used in Guanyun County.
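The route-choice core reduces to a standard shortest-path computation; a minimal Dijkstra sketch follows, where each edge weight would encode the paper's quantified influencing factors (that blended cost model is an assumption here, not taken from the paper).

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest route in a weighted graph {node: [(neighbor, cost), ...]}.
    For cable routing, `cost` can blend length, terrain and construction factors."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:  # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]
```

The GIS layer's role is then only to supply the graph: candidate route segments as edges with quantified costs.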
Normalization based K means Clustering Algorithm
Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika
2015-01-01
K-means is an effective clustering technique used to separate similar data into groups based on initial cluster centroids. In this paper, a normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed algorithm applies normalization to the available data prior to clustering, and calculates the initial centroids based on weights. Experimental results show the improvement of the proposed N-K means clustering algorithm over existing...
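The two ingredients, min-max normalization before clustering and a standard K-means loop, can be sketched as follows; the paper's weight-based centroid initialization is not reproduced (plain random initialization is used instead as a stand-in).

```python
import random

def min_max_normalize(data):
    """Scale each attribute to [0, 1] so no single dimension dominates the distance."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    rng = [max(c) - l or 1.0 for c, l in zip(cols, lo)]  # guard constant columns
    return [[(v - l) / r for v, l, r in zip(row, lo, rng)] for row in data]

def kmeans(data, k, iters=50):
    """Plain Lloyd iteration: assign points to nearest center, then recompute means."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    centers = random.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in data:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [[sum(c) / len(pts) for c in zip(*pts)] if pts else centers[i]
                   for i, pts in enumerate(clusters)]
    return centers, [min(range(k), key=lambda i: dist(p, centers[i])) for p in data]
```

Running `kmeans(min_max_normalize(data), k)` gives the N-K means pipeline shape: normalize first, then cluster.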
S. Vijaya Kumar
2010-07-01
This paper presents an automated system for detecting masses in mammogram images. Breast cancer is one of the leading causes of women's mortality in the world. Since the causes are unknown, breast cancer cannot be prevented, and it is difficult for radiologists to provide both accurate and uniform evaluation over the enormous number of mammograms generated in widespread screening. Microcalcifications (calcium deposits) and masses are the earliest signs of breast carcinomas, and their detection is one of the key issues for breast cancer control. Computer-aided detection of microcalcifications and masses is an important and challenging task in breast cancer control. This paper presents a novel approach for detecting microcalcification clusters. First, a digitized mammogram is taken from the Mammography Image Analysis Society (MIAS) database and preprocessed with an adaptive median filter. Next, the microcalcification clusters are identified using marker extraction on the gradient images obtained by multiscale morphological reconstruction, which avoids the over-segmentation evident in the watershed algorithm. Experimental results show that microcalcifications can be accurately and efficiently detected using the proposed approach.
吴振军; 冯为民; 胡智宏; 崔光照
2012-01-01
Asynchronous sampling leakage causes errors in electric energy measurement. A new electric energy measurement method with a fixed sampling frequency is proposed, based on the composite Simpson integral method; it accounts for the energy leakage of asynchronous sampling when the system frequency shifts, and can thus achieve accurate results. The modified Simpson integral formula is given. The electric energy of both an ideal and a real power system is simulated with the algorithm and compared with other measurement methods. The simulation results show that the new algorithm reduces measurement errors dramatically with little increase in computational complexity, so the modified Simpson integral method has good practical value.
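The underlying composite Simpson rule for energy from sampled instantaneous power can be sketched as follows; the leakage correction for asynchronous sampling, which is the paper's actual contribution, is not included.

```python
def simpson_energy(power_samples, dt):
    """Electric energy by composite Simpson integration of sampled power p(t).
    Requires an even number of intervals (i.e., an odd number of samples)."""
    n = len(power_samples) - 1
    if n % 2:
        raise ValueError("composite Simpson needs an even number of intervals")
    s = power_samples[0] + power_samples[-1]
    s += 4 * sum(power_samples[1:-1:2])  # odd-index samples, weight 4
    s += 2 * sum(power_samples[2:-1:2])  # interior even-index samples, weight 2
    return dt * s / 3.0
```

Because Simpson's rule is exact for cubics, it integrates near-sinusoidal power waveforms far more accurately than the trapezoidal rule at the same sampling rate.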
Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods
Bhatnagar, S; Prashanth, L A
2013-01-01
Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for reader from sim...
Continuous Attributes Discretization Algorithm based on FPGA
Guoqiang Sun
2013-07-01
The paper addresses the discretization of continuous attributes in rough set theory, an important part of the theory because most data we obtain are continuous. To improve the processing speed of discretization, we propose an FPGA-based discretization algorithm for continuous attributes that exploits the speed advantage of FPGAs. Combined with the attribute dependency degree of rough set theory, the discretization system was divided into eight modules in a block design. This method can save much pretreatment time in rough set analysis and improve operating efficiency. Extensive experiments on fault diagnosis for a certain fighter aircraft validate the effectiveness of the algorithm.
Social emotional optimization algorithm based on quadratic interpolation method
武建娜; 崔志华; 刘静
2011-01-01
The Social Emotional Optimization Algorithm (SEOA) is a new swarm intelligence population-based optimization algorithm that simulates human social behaviors. Individual decision-making ability and individual emotion, both of which affect the optimization results, are taken into account, so the diversity of the algorithm is much better than that of common swarm intelligence algorithms; its local search capability, however, still needs improvement. The quadratic interpolation method behaves well in local search, so introducing it into SEOA improves the search capability. Tests of the optimization performance on benchmark functions prove that introducing quadratic interpolation into SEOA strengthens its local search ability and thereby its global search capability.
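The quadratic interpolation step that strengthens local search can be sketched as follows: fit a parabola through three trial points and move to its vertex. This shows only the interpolation formula, not its embedding inside SEOA.

```python
def quadratic_interpolation_point(x1, x2, x3, f1, f2, f3):
    """Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3):
    a cheap one-dimensional local-search step."""
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / den
```

In a population algorithm, x1, x2, x3 are typically three good candidates along one coordinate; the vertex replaces the worst of them when it improves the objective.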
A Multi-Scale Gradient Algorithm Based on Morphological Operators
Anonymous
2000-01-01
Watershed transformation is a powerful morphological tool for image segmentation. However, the performance of the image segmentation methods based on watershed transformation depends largely on the algorithm for computing the gradient of the image to be segmented. In this paper, we present a multi-scale gradient algorithm based on morphological operators for watershed-based image segmentation, with effective handling of both step and blurred edges. We also present an algorithm to eliminate the local minima produced by noise and quantization errors. Experimental results indicate that watershed transformation with the algorithms proposed in this paper produces meaningful segmentations, even without a region-merging step.
Camera Calibration Method Based on OpenCV Algorithm Library
刘国平; 蔡建平
2011-01-01
By analyzing the perspective projection imaging model of a camera and the transformations among four Cartesian coordinate systems, the purpose of camera calibration is made clear: solving the camera's intrinsic and extrinsic parameters. The advantages and disadvantages of commonly used calibration methods are compared, and a camera calibration program based on the OpenCV algorithm library is developed in a VC++ environment. Experimental results show that the program can calibrate a camera automatically, quickly, and accurately.
A Method of Facial Expression Recognition Based on FKT Algorithm
邱伟
2011-01-01
Facial expression recognition has been one of the important research themes in the fields of pattern recognition and computer vision. In this paper, a face detection system based on a dynamic cascade learning algorithm is implemented in C++. With the obtained face detector, face samples are extracted to form the data sets for expression recognition. Finally, the FKT (Fukunaga-Koontz Transform) algorithm is applied to solve the expression recognition problem. The experimental results demonstrate the effectiveness of the proposed method.
FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM
VIPINKUMAR TIWARI
2012-07-01
Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy and redundant data, thus leading to more accurate recognition of faces from the database. The cuckoo search algorithm is one of the recent optimization algorithms in the family of nature-inspired algorithms, and its optimization results are better than those of the PSO and ACO algorithms. A proposal for applying the cuckoo search algorithm to feature selection in the face recognition process is presented in this paper.
ILU preconditioning based on the FAPINV algorithm
Davod Khojasteh Salkuyeh
2015-01-01
A technique for computing an ILU preconditioner based on the factored approximate inverse (FAPINV) algorithm is presented. We show that this algorithm is well-defined for H-matrices. Moreover, when used in conjunction with Krylov-subspace-based iterative solvers such as the GMRES algorithm, it results in reliable solvers. Numerical experiments on some test matrices are given to show the efficiency of the new ILU preconditioner.
Tang, Guang-hua; Xu, Chuan-long; Shao, Li-tang; Yang, Dao-ye; Zhou, Bin; Wang, Shi-min
2009-04-01
Valuable achievements in differential optical absorption spectroscopy (DOAS) for monitoring atmospheric pollutant gases have been made in past decades. Based on the idea of setting the threshold according to the maximum value of differential optical density, symbolized as OD'm, the traditional DOAS algorithm was combined with a Kalman-filtering-based DOAS algorithm to improve the detection limit without losing measurement accuracy. The two algorithms have different inversion accuracies at the same signal-to-noise ratio, and the inversion accuracy problem was well resolved by combining the two algorithms at short optical path lengths. Theoretical and experimental research on the concentration measurement of SO2 in flue gases was carried out at normal temperature and atmospheric pressure. The results show that with OD'm less than 0.0481, the measurement precision of the improved DOAS algorithm for SO2 is very high: the lower measurement limit of SO2 is less than 28.6 mg x m(-3) and the zero drift of the system is less than 2.9 mg x m(-3). If OD'm is between 0.0481 and 0.9272, the measurement precision of the traditional DOAS algorithm is high. However, if OD'm is more than 0.922, the measurement errors of both DOAS algorithms are very large and a linearity correction must be performed. PMID:19626898
Anonymous
2006-01-01
A novel method of global optimal path planning for mobile robot was proposed based on the improved Dijkstra algorithm and ant system algorithm. This method includes three steps: the first step is adopting the MAKLINK graph theory to establish the free space model of the mobile robot, the second step is adopting the improved Dijkstra algorithm to find out a sub-optimal collision-free path, and the third step is using the ant system algorithm to adjust and optimize the location of the sub-optimal path so as to generate the global optimal path for the mobile robot. The computer simulation experiment was carried out and the results show that this method is correct and effective. The comparison of the results confirms that the proposed method is better than the hybrid genetic algorithm in the global optimal path planning.
Cryptography Based MSLDIP Watermarking Algorithm
Abdelmgeid A. Ali
2015-08-01
In recent years, the internet revolution has resulted in explosive growth of multimedia applications. The rapid advancement of the internet has made it easier to send data accurately and quickly to a destination; at the same time, it is also easier to modify and misuse valuable information through hacking. Digital watermarking is one of the proposed solutions for copyright protection of multimedia data. In this paper, a cryptography-based MSLDIP (Modified Substitute Last Digit in Pixel) watermarking method is proposed. The main goal of this method is to increase the security of the MSLDIP technique while hiding the watermark in the pixels of a digital image in such a manner that the human visual system cannot differentiate between the cover image and the watermarked image. The experimental results also show that this method can be used effectively in the field of watermarking.
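The last-digit substitution idea can be sketched as follows for a grayscale image flattened to a pixel list; the digit cipher shown is a toy stand-in for the paper's cryptography step, and all names here are illustrative assumptions.

```python
def digit_encrypt(digits, key):
    """Toy stream cipher over decimal digits (stand-in for the real crypto step):
    add a repeating key digit modulo 10. Decrypt by subtracting mod 10."""
    return [(d + key[i % len(key)]) % 10 for i, d in enumerate(digits)]

def embed(pixels, digits):
    """Substitute the last decimal digit of each pixel with a watermark digit,
    stepping down by 10 when the result would exceed 255."""
    out = []
    for p, d in zip(pixels, digits):
        q = p - p % 10 + d
        out.append(q - 10 if q > 255 else q)
    return out

def extract(pixels, n):
    """Recover the first n embedded digits."""
    return [p % 10 for p in pixels[:n]]
```

Each pixel changes by at most 10 gray levels out of 256, which is why the distortion stays below what the human visual system notices.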
Accuracy verification methods theory and algorithms
Mali, Olli; Repin, Sergey
2014-01-01
The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control. The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from a theory; (3) discuss their advantages and drawbacks, areas of applicability, give recommendations and examples.
Algorithms for Irrelevance-Based Partial MAPs
Shimony, Solomon Eyal
2013-01-01
Irrelevance-based partial MAPs are useful constructs for domain-independent explanation using belief networks. We look at two definitions for such partial MAPs, and prove important properties that are useful in designing algorithms for computing them effectively. We make use of these properties in modifying our standard MAP best-first algorithm, so as to handle irrelevance-based partial MAPs.
Designers' Cognitive Thinking Based on Evolutionary Algorithms
Zhang Shutao
2013-09-01
Research on cognitive thinking is important for constructing efficient intelligent design systems, but it is difficult to describe a model of cognitive thinking with a reasonable mathematical theory. Based on an analysis of design strategy and innovative thinking, we investigated a design cognitive thinking model that includes the external guide thinking of "width priority - depth priority" and the internal dominated thinking of "divergent thinking - convergent thinking", built a reasoning mechanism for design information with thinking mathematics theory, and established a product image form design model with a generalized interactive genetic algorithm. An example of testing machine form design shows that the method is reasonable and feasible.
Genetic algorithm-based form error evaluation
Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng
2007-07-01
Form error evaluation of geometrical products is a nonlinear optimization problem, for which a solution has been attempted by different methods with some complexity. A genetic algorithm (GA) was developed to deal with the problem, which was proved simple to understand and realize, and its key techniques have been investigated in detail. Firstly, the fitness function of GA was discussed emphatically as a bridge between GA and the concrete problems to be solved. Secondly, the real numbers-based representation of the desired solutions in the continual space optimization problem was discussed. Thirdly, many improved evolutionary strategies of GA were described on emphasis. These evolutionary strategies were the selection operation of 'odd number selection plus roulette wheel selection', the crossover operation of 'arithmetic crossover between near relatives and far relatives' and the mutation operation of 'adaptive Gaussian' mutation. After evolutions from generation to generation with the evolutionary strategies, the initial population produced stochastically around the least-squared solutions of the problem would be updated and improved iteratively till the best chromosome or individual of GA appeared. Finally, some examples were given to verify the evolutionary method. Experimental results show that the GA-based method can find desired solutions that are superior to the least-squared solutions except for a few examples in which the GA-based method can obtain similar results to those by the least-squared method. Compared with other optimization techniques, the GA-based method can obtain almost equal results but with less complicated models and computation time.
A solution quality assessment method for swarm intelligence optimization algorithms.
Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua
2014-01-01
Nowadays, swarm intelligence optimization has become an important optimization tool widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solutions an algorithm obtains for practical problems; this greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered according to distance to divide the solution samples into several parts; then the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relevant statistical knowledge, the evaluation result is obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845
Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam
2016-07-01
Inventory has been a major concern in supply chains, and numerous studies on inventory control have produced methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations for chemical raw materials of textile industries in Bangladesh, where industries are assumed to currently pursue individual replenishment. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and to find the optimum ordering quantity. An indirect grouping strategy is used, since indirect grouping is suggested to outperform direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each item, so the replenishment cycle time for each product is T x ki. Firstly, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected using Holt's method; however, demands can only be forecasted one or two months ahead because of the demand pattern of the industry under consideration. Evidently, applying RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
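A simplified RAND-style iteration, alternating between the base cycle T and the integer multipliers k_i, can be sketched as follows; the parameter names and update scheme are illustrative assumptions rather than the exact RAND of Kaspi and Rosenblatt.

```python
import math

def rand_policy(S, minor, hold, demand, k_max=16):
    """Joint replenishment sketch. S: major ordering cost per group order;
    minor[i]: item ordering cost; hold[i]: unit holding cost; demand[i]: annual demand.
    Returns the base cycle T and integer multipliers k; item i is ordered every k[i]*T."""
    n = len(minor)
    k = [1] * n
    for _ in range(20):  # alternate T- and k-updates until stable
        # optimal base cycle for the current multipliers (EOQ-style formula)
        T = math.sqrt(2 * (S + sum(s / ki for s, ki in zip(minor, k))) /
                      sum(ki * h * d for ki, h, d in zip(k, hold, demand)))
        # each item's multiplier approximates its individual cycle over T
        new_k = [max(1, min(k_max, round(math.sqrt(2 * s / (h * d)) / T)))
                 for s, h, d in zip(minor, hold, demand)]
        if new_k == k:
            break
        k = new_k
    return T, k
```

The full RAND additionally searches over several candidate base cycles and restricts multipliers to powers of two in some variants; this sketch keeps only the alternating-update core.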
Multicast Routing Based on Hybrid Genetic Algorithm
CAO Yuan-da; CAI Gui
2005-01-01
A new multicast routing algorithm based on a hybrid genetic algorithm (HGA) is proposed. A coding pattern based on the number of routing paths is used, together with a fitness function that is easy to compute and makes the algorithm converge quickly. A new approach for setting the HGA's parameters is provided. The simulation shows that this approach can largely increase the convergence ratio, and that the fitting parameter values of this algorithm differ from those of the original algorithms: in the experiment, the optimal mutation probability is 0.50 for the HGA but 0.07 for the SGA. It is concluded that the population size has a significant influence on the HGA's convergence ratio when its mutation probability is large, and that an algorithm with a small population size has a high average convergence rate; with a lower mutation probability, the population size has little influence on the HGA.
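The interaction between mutation probability and convergence noted above can be seen even on a toy problem. The sketch below runs an elitist genetic algorithm on OneMax (maximize the number of 1-bits) under two mutation rates; it is purely illustrative and uses none of the paper's multicast-routing encoding or parameter settings.

```python
import random

def onemax_ga(pm, pop_size=30, length=40, gens=60, seed=1):
    """Toy elitist GA on OneMax, to illustrate the effect of the
    per-bit mutation probability pm on convergence."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sum, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)        # one-point crossover of two elites
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < pm) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(sum(ind) for ind in pop)

best_low = onemax_ga(0.01)   # mild mutation: selection can exploit good genes
best_high = onemax_ga(0.5)   # heavy mutation: offspring are near-random strings
```

With elitism, the best fitness never decreases, but a heavy mutation rate destroys inherited structure, which is one reason the optimal rate differs so much between algorithm variants.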
A rule-based algorithm for automatic bond type perception
Zhang Qian; Zhang Wei; Li Youyong; Wang Junmei; Zhang Liling; Hou Tingjun
2012-01-01
Assigning bond orders is a necessary and essential step for characterizing a chemical structure correctly in force-field-based simulations. Several methods have been developed for this purpose; they all have advantages, but also limitations. Here, an automatic algorithm for assigning chemical connectivity and bond orders, with or without hydrogens, for organic molecules is provided; only three-dimensional coordinates and element identities are needed. The algorithm uses ...
A Global Optimization Algorithm Based on Incremental Metamodel Method
魏昕; 吴义忠; 陈立平
2013-01-01
A new global optimization method based on incremental metamodels is proposed for problems with complex simulation models. First, to overcome the defects of existing incremental Latin hypercube designs, in which the number of new sampling points is hard to control and must be an integer multiple of the original sample size, an improved incremental Latin hypercube sampling method based on a subtraction rule is proposed. Second, combining the improved incremental Latin hypercube design with an incremental update method for radial basis functions, a new efficient global optimization algorithm is proposed. Finally, the method is applied to a pressure vessel design problem, and the results of the example demonstrate the efficiency and engineering practicability of the presented method.
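The core loop of such metamodel-based optimization is: fit a radial-basis-function surrogate to the sampled points, add a new sample, and update the fit. The stdlib-only sketch below shows this cycle on a hypothetical 1-D function; for clarity the "incremental" RBF update is done as a full refit, unlike the paper's incremental update scheme.

```python
import math

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (stdlib only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    """Interpolation weights for a Gaussian-RBF surrogate of 1-D samples."""
    A = [[math.exp(-((eps * (xi - xj)) ** 2)) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(xs, w, x, eps=1.0):
    return sum(wi * math.exp(-((eps * (x - xi)) ** 2)) for wi, xi in zip(w, xs))

# hypothetical expensive model and initial design
f = lambda x: x * math.sin(x)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [f(x) for x in xs]
w = rbf_fit(xs, ys)

# "incremental" step: add one new sample point and refit the surrogate
xs.append(2.5)
ys.append(f(2.5))
w = rbf_fit(xs, ys)
```

A true incremental update would reuse the factorization of the old kernel matrix instead of rebuilding it, which is the efficiency gain the paper targets.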
Product design optimization method based on genetic algorithm
杨延璞; 余隋怀; 陈登凯
2012-01-01
Starting from product semantics, this research presents a product design optimization method based on a genetic algorithm and constructs a semantics-based evolutionary model of product styling design. By constructing a gene coding and gene string for the product form, together with a fitness function, the original product design schemes are optimized into new ones. Application to panel furniture design proves that the method is suitable.
Enterprise Human Resources Information Mining Based on Improved Apriori Algorithm
Lei He
2013-05-01
With the continuous development of information technology in modern society, enterprises' demand for human resources information mining is growing. Based on the current situation of enterprise human resources information mining, this paper puts forward a model based on an improved Apriori algorithm. The model introduces data mining technology and the traditional Apriori algorithm and improves on it: the association rule mining task of the original algorithm is divided into two subtasks, generating frequent itemsets and generating rules; SQL technology is used to generate the frequent itemsets directly; and a tabulation method is used to extract the information of interest to users. The experimental results show that the model based on the improved Apriori algorithm is more efficient than the original algorithm, and practical application tests show that the improved algorithm is practical and effective.
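The frequent-itemset subtask that the improved model splits out is the level-wise Apriori procedure: generate size-k candidates from frequent size-(k-1) sets and prune any candidate with an infrequent subset. The sketch below is an in-memory Python illustration of that subtask on hypothetical transactions, rather than the paper's SQL-based generation.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent-itemset half of Apriori: level-wise candidate
    generation with support-based pruning."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # Apriori pruning: every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in result for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if support(c) >= min_support}
        result |= frequent
        k += 1
    return result

# hypothetical transaction database
db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = apriori(db, min_support=0.6)
```

Association rules are then read off each frequent itemset by checking the confidence of its partitions, the second subtask mentioned in the abstract.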
PDE Based Algorithms for Smooth Watersheds.
Hodneland, Erlend; Tai, Xue-Cheng; Kalisch, Henrik
2016-04-01
Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks. PMID:26625408
Speech Enhancement based on Compressive Sensing Algorithm
Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel
2013-12-01
Various methods of speech enhancement have been proposed over the years; accurate speech enhancement design focuses mainly on quality and intelligibility. A novel speech enhancement method based on compressive sensing (CS) is proposed. CS is a new paradigm for acquiring signals, fundamentally different from uniform-rate digitization followed by compression as often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of the large and zero/small coefficients, together with structured sparsity models; it therefore provides a way of reconstructing a compressed version of the speech in the original signal from only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well on a wide range of speech tests and gives good performance for speech enhancement, with better noise suppression than conventional approaches and no obvious degradation of speech quality.
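The CS recovery step, reconstructing a sparse signal from a few linear, non-adaptive measurements, is commonly done with a greedy solver such as Orthogonal Matching Pursuit. The stdlib-only sketch below recovers a synthetic 2-sparse vector from random Gaussian measurements; it is a generic OMP illustration, not the reconstruction algorithm used in the paper.

```python
import math
import random

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for the small
    least-squares systems inside OMP."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: estimate a k-sparse x from y = Phi x."""
    m, n = len(Phi), len(Phi[0])
    residual = y[:]
    support = []
    for _ in range(k):
        # pick the unused column most correlated with the residual
        scores = [abs(sum(Phi[r][c] * residual[r] for r in range(m))) for c in range(n)]
        support.append(max((c for c in range(n) if c not in support),
                           key=lambda c: scores[c]))
        # least squares on the chosen columns via the normal equations
        A = [[Phi[r][c] for c in support] for r in range(m)]
        d = len(support)
        G = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(d)] for i in range(d)]
        rhs = [sum(A[r][i] * y[r] for r in range(m)) for i in range(d)]
        coef = solve_linear(G, rhs)
        residual = [y[r] - sum(A[r][i] * coef[i] for i in range(d)) for r in range(m)]
    x = [0.0] * n
    for i, c in enumerate(support):
        x[c] = coef[i]
    return x

# synthetic example: 2-sparse signal, 10 random measurements of 20 unknowns
rng = random.Random(0)
m, n, k = 10, 20, 2
Phi = [[rng.gauss(0, 1) / math.sqrt(m) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[3], x_true[11] = 1.5, -2.0
y = [sum(Phi[r][c] * x_true[c] for c in range(n)) for r in range(m)]
x_hat = omp(Phi, y, k)
```

With m large enough relative to k·log n, OMP typically recovers the true support exactly; in a speech pipeline, Phi would measure a sparse transform (e.g. DCT) of each frame rather than raw samples.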
FMS Scheduling Simulation Based on an Evolution Algorithm
Anonymous
2002-01-01
An FMS (flexible manufacturing system) scheduling algorithm based on an evolution algorithm (EA) is developed in this paper through intensive analysis of the scheduling method. Many factors related to FMS scheduling are considered. New explanations are given for a common kind of encoding model, and the rationality of the encoding model is ensured by designing a set of new encoding methods; a simulation experiment is then performed. The results show that an FMS scheduling optimization problem with multiple constraint conditions can be solved effectively by an FMS scheduling simulation model based on an EA. Compared with other methods, this algorithm has the advantages of good stability and quick convergence.