AN SVAD ALGORITHM BASED ON FNNKD METHOD
Institute of Scientific and Technical Information of China (English)
Chen Dong; Zhang Yan; Kuang Jingming
2002-01-01
Voice Activity Detection (VAD) technology improves the capacity of mobile communication systems. In this letter, a novel VAD algorithm, the SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with the traditional algorithm recommended in ITU-T G.729 Annex B under different conditions. The simulation results show that the SVAD algorithm performs better.
Kernel method-based fuzzy clustering algorithm
Institute of Scientific and Technical Information of China (English)
Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping
2005-01-01
The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diverse data structures, such as non-hyperspherical data, noisy data, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived by uniting the FCM algorithm with the kernel method. Experiments with synthetic and real data show that, in contrast to the FCM algorithm, the FKCM clustering algorithm is more universal and can effectively perform unsupervised analysis of datasets with varied structures. Kernel-based clustering is therefore one of the important research directions of fuzzy clustering analysis.
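As an illustration of the kernel idea in this abstract, here is a minimal kernel fuzzy C-means sketch with a Gaussian kernel. The update rules follow one common KFCM formulation and are not the authors' code; the data and all parameters are invented for illustration:

```python
import numpy as np

def kfcm(X, c, m=2.0, sigma=1.0, iters=50):
    """Sketch of kernel fuzzy C-means with a Gaussian kernel; the kernel
    choice and prototype update follow one common KFCM formulation and
    may differ from the paper's derivation."""
    V = X[np.linspace(0, len(X) - 1, c).astype(int)]   # spread-out initial prototypes
    for _ in range(iters):
        # Gaussian kernel between every sample and every prototype
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
        d2 = 2.0 * (1.0 - K) + 1e-12          # kernel-induced squared distance
        U = (1.0 / d2) ** (1.0 / (m - 1.0))   # fuzzy memberships
        U /= U.sum(axis=1, keepdims=True)
        W = (U ** m) * K                      # weights for the prototype update
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V

# Two well-separated synthetic clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
U, V = kfcm(X, c=2)
```

Each row of `U` holds one sample's membership degrees, which sum to 1; hard labels follow from `U.argmax(axis=1)`.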
ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
SONG Kaichen; NIE Xili
2006-01-01
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least square method. A weighted fusion algorithm, in which the relationship between the weight coefficients and the measurement noise is established, is proposed with attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced under the assumption that the measurement noise is uncorrelated. In addition, an algorithm is presented that adjusts the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements themselves. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
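The uncorrelated-noise case described in this abstract reduces to inverse-variance weighting, which can be sketched as follows (an illustrative reconstruction, not the authors' implementation; the measurement values and variances are hypothetical):

```python
def fuse(measurements, variances):
    """Weighted least-squares fusion for uncorrelated sensor noise:
    inverse-variance weights minimize the variance of the fused estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * x for w, x in zip(weights, measurements)) / total
    fused_variance = 1.0 / total   # always below the best single sensor's variance
    return estimate, fused_variance

# Three sensors measuring the same quantity with different noise levels
est, var = fuse([10.2, 9.8, 10.0], [0.04, 0.01, 0.02])
```

The fused variance `1 / Σ(1/σᵢ²)` is smaller than any individual `σᵢ²`, which is the precision gain the abstract refers to.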
New Iris Localization Method Based on Chaos Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
Jia Dongli; Muhammad Khurram Khan; Zhang Jiashu
2005-01-01
This paper presents a new method based on the Chaos Genetic Algorithm (CGA) to localize the human iris in a given image. First, the iris image is preprocessed to estimate the range of the iris localization, and then the CGA is used to extract the boundary of the iris. Simulation results show that the proposed algorithm is efficient and robust, and can achieve sub-pixel precision. Because Genetic Algorithms (GAs) can search a large space, the algorithm does not need an accurate estimate of the iris center for subsequent localization, and hence lowers the requirements on the original iris image processing. In this respect, the present localization algorithm is superior to Daugman's algorithm.
Competition assignment problem algorithm based on Hungarian method
Institute of Scientific and Technical Information of China (English)
KONG Chao; REN Yongtai; GE Huiling; DENG Hualing
2007-01-01
The traditional Hungarian method can only solve standard assignment problems, not competition assignment problems. This article discusses the difference between standard assignment problems and competition assignment problems, and studies several competition assignment algorithms based on the Hungarian method, together with their solutions.
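For reference, a standard (non-competition) assignment problem of the kind the Hungarian method solves can be handled with SciPy's Hungarian-style solver. This sketch assumes SciPy is available and uses a made-up cost matrix; the competition variants discussed in the article require the additional constructions it describes:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost of assigning worker i to task j (hypothetical numbers)
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

row_ind, col_ind = linear_sum_assignment(cost)   # Hungarian-style O(n^3) solver
total = cost[row_ind, col_ind].sum()             # minimal total cost
```

`linear_sum_assignment` also accepts rectangular matrices, which covers the unbalanced case where workers and tasks differ in number.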
A CT Image Segmentation Algorithm Based on Level Set Method
Institute of Scientific and Technical Information of China (English)
QU Jing-yi; SHI Hao-shan
2006-01-01
Level set methods are robust and efficient numerical tools for resolving curve evolution in image segmentation. This paper proposes a new image segmentation algorithm based on the Mumford-Shah model. The method is applied to CT images, and the experimental results demonstrate its efficiency and accuracy.
A dynamic fuzzy clustering method based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
ZHENG Yan; ZHOU Chunguang; LIANG Yanchun; GUO Dongwei
2003-01-01
A dynamic fuzzy clustering method based on the genetic algorithm is presented. By calculating the fuzzy dissimilarity between samples, the essential associations among samples are modeled faithfully. The fuzzy dissimilarity between two samples is mapped into their Euclidean distance; that is, the high-dimensional samples are mapped onto a two-dimensional plane. The mapping is optimized globally by the genetic algorithm, which adjusts the coordinates of each sample, and thus the Euclidean distances, to gradually approximate the fuzzy dissimilarities between samples. A key advantage of the proposed method is that the clustering is independent of the spatial distribution of the input samples, which improves flexibility and visualization. The method converges faster and clusters more accurately than some typical clustering algorithms. Simulated experiments show the feasibility and effectiveness of the proposed method.
Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm
Hasal, Martin; Pospisil, Lukas; Nowakova, Jana
2016-06-01
An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to produce aesthetically pleasing, crossing-free layouts for planar graphs. The minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the minimized node. A disadvantage of the Kamada-Kawai embedder is its computational cost, caused by the search for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until a local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient to search for a minimum, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
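A minimal Barzilai-Borwein iteration, the gradient-only step-size rule mentioned above, might look like this on a convex quadratic (an independent sketch, not the paper's graph-drawing code):

```python
import numpy as np

def bb_minimize(grad, x0, iters=100, alpha0=1e-3):
    """Barzilai-Borwein gradient descent: the step size is computed from
    successive iterate and gradient differences, so no Hessian is needed."""
    x = np.asarray(x0, float)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = s @ y
        # BB1 step: alpha = (s.s)/(s.y); fall back if the denominator vanishes
        alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x, g = x_new, g_new
    return x

# Quadratic f(x) = 0.5 x^T A x - b^T x with SPD A; the minimizer solves A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = bb_minimize(lambda x: A @ x - b, np.zeros(2))
```

On a strictly convex quadratic the BB iteration converges without any second-derivative information, which is exactly the saving over Newton-Raphson claimed above.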
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
Multiband signal fusion is a practical and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.
A Novel Assembly Line Balancing Method Based on PSO Algorithm
Directory of Open Access Journals (Sweden)
Xiaomei Hu
2014-01-01
Full Text Available The assembly line is widely used in manufacturing systems. The assembly line balancing problem is a crucial question in the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data for the assembly line balancing problem are identified, and the precedence relations diagram is described. A double-objective optimization model based on takt time and a smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Simulation experiments on examples prove the feasibility and validity of the proposed assembly line balancing method.
Research on Palmprint Identification Method Based on Quantum Algorithms
Directory of Open Access Journals (Sweden)
Hui Li
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it obtains a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.
An Improved Image Segmentation Algorithm Based on MET Method
Directory of Open Access Journals (Sweden)
Z. A. Abo-Eleneen
2012-09-01
Image segmentation is a basic component of many computer vision and pattern recognition systems. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, Kittler and Illingworth's minimum error thresholding (MET), improves the image segmentation effect markedly, and it is simple and easy to implement. However, it fails in the presence of skewed and heavy-tailed class-conditional distributions, or if the histogram is unimodal or close to unimodal. The Fisher information (FI) measure is an important concept in statistical estimation theory and information theory. Employing the FI measure, an improved threshold image segmentation algorithm, an FI-based extension of MET, is developed. Compared with the MET method, the improved method generally achieves more robust performance when the data for either class are skewed and heavy-tailed.
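Kittler and Illingworth's MET criterion, which the paper extends, can be sketched directly from a histogram. This is an illustrative implementation of the classical criterion only, not of the FI-based extension, and the test histogram is synthetic:

```python
import numpy as np

def met_threshold(hist):
    """Kittler-Illingworth minimum error thresholding on a grayscale
    histogram: minimize J(t) = 1 + 2[P1 ln s1 + P2 ln s2] - 2[P1 ln P1 + P2 ln P2],
    where P, s are the weight and std of each class at threshold t."""
    h = np.asarray(hist, float)
    h = h / h.sum()
    levels = np.arange(len(h))
    best_t, best_J = None, np.inf
    for t in range(1, len(h) - 1):
        P1, P2 = h[:t].sum(), h[t:].sum()
        if P1 < 1e-9 or P2 < 1e-9:
            continue
        mu1 = (levels[:t] * h[:t]).sum() / P1
        mu2 = (levels[t:] * h[t:]).sum() / P2
        s1 = ((levels[:t] - mu1) ** 2 * h[:t]).sum() / P1
        s2 = ((levels[t:] - mu2) ** 2 * h[t:]).sum() / P2
        if s1 < 1e-9 or s2 < 1e-9:          # degenerate class, skip
            continue
        J = (1 + 2 * (P1 * np.log(np.sqrt(s1)) + P2 * np.log(np.sqrt(s2)))
               - 2 * (P1 * np.log(P1) + P2 * np.log(P2)))
        if J < best_J:
            best_t, best_J = t, J
    return best_t

# Synthetic bimodal histogram: modes near gray levels 60 and 180
x = np.arange(256)
hist = np.exp(-(x - 60) ** 2 / (2 * 15 ** 2)) + np.exp(-(x - 180) ** 2 / (2 * 20 ** 2))
t = met_threshold(hist)
```

On a clearly bimodal histogram the selected threshold lands in the valley between the two modes; the failure modes the abstract mentions (skewed, heavy-tailed, or unimodal histograms) are exactly where this Gaussian-mixture criterion breaks down.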
A constrained optimization algorithm based on the simplex search method
Mehta, Vivek Kumar; Dasgupta, Bhaskar
2012-05-01
In this article, a robust method is presented for handling constraints with the Nelder-Mead simplex search method, a direct search algorithm for multidimensional unconstrained optimization. The proposed method is free from the limitations of previous attempts, which demand that the initial simplex be feasible or that infeasible points be projected onto the nonlinear constraint boundaries. The method is tested on several benchmark problems and the results are compared with those of various evolutionary algorithms available in the literature. The proposed method is found to be competitive with the existing algorithms in terms of effectiveness and efficiency.
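One simple way to run Nelder-Mead on a constrained problem is a quadratic penalty wrapper. Note that this is a generic sketch, not the constraint-handling scheme the article actually proposes; it assumes SciPy is available, and the example problem is made up:

```python
import numpy as np
from scipy.optimize import minimize

def penalized(f, cons, rho=1e3):
    """Wrap an objective with a quadratic penalty on constraint violations,
    so an unconstrained direct search like Nelder-Mead can be applied.
    Constraints are given as callables g with g(x) <= 0 meaning feasible."""
    def wrapped(x):
        viol = sum(max(0.0, g(x)) ** 2 for g in cons)
        return f(x) + rho * viol
    return wrapped

# min (x-2)^2 + (y-1)^2  subject to  x + y <= 2
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
cons = [lambda x: x[0] + x[1] - 2.0]
res = minimize(penalized(f, cons), x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
```

The constrained optimum is (1.5, 0.5); the penalty solution sits a hair outside the feasible region, which is the well-known bias of finite-penalty methods that more careful schemes (like the article's) avoid.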
Generating Decision Trees Method Based on Improved ID3 Algorithm
Institute of Scientific and Technical Information of China (English)
Yang Ming; Guo Shuxu; Wang Jun
2011-01-01
The ID3 algorithm is a classical decision tree learning algorithm in data mining. The algorithm tends to choose attributes with more values, which affects the efficiency of classification and prediction when building a decision tree. This article proposes a new approach based on an improved ID3 algorithm. The new algorithm introduces an importance factor λ when calculating the information entropy. It can strengthen the weight of important attributes in the tree and reduce that of unimportant attributes. The algorithm overcomes the flaw of the traditional ID3 algorithm, which tends to choose attributes with more values, and also improves the efficiency and flexibility of the decision tree generation process.
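The λ-weighted attribute selection described above might be sketched as follows. This is a toy reconstruction: the paper's exact weighting of the entropy may differ, and the dataset and importance values are invented:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def weighted_gain(rows, labels, attr, importance):
    """ID3 information gain scaled by an importance factor (the lambda of
    the improved variant described above; the paper's scheme may differ)."""
    base = entropy(labels)
    n = len(rows)
    rem = 0.0
    for value in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == value]
        rem += len(subset) / n * entropy(subset)
    return importance.get(attr, 1.0) * (base - rem)

# Toy dataset: "windy" perfectly predicts the label, "outlook" is uninformative
rows = [{"outlook": "sunny", "windy": "no"},
        {"outlook": "sunny", "windy": "yes"},
        {"outlook": "rain",  "windy": "no"},
        {"outlook": "rain",  "windy": "yes"}]
labels = ["play", "stay", "play", "stay"]
importance = {"outlook": 1.0, "windy": 1.2}
best = max(["outlook", "windy"], key=lambda a: weighted_gain(rows, labels, a, importance))
```

The split attribute is chosen by the largest weighted gain, so raising λ for a domain-important attribute biases the tree toward splitting on it first.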
An Emotion-Based Method to Perform Algorithmic Composition
Huang, Chih-Fang; Lin, En-Ju
2013-01-01
Generative music using algorithmic composition techniques has been developed over many years. However, it usually lacks an emotion-based mechanism to generate music with specific affective features. In this article, automated music composition is performed based on Prof. Phil Winsor's "MusicSculptor" software, with proper emotion-parameter mapping to drive the music content within a specific context, using various music parameter distributions under different probability controls, in order to...
Distortion Parameters Analysis Method Based on Improved Filtering Algorithm
Directory of Open Access Journals (Sweden)
ZHANG Shutuan
2013-10-01
In order to realize accurate distortion parameter testing of aircraft power supply systems, and to satisfy the requirements of the corresponding aircraft equipment, a novel power parameter test system based on an improved filtering algorithm is introduced in this paper. The hardware of the test system features portable, high-speed data acquisition and processing; the software uses LabWindows/CVI as the development environment and adopts pre-processing techniques together with the added filtering algorithm. Compared with the traditional filtering algorithm, the improved filtering algorithm helps to increase the test accuracy. Application shows that the test system with the improved filtering algorithm achieves accurate test results and meets the design requirements.
Switching Equalization Algorithm Based on a New SNR Estimation Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
It is well-known that turbo equalization with the max-log-map (MLM) rather than the log-map (LM) algorithm is insensitive to signal to noise ratio (SNR) mismatch. As our first contribution, an improved MLM algorithm called scaled max-log-map (SMLM) algorithm is presented. Simulation results show that the SMLM scheme can dramatically outperform the MLM without sacrificing the robustness against SNR mismatch. Unfortunately, its performance is still inferior to that of the LM algorithm with exact SNR knowledge over the class of high-loss channels. As our second contribution, a switching turbo equalization scheme, which switches between the SMLM and LM schemes, is proposed to practically close the performance gap. It is based on a novel way to estimate the SNR from the reliability values of the extrinsic information of the SMLM algorithm.
Color Image Segmentation Method Based on Improved Spectral Clustering Algorithm
Directory of Open Access Journals (Sweden)
Dong Qin
2014-08-01
Addressing the high sparsity of image data and the problem of determining the number of clusters, we put forward a color image segmentation algorithm that combines semi-supervised machine learning with spectral graph theory. Through the study of related theories and methods of spectral clustering, we introduce the concept of information entropy to design a method that automatically optimizes the scale parameter value, avoiding the instability in the clustering result caused by manually chosen scale parameters. In addition, we mine the prior information available in large amounts of non-generic data and apply a semi-supervised algorithm to improve the clustering performance for rare classes. We also use the tagged data to compute the similarity matrix and perform clustering with the FKCM algorithm. Simulations on standard datasets and image segmentation demonstrate that our algorithm overcomes the defects of traditional spectral clustering methods, which are sensitive to outliers, easily fall into local optima, and converge slowly.
Applied RCM2 Algorithms Based on Statistical Methods
Institute of Scientific and Technical Information of China (English)
Fausto Pedro García Márquez; Diego J. Pedregal
2007-01-01
The main purpose of this paper is to implement a system capable of detecting faults in railway point mechanisms. This is achieved by developing an algorithm that takes advantage of three empirical criteria simultaneously capable of detecting faults from records of measurements of force against time. The system is dynamic in several respects: the base reference data is computed using all the curves free from faults as they are encountered in the experimental data; the algorithm that uses the three criteria simultaneously may be applied in on-line situations as each new data point becomes available; and recursive algorithms are applied to filter noise from the raw data in an automatic way. Encouraging results are found in practice when the system is applied to a number of experiments carried out by an industrial sponsor.
Multiobjective Optimization Method Based on Adaptive Parameter Harmony Search Algorithm
Directory of Open Access Journals (Sweden)
P. Sabarinath
2015-01-01
The present trend in industry is to improve the techniques currently used in the design and manufacture of products in order to meet the challenges of a competitive market. The crucial task nowadays is to find the optimal design and machining parameters that minimize production costs. Design optimization involves a large number of design variables with multiple, conflicting objectives, subject to complex nonlinear constraints. The complexity of the optimal design of machine elements creates the requirement for increasingly effective algorithms. Solving a nonlinear multiobjective optimization problem requires significant computing effort. From the literature it is evident that metaheuristic algorithms perform well in multiobjective optimization. In this paper, we extend the recently developed parameter-adaptive harmony search algorithm to solve multiobjective design optimization problems using the weighted sum approach. To determine the best set of weights for this analysis, a performance index based on the least average error is used to score each weight set. The proposed approach is applied to a biobjective design optimization of a disc brake and a newly formulated biobjective design optimization of a helical spring. The results reveal that the proposed approach performs better than other algorithms.
Improved Algorithm for Weak GPS Signal Acquisition Based on Delay-accumulation Method
Li, Yuanming; Li, Jing; Zhang, Peng; Zheng, Yong
2016-01-01
A new improved algorithm is proposed to solve the problem that traditional algorithms cannot acquire GPS signals in weak-signal environments. The algorithm is based on an analysis of the double block zero padding (DBZP) algorithm, and it adopts a delay-accumulation method to temporarily retain the operation results that are discarded in DBZP. After a delay of 1 ms, the corresponding correlation results are obtained. Then superimpose...
Visual tracking method based on cuckoo search algorithm
Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei
2015-07-01
Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species, combined with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied, and the sensitivity and adjustment of the CS parameters in the tracking system are examined experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of tracking accuracy and speed against six state-of-the-art trackers, namely the particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
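A bare-bones cuckoo search loop with Lévy-flight steps, in the spirit of the tracker's optimizer, might look like this. It is a generic sketch on a toy objective, not the tracking system itself, and all parameters are illustrative:

```python
import math, random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed, Levy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=15, iters=300, pa=0.25, lo=-5.0, hi=5.0, seed=7):
    """Minimal cuckoo search: Levy flights around the best nest, replacement
    of a random nest when the new egg is better, and abandonment of the
    worst pa-fraction of nests each generation."""
    random.seed(seed)
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        b = min(range(n), key=fit.__getitem__)
        cand = [min(hi, max(lo, xi + 0.1 * levy_step())) for xi in nests[b]]
        j = random.randrange(n)
        fc = f(cand)
        if fc < fit[j]:                      # lay the egg in a random nest if better
            nests[j], fit[j] = cand, fc
        # abandon the worst nests (the best nest is never among them)
        for k in sorted(range(n), key=fit.__getitem__, reverse=True)[:int(pa * n)]:
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
            fit[k] = f(nests[k])
    b = min(range(n), key=fit.__getitem__)
    return nests[b], fit[b]

sphere = lambda x: sum(v * v for v in x)     # toy objective, minimum 0 at the origin
x_best, f_best = cuckoo_search(sphere, dim=2)
```

In the tracking setting, `f` would instead score a candidate target state against the appearance model, with the nests playing the role of candidate states.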
GPU-based parallel algorithm for blind image restoration using midfrequency-based methods
Xie, Lang; Luo, Yi-han; Bao, Qi-liang
2013-08-01
GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data-intensive, data-parallel computing and the GPU's single-instruction multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration, suitable for GPU stream computing, is proposed. In this algorithm, the GPU is used to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data, so that the transmission rate works around the memory bandwidth limitation. The results show that the new algorithm significantly increases the operational speed and effectively improves the real-time performance of image restoration, especially for high-resolution images.
A New Genetic Algorithm Based on Niche Technique and Local Search Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The genetic algorithm has been widely used in many fields as a simple, robust global search and optimization method. In this paper, a new genetic algorithm based on a niche technique and a local search method is presented, in view of the inadequacies of the simple genetic algorithm. To prove the adaptability and validity of the improved genetic algorithm, optimization problems on multimodal functions with equal peaks, unequal peaks, and complicated peak distributions are discussed. The simulation results show that, compared to other niching methods, this improved genetic algorithm has obvious advantages in several respects, such as convergence speed, solution accuracy, and global optimization ability.
Wang, Tianren
2015-01-01
This paper presents a numerical study on a fast marching method based back projection reconstruction algorithm for photoacoustic tomography in heterogeneous media. Transcranial imaging is used here as a case study. To correct for the phase aberration from the heterogeneity (i.e., skull), the fast marching method is adopted to compute the phase delay based on the known speed of sound distribution, and the phase delay is taken into account by the back projection algorithm for more accurate reconstructions. It is shown that the proposed algorithm is more accurate than the conventional back projection algorithm, but slightly less accurate than the time reversal algorithm particularly in the area close to the skull. However, the image reconstruction time for the proposed algorithm can be as little as 124 ms when implemented by a GPU (512 sensors, 21323 pixels reconstructed), which is two orders of magnitude faster than the time reversal reconstruction. The proposed algorithm, therefore, not only corrects for the p...
Joint Interference Detection Method for DSSS Communications Based on the OMP Algorithm and CA-CFAR
Directory of Open Access Journals (Sweden)
Zhang Yongshun
2016-01-01
Existing direct-sequence spread spectrum (DSSS) communications interference detection algorithms are confined to high sampling rates. In order to solve this problem, an interference detection algorithm for DSSS communications was designed based on compressive sensing (CS). First, the orthogonal matching pursuit (OMP) algorithm was applied to interference detection in DSSS communications, and its advantages and weaknesses were analyzed. Second, to address the weaknesses of the OMP algorithm, a joint interference detection method based on the OMP algorithm and cell-averaging constant false alarm rate (CA-CFAR) detection was proposed. Theoretical analysis and computer simulation both prove the effectiveness of the new algorithm. The simulation results show that the new method not only achieves interference detection but also estimates the amount of interference effectively.
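The OMP step used in the detection chain can be sketched in a few lines. This is a generic sparse-recovery illustration with synthetic data, not the paper's DSSS detector:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the dictionary column most
    correlated with the residual, then re-fit all chosen coefficients by
    least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Synthetic compressive-sensing setup: 40 measurements of a 3-sparse, 80-dim signal
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(80)
x_true[[5, 30, 62]] = [1.5, -2.0, 1.0]
y = A @ x_true                          # noiseless measurement
x_hat = omp(A, y, k=3)
```

In the interference-detection context, the recovered support plays the role of the detected interference locations, and the CA-CFAR stage of the proposed method then thresholds the recovered magnitudes.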
On the importance of graph search algorithms for DRGEP-based mechanism reduction methods
Niemeyer, Kyle E
2016-01-01
The importance of graph search algorithm choice to the directed relation graph with error propagation (DRGEP) method is studied by comparing basic and modified depth-first search, basic and R-value-based breadth-first search (RBFS), and Dijkstra's algorithm. By using each algorithm with DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only Dijkstra's algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal mechanisms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and equivalence ratio. Dijkstra's algorithm combined with a coefficient scaling approach is demonstrated to produce the most compact skeletal mechanism with a similar performance compared to larger skeletal mechanisms resulting from the other algorithms. The computational efficiency of each algorithm is also compared by applying the DRG...
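The graph search at issue can be made concrete: DRGEP's R-value is the best path product of direct-interaction coefficients, which a Dijkstra-style search computes correctly because every coefficient lies in (0, 1]. The graph below is invented for illustration:

```python
import heapq

def r_values(graph, source):
    """Dijkstra-style search for DRGEP-type R-values: the maximum, over all
    paths from the source, of the product of edge coefficients in (0, 1].
    Greedy extraction is valid because multiplying by a coefficient <= 1
    can never increase a path's value."""
    R = {source: 1.0}
    heap = [(-1.0, source)]               # max-heap via negated values
    while heap:
        neg_r, u = heapq.heappop(heap)
        r_u = -neg_r
        if r_u < R.get(u, 0.0):           # stale heap entry, skip
            continue
        for v, w in graph.get(u, {}).items():
            cand = r_u * w
            if cand > R.get(v, 0.0):
                R[v] = cand
                heapq.heappush(heap, (-cand, v))
    return R

# Hypothetical direct-interaction coefficients between species A..D
graph = {"A": {"B": 0.9, "C": 0.2},
         "B": {"C": 0.5, "D": 0.1},
         "C": {"D": 0.8}}
R = r_values(graph, "A")
```

Here R("C") = 0.45 comes via A→B→C rather than the direct edge A→C, which is exactly the path dependence that makes naive depth-first or breadth-first traversals order-sensitive, as the paper demonstrates.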
A Steganographic Method Based on Integer Wavelet Transform & Genetic Algorithm
Directory of Open Access Journals (Sweden)
Preeti Arora
2014-05-01
The proposed system presents a novel approach to building a secure steganographic data-hiding technique using the integer wavelet transform along with a genetic algorithm. The main focus of the proposed work is to develop an RS-analysis-resistant design with the highest imperceptibility. An Optimal Pixel Adjustment process is also adopted to minimize the difference error between the input cover image and the embedded image, and to maximize the hiding capacity with low distortion. The analysis covers the mapping function, PSNR, image histograms, and the parameters of RS analysis. The simulation results highlight that the proposed security measure gives better and near-optimal results in comparison to prior research conducted using wavelets and genetic algorithms.
A novel method to design S-box based on chaotic map and genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Wang, Yong, E-mail: wangyong_cqupt@163.com [State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China); Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Wong, Kwok-Wo [Department of Electronic Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong (Hong Kong); Li, Changbing [Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Li, Yang [Department of Automatic Control and Systems Engineering, The University of Sheffield, Mapping Street, S1 3DJ (United Kingdom)
2012-01-30
The substitution box (S-box) is an important component of block encryption algorithms. In this Letter, the problem of constructing an S-box is transformed into a Traveling Salesman Problem, and a method for designing S-boxes based on chaos and a genetic algorithm is proposed. Since the proposed method makes full use of the traits of the chaotic map and the evolution process, a stronger S-box is obtained. The results of performance tests show that the presented S-box has good cryptographic properties, which justifies that the proposed algorithm is effective in generating strong S-boxes. -- Highlights: ► The problem of constructing an S-box is transformed into a Traveling Salesman Problem. ► We present a new method for designing S-boxes based on chaos and a genetic algorithm. ► The proposed algorithm is effective in generating strong S-boxes.
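The chaotic-permutation ingredient of such designs can be sketched as follows. This is a toy logistic-map S-box only; the Letter's method additionally uses the TSP formulation and a genetic-algorithm refinement, which are not shown:

```python
def chaotic_sbox(x0=0.37, r=3.99):
    """Toy 8-bit S-box from a logistic-map ranking: iterate the map, then
    rank the resulting sequence to obtain a bijective permutation of 0..255.
    The seed x0 and parameter r are illustrative values in the chaotic regime."""
    x, seq = x0, []
    for _ in range(256):
        x = r * x * (1.0 - x)        # logistic map iteration
        seq.append(x)
    # argsort of the chaotic sequence is a permutation, hence a bijective S-box
    return sorted(range(256), key=seq.__getitem__)

sbox = chaotic_sbox()
```

Bijectivity comes for free from the ranking step; the cryptographic strength (nonlinearity, differential uniformity) is what the genetic-algorithm stage of the proposed method is there to improve.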
Method of fault diagnosis in nuclear power plant base on genetic algorithm and knowledge base
International Nuclear Information System (INIS)
Using a knowledge base, combining the genetic algorithm with classical probability, and considering the characteristics of fault diagnosis in nuclear power plants (NPPs), the authors put forward a fault diagnosis method. In the diagnosis process, this method associates the state of the NPP with a population in the GA and evolves the population to obtain the individual that fits the condition. Experiments on the 950 MW full-scope simulator at the Beijing NPP simulation training center show that the method is comparatively tolerant of imperfect expert knowledge, spurious signals, and other such conditions.
Asymptotically Optimal Algorithm for Short-Term Trading Based on the Method of Calibration
V'yugin, Vladimir
2012-01-01
A trading strategy based on a natural learning process, which asymptotically outperforms any trading strategy from an RKHS (Reproducing Kernel Hilbert Space), is presented. In this process, the trader rationally chooses his gambles using predictions made by a randomized, well-calibrated algorithm. Our strategy is based on Dawid's notion of calibration with more general changing checking rules and on a modification of Kakade and Foster's randomized algorithm for computing calibrated forecasts. We also use Vovk's method of defensive forecasting in RKHS.
Space-borne antenna adaptive anti-jamming method based on gradient-genetic algorithm
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
A novel space-borne antenna adaptive anti-jamming method based on the genetic algorithm (GA) combined with gradient-like reproduction operators is presented, to search for the best weights for pattern synthesis at radio frequency (RF). The method combines the GA's capability for global search, which is not limited by the choice of initial parameters, with the gradient algorithm's advantage of fast search. The proposed method requires a smaller initial population and lower computational complexity, and is therefore flexible enough to implement in real-time systems. Using the proposed algorithm, the designer can efficiently control both main-lobe shaping and the side-lobe level. Simulation results based on field survey data show that the proposed algorithm is efficient and feasible.
A novel method to design S-box based on chaotic map and genetic algorithm
Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang
2012-01-01
The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing an S-box is transformed into a Traveling Salesman Problem, and a method for designing S-boxes based on chaos and a genetic algorithm is proposed. Since the proposed method makes full use of the traits of the chaotic map and the evolution process, a stronger S-box is obtained. The results of performance tests show that the presented S-box has good cryptographic properties, which justifies that the proposed algorithm is effective in generating strong S-boxes.
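The chaotic half of such a construction can be sketched in a few lines: rank-ordering an orbit of the logistic map yields a bijection on 0..255, i.e. a candidate 8-bit S-box. The paper then refines candidates via the TSP/genetic-algorithm step, which is omitted here; the parameters `x0` and `r` below are illustrative assumptions, not values from the Letter.

```python
def chaotic_sbox(x0=0.37, r=3.99, size=256):
    """Rank-order a logistic-map orbit to obtain a bijective 8-bit S-box."""
    x, orbit = x0, []
    for _ in range(size):
        x = r * x * (1 - x)      # logistic map iteration in the chaotic regime
        orbit.append(x)
    # the permutation that sorts the orbit is itself a bijection on 0..size-1
    return sorted(range(size), key=orbit.__getitem__)
```

Bijectivity (every output value appears exactly once) is the first property any S-box candidate must satisfy before cryptographic criteria such as nonlinearity are scored.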
Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy
Tian, Yuling; Zhang, Hongxian
2016-01-01
For the purposes of information retrieval, users must find highly relevant documents from within a system (often a quite large one comprising many individual documents) based on an input query. Ranking the documents according to their relevance to user needs is a challenging endeavor and a hot research topic: there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning, which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions. PMID:27487242
Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code
Taherkhani, Ahmad; Malmi, Lauri
2013-01-01
In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concepts of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
Fast Matrix Computation Algorithms Based on Rough Attribute Vector Tree Method in RDSS
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
The concepts of Rough Decision Support System (RDSS) and equivalence matrix are introduced in this paper. Based on a rough attribute vector tree (RAVT) method, two kinds of matrix computation algorithms, Recursive Matrix Computation (RMC) and Parallel Matrix Computation (PMC), are proposed so that rule extraction, attribute reduction and data cleaning are finished synchronously. The algorithms emphasize the practicability and efficiency of rule generation. A case study of PMC is analyzed, and a comparison experiment with the RMC algorithm shows that it is feasible and efficient for data mining and knowledge discovery in RDSS.
Directory of Open Access Journals (Sweden)
Song, Jiancai; Xue, Guixiang; Kang, Yanan
2016-01-01
In this paper, a novel method for selecting a navigation satellite subset for a global positioning system (GPS) based on a genetic algorithm is presented. This approach is based on minimizing the factors in the geometric dilution of precision (GDOP) using a modified genetic algorithm (MGA) with an elite conservation strategy, adaptive selection, adaptive mutation, and a hybrid genetic algorithm that can select a subset of the satellites represented by specific numbers in the interval (4 ∼ n) while maintaining position accuracy. A comprehensive simulation demonstrates that the MGA-based satellite selection method effectively selects the correct number of optimal satellite subsets using receiver autonomous integrity monitoring (RAIM) or fault detection and exclusion (FDE). This method is more adaptable and flexible for GPS receivers, particularly for those used in handset equipment and mobile phones. PMID:26943638
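GDOP, the quantity the MGA minimizes, can be computed directly from the receiver-to-satellite geometry. A minimal sketch follows, with a brute-force subset search standing in for the genetic search (feasible only at toy scale) and illustrative ECEF-like coordinates; none of the numbers come from the paper.

```python
import numpy as np
from itertools import combinations

def gdop(sat_pos, receiver=np.zeros(3)):
    """GDOP from satellite line-of-sight unit vectors (needs >= 4 satellites)."""
    los = sat_pos - receiver
    los /= np.linalg.norm(los, axis=1, keepdims=True)
    H = np.hstack([los, np.ones((len(sat_pos), 1))])   # geometry matrix [e n u 1]
    Q = np.linalg.inv(H.T @ H)                         # covariance shape factor
    return np.sqrt(np.trace(Q))

def best_subset(sats, k):
    """Brute-force reference for the GA: the size-k subset minimizing GDOP."""
    return min(combinations(range(len(sats)), k),
               key=lambda idx: gdop(sats[list(idx)]))
```

The MGA replaces the exhaustive `combinations` enumeration with an evolutionary search over subset encodings, which is what makes the approach tractable for a full constellation.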
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na
2016-10-01
Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of the feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
NONLINEAR FILTER METHOD OF GPS DYNAMIC POSITIONING BASED ON BANCROFT ALGORITHM
Institute of Scientific and Technical Information of China (English)
ZHANG Qin; TAO Ben-zao; ZHAO Chao-ying; WANG Li
2005-01-01
Because of the terms ignored after linearization, the extended Kalman filter (EKF) becomes a form of suboptimal gradient descent algorithm. A divergent tendency appears in the GPS solution when the filter equations are ill-posed, and the resulting estimation bias cannot be avoided. Furthermore, the true solution may be lost in pseudorange positioning because the linearized pseudorange equations yield only partial solutions. To solve these problems in GPS dynamic positioning with the EKF, a closed-form Kalman filter method called the two-stage algorithm is presented for the nonlinear algebraic solution of GPS dynamic positioning, based on the global nonlinear least squares closed-form solution, Bancroft's numerical algorithm. The method separates the spatial parts from the temporal parts when processing the GPS filter problem and solves the nonlinear GPS dynamic positioning, thus producing stable and reliable dynamic positioning solutions.
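Bancroft's closed-form solution avoids linearization entirely by working in a Lorentz (Minkowski-style) inner product. A compact sketch is below; the receiver and satellite coordinates in the accompanying usage are synthetic test values, not data from the paper.

```python
import numpy as np

M = np.diag([1.0, 1.0, 1.0, -1.0])   # metric of the Lorentz inner product

def lorentz(u, v):
    return u @ M @ v

def bancroft(sat_pos, pseudoranges):
    """Closed-form GPS fix from >= 4 satellites (positions and ranges in metres).

    Returns both candidate (position, clock-bias) solutions of the quadratic;
    the caller picks the physically plausible one.
    """
    B = np.column_stack([sat_pos, pseudoranges])        # rows r_i = [s_i, rho_i]
    a = 0.5 * np.array([lorentz(r, r) for r in B])
    e = np.ones(len(pseudoranges))
    Bp = np.linalg.pinv(B)                              # exact inverse when n = 4
    u, v = Bp @ e, Bp @ a
    # <u,u> L^2 + (2<u,v> - 2) L + <v,v> = 0, where L = 0.5 <y,y>
    roots = np.roots([lorentz(u, u), 2.0 * lorentz(u, v) - 2.0, lorentz(v, v)])
    sols = []
    for lam in np.real(roots):
        y = M @ (lam * u + v)                           # y = [x, y, z, clock bias]
        sols.append((y[:3], y[3]))
    return sols
```

With noise-free pseudoranges generated as range plus a common clock bias, one of the two returned candidates reproduces the receiver position to numerical precision.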
Improved Algorithm for Weak GPS Signal Acquisition Based on Delay-accumulation Method
Directory of Open Access Journals (Sweden)
LI Yuanming
2016-01-01
A new improved algorithm is proposed to solve the problem that traditional acquisition algorithms fail to capture GPS signals in a weak-signal environment. The algorithm builds on an analysis of the double block zero padding (DBZP) algorithm and adopts a delay-accumulation method to temporarily retain the correlation results that are discarded in DBZP. After a delay of 1 ms, the corresponding correlation results are obtained, superimposed on the temporarily retained results, and the coherent accumulation results are compared with the threshold value. The improved algorithm increases the data measurements by improving the utilization of correlation results, at the cost of little extra computation. Simulation results show that the improved algorithm increases the processing gain of the acquisition algorithm, is able to capture signals whose carrier-to-noise ratio (C/N0) is 17 dB-Hz, and achieves a detection probability of 91%.
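The gain from accumulating retained correlation results is the usual coherent-integration effect: summing K aligned blocks grows the signal amplitude K-fold but the noise only √K-fold. A toy illustration follows, using a weak tone rather than a real C/A-code correlation; all parameters are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 5000, 5000                       # sample rate and samples per block
t = np.arange(n) / fs
tone = 0.1 * np.sin(2 * np.pi * 50 * t)  # weak 50 Hz signal buried in noise

def accumulate(num_blocks):
    """Coherently sum num_blocks noisy copies; return the magnitude spectrum."""
    acc = np.zeros(n)
    for _ in range(num_blocks):
        acc += tone + rng.normal(0.0, 1.0, n)   # each block: signal + fresh noise
    return np.abs(np.fft.rfft(acc))

spec1, spec20 = accumulate(1), accumulate(20)
# after 20 accumulations the 50 Hz bin stands far above the noise floor
```

The peak-to-floor ratio grows roughly as √K, which is exactly why retaining and superimposing otherwise-discarded correlation results raises the processing gain.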
A novel Spectrum Estimation Algorithm Based on Compressed Sensing and Multi-taper Method
Directory of Open Access Journals (Sweden)
Wang Keqing
2013-04-01
Estimating the spectrum of wideband or multi-channel signals swiftly and exactly is a key technology for improving the performance of wideband spectrum sensing. This paper proposes a novel spectrum estimation algorithm based on compressed sensing (CS) and the multi-taper method (MTM), called CS-MTM. The new algorithm is validated with single-tone, multi-tone and QPSK signals. Meanwhile, six common random measurement matrices that can be used in compressed spectrum estimation are validated. Simulation results show that the proposed approach can achieve better performance than the multi-taper method combined with singular-value decomposition based on compressed sensing (CS-MTM-SVD) in complexity, accuracy and real-time property.
Prediction method of rock burst proneness based on rough set and genetic algorithm
Institute of Scientific and Technical Information of China (English)
YU Huai-chang; LIU Hai-ning; LU Xue-song; LIU Han-dong
2009-01-01
A new method based on rough set theory and a genetic algorithm was proposed to predict rock burst proneness. Nine influencing factors were first selected, and the decision table was set up. Attributes were reduced by the genetic algorithm, and rough set theory was used to extract simplified decision rules for rock burst proneness. Taking a practical engineering project as an example, rock burst proneness was evaluated and predicted by the decision rules. A comparison of the prediction results with the actual results shows that the proposed method is feasible and effective.
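The rough-set core of such a scheme is a consistency test: an attribute subset is acceptable if objects that agree on those attributes also agree on the decision. A toy sketch follows, with a made-up decision table; the paper uses a genetic algorithm to search the subset space, whereas the exhaustive search shown here only works at toy scale.

```python
from itertools import combinations

# Toy decision table: each row is (condition attributes..., decision).
table = [
    (0, 1, 0, 'no'),
    (1, 1, 0, 'yes'),
    (0, 0, 1, 'no'),
    (1, 0, 1, 'yes'),
    (0, 1, 1, 'no'),
]

def consistent(attrs):
    """True if equal condition values (on attrs) imply equal decisions."""
    seen = {}
    for row in table:
        key = tuple(row[i] for i in attrs)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

def minimal_reduct(n_attrs=3):
    """Smallest consistent attribute subset (brute force; GA at real scale)."""
    for k in range(1, n_attrs + 1):
        for subset in combinations(range(n_attrs), k):
            if consistent(subset):
                return subset
```

In this table attribute 0 alone determines the decision, so the minimal reduct is `(0,)`; a GA fitness function would typically reward consistency and penalize subset size.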
2D-3D Face Recognition Method Based on a Modified CCA-PCA Algorithm
Directory of Open Access Journals (Sweden)
Patrik Kamencay
2014-03-01
This paper presents a proposed methodology for face recognition based on an information theory approach to coding and decoding face images. We propose a 2D-3D face-matching method based on a principal component analysis (PCA) algorithm using canonical correlation analysis (CCA) to learn the mapping between a 2D face image and 3D face data, which makes it possible to match a 2D face image with enrolled 3D face data. Our proposed fusion algorithm is based on the PCA method, which is applied to extract base features; PCA feature-level fusion requires the extraction of different features from the source data before the features are merged together. Experimental results on the TEXAS face image database have shown that the classification and recognition results based on the modified CCA-PCA method are superior to those based on the CCA method. In the 2D-3D face-matching tests, the CCA method achieved a rather poor recognition rate of 55%, while the modified CCA method based on PCA-level fusion achieved a very good recognition score of 85%.
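The PCA feature-extraction step can be sketched with a few lines of linear algebra. This is generic PCA, not the paper's exact CCA-PCA fusion pipeline, and the rank-1 data in the usage example is synthetic.

```python
import numpy as np

def pca_fit(X, k):
    """Learn the top-k principal components of the rows of X."""
    mu = X.mean(axis=0)
    # right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, components):
    """Low-dimensional features used for matching or feature-level fusion."""
    return (X - mu) @ components.T
```

Reconstruction is `Z @ components + mu`; for data that truly lies in a k-dimensional subspace the reconstruction is exact, which is the property the test below checks.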
Directory of Open Access Journals (Sweden)
Prof. Vikas Gupta
2014-01-01
Due to the exponential increase of noise pollution, the demand for noise control systems is also increasing. Two types of techniques are used for noise cancellation: active and passive. Passive techniques are ineffective for low-frequency noise, hence the increasing demand for research and development on active noise cancellation techniques. In this paper we introduce a new method for active noise cancellation: a transfer-function-based method that uses the genetic algorithm and particle swarm optimization (PSO) for noise cancellation. The method is simple and efficient for low-frequency noise cancellation. We analyse the performance of this method in the presence of white Gaussian noise and compare the results of PSO and the genetic algorithm. The two algorithms suit different environments, so we observe their performance in different settings. A comparative study of the genetic algorithm and PSO is presented with supporting results, explaining what the transfer function method is, how it works and its advantages over neural-network-based methods.
An effective trust-based recommendation method using a novel graph clustering algorithm
Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin
2015-10-01
Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues such as data sparsity and cold-start problems, caused by the scarcity of ratings for the items that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolving these problems. In this paper, we present a model-based collaborative filtering method that applies a novel graph clustering algorithm and also considers trust statements. In the proposed method, the problem space is first represented as a graph, and a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain appropriate user/item clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.
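Once neighbors have been identified, the rating prediction itself is standard similarity-weighted collaborative filtering. A minimal sketch of that final step follows, using plain cosine similarity over co-rated items and a tiny made-up rating matrix; the paper's graph clustering and trust weighting are not reproduced.

```python
import numpy as np

def predict(ratings, user, item):
    """Predict ratings[user, item] from cosine-similar neighbors (0 = unrated)."""
    rated = ratings[:, item] > 0
    rated[user] = False                     # exclude the active user
    sims = []
    for other in np.where(rated)[0]:
        mask = (ratings[user] > 0) & (ratings[other] > 0)  # co-rated items
        if mask.sum() == 0:
            continue
        a, b = ratings[user, mask], ratings[other, mask]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        sims.append((sim, other))
    if not sims:
        return 0.0                          # cold start: no usable neighbor
    num = sum(s * ratings[o, item] for s, o in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den
```

In a cluster-based variant, the neighbor loop would run only over the active user's cluster rather than over all users, which is what makes the clustered method cheaper and more robust to sparsity.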
Urinary stone size estimation: a new segmentation algorithm-based CT method
Energy Technology Data Exchange (ETDEWEB)
Liden, Mats; Geijer, Haakan [Oerebro University, School of Health and Medical Sciences, Oerebro (Sweden); Oerebro University Hospital, Department of Radiology, Oerebro (Sweden); Andersson, Torbjoern [Oerebro University, School of Health and Medical Sciences, Oerebro (Sweden); Broxvall, Mathias [Oerebro University, Centre for Modelling and Simulation, Oerebro (Sweden); Thunberg, Per [Oerebro University, School of Health and Medical Sciences, Oerebro (Sweden); Oerebro University Hospital, Department of Medical Physics, Oerebro (Sweden)
2012-04-15
The size estimation in CT images of an obstructing ureteral calculus is important for the clinical management of a patient presenting with renal colic. The objective of the present study was to develop a reader-independent urinary calculus segmentation algorithm using well-known digital image processing steps and to validate the method against size estimations by several readers. Fifty clinical CT examinations demonstrating urinary calculi were included. Each calculus was measured independently by 11 readers. The mean value of their size estimations was used as validation data for each calculus. The segmentation algorithm consisted of interpolated zoom, binary thresholding and morphological operations. Ten examinations were used for algorithm optimisation and 40 for validation. Based on the optimisation results three segmentation method candidates were identified. Between the primary segmentation algorithm using cubic spline interpolation and the mean estimation by 11 readers, the bias was 0.0 mm, the standard deviation of the difference 0.26 mm and the Bland-Altman limits of agreement 0.0 ± 0.5 mm. The validation showed good agreement between the suggested algorithm and the mean estimation by a large number of readers. The limit of agreement was narrower than the inter-reader limit of agreement previously reported for the same data. (orig.)
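The zoom-threshold-measure chain described above can be sketched crudely as follows. Nearest-neighbour zoom via `np.kron` stands in for the paper's cubic-spline interpolation, morphological cleanup is omitted, and the threshold and pixel spacing values are illustrative.

```python
import numpy as np

def stone_size_mm(img, threshold, pixel_mm, zoom=4):
    """Threshold -> nearest-neighbour zoom -> largest axis extent in mm."""
    big = np.kron(img, np.ones((zoom, zoom)))   # crude interpolated zoom
    mask = big >= threshold                     # binary thresholding
    if not mask.any():
        return 0.0
    ys, xs = np.nonzero(mask)
    extent = max(ys.max() - ys.min(), xs.max() - xs.min()) + 1
    return extent * pixel_mm / zoom             # back to physical units
```

A 3-pixel-wide blob at 0.5 mm/pixel should measure 1.5 mm regardless of the zoom factor, which is the invariant the test checks.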
Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases
Institute of Scientific and Technical Information of China (English)
郑建军; 刘玉树; 陈立潮
2003-01-01
The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others; consequently, learning BN structures by general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. The method uses a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two, provides mutual competition between them, and even adjusts the number of parents in the recombination (crossover/recomposition) scheme. With the K2 algorithm, the method also optimizes the genetic operators and makes adequate use of domain knowledge. As a result, the method is able to find a global optimum of the BN topology, avoiding premature convergence to a local extremum. Experimental results demonstrate its effectiveness, and the convergence of the SGA is discussed.
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three step approach which decouples the problem in the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
Method and application of wavelet shrinkage denoising based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
A genetic algorithm (GA) is introduced into noise reduction based on wavelet transform threshold shrinkage (WTS) and translation-invariant threshold shrinkage (TIS), so that the parameters used in WTS and TIS, such as the wavelet function, decomposition level, hard or soft thresholding and threshold value, can be selected automatically. The paper ends by comparing the two noise reduction methods on the basis of their denoising performance, computation time, etc. The effectiveness of the methods is validated by analysis of simulated and real signals.
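The WTS building block, one-level wavelet decomposition plus soft thresholding, can be sketched with the Haar wavelet. The GA's automatic choice of wavelet, level and threshold is not shown; the threshold in the usage below is hand-picked and the input length is assumed even.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the detail band, invert."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                  # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                  # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft shrinkage
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                        # inverse transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

For a smooth signal the clean detail coefficients are near zero, so shrinking them removes mostly noise, and the denoised mean squared error drops below that of the noisy input.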
A scalable method for parallelizing sampling-based motion planning algorithms
Jacobs, Sam Ade
2012-05-01
This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
Two Novel On-policy Reinforcement Learning Algorithms based on TD(lambda)-methods
Wiering, M.A.; Hasselt, H. van
2007-01-01
This paper describes two novel on-policy reinforcement learning algorithms, named QV(lambda)-learning and the actor critic learning automaton (ACLA). Both algorithms learn a state value-function using TD(lambda)-methods. The difference between the algorithms is that QV-learning uses the learned valu
An adaptive turbo-shaft engine modeling method based on PS and MRR-LSSVR algorithms
Institute of Scientific and Technical Information of China (English)
Wang Jiankang; Zhang Haibo; Yan Changkai; Duan Shujing; Huang Xianghua
2013-01-01
In order to establish an adaptive turbo-shaft engine model with high accuracy, a new modeling method based on a parameter selection (PS) algorithm and a multi-input multi-output recursive reduced least square support vector regression (MRR-LSSVR) machine is proposed. Firstly, the PS algorithm is designed to choose the most reasonable inputs of the adaptive module. During this process, a wrapper criterion based on the least square support vector regression (LSSVR) machine is adopted, which can not only reduce computational complexity but also enhance generalization performance. Secondly, with the input variables determined by the PS algorithm, a mapping model of engine parameter estimation is trained off-line using MRR-LSSVR, which has a satisfying accuracy within 5‰. Finally, based on a numerical simulation platform of an integrated helicopter/turbo-shaft engine system, an adaptive turbo-shaft engine model is developed and tested in a certain flight envelope. Under the condition of single or multiple engine components being degraded, many simulation experiments are carried out, and the simulation results show the effectiveness and validity of the proposed adaptive modeling method.
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
Wang, Xiaojia; Mao, Qirong; Zhan, Yongzhao
2008-11-01
There are many emotion features. If all of them are employed to recognize emotions, redundant features may exist, the recognition result is unsatisfactory, and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features by using the contribution analysis algorithm of the NN. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
International Nuclear Information System (INIS)
There are many emotion features. If all of them are employed to recognize emotions, redundant features may exist, the recognition result is unsatisfactory, and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features by using the contribution analysis algorithm of the NN. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
State Generation Method for Humanoid Motion Planning Based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Xuyang Wang
2008-11-01
A new approach to generating the original motion data for humanoid motion planning is presented in this paper, and a state generator is developed based on the genetic algorithm, which enables users to generate various motion states without using any reference motion data. By specifying various types of constraints, such as configuration constraints and contact constraints, the state generator can generate stable states that satisfy the constraint conditions for humanoid robots. To deal with the multiple constraints and inverse kinematics, state generation is finally simplified as an optimization and search problem. In our method, we introduce a convenient mathematical representation for the constraints involved in the state generator and solve the optimization problem with the genetic algorithm to acquire a desired state. To demonstrate the effectiveness and advantages of the method, a number of motion states are generated according to the requirements of the motion.
State Generation Method for Humanoid Motion Planning Based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Xuyang Wang
2012-05-01
A new approach to generating the original motion data for humanoid motion planning is presented in this paper, and a state generator is developed based on the genetic algorithm, which enables users to generate various motion states without using any reference motion data. By specifying various types of constraints, such as configuration constraints and contact constraints, the state generator can generate stable states that satisfy the constraint conditions for humanoid robots. To deal with the multiple constraints and inverse kinematics, state generation is finally simplified as an optimization and search problem. In our method, we introduce a convenient mathematical representation for the constraints involved in the state generator and solve the optimization problem with the genetic algorithm to acquire a desired state. To demonstrate the effectiveness and advantages of the method, a number of motion states are generated according to the requirements of the motion.
Directory of Open Access Journals (Sweden)
Aliasghar Baziar
2015-03-01
In order to handle large-scale problems, this study uses the shuffled frog leaping algorithm, an optimization method based on natural memetics, with a new two-phase modification that gives a better search of the problem space. The suggested algorithm is evaluated by comparing it with several well-known algorithms on several benchmark optimization problems. The simulation results clearly show the superiority of this algorithm over the other well-known methods in the area.
Institute of Scientific and Technical Information of China (English)
Zhou Jianjun; Lin Chunsheng; Chen Hao
2014-01-01
Aeromagnetic interference cannot be compensated effectively if the parameters solved by the aircraft magnetic field model have low precision. In order to improve the compensation effect under this condition, a method based on a small signal model and the least mean square (LMS) algorithm is proposed. In this method, the initial values of the adaptive filter's weight vector are first calculated from the solved model parameters through the small signal model; then the small variations of the direction cosines and their derivatives are set as the input of the filter, and the small variation of the interference is set as the filter's expected vector. After that, the aircraft magnetic interference is compensated by the LMS algorithm. Finally, the method is verified by simulation and experiment. The results show that the compensation effect can be improved obviously by the LMS algorithm when the originally solved parameters have low precision, and the method can further improve the compensation effect even if the solved parameters have high precision.
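The LMS update at the heart of such a compensator is a one-liner per sample. A generic sketch follows, demonstrating system identification of a known FIR response rather than the aeromagnetic model itself; the tap count and step size are illustrative.

```python
import numpy as np

def lms_filter(x, d, taps, mu, w0=None):
    """Adapt weights w so that w . [x[k], ..., x[k-taps+1]] tracks d[k]."""
    w = np.zeros(taps) if w0 is None else np.asarray(w0, dtype=float).copy()
    err = np.zeros(len(x))
    for k in range(taps - 1, len(x)):
        xk = x[k - taps + 1:k + 1][::-1]   # input window, newest sample first
        err[k] = d[k] - w @ xk             # instantaneous estimation error
        w += 2 * mu * err[k] * xk          # LMS gradient-descent weight update
    return w, err
```

Seeding `w0` with model-derived weights, as the paper does via the small signal model, simply shortens the transient before the error settles.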
An efficient method of key-frame extraction based on a cluster algorithm.
Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng
2013-12-18
This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data.
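The cluster-then-pick-center idea reduces, in its simplest form, to k-means over frame feature vectors. The paper's ISODATA variant also splits and merges clusters adaptively, which is omitted here; initialization is evenly spaced in time as a simple heuristic of our own.

```python
import numpy as np

def key_frames(frames, k, iters=50):
    """Cluster frame vectors with k-means; return the index of the frame
    nearest each cluster center as the key-frames."""
    idx = np.linspace(0, len(frames) - 1, k).astype(int)
    centers = frames[idx].astype(float).copy()   # evenly spaced initial centers
    for _ in range(iters):
        dist = np.linalg.norm(frames[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)             # assign each frame to a cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = frames[labels == j].mean(axis=0)
    dist = np.linalg.norm(frames[:, None] - centers[None], axis=2)
    return sorted(int(dist[:, j].argmin()) for j in range(k))
```

With two well-separated pose clusters, the extractor returns one representative frame from each, which is the behaviour the test checks.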
2-D minimum fuzzy entropy method of image thresholding based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
A new image thresholding method is introduced, which is based on the 2-D histogram and on minimizing the measures of fuzziness of an input image. A new definition of the fuzzy membership function is proposed; it denotes the characteristic relationship between the gray level of each pixel and the average value of its neighborhood. When the threshold is not located at an obvious and deep valley of the histogram, a genetic algorithm is devoted to the problem of selecting the appropriate threshold value. The experimental results indicate that the proposed method has good performance.
Convergence Analysis of Contrastive Divergence Algorithm Based on Gradient Method with Errors
Xuesi Ma; Xiaojie Wang
2015-01-01
Contrastive Divergence has become a common way to train Restricted Boltzmann Machines; however, its convergence has not been made clear yet. This paper studies the convergence of Contrastive Divergence algorithm. We relate Contrastive Divergence algorithm to gradient method with errors and derive convergence conditions of Contrastive Divergence algorithm using the convergence theorem of gradient method with errors. We give specific convergence conditions of Contrastive Divergence ...
A Method for Streamlining and Assessing Sound Velocity Profiles Based on Improved D-P Algorithm
Zhao, D.; WU, Z. Y.; Zhou, J.
2015-12-01
A multi-beam system transmits sound waves and receives the round-trip time of their reflection or scattering, making it possible to determine the depth and coordinates of detected targets using the sound velocity profile (SVP) based on Snell's Law. The SVP is measured by a profiling device. Because of the high sampling rates of modern devices, the operational time of ray tracing and beam footprint reduction increases, lowering overall efficiency. To improve the timeliness of multi-beam surveys and data processing, redundant points in the original SVP must be screened out while, at the same time, the errors introduced by streamlining the SVP are evaluated and controlled. We present a new streamlining and evaluation method based on the Maximum Offset of sound Velocity (MOV) algorithm. Given measured SVP data, this method selects sound velocity data points by calculating the maximum distance in the sound-velocity dimension based on an improved Douglas-Peucker algorithm to streamline the SVP (Fig. 1). To evaluate whether the streamlined SVP meets the desired accuracy requirements, the method is divided into two parts: SVP streamlining, and an accuracy analysis of the multi-beam sounding data processed using the streamlined SVP. The method therefore comprises two modules: the streamlining module and the evaluation module (Fig. 2). The streamlining module streamlines the SVP; its core is the MOV algorithm. To assess the accuracy of the streamlined SVP, we use ray tracing and percentage error analysis to evaluate the accuracy of the sounding data both before and after streamlining the SVP (Fig. 3). By automatically optimizing the threshold, the reduction rate of sound velocity profile data can reach over 90%, and the standard deviation of the percentage error of the sounding data can be controlled to within 0.1% (Fig. 4). The optimized sound velocity profile data improved the operational efficiency of the multi-beam survey and data post-processing.
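The Douglas-Peucker step, with the offset measured in the sound-velocity dimension as the MOV idea suggests, can be sketched as follows. The threshold optimization and evaluation module of the paper are omitted; this is only the thinning recursion.

```python
def thin_svp(points, tol):
    """Douglas-Peucker thinning of a sound velocity profile.

    points: list of (depth, velocity) pairs with strictly increasing depth.
    The offset is taken in the velocity dimension only, mirroring the idea of
    a maximum sound-velocity offset from the chord (a sketch, not the paper's
    exact formulation)."""
    (d0, v0), (d1, v1) = points[0], points[-1]
    worst, idx = 0.0, None
    for i in range(1, len(points) - 1):
        d, v = points[i]
        # velocity predicted by the chord between the endpoints at this depth
        chord = v0 + (v1 - v0) * (d - d0) / (d1 - d0)
        if abs(v - chord) > worst:
            worst, idx = abs(v - chord), i
    if worst <= tol or idx is None:
        return [points[0], points[-1]]
    left = thin_svp(points[:idx + 1], tol)
    right = thin_svp(points[idx:], tol)
    return left[:-1] + right          # drop the duplicated split point
```

A near-linear profile collapses to its endpoints, while gradient changes (thermocline knees) are retained.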
A Method for Crude Oil Selection and Blending Optimization Based on Improved Cuckoo Search Algorithm
Institute of Scientific and Technical Information of China (English)
Yang Huihua; Ma Wei; Zhang Xiaofeng; Li Hu; Tian Songbai
2014-01-01
Refineries often need to find similar crude oils to replace a scarce crude oil in order to stabilize feedstock properties. We first introduce the method for calculating the properties of blended crude, and then build a crude oil selection and blending optimization model based on crude oil property data. The model is a constrained mixed-integer nonlinear program (MINLP) whose objective is to maximize the similarity between the blended crude and the target crude. Furthermore, the model accounts for the selection of crude oils and their blending ratios simultaneously, transforming the search for a similar crude oil into a selection and blending optimization problem. We applied the Improved Cuckoo Search (ICS) algorithm to solve the model. In simulations, ICS was compared with the genetic algorithm, the particle swarm optimization algorithm, and the CPLEX solver. The results show that ICS has very good optimization efficiency. The blending solution can serve as a reference for refineries seeking similar crude oils, and the proposed method can also inform the selection and blending optimization of other materials.
Predicting Modeling Method of Ship Radiated Noise Based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Guohui Li
2016-01-01
Because the forming mechanism of underwater acoustic signals is complex, it is difficult to establish an accurate predicting model. In this paper, we propose a nonlinear predictive modeling method for ship radiated noise based on a genetic algorithm. Three types of ship radiated noise are taken as real underwater acoustic signals. First, a basic model framework is chosen. Second, each candidate model is given a genetic encoding. Third, a model evaluation standard is established. Fourth, the genetic operations of crossover, reproduction, and mutation are designed. Finally, a prediction model of the real underwater acoustic signal is established by the genetic algorithm. Calculating the root mean square error and signal error ratio of the predicting model yields satisfactory results. The results show that the proposed method can establish an accurate predicting model with high prediction accuracy and may play an important role in further processing of underwater acoustic signals, such as noise reduction, feature extraction, and classification.
Research and Simulation of FECG Signal Blind Separation Algorithm Based on Gradient Method
Directory of Open Access Journals (Sweden)
Yu Chen
2012-08-01
Independent Component Analysis (ICA) is a signal separation and digital analysis technology developed in recent years. ICA has been widely used because it does not require prior information about the signal, and it has become a hot spot in signal processing research. In this study, we first introduce the principle and meaning of blind source separation and the gradient-based algorithm. Using the traditional natural gradient algorithm and the Equivariant Adaptive Source Separation via Independence (EASI) blind separation algorithm, mixed ECG signals with noise were effectively separated into the maternal electrocardiograph (MECG) signal, the fetal electrocardiograph (FECG) signal, and the noise signal. The separation tests showed that the EASI algorithm better separates the fetal ECG signal; moreover, because the gradient algorithm is an online algorithm, it can be used for real-time clinical detection of the fetal ECG signal, which has important practical value and research significance.
Directory of Open Access Journals (Sweden)
T. Karpagam
2012-01-01
Problem statement: Network topology design problems find application in several real-life scenarios. Approach: Most past designs optimize for a single criterion, such as shortest path, cost minimization, or maximum flow. Results: This study discusses solving a multi-objective network topology design problem for a realistic traffic model, specifically in pipeline transportation. The flow-based algorithm focuses on transporting liquid goods at maximum capacity over the shortest distance, and was developed in the spirit of basic PERT and the critical path method. Conclusion/Recommendations: The flow-based algorithm helps give an optimal result for transporting maximum capacity at minimum cost. It could be used in juice factories and the milk industry, and is a good alternative for the vehicle routing problem.
Xie, Li; Li, Guangyao; Xiao, Mang; Peng, Lei
2016-04-01
Various kinds of remote sensing image classification algorithms have been developed to adapt to the rapid growth of remote sensing data. Conventional methods typically have restrictions in either classification accuracy or computational efficiency. Aiming to overcome the difficulties, a new solution for remote sensing image classification is presented in this study. A discretization algorithm based on information entropy is applied to extract features from the data set and a vector space model (VSM) method is employed as the feature representation algorithm. Because of the simple structure of the feature space, the training rate is accelerated. The performance of the proposed method is compared with two other algorithms: back propagation neural networks (BPNN) method and ant colony optimization (ACO) method. Experimental results confirm that the proposed method is superior to the other algorithms in terms of classification accuracy and computational efficiency.
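A single entropy-minimizing cut point, the core idea behind information-entropy discretization of a feature, can be sketched as follows. Real discretizers such as MDLP recurse on each side and apply a stopping criterion, both omitted here.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def best_cut(values, labels):
    """Cut point on one feature minimizing the weighted class entropy."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    vals = [values[i] for i in order]
    labs = [labels[i] for i in order]
    best = None
    for i in range(1, len(vals)):
        if vals[i] == vals[i - 1]:
            continue                      # no boundary between equal values
        left, right = labs[:i], labs[i:]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labs)
        cut = (vals[i - 1] + vals[i]) / 2
        if best is None or h < best[0]:
            best = (h, cut)
    return best[1]
```

When the classes are perfectly separable on the feature, the cut lands exactly between the two groups.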
A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.
de Brito, Daniel M; Maracaja-Coutinho, Vinicius; de Farias, Savio T; Batista, Leonardo V; do Rêgo, Thaís G
2016-01-01
Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Nevertheless, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic islands prediction have been proposed. However, none of them are capable of predicting precisely the complete repertory of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distribution in different species. In this paper, we present a novel method to predict GIs that is built upon mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP--Mean Shift Genomic Island Predictor. Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also different novel unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions for this new methodology are available at http://msgip.integrativebioinformatics.me. PMID:26731657
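The mean shift core can be sketched in one dimension with a flat kernel. The paper derives its bandwidth from a heuristic; here it is an explicit `bandwidth` parameter for clarity.

```python
import numpy as np

def mean_shift(points, bandwidth, iters=50, tol=1e-5):
    """Flat-kernel mean shift: return the mode each point converges to."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            # shift each mode estimate to the mean of its neighborhood
            nbr = points[np.abs(points - m) <= bandwidth]
            shifted[i] = nbr.mean()
        if np.abs(shifted - modes).max() < tol:
            modes = shifted
            break
        modes = shifted
    return modes
```

Points sharing a mode form one cluster, so the number of clusters emerges from the data rather than being specified up front.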
Adaptive Initialization Method Based on Spatial Local Information for k-Means Algorithm
Directory of Open Access Journals (Sweden)
Honghong Liao
2014-01-01
The k-means algorithm is widely used for clustering in the data mining and machine learning communities. However, the initial guess of cluster centers seriously affects the clustering result: improper initialization may not lead to a desirable clustering. How to choose suitable initial centers is therefore an important research issue for the k-means algorithm. In this paper, we propose an adaptive initialization framework based on spatial local information (AIF-SLI), which takes advantage of the local density of the data distribution. As density is difficult to estimate exactly, we develop two approximate estimates: density by t-nearest neighborhoods (t-NN) and density by ϵ-neighborhoods (ϵ-Ball), leading to two implementations of the proposed framework. Our empirical study on more than 20 datasets shows promising performance and indicates several advantages: (1) it finds reasonable candidates for the initial centers effectively; (2) it significantly reduces the number of k-means iterations; (3) it is robust to outliers; and (4) it is easy to implement.
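A density-guided initialization in the spirit of the t-NN estimate can be sketched as follows. The `min_gap` separation rule and the median-distance default are assumptions of this sketch, not the paper's exact criterion.

```python
import numpy as np

def density_init(X, k, t=3, min_gap=None):
    """Pick k initial k-means centers at high-density, mutually distant points.
    Density is approximated via t-nearest-neighbour distances."""
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    knn = np.sort(D, axis=1)[:, 1:t + 1]      # skip the zero self-distance
    density = 1.0 / (knn.mean(axis=1) + 1e-12)
    if min_gap is None:
        min_gap = np.median(D)                # crude separation scale (assumption)
    order = np.argsort(-density)              # densest points first
    centers = [order[0]]
    for i in order[1:]:
        if len(centers) == k:
            break
        if all(D[i, c] > min_gap for c in centers):
            centers.append(i)
    return X[centers]
```

On well-separated clusters this places one seed per cluster, which is exactly the situation where random initialization most often fails.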
New Classification Method Based on Support-Significant Association Rules Algorithm
Li, Guoxin; Shi, Wen
One of the most well-studied problems in data mining is mining association rules. Research has also introduced association rule mining methods for classification tasks, and such classification methods can be applied to customer segmentation. Currently, most association rule mining methods are based on a support-confidence framework, in which rules satisfying both minimum support and minimum confidence are returned to the analyst as strong association rules. However, this type of method lacks a rigorous statistical guarantee and can even be misleading. A new classification model for customer segmentation, based on an association rule mining algorithm, is proposed in this paper. The new model is based on support-significance association rule mining, in which the confidence measure is replaced by the significance of the association rule, a better evaluation standard for association rules. A data experiment on customer segmentation data from UCI indicates the effectiveness of this new model.
Directory of Open Access Journals (Sweden)
Jing Xu
2016-07-01
As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring and diagnosis systems exhibit obvious advantages, especially in some extreme conditions. However, sound collected directly in the industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first and second order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming, and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced into the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. The sound signal of a motor was then recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrate the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer at a coal mining working face demonstrates the practical effect.
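The wavelet-threshold step can be illustrated with a one-level Haar transform and a fixed soft threshold. The paper instead tunes the thresholds with an improved fruit fly optimizer and uses deeper decompositions, both omitted in this sketch.

```python
import numpy as np

def haar_denoise(x, thr):
    """One-level Haar wavelet soft-threshold denoising (even-length x)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
    y = np.empty_like(x)                   # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

With `thr=0` the signal is reconstructed exactly; a positive threshold suppresses small detail coefficients, which is where broadband noise concentrates.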
Wang, Xiaogang; Zhao, Daomu
2012-05-21
A double-image encryption technique based on an asymmetric algorithm is proposed. In this method, the encryption process differs from the decryption process, and the encrypting keys differ from the decrypting keys. In the nonlinear encryption process, the images are encoded into an amplitude ciphertext, and two phase-only masks (POMs) generated by phase truncation are kept as keys for decryption. Using the classical double random phase encoding (DRPE) system, the primary images can be collected by an intensity detector located at the output plane. Three random POMs applied in the asymmetric encryption can safely serve as public keys. Simulation results are presented to demonstrate the validity and security of the proposed protocol.
Directory of Open Access Journals (Sweden)
Li Honglin
2009-03-01
Background: Conformation generation is a ubiquitous problem in molecular modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and systematic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolution algorithm. Results: The conformational perturbation is subjected to evolutionary operations on a genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto-optimal conformers energy-favoured as well as evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations, which have been observed to impact molecular bioactivity (J Comput-Aided Mol Des 2002, 16:105-112). Testing Cyndi against a set of 329 small molecules reveals an average minimum RMSD of 0.864 Å to the corresponding bioactive conformations, indicating that Cyndi is highly competitive against other conformation generation methods. Meanwhile, its high speed (0.49 ± 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. Conclusion: On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms
Rajan, C. Christober Asir
2010-10-01
The objective of this paper is to find a generation schedule that minimizes the total operating cost subject to a variety of constraints; that is, to find the optimal generating unit commitment in the power system for the next H hours. Genetic algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. Here, the unit commitment schedule is encoded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e., each solution is adjusted to meet the requirements. Then a random recommitment is carried out with respect to the units' minimum down-times, and simulated annealing (SA) improves the result. A 66-bus utility power system in India with twelve generating units demonstrates the effectiveness of the proposed approach. Numerical results compare the cost solutions and computation times obtained using the genetic algorithm method and other conventional methods.
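A toy single-hour version of the GA search can be sketched as follows. The paper's version handles multi-hour schedules, minimum down-times, and an SA refinement, none of which appear here; the penalty constant and operator rates are illustrative assumptions.

```python
import random

def ga_commit(capacity, cost, demand, pop=30, gens=60, seed=1):
    """Toy GA for single-hour unit commitment: choose an on/off string that
    meets demand at low running cost, with shortfall handled by a penalty."""
    rng = random.Random(seed)
    n = len(capacity)

    def fitness(bits):
        cap = sum(c * b for c, b in zip(capacity, bits))
        run = sum(c * b for c, b in zip(cost, bits))
        return run + 1e3 * max(0, demand - cap)   # heavy shortfall penalty

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[:pop // 2]                   # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        popn = elite + children
    return min(popn, key=fitness)
```

With four units the search space is tiny, so the GA reliably reaches a feasible, low-cost commitment.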
Directory of Open Access Journals (Sweden)
Chang Liu
2015-01-01
Path planning is a classic optimization problem that can be solved by many optimization algorithms. The complexity of three-dimensional (3D) path planning for autonomous underwater vehicles (AUVs) requires an optimization algorithm with a quick convergence speed. This work provides a new 3D path planning method for AUVs using a modified firefly algorithm. In order to solve the problem of slow convergence of the basic firefly algorithm, an improved method is proposed in which the parameters of the algorithm and the random movement steps can be adjusted during operation. At the same time, an autonomous flight strategy is introduced to avoid invalid flights. An excluding operator is used to improve obstacle avoidance, and a contracting operator is used to enhance the convergence speed and the smoothness of the path. The performance of the modified firefly algorithm and the effectiveness of the 3D path planning method are demonstrated through a varied set of experiments.
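The textbook firefly update that the paper modifies can be sketched as follows; the parameter values here are illustrative assumptions, and the excluding/contracting operators of the paper are not included.

```python
import numpy as np

def firefly_min(f, bounds, n=20, iters=100, beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Basic firefly algorithm minimizing f over a box domain."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, (n, len(lo)))
    F = np.array([f(x) for x in X])
    for it in range(iters):
        step = alpha * (1.0 - it / iters)          # decaying random step size
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                    # j is brighter: i moves toward j
                    r2 = ((X[i] - X[j]) ** 2).sum()
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness falls with distance
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i])
                                   + step * rng.uniform(-0.5, 0.5, len(lo)), lo, hi)
                    F[i] = f(X[i])
    return X[F.argmin()], F.min()
```

The decaying random step plays the role the paper assigns to its adjustable step sizes: wide exploration early, fine refinement late.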
Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method
Nemec, Marian; Aftosmis, Michael J.
2004-01-01
Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (the geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, which results in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented
A User Differential Range Error Calculating Algorithm Based on Analytic Method
Institute of Scientific and Technical Information of China (English)
SHAO Bo; LIU Jiansheng; ZHAO Ruibin; HUANG Zhigang; LI Rui
2011-01-01
To enhance integrity, an analytic method (AM) with shorter execution time is proposed to calculate the user differential range error (UDRE), which the user employs to detect potential risk. An ephemeris and clock correction calculation method is introduced first. It shows that the key step in computing the UDRE is finding the worst user location (WUL) in the service volume. Then, a UDRE algorithm using the AM is described to solve this problem: by using the covariance matrix of the error vector, the search for the WUL is converted into an analytic geometry problem, and the location of the WUL is obtained directly by mathematical derivation. Experiments compare the performance of the proposed AM algorithm with the exhaustive grid search (EGS) method used at the master station. The results show that the correctness of the AM algorithm is confirmed by the EGS method, and that the AM algorithm reduces the calculation time by more than 90%. The computational complexity of the proposed algorithm is lower than that of EGS, so it is more suitable for computing the UDRE at the master station.
Two Novel On-policy Reinforcement Learning Algorithms based on TD(lambda)-methods
Wiering, M.A.; Hasselt, H. van
2007-01-01
This paper describes two novel on-policy reinforcement learning algorithms, named QV(lambda)-learning and the actor critic learning automaton (ACLA). Both algorithms learn a state value-function using TD(lambda)-methods. The difference between the algorithms is that QV-learning uses the learned value function and a form of Q-learning to learn Q-values, whereas ACLA uses the value function and a learning automaton-like update rule to update the actor. We describe several possible advantages of...
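The TD(lambda) value-function estimation shared by both algorithms can be sketched with tabular accumulating eligibility traces; the actor/critic updates specific to QV(lambda)-learning and ACLA are omitted.

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=1.0, lam=0.8):
    """Tabular TD(lambda) state-value estimation with accumulating traces.
    Each episode is a list of (state, reward, next_state) with next_state=None
    at termination."""
    V = np.zeros(n_states)
    for ep in episodes:
        e = np.zeros(n_states)                 # eligibility traces
        for s, r, s2 in ep:
            target = r + (gamma * V[s2] if s2 is not None else 0.0)
            delta = target - V[s]              # TD error
            e[s] += 1.0                        # accumulate trace for s
            V += alpha * delta * e             # credit all eligible states
            e *= gamma * lam                   # decay traces
    return V
```

On a two-state chain with a single terminal reward, the values of both states converge to the true return.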
A Multiuser Manufacturing Resource Service Composition Method Based on the Bees Algorithm
Directory of Open Access Journals (Sweden)
Yongquan Xie
2015-01-01
In order to realize optimal resource service allocation in the current open and service-oriented manufacturing model, multiuser resource service composition (RSC) is modeled as a combinatorial and constrained multiobjective problem. The model takes into account both subjective and objective quality of service (QoS) properties as representatives to evaluate a solution; the QoS property aggregation and evaluation techniques are based on existing research. The basic Bees Algorithm is tailored to find a near-optimal solution to the model, since the basic version is only designed to find a desired solution in a continuous domain and is thus not suitable for the problem modeled in our study. Particular rules are designed for handling the constraints and finding Pareto optimality. In addition, the established model introduces a trusted service set for each user, so that the algorithm can start by searching in the neighborhood of more reliable service chains (known as seeds) rather than randomly generated ones. The advantages of these techniques are validated by experiments in terms of success rate, searching speed, ability of avoiding ingenuity, and so forth. The results demonstrate the effectiveness of the proposed method in handling multiuser RSC problems.
Directory of Open Access Journals (Sweden)
Wang Pidong
2016-01-01
Blind source separation is a hot topic in signal processing. Most existing works focus on linearly mixed signals, while in practice we often encounter nonlinearly mixed signals. To address the problem of nonlinear source separation, in this paper we propose a novel algorithm using a radial basis function neural network, optimized by a multi-universe parallel quantum genetic algorithm. Experiments show the efficiency of the proposed method.
Jie-Sheng Wang; Chen-Xu Ning
2015-01-01
In order to improve the accuracy and timeliness of all kinds of information in the cash business, and to solve the problem of low accuracy and stability in the data linkage between cash inventory forecasting and cash management information in commercial banks, a hybrid learning algorithm is proposed based on the adaptive population activity particle swarm optimization (APAPSO) algorithm combined with the least squares method (LMS) to optimize the adaptive network-based fuzzy inference s...
Institute of Scientific and Technical Information of China (English)
LI Zicheng; SUN Yukun
2006-01-01
Considering the detection principle that "when the load current is periodic, the integral over one cycle of the absolute value of the load current minus the fundamental active current is minimal," harmonic current real-time detection methods for power active filters are proposed based on direct computation, a simple iterative algorithm, and an optimal iterative algorithm. With the direct computation method, the amplitude of the fundamental active current can be accurately calculated when the load current is in a stable state. The simple iterative algorithm and the optimal iterative algorithm provide a way to judge the state of the load current. Building on the direct computation method, the two iterative algorithms, and precise definitions of basic concepts such as the true amplitude of the fundamental active current when the load current is in a varying state, the double linear construction idea is proposed: the amplitude of the fundamental active current at the sampling moment is accurately calculated by the first linear construction, and the condition for processing the next sample is created by the second linear construction. On this basis, a harmonic current real-time detection method for power active filters based on the double linear construction algorithm is proposed. The method features a small computational load, good real-time performance, and accurate calculation of the fundamental active current amplitude.
A Semi-Supervised WLAN Indoor Localization Method Based on ℓ1-Graph Algorithm
Institute of Scientific and Technical Information of China (English)
Liye Zhang; Lin Ma; Yubin Xu
2015-01-01
For indoor location estimation based on received signal strength (RSS) in wireless local area networks (WLANs), a large number of RSS samples must be collected in the offline phase to reduce the influence of noise on positioning accuracy. Collecting training data with positioning information is therefore time consuming, which becomes the bottleneck of WLAN indoor localization. In this paper, the traditional semi-supervised learning methods based on k-NN and ε-NN graphs for reducing the collection workload of the offline phase are analyzed; the results show that k-NN and ε-NN graphs are sensitive to data noise, which limits the performance of semi-supervised WLAN indoor localization systems. To address this problem, we propose an ℓ1-graph-algorithm-based semi-supervised learning (LG-SSL) indoor localization method in which the graph is built by the ℓ1-norm algorithm. Our system first labels the unlabeled data using LG-SSL together with the labeled data to build the radio map in the offline training phase, and then uses LG-SSL to estimate the user's location in the online phase. Extensive experimental results show that, benefiting from the robustness to noise and the sparsity of the ℓ1-graph, LG-SSL exhibits superior performance, effectively reducing the collection workload in the offline phase and improving localization accuracy in the online phase.
International Nuclear Information System (INIS)
Many problems need to be solved before vision systems can be applied in industry, such as the precision of the kinematic model in robot control algorithms based on visual information, active compensation of the camera's focal length and orientation during robot movement, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by having the robot perform a slender-bar placement task.
Alexander B. Bakulev; Marina A. Bakuleva; Svetlana B. Avilkina
2012-01-01
This article deals with mathematical models and algorithms that provide mobility for the parallel representation of sequential programs in a high-level language. It presents a formal model of operating-environment process management, based on the proposed model of parallel program representation, which describes the computation process on multi-core processors.
Institute of Scientific and Technical Information of China (English)
Tung-Kuan Liu; Chiu-Hung Chen; Zu-Shu Li; Jyh-Horng Chou
2009-01-01
This article presents a multiobjective approach to the design of the controller for the swing-up and handstand control of a general cart-double-pendulum system (CDPS). The designed controller, which is based on the human-simulated intelligent control (HSIC) method, builds up different control modes to monitor and control the CDPS during four kinetic phases: an initial oscillation phase, a swing-up phase, a posture adjustment phase, and a balance control phase. For the approach, the original method-of-inequalities-based (MoI) multiobjective genetic algorithm (MMGA) is extended and applied to a case study using a set of performance indices that includes the cart displacement over the rail boundary, the number of swings, the settling time, the overshoot of the total energy, and the control effort. The simulation results show good responses of the CDPS with the controllers obtained by the proposed approach.
A New Tool Wear Monitoring Method Based on Ant Colony Algorithm
Directory of Open Access Journals (Sweden)
Qianjian Guo
2013-06-01
Full Text Available Tool wear is a major contributor to the dimensional errors of a workpiece in precision machining and plays an important role in industry for higher productivity and product quality. Tool wear monitoring is an effective way to predict tool wear loss in the milling process. In this paper, a new bionic prediction model is presented based on the generation mechanism of tool wear loss. Milling conditions are taken as the input variables and tool wear loss as the output variable; a neural network is used to establish the mapping relation, and an ant colony algorithm is used to train the weights of the BP neural network during tool wear modeling. Finally, a real-time tool wear loss estimator is developed based on the ant colony algorithm, and experiments measuring tool wear with the estimator have been conducted on a milling machine. The experimental and estimated results are in satisfactory agreement, with an average error below 6%.
Unsupervised classification algorithm based on EM method for polarimetric SAR images
Fernández-Michelli, J. I.; Hurtado, M.; Areta, J. A.; Muravchik, C. H.
2016-07-01
In this work we develop an iterative classification algorithm using complex Gaussian mixture models for polarimetric complex SAR data. It is an unsupervised algorithm which does not require training data or an initial set of classes. Additionally, it determines the model order from the data, which allows representing the data structure with minimum complexity. The algorithm consists of four steps: initialization, model selection, refinement and smoothing. After a simple initialization stage, the EM algorithm is iteratively applied in the model selection step to compute the model order and an initial classification for the refinement step. The refinement step uses Classification EM (CEM) to reach the final classification, and the smoothing stage improves the results by means of non-linear filtering. The algorithm is applied to both simulated and real Single Look Complex data of the EMISAR mission and compared with the Wishart classification method. We use the confusion matrix and kappa statistic to make the comparison for simulated data whose ground truth is known, and apply the Davies-Bouldin index to compare both classifications for real data. The results obtained for both types of data validate our algorithm and show that its performance is comparable to the Wishart method's in terms of classification quality.
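The order-from-data idea can be illustrated with an off-the-shelf EM implementation; scikit-learn's `GaussianMixture` and the BIC criterion stand in here for the authors' model-selection step, and the synthetic 2-D features below are only a toy stand-in for polarimetric data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Toy stand-in for SAR features: three well-separated clusters in 2-D.
X = np.vstack([rng.normal(m, 0.4, size=(200, 2)) for m in (0.0, 3.0, 6.0)])

# Fit mixtures of increasing order with EM and keep the lowest-BIC model.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(X))   # model order from data
labels = fits[best_k].predict(X)                   # initial classification
```

The refinement (CEM) and smoothing stages would then operate on `labels`.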
Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.
Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don
2016-01-01
Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of contour pixel based on its local pattern, then traces the next contour pixel using the previous pixel's type. Therefore, it can classify the type of contour pixels as straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632
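A minimal pixel-following tracer in the Moore-neighbour style shows the kind of loop the paper accelerates; the pixel-type classification and the compression/restoration steps are not reproduced, and the code below is an illustrative sketch rather than the authors' algorithm.

```python
import numpy as np

# 8 neighbours in clockwise order, starting from "west".
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
           (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_contour(img):
    """Return the closed outer contour of the first object in a binary image."""
    h, w = img.shape
    start = tuple(np.argwhere(img == 1)[0])   # top-most, left-most object pixel
    contour, cur, backtrack = [start], start, 0
    while True:
        for i in range(8):
            d = (backtrack + i) % 8
            ny, nx = cur[0] + OFFSETS[d][0], cur[1] + OFFSETS[d][1]
            if 0 <= ny < h and 0 <= nx < w and img[ny, nx] == 1:
                cur = (ny, nx)
                backtrack = (d + 5) % 8       # resume search "behind" new pixel
                break
        if cur == start:                      # closed the loop (or isolated pixel)
            return contour
        contour.append(cur)

img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1                             # a 3x3 square object
contour = trace_contour(img)
```

For the square, the tracer visits exactly the eight border pixels once, clockwise.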
Directory of Open Access Journals (Sweden)
Luman Zhao
2015-01-01
Full Text Available A thrust allocation method based on a hybrid optimization algorithm is proposed to efficiently and dynamically position a semisubmersible drilling rig. That is, the thrust allocation is optimized to produce the required generalized forces and moment while minimizing the total power consumption, under the premise that forbidden zones are taken into account. An optimization problem is mathematically formulated to provide the optimal thrust allocation by introducing the corresponding design variables, objective function, and constraints. A hybrid optimization algorithm consisting of a genetic algorithm and a sequential quadratic programming (SQP) algorithm is selected and used to solve this problem. The proposed method is evaluated by applying it to a thrust allocation problem for a semisubmersible drilling rig. The results indicate that the proposed method can be used as part of a cost-effective strategy for thrust allocation of the rig.
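The hybrid idea — a global evolutionary search whose best point seeds a local SQP refinement — can be sketched with SciPy, where `differential_evolution` stands in for the genetic stage and SLSQP for the SQP stage. The toy power model and force constraint below are assumptions for illustration, not the rig's actual thruster model.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def power(x):
    """Toy stand-in for total thruster power: sum of |thrust|^1.5 terms."""
    return np.sum(np.abs(x) ** 1.5)

# Equality constraint: the three thrusts must add up to the demanded force.
cons = {"type": "eq", "fun": lambda x: np.sum(x) - 10.0}
bounds = [(0.0, 8.0)] * 3   # forbidden zones would add further constraints

# Stage 1: evolutionary global search (constraint handled by a penalty).
rough = differential_evolution(
    lambda x: power(x) + 100.0 * (np.sum(x) - 10.0) ** 2,
    bounds, seed=1, tol=1e-8)

# Stage 2: SQP refinement starting from the evolutionary seed.
res = minimize(power, rough.x, method="SLSQP", bounds=bounds, constraints=cons)
```

Because the toy power curve is convex and superlinear, the optimum splits the demand equally across the three thrusters.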
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Energy Technology Data Exchange (ETDEWEB)
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
Directory of Open Access Journals (Sweden)
Shao-Fei Jiang
2014-01-01
Full Text Available Optimization techniques have been applied to structural health monitoring and damage detection of civil infrastructures for two decades. The standard particle swarm optimization (PSO) easily falls into local optima, and this deficiency also exists in multiparticle swarm coevolution optimization (MPSCO). This paper first presents an improved MPSCO algorithm (IMPSCO) and then integrates it with Newmark's algorithm to localize and quantify structural damage using a proposed damage threshold. To validate the proposed method, a numerical simulation and an experimental study of a seven-story steel frame were employed, and a comparison was made with the genetic algorithm (GA). The results show three things: (1) the proposed method is not only capable of localizing and quantifying damage, but also has good noise tolerance; (2) the damage location can be accurately detected using the proposed damage threshold; and (3) compared with the GA, the IMPSCO algorithm is more efficient and accurate for damage detection problems in general. This implies that the proposed method is applicable and effective in the community of damage detection and structural health monitoring.
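For reference, the standard PSO loop that IMPSCO builds on (with multiple swarms and coevolution added on top) looks roughly like this; the implementation below is a generic sketch, not the paper's code.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]    # personal bests
        g = pbest[pval.argmin()].copy()                        # global best
    return g, pval.min()

best, val = pso(lambda z: np.sum((z - 1.0) ** 2), dim=3)
```

In the paper this inner loop would be evaluated against a damage-sensitive objective built from Newmark time-integration residuals.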
Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method
Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen
2008-01-01
In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method based on the genetic algorithm (GA) to construct parallel…
NUMERICAL METHOD BASED ON HAMILTON SYSTEM AND SYMPLECTIC ALGORITHM TO DIFFERENTIAL GAMES
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
The resolution of differential games often involves the difficult two-point boundary value (TPBV) problem; the linear quadratic differential game can then be recast as a Hamiltonian system. For Hamiltonian systems, symplectic algorithms have the merit of reproducing the dynamic structure of the Hamiltonian system and preserving the measure of the phase plane. From the viewpoint of Hamiltonian systems, the symplectic characters of the linear quadratic differential game were probed and, as a first attempt, a symplectic Runge-Kutta algorithm was presented for the resolution of the infinite-horizon linear quadratic differential game. A numerical example is given, and the result illustrates the feasibility of this method. At the same time, it embodies the fine conservation characteristics of symplectic algorithms with respect to system energy.
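The conservation property referred to can be seen on the simplest Hamiltonian system, the harmonic oscillator H = (p² + q²)/2: a symplectic step (here symplectic Euler, for brevity) keeps the energy bounded, while explicit Euler lets it drift. This is a minimal sketch of the principle, not the paper's symplectic Runge-Kutta scheme for differential games.

```python
import math

def symplectic_euler(q, p, h, steps):
    """H = (p^2 + q^2)/2: kick with dH/dq, then drift with the updated p."""
    for _ in range(steps):
        p -= h * q
        q += h * p
    return q, p

def explicit_euler(q, p, h, steps):
    """Same oscillator, but both updates use the old state."""
    for _ in range(steps):
        q, p = q + h * p, p - h * q
    return q, p

E = lambda q, p: 0.5 * (q * q + p * p)   # true energy, 0.5 at (q, p) = (1, 0)
qs, ps = symplectic_euler(1.0, 0.0, 0.01, 10000)
qe, pe = explicit_euler(1.0, 0.0, 0.01, 10000)
```

After 10,000 steps the symplectic energy still oscillates near 0.5, whereas the explicit-Euler energy has grown by a factor of roughly e.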
An infrared small target detection algorithm based on high-speed local contrast method
Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao
2016-05-01
Small-target detection in infrared imagery with a complex background is an important task in remote sensing. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed; however, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the Human Visual System (HVS), is proposed to balance these detection capabilities. The first layer uses a high-speed simplified local contrast method to select significant information, and the second layer uses a machine learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate and speed simultaneously.
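A basic local-contrast map, of the kind the first layer computes in simplified form, can be sketched as follows; the window size and the centre-to-surround ratio used here are illustrative assumptions, and the paper's classifier layer is omitted.

```python
import numpy as np

def local_contrast(img, r=1):
    """Ratio of each pixel to the mean of its surrounding ring of neighbours."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            centre = patch[r, r]
            ring = np.delete(patch.ravel(), patch.size // 2)  # drop the centre
            out[y, x] = centre / (ring.mean() + 1e-9)
    return out

img = np.full((9, 9), 10.0)
img[4, 4] = 100.0            # a one-pixel "target" on a flat background
cmap = local_contrast(img)
```

A small bright target stands out as the maximum of the contrast map, which a threshold or classifier can then pick up.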
A method for classification of network traffic based on C5.0 Machine Learning Algorithm
DEFF Research Database (Denmark)
Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup
2012-01-01
Monitoring of the network performance in high-speed Internet infrastructure is a challenging task, as the requirements for the given quality level are service-dependent. Backbone QoS monitoring and analysis in Multi-hop Networks requires therefore knowledge about types of applications forming...... current network traffic. To overcome the drawbacks of existing methods for traffic classification, usage of C5.0 Machine Learning Algorithm (MLA) was proposed. On the basis of statistical traffic information received from volunteers and C5.0 algorithm we constructed a boosted classifier, which was shown...... and classification, an algorithm for recognizing flow direction and the C5.0 itself. Classified applications include Skype, FTP, torrent, web browser traffic, web radio, interactive gaming and SSH. We performed subsequent tries using different sets of parameters and both training and classification options...
Directory of Open Access Journals (Sweden)
Hongwei Yan
2013-07-01
Full Text Available With the continuous improvement of robot intelligence, the constantly expanding range of applications, and the rise of multi-sensor information fusion technology, the traditional single-sensor signal transmission problem has become a multi-sensor or multiple-source signal transmission problem. This brings a large number of signal variation problems. Traditional detection algorithms can no longer meet the requirements; therefore, this paper puts forward a robot multisensory signal variation test method based on an artificial immune algorithm. First, dynamic equations of signal variability are established to obtain the crossover points of the signal variability distribution; the signal variation characteristic database is then updated, and signal variation characteristics are selected from the database. The method overcomes the drawbacks of traditional algorithms; experiments show that this algorithm can avoid the defects caused by signal mutation and improve the accuracy of signal variation detection.
A genetic algorithm approach for assessing soil liquefaction potential based on reliability method
Indian Academy of Sciences (India)
M H Bagheripour; I Shooshpasha; M Afzalirad
2012-02-01
Deterministic approaches are unable to account for the variations in soil strength properties, earthquake loads, and sources of error in evaluations of liquefaction potential in sandy soils, which makes them questionable compared with reliability concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction and the factor of safety (FS). Therefore, probabilistic approaches and especially reliability analysis are considered, since a complementary solution is needed to reach better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, associated with a genetic algorithm (GA) and its corresponding sophisticated optimization techniques, has been used to calculate the reliability index and the probability of liquefaction. The use of the GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed here, by which the liquefaction potential can be directly calculated based on the estimated probability of liquefaction, the cyclic stress ratio (CSR) and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% from the observational data. The validity of the proposed concept is examined through comparison of the results obtained by the new relation and those predicted by other investigators. A further advantage of the proposed relation is that it relates the probability of liquefaction and FS, and hence provides the possibility of decision making based on liquefaction risk alongside the use of deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for evaluation of liquefaction. As an application, the city of Babolsar, located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests in which the results of SPT are analysed.
Histogram-Based Estimation of Distribution Algorithm: A Competent Method for Continuous Optimization
Institute of Scientific and Technical Information of China (English)
Nan Ding; Shu-De Zhou; Zeng-Qi Sun
2008-01-01
Designing efficient estimation of distribution algorithms (EDAs) for optimizing complex continuous problems is still a challenging task. This paper utilizes a histogram probabilistic model to describe the distribution of the population and to generate promising solutions. The advantage of the histogram model, its intrinsic multimodality, makes it well suited to describing the solution distribution of complex and multimodal continuous problems. To make the histogram model explore and exploit the search space more efficiently, several strategies are brought into the algorithm: the surrounding effect reduces the population size needed to estimate the model with a certain number of bins, and the shrinking strategy guarantees the accuracy of optimal solutions. Furthermore, this paper shows that the histogram-based EDA can give comparable or even much better performance than the predominant EDAs based on Gaussian models.
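A bare-bones 1-D histogram EDA conveys the core loop — estimate a histogram from the selected individuals, then sample new ones from it; the paper's surrounding effect and shrinking strategy are omitted, and all parameter choices below are illustrative.

```python
import numpy as np

def hist_eda(f, lo, hi, pop=200, elite=50, bins=20, gens=30, seed=0):
    """Minimize f on [lo, hi] with a histogram-model EDA."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        best = x[np.argsort(f(x))[:elite]]            # truncation selection
        counts, edges = np.histogram(best, bins=bins, range=(lo, hi))
        probs = counts / counts.sum()                 # the histogram model
        idx = rng.choice(bins, size=pop, p=probs)     # sample a bin...
        x = rng.uniform(edges[idx], edges[idx + 1])   # ...then within the bin
    return x[np.argmin(f(x))]

best_x = hist_eda(lambda x: (x - 2.5) ** 2, lo=0.0, hi=10.0)
```

After a few generations the probability mass collapses onto the bins around the optimum, so the final samples cluster near x = 2.5.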
Jamadi, Mohammad; Merrikh-Bayat, Farshad
2014-01-01
This paper proposes an effective method for estimating the parameters of double-cage induction motors by using Artificial Bee Colony (ABC) algorithm. For this purpose the unknown parameters in the electrical model of asynchronous machine are calculated such that the sum of the square of differences between full load torques, starting torques, maximum torques, starting currents, full load currents, and nominal power factors obtained from model and provided by manufacturer is minimized. In orde...
Directory of Open Access Journals (Sweden)
A New Algorithm Based on the Homotopy Perturbation Method For a Class of Singularly Perturbed Boundary Value Problems
2013-12-01
Full Text Available In this paper, a new algorithm is presented to approximate the solution of a singularly perturbed boundary value problem with a left layer, based on the homotopy perturbation technique and the Laplace transformation. The convergence theorem and the error bound of the proposed method are proved. The method is examined by solving two examples. The results demonstrate the reliability and efficiency of the proposed method.
Directory of Open Access Journals (Sweden)
M. A. Demir
2012-04-01
Full Text Available Blind equalization is a technique for adaptive equalization of a communication channel without the use of a training sequence. Although the constant modulus algorithm (CMA) is one of the most popular adaptive blind equalization algorithms, it suffers from a slow convergence rate because of its fixed step size. A novel enhanced variable step size CMA algorithm (VSS-CMA), based on the autocorrelation of the error signal, is proposed in this paper to remedy this weakness of CMA for blind equalization. The new algorithm resolves the conflict between convergence rate and precision in the fixed-step-size conventional CMA algorithm. Computer simulations have been performed to illustrate the performance of the proposed method in simulated frequency-selective Rayleigh fading channels and experimental real communication channels. The simulation results obtained using the single carrier (SC) IEEE 802.16-2004 protocol demonstrate that the proposed VSS-CMA algorithm has considerably better performance than conventional CMA, normalized CMA (N-CMA) and other VSS-CMA algorithms.
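The flavour of a variable step size CMA can be sketched for a real BPSK signal as follows; here the step is simply scaled by a smoothed error magnitude, which is an assumption standing in for the paper's autocorrelation-based update rule.

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.choice([-1.0, 1.0], size=6000)          # constant-modulus (BPSK) source
h = np.array([1.0, 0.4, 0.2])                   # toy dispersive channel
r = np.convolve(s, h)[:len(s)]                  # received signal

L = 11
w = np.zeros(L)
w[L // 2] = 1.0                                 # centre-spike initialization
ema, disp = 0.0, []
for n in range(L, len(r)):
    x = r[n - L:n][::-1]
    y = w @ x                                   # equalizer output
    e = y * (y * y - 1.0)                       # CM error term, R2 = 1 for BPSK
    ema = 0.95 * ema + 0.05 * abs(e)            # smoothed error magnitude
    mu = np.clip(1e-2 * ema, 1e-5, 5e-3)        # step shrinks as error decays
    w -= mu * e * x                             # stochastic-gradient update
    disp.append((y * y - 1.0) ** 2)             # dispersion (CM cost sample)

early = float(np.mean(disp[:500]))
late = float(np.mean(disp[-500:]))
```

As the constant-modulus error decays the step shrinks, trading the early speed of a large step for the late precision of a small one; the dispersion over the last frames is clearly below that of the first frames.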
Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart
2008-11-01
This paper presents a stabilized bi-conjugate gradient algorithm (BiCGStab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device are evaluated. The simulation results show the numerical accuracy and superior performance of the method.
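For readers unfamiliar with BiCGStab, SciPy exposes a ready-made implementation; the small non-symmetric sparse system below merely stands in for the much larger linear systems the impedance method produces.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n = 500
# Non-symmetric tridiagonal system; diagonal dominance ensures convergence.
A = sp.diags([-1.0, 4.0, -2.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)    # info == 0 signals successful convergence
```

Unlike plain CG, BiCGStab does not require the matrix to be symmetric, which is what makes it applicable to impedance-method systems.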
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Text categorization plays an important role in data mining. Feature selection is the most important process of text categorization. Focusing on feature selection, we present an improved text frequency method for filtering out low-frequency features in data preprocessing, propose an improved mutual information algorithm for feature selection, and develop an improved tf.idf method for evaluating characteristic weights. The proposed method is applied to the benchmark test set Reuters-21578 Top10 to examine its effectiveness. Numerical results show that the precision, recall and F1 value of the proposed method are all superior to those of existing conventional methods.
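The overall filter pipeline — low-frequency filtering, mutual-information feature selection, tf.idf weighting — can be approximated with scikit-learn's stock components, which stand in here for the paper's improved variants; the toy corpus is illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

docs = ["cheap pills online offer", "meeting agenda project notes",
        "cheap offer pills now", "project meeting schedule notes"]
labels = [1, 0, 1, 0]                     # 1 = spam-like, 0 = work-like

# Step 1: low-frequency filter (terms must appear in at least 2 documents).
counts = CountVectorizer(min_df=2).fit_transform(docs)
# Step 2: keep the k terms with the highest mutual information with the label.
selector = SelectKBest(mutual_info_classif, k=4).fit(counts, labels)
# Step 3: tf.idf weighting of the surviving terms.
tfidf = TfidfTransformer().fit_transform(selector.transform(counts))
```

The resulting matrix would then feed any text classifier in place of raw counts.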
Directory of Open Access Journals (Sweden)
Jie-Sheng Wang
2015-06-01
Full Text Available In order to improve the accuracy and timeliness of information in the cash business, and to solve the problem of low accuracy and stability in the data linkage between cash inventory forecasting and cash management information in commercial banks, a hybrid learning algorithm is proposed based on adaptive population activity particle swarm optimization (APAPSO) combined with the least squares method (LMS) to optimize the parameters of an adaptive network-based fuzzy inference system (ANFIS) model. Through the introduction of a population diversity metric to ensure the diversity of the population, and through adaptive changes in inertia weight and learning factors, the optimization ability of the particle swarm optimization (PSO) algorithm is improved, which avoids the premature convergence problem of the PSO algorithm. Simulation comparison experiments are carried out against the BP-LMS algorithm and standard PSO-LMS, using real commercial banks' cash flow data, to verify the effectiveness of the proposed time series prediction of bank cash flow based on the improved PSO-ANFIS optimization method. Simulation results show that the optimization speed is faster and the prediction accuracy is higher.
Directory of Open Access Journals (Sweden)
Ashkan Emami Ale Agha
2013-06-01
Full Text Available One of the most important concepts in multiprogramming operating systems is scheduling, which helps in choosing the processes for execution. The round robin method is one of the most important scheduling algorithms; it is popular due to its fairness and starvation-free nature towards processes, which is achieved by using a proper quantum time. The main challenge in this algorithm is the selection of the quantum time, a parameter that affects the average waiting time and average turnaround time of the execution queue. A static quantum causes less context switching when the quantum is large and more context switching when it is small; increased context switching leads to high average waiting time and high average turnaround time, which is an overhead and degrades system performance. With respect to these points, an algorithm should calculate a proper value for the quantum time. The two main classes of algorithms proposed for calculating the quantum time are static and dynamic methods. In static methods the quantum time is fixed during scheduling, while dynamic algorithms change its value in each cycle. For example, in one method the quantum time in each cycle equals the median of the burst times of the processes in the ready queue, and in another it equals the arithmetic mean of the burst times of the ready processes. In this paper we propose a new method for obtaining the quantum time in each cycle based on the harmonic mean (HARM), which is calculated by dividing the number of observations by the sum of the reciprocals of the numbers in the series. With examples we show that in some cases it provides better scheduling criteria and improves the average turnaround time and average waiting time.
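The proposed per-cycle quantum can be sketched directly: take the harmonic mean of the remaining burst times of the ready processes and run one round-robin cycle with it. The simulation below is a simplified sketch that assumes all processes arrive at time zero and ignores context-switch cost.

```python
def harmonic_mean(values):
    """n divided by the sum of reciprocals."""
    return len(values) / sum(1.0 / v for v in values)

def round_robin(bursts, quantum_fn):
    """Return the average waiting time under round robin with a dynamic quantum."""
    remaining = dict(enumerate(bursts))
    waiting = {i: 0.0 for i in remaining}
    while remaining:
        q = quantum_fn(list(remaining.values()))   # recomputed every cycle
        for i in list(remaining):
            slice_ = min(q, remaining[i])
            for j in remaining:                    # everyone still queued waits
                if j != i:
                    waiting[j] += slice_
            remaining[i] -= slice_
            if remaining[i] < 1e-9:
                del remaining[i]
    return sum(waiting.values()) / len(waiting)

avg_wait = round_robin([24.0, 3.0, 3.0], harmonic_mean)
```

For bursts [24, 3, 3] the first-cycle quantum is 3/(1/24 + 1/3 + 1/3) = 72/17 ≈ 4.24, which is small enough to let both short jobs finish in the first cycle; the average waiting time works out to 99/17 ≈ 5.82.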
A RBF classification method of remote sensing image based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
Remote sensing image classification has stimulated considerable interest as an effective method for better retrieving information from the rapidly increasing volume of complex, distributed, large-scale and cross-time satellite remote imaging data, due to the increase in remote image quantities and image resolutions. In this paper, genetic algorithms are employed to solve the weighting of radial basis function networks in order to improve the precision of remote sensing image classification. The remote sensing image classification is also introduced for GIS spatial analysis and spatial online analytical processing (OLAP), and the resulting effectiveness is demonstrated in an analysis of land-utilization variation in Daqing city.
Trobec, Roman
2015-01-01
This book concentrates on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes
Directory of Open Access Journals (Sweden)
Yukai Yao
2015-01-01
Full Text Available We propose an optimized Support Vector Machine classifier, named PMSVM, in which system normalization, PCA, and multilevel grid search methods are comprehensively considered for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, specificity, precision, the ROC curve, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has relatively better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
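A compact scikit-learn version of such a pipeline — normalization, PCA, and a (coarse) grid search over SVM hyper-parameters — might look as follows; the dataset and grid are illustrative stand-ins, and the paper's multilevel grid search is approximated by a single-level one.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Normalization -> PCA -> SVM, tuned by cross-validated grid search.
pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}
model = GridSearchCV(pipe, grid, cv=5).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

A multilevel search would simply re-run the grid on a finer mesh around the best coarse-grid point.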
Directory of Open Access Journals (Sweden)
Song Fu
2015-01-01
Full Text Available Although the uniform theory of diffraction (UTD) can theoretically be applied to arbitrarily shaped convex objects modeled by nonuniform rational B-splines (NURBS), one of the great challenges in calculating UTD surface diffracted fields is the difficulty of determining the geodesic paths along which the creeping waves propagate on arbitrarily shaped NURBS surfaces. In differential geometry, geodesic paths satisfy the geodesic differential equation (GDE). Hence, in this paper, a general and efficient adaptive variable step Euler method is introduced for solving the GDE on arbitrarily shaped NURBS surfaces. In contrast with the conventional Euler method, the proposed method employs a shape factor (SF) ξ to efficiently enhance the accuracy of tracing and extends the application of UTD to practical engineering. The validity and usefulness of the algorithm are verified by the numerical results.
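Generic adaptive variable-step Euler integration, shown here on a scalar ODE with step-doubling error control rather than the GDE on a NURBS patch, illustrates the mechanism; the paper's shape-factor refinement is not reproduced.

```python
import math

def adaptive_euler(f, t, y, t_end, h=0.1, tol=1e-6):
    """Integrate y' = f(t, y) with Euler steps, adapting h by step doubling."""
    while t < t_end:
        h = min(h, t_end - t)
        full = y + h * f(t, y)                            # one full step
        half = y + 0.5 * h * f(t, y)
        two_half = half + 0.5 * h * f(t + 0.5 * h, half)  # two half steps
        err = abs(two_half - full)                        # local error estimate
        if err <= tol:
            t, y = t + h, two_half                        # accept the better value
            h *= 1.5                                      # and try a longer step
        else:
            h *= 0.5                                      # reject, shorten step
    return y

y_end = adaptive_euler(lambda t, y: -y, 0.0, 1.0, 2.0)    # exact: exp(-2)
```

On a geodesic trace, the same accept/reject logic keeps the step short where the surface curves strongly and lets it grow on flat regions.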
A Novel Algorithm for Speech Endpoint Detection in Noisy Environments Based on an Energy-Entropy Method
Directory of Open Access Journals (Sweden)
Hanmid Dehghani
2008-12-01
Full Text Available Endpoint detection, which means distinguishing speech and non-speech segments, is considered one of the key preprocessing operations in automatic speech recognition (ASR) systems. Usually the energy of the speech signal and the Zero Crossing Rate (ZCR) are used to locate the beginning and end of an utterance. Both methods have been shown to be effective for endpoint detection; however, they fail in high-noise environments. In this paper, we integrate a modified Teager approach with energy-entropy features. In the new algorithm, the Teager energy is used to determine crude endpoints, and the energy-entropy features are used to make the final decision. The advantage of this method is that there is no need to estimate the background noise. It is therefore very helpful in environments where the beginning or ending noise is very strong or where there is not enough “silence” at the beginning or end of the utterance. Experimental results on Farsi speech show that the accuracy of this algorithm is quite satisfactory for speech endpoint detection.
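The crude-endpoint stage based on the Teager energy can be sketched as follows; the frame length and the peak-relative threshold are illustrative assumptions, not the paper's values, and the energy-entropy refinement stage is omitted:

```python
import numpy as np

def teager_energy(x):
    # discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def crude_endpoints(x, frame=160, ratio=0.1):
    # frame-level mean Teager energy; frames above a fixed fraction of
    # the peak frame energy are marked as speech, so no background
    # noise estimate is needed
    psi = np.abs(teager_energy(x))
    n_frames = len(psi) // frame
    e = psi[: n_frames * frame].reshape(n_frames, frame).mean(axis=1)
    idx = np.flatnonzero(e > ratio * e.max())
    return idx[0] * frame, (idx[-1] + 1) * frame   # sample indices
```

On a synthetic utterance (low-level noise, a 200 Hz tone, noise again, at 8 kHz) the crude endpoints land near the true tone boundaries; the energy-entropy features would then make the final decision.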
Improved Power Flow Algorithm for VSC-HVDC System Based on High-Order Newton-Type Method
Directory of Open Access Journals (Sweden)
Yanfang Wei
2013-01-01
Full Text Available Voltage source converter (VSC based high-voltage direct-current (HVDC transmission is a new technique with promising applications in power systems and power electronics. Considering the importance of power flow analysis of the VSC-HVDC system for its utilization and exploitation, improved power flow algorithms for the VSC-HVDC system based on third-order and sixth-order Newton-type methods are presented. The steady-state power model of the VSC-HVDC system is introduced first. Then the multivariable-matrix solution formats for the third-order and sixth-order Newton-type power flow methods of the VSC-HVDC system are derived; these formats retain the third-order and sixth-order convergence of the underlying Newton-type iterations. Further, based on automatic differentiation technology and the third-order Newton method, a new improved algorithm is given, which helps improve the program development, computational efficiency, maintainability, and flexibility of the power flow. Simulations of AC/DC power systems with two-terminal, multi-terminal, and multi-infeed DC connections via VSC-HVDC are carried out for modified IEEE bus systems, showing the effectiveness and practicality of the presented algorithms.
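As a scalar stand-in for a third-order Newton-type iteration, Halley's method shows what cubic convergence looks like; the paper's multivariable power flow formats are not reproduced here, and this classical iteration is only an analogy for the convergence order:

```python
import numpy as np

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    # Halley's iteration, a classical third-order Newton-type method:
    # x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f'')
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# root of x^2 - 2 starting from x0 = 1
root = halley(lambda x: x * x - 2.0,   # f
              lambda x: 2.0 * x,       # f'
              lambda x: 2.0,           # f''
              x0=1.0)
```

Starting from x0 = 1 the iterate reaches sqrt(2) to near machine precision in a handful of steps, roughly tripling the number of correct digits per iteration instead of doubling them as plain Newton does.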
Survey on Parameters of Fingerprint Classification Methods Based On Algorithmic Flow
Directory of Open Access Journals (Sweden)
Dimple Parekh
2011-09-01
Full Text Available Classification refers to assigning a given fingerprint to one of the classes already recognized in the literature. A search over all the records in a database takes a long time, so the goal is to reduce the size of the search space by choosing an appropriate subset of the database. Classifying a fingerprint image is a very difficult pattern recognition problem, due to the minimal interclass variability and maximal intraclass variability. This paper presents a sequence flow diagram which will help in developing clarity on designing classification algorithms based on various parameters extracted from the fingerprint image. It discusses in brief the ways in which these parameters are extracted from the image. Existing fingerprint classification approaches use these parameters as input for classifying the image. Parameters such as the orientation map, singular points, spurious singular points, ridge flow, transforms, and hybrid features are discussed in the paper.
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
A new method for power quality (PQ) disturbance identification is proposed, based on combining a neural network with a least square (LS) weighted fusion algorithm. The characteristic components of PQ disturbances are first extracted through an improved phase-locked loop (PLL) system, and then five child BP ANNs with different structures are trained and adopted to identify the PQ disturbances respectively. The combining neural network fuses the identification results of these child ANNs with the LS weighted fusion algorithm and identifies the PQ disturbances from the fused result. Compared with a single neural network, the combining network with the LS weighted fusion algorithm can identify PQ disturbances correctly even when noise is strong, a case in which a single neural network may fail; the combining network is therefore more reliable. The simulation results support these conclusions.
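The LS weighted fusion step can be illustrated for uncorrelated sources: the least-square-optimal weights are inversely proportional to each source's noise variance, and the fused variance is strictly smaller than that of the best single source. The three-sensor setup below is a toy assumption, not the paper's five-ANN configuration:

```python
import numpy as np

def ls_weighted_fusion(estimates, variances):
    # LS-optimal fusion for uncorrelated measurement noise: weights are
    # inversely proportional to each source's noise variance, and the
    # fused variance is the reciprocal of the summed inverse variances
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / (1.0 / v).sum()
    fused = w @ np.asarray(estimates)
    fused_var = 1.0 / (1.0 / v).sum()
    return fused, fused_var, w
```

Fusing three sensors with noise variances 1, 4, and 0.25 yields a fused variance of about 0.19, below the best single sensor, which is the kind of precision gain weighted fusion provides.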
Directory of Open Access Journals (Sweden)
Erik Cuevas
2015-01-01
Full Text Available In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC algorithm and the evolutionary method harmony search (HS. With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case with RANSAC. The rules for the generation of candidate solutions (samples are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
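A plain RANSAC loop for 2D line fitting shows the baseline the paper modifies; the harmony-search variant replaces the uniform random draw of the minimal sample with quality-guided improvisation. The line model, iteration count, and tolerance below are illustrative assumptions:

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.05, seed=None):
    # plain RANSAC for a 2D line: draw a minimal sample (two points),
    # count inliers by perpendicular distance, keep the best model;
    # the HS variant would replace the uniform draw with guided sampling
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        norm = np.hypot(*(q - p))
        if norm < 1e-12:
            continue
        u = (q - p) / norm                     # unit direction
        # perpendicular distance of all points to the line through p, q
        dist = np.abs(u[0] * (points[:, 1] - p[1])
                      - u[1] * (points[:, 0] - p[0]))
        n_in = int((dist < inlier_tol).sum())
        if n_in > best_inliers:
            best_inliers, best_model = n_in, (p, q)
    return best_model, best_inliers
```

The returned model is the pair of support points of the best consensus line; for homographies the minimal sample would be four correspondences instead of two points.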
Numerical methods and inversion algorithms in reservoir simulation based on front tracking
Energy Technology Data Exchange (ETDEWEB)
Haugse, Vidar
1999-04-01
This thesis uses front tracking to analyse laboratory experiments on multiphase flow in porous media. New methods for parameter estimation in two- and three-phase relative permeability experiments have been developed. Upscaling of heterogeneous and stochastic porous media is analysed. Numerical methods based on front tracking are developed and analysed; such methods are efficient for problems involving steep changes in the physical quantities. Multi-dimensional problems are solved by combining front tracking with dimensional splitting. A method for adaptive grid refinement is also developed.
An Aircraft Navigation System Fault Diagnosis Method Based on Optimized Neural Network Algorithm
Institute of Scientific and Technical Information of China (English)
Jean-dedieu Weyepe
2014-01-01
The air data and inertial reference system (ADIRS) is one of the complex sub-systems in the aircraft navigation system, and it plays an important role in the flight safety of the aircraft. This paper proposes an optimized neural network algorithm, a combination of a neural network and the ant colony algorithm, to improve the efficiency of the maintenance engineer's job tasks.
He, Hongyang; Xu, Jiangning; Qin, Fangjun; Li, Feng
2015-11-01
In order to shorten the alignment time and eliminate the small-initial-misalignment limit for compass alignment of a strap-down inertial navigation system (SINS), which is sometimes hard to satisfy when the ship is moored or anchored, an optimal-model-based time-varying parameter compass alignment algorithm is proposed in this paper. The contributions of the work presented here are twofold. First, the optimization of the compass alignment parameters, which traditionally involves a lot of trial and error, is achieved with a genetic algorithm. On this basis, second, the optimal parameter-varying model is established by least-square polynomial fitting. Experiments are performed with a navigational-grade fiber-optic gyroscope SINS, which validate the efficiency of the proposed method.
Li, Chen; Pan, Zengxin; Mao, Feiyue; Gong, Wei; Chen, Shihua; Min, Qilong
2015-10-01
The signal-to-noise ratio (SNR) of an atmospheric lidar decreases rapidly as range increases, so that maintaining high accuracy when retrieving lidar data at the far end is difficult. To avoid this problem, many de-noising algorithms have been developed; in particular, an effective de-noising algorithm has been proposed to simultaneously retrieve lidar data and obtain a de-noised signal by combining the ensemble Kalman filter (EnKF) and the Fernald method. This algorithm enhances the retrieval accuracy and effective measurement range of a lidar based on the Fernald method, but sometimes leads to a shift (bias) in the near range as a result of the over-smoothing caused by the EnKF. This study proposes a new scheme that avoids this phenomenon by using a particle filter (PF) instead of the EnKF in the de-noising algorithm. Synthetic experiments show that the PF performs better than the EnKF and Fernald methods: the root mean square error of the PF is 52.55% and 38.14% of that of the Fernald and EnKF methods, respectively, and the PF increases the SNR by 44.36% and 11.57% over the Fernald and EnKF methods, respectively. For experiments with real signals, the relative bias of the EnKF is 5.72%, which is reduced to 2.15% by the PF in the near range. Furthermore, the PF also significantly suppresses random noise in the far range. An extensive application of the PF method can be useful in determining the local and global properties of aerosols. PMID:26480164
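A bootstrap particle filter on a scalar random-walk model gives a minimal picture of the PF de-noising idea; the lidar-specific state model and the coupling to the Fernald retrieval are not reproduced, and the particle count and noise levels below are assumptions:

```python
import numpy as np

def particle_filter(obs, n_particles=500, proc_std=0.1, obs_std=0.5, seed=None):
    # bootstrap particle filter for a scalar random-walk state:
    #   x_k = x_{k-1} + N(0, proc_std),  y_k = x_k + N(0, obs_std)
    rng = np.random.default_rng(seed)
    particles = rng.normal(obs[0], obs_std, n_particles)
    estimates = []
    for y in obs:
        particles = particles + rng.normal(0, proc_std, n_particles)  # predict
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)           # weight
        w /= w.sum()
        estimates.append(w @ particles)         # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)               # resample
        particles = particles[idx]
    return np.array(estimates)
```

On a slowly varying signal buried in observation noise, the filtered estimate has a markedly lower RMSE than the raw observations, which is the behaviour the PF de-noising scheme relies on.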
Directory of Open Access Journals (Sweden)
Morita Mitsuo
2011-06-01
Full Text Available Abstract Background A Bayesian approach based on a Dirichlet process (DP prior is useful for inferring genetic population structures because it can infer the number of populations and the assignment of individuals simultaneously. However, the properties of the DP prior method are not well understood, and therefore, the use of this method is relatively uncommon. We characterized the DP prior method to increase its practical use. Results First, we evaluated the usefulness of the sequentially-allocated merge-split (SAMS sampler, which is a technique for improving the mixing of Markov chain Monte Carlo algorithms. Although this sampler has been implemented in a preceding program, HWLER, its effectiveness has not been investigated. We showed that this sampler was effective for population structure analysis. Implementation of this sampler was useful with regard to the accuracy of inference and computational time. Second, we examined the effect of a hyperparameter for the prior distribution of allele frequencies and showed that the specification of this parameter was important and could be resolved by considering the parameter as a variable. Third, we compared the DP prior method with other Bayesian clustering methods and showed that the DP prior method was suitable for data sets with unbalanced sample sizes among populations. In contrast, although current popular algorithms for population structure analysis, such as those implemented in STRUCTURE, were suitable for data sets with uniform sample sizes, inferences with these algorithms for unbalanced sample sizes tended to be less accurate than those with the DP prior method. Conclusions The clustering method based on the DP prior was found to be useful because it can infer the number of populations and simultaneously assign individuals into populations, and it is suitable for data sets with unbalanced sample sizes among populations. Here we presented a novel program, DPART, that implements the SAMS
General moving objects recognition method based on graph embedding dimension reduction algorithm
Institute of Scientific and Technical Information of China (English)
Yi ZHANG; Jie YANG; Kun LIU
2009-01-01
Effective and robust recognition and tracking of objects are the key problems in visual surveillance systems. Most existing object recognition methods were designed with particular objects in mind. This study presents a general moving object recognition method using global features of targets. Targets are extracted with an adaptive Gaussian mixture model and their silhouette images are captured and unified. A new object silhouette database is built to provide abundant samples for training the subspace feature; this database is more convincing than previous ones. A more effective dimension reduction method based on graph embedding is used to obtain the projection eigenvector. Our experiments show the effective performance of the method on the moving object recognition problem and its superiority over previous methods.
Niccolini, G.; Alcolea, J.
Solving the radiative transfer problem is common to many fields in astrophysics. With the increasing angular resolution of space or ground-based telescopes (VLTI, HST), and also with the next decade's instruments (NGST, ALMA, ...), astrophysical objects reveal, and will certainly continue to reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program, using a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of this tool, one can solve the continuum radiative transfer problem (e.g. a dusty medium), compute the temperature structure of the considered medium, and obtain the flux of the object (SED and images).
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
Zhao, Peng; Zhang, Yan; Qian, Weiping
2015-10-01
Diffuse reflection laser ranging is one of the feasible ways to realize high-precision measurement of space debris. However, the weak echo of diffuse reflection results in a poor signal-to-noise ratio, so it is difficult to realize real-time signal extraction for diffuse reflection laser ranging when the echo signal photons are buried in a large number of noise photons. The Genetic Algorithm, which originated from the idea of the natural selection process, is a heuristic search algorithm famous for its adaptive optimization and global search ability. To the best of our knowledge, this paper is the first to propose a method of real-time signal extraction for diffuse reflection laser ranging based on the Genetic Algorithm. The extraction results are regarded as individuals in the population, and short-term linear fitting degree and data correlation level are used as selection criteria in the search for an optimal solution. A fine search in the real-time part of the data quickly supplies suitable new data for real-time signal extraction, and a coarse search over both historical and real-time data after the fine search is designed. The co-evolution of both parts increases the search accuracy of the real-time data as well as the precision of the historical data. Simulation experiments show that our method has good signal extraction capability in poor signal-to-noise circumstances, especially for data with high correlation.
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has very important scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an innovative and effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images, implemented with OpenCV. First, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, we can detect the infrared moving target more accurately. This paves the way for real-time infrared target detection and tracking once the OpenCV algorithms are transplanted to a DSP platform. Afterwards, we use an optimal thresholding algorithm to segment the image, transforming the gray images to black-and-white images in order to provide a better condition for detection in the image sequences. Finally, according to the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, decrease the area, and smooth region boundaries. Experimental results prove that our algorithm precisely achieves rapid detection of small infrared targets.
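The combined frame-difference and background-subtraction scheme with running-average background updating can be sketched in NumPy (cv2.absdiff and cv2.accumulateWeighted are the corresponding OpenCV primitives); the adaptive threshold rule and the update rate below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def detect_moving(frames, alpha=0.05, k=4.0):
    # combined frame differencing and background subtraction; the
    # background is a running average updated with rate alpha, and the
    # threshold adapts to the spread of the background difference
    bg = frames[0].astype(float)
    prev = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        diff_f = np.abs(f - prev)               # frame-to-frame difference
        diff_b = np.abs(f - bg)                 # background subtraction
        thresh = k * max(diff_b.std(), 1e-6)
        masks.append((diff_f > thresh) | (diff_b > thresh))
        bg = (1.0 - alpha) * bg + alpha * f     # adaptive background update
        prev = f
    return masks
```

On a synthetic sequence with a small bright block drifting over a noisy background, the final mask localizes the target while the running average absorbs the stationary scene.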
Diversity-Based Boosting Algorithm
Directory of Open Access Journals (Sweden)
Jafar A. Alzubi
2016-05-01
Full Text Available Boosting is a well-known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of the training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to the Boosting algorithm called the DivBoosting algorithm is proposed and studied. Experiments on several data sets are conducted on both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe that it has many advantages over the traditional Boosting method because its mechanism is based not solely on selecting the most accurate base classifiers but also on selecting the most diverse set of classifiers.
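Classic AdaBoost with decision stumps is the baseline that DivBoosting extends with diversity-based selection; the diversity criterion itself is not reproduced in this sketch, which shows only the reweighting mechanism the abstract refers to:

```python
import numpy as np

def stump_predict(X, feat, thr, sign):
    # axis-aligned decision stump: sign * (+1 if x[feat] <= thr else -1)
    return sign * np.where(X[:, feat] <= thr, 1.0, -1.0)

def fit_stump(X, y, w):
    # exhaustive search for the stump minimizing weighted error
    best = (0, 0.0, 1, np.inf)
    for feat in range(X.shape[1]):
        for thr in np.unique(X[:, feat]):
            for sign in (1, -1):
                err = w @ (stump_predict(X, feat, thr, sign) != y)
                if err < best[3]:
                    best = (feat, thr, sign, err)
    return best

def adaboost(X, y, n_rounds=10):
    # classic AdaBoost: reweight the training set toward the examples
    # the current ensemble misclassifies; y in {-1, +1}
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        feat, thr, sign, err = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = stump_predict(X, feat, thr, sign)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, feat, thr, sign))
    return ensemble

def predict(ensemble, X):
    agg = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(agg)
```

A single stump cannot classify an interval-shaped class, but a few boosted rounds can; DivBoosting would additionally filter the candidate stumps by how much diversity they add to the ensemble.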
Universal Algorithm for Online Trading Based on the Method of Calibration
Vladimir V'yugin; Vladimir Trunov
2012-01-01
We present a universal algorithm for online trading in the stock market which performs asymptotically at least as well as any stationary trading strategy that computes the investment at each step using a fixed function of the side information that belongs to a given RKHS (Reproducing Kernel Hilbert Space). Using a universal kernel, we extend this result to any continuous stationary strategy. In this learning process, a trader rationally chooses his gambles using predictions made by a randomized ...
Institute of Scientific and Technical Information of China (English)
ZHANG Shi-hai; LIU Shu-jun; LIU Xiao-yan; OU Jin-ping
2006-01-01
First, the high-rise building structure design process is divided into three relevant steps: scheme generation and creation, performance evaluation, and scheme optimization. Then, with the application of a relational database, the case database of high-rise structures is constructed, structure form-selection design methods such as smart algorithms based on CBR, DM, FINS, NN, and GA are presented, and the original forms system of this method and its general structure are given. CBR and DM are used to generate scheme candidates; FINS and NN to evaluate and optimize scheme performance; and GA to create new structure forms. Finally, application cases are presented whose results fit in with the real projects. By combining and using expert intelligence, algorithmic intelligence, and machine intelligence, this method makes good use not only of engineering project knowledge and expertise but also of the much deeper knowledge contained in various engineering cases. In other words, it is because the form selection has the strong background support of a vast number of real cases that its results prove more reliable and more acceptable. The introduction of this method thus provides an effective approach to improving the quality, efficiency, and the automatic and smart level of high-rise structure form-selection design.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.
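The genetic-plus-simulated-annealing hybrid can be sketched for a scalar minimization problem; the Metropolis acceptance step and temperature-scaled mutation stand in for the IGSA idea of embedding annealing's local search in the genetic loop, while the paper's actual fitness function and improved genetic operators are not reproduced:

```python
import numpy as np

def igsa_minimize(f, bounds, pop_size=30, generations=80,
                  t0=1.0, cooling=0.95, seed=None):
    # genetic search with a simulated-annealing acceptance rule: a worse
    # child may still replace its parent with probability exp(-delta/T),
    # preserving diversity early and turning greedy as T decays
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, pop_size)
    fit = np.array([f(x) for x in pop])
    T = t0
    for _ in range(generations):
        for i in range(pop_size):
            mate = pop[rng.integers(pop_size)]
            child = 0.5 * (pop[i] + mate)                  # arithmetic crossover
            child += rng.normal(0.0, 0.1 * (hi - lo) * T)  # T-scaled mutation
            child = np.clip(child, lo, hi)
            fc = f(child)
            delta = fc - fit[i]
            if delta < 0 or rng.random() < np.exp(-delta / T):
                pop[i], fit[i] = child, fc                 # Metropolis accept
        T *= cooling
    best = np.argmin(fit)
    return pop[best], fit[best]
```

As the temperature decays, the mutation radius and the tolerance for worse children shrink together, which is the mechanism that combines the global search of GA with the local refinement of SA.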
Directory of Open Access Journals (Sweden)
Qu Li
2014-01-01
Full Text Available Online friend recommendation is a fast-developing topic in web mining. In this paper, we use SVD matrix factorization to model user and item feature vectors and stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold-start problem and data sparsity, we use a KNN model to influence the user feature vectors. At the same time, we use graph theory to partition communities with fairly low time and space complexity. Moreover, matrix factorization can combine online and offline recommendation. Experiments show that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
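The SVD-style matrix factorization trained by stochastic gradient descent can be sketched as follows; the rank, learning rate, and regularization below are illustrative assumptions, and the KNN cold-start correction and community partitioning are omitted:

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=4, lr=0.02, reg=0.01,
           epochs=300, seed=None):
    # factorize a sparse rating list into user/item feature vectors,
    # amending the parameters by stochastic gradient descent on the
    # regularized squared prediction error
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user features
    Q = 0.1 * rng.standard_normal((n_items, k))   # item features
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q
```

Given a partially observed low-rank rating matrix, the learned factors reproduce the observed entries closely; the unobserved entries P[u] @ Q[i] then serve as recommendation scores.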
Institute of Scientific and Technical Information of China (English)
贺建军; 喻寿益; 钟掘
2003-01-01
A new search algorithm named the annealing-genetic algorithm (AGA) was proposed by skillfully merging GA with SAA. It draws on the merits of both GA and SAA and offsets their shortcomings. It differs from GA in that AGA takes the objective function directly as the fitness function, cutting down the unnecessary time expense of the floating-point function conversion. It differs from SAA in that AGA need not execute a very long Markov chain iteration at each temperature, which speeds up the convergence of the solution; it also makes no assumption about the search space, so it is simple, easy to implement, and applicable to a wide class of problems. The optimization principle and the implementation steps of AGA are expounded. The example of parameter optimization of a typical complex electromechanical system, a temper mill, shows that AGA is effective and superior to the conventional GA and SAA. The control system of the temper mill optimized by AGA has optimal performance within the adjustable ranges of its parameters.
Adaptive de-noising method based on wavelet and adaptive learning algorithm in on-line PD monitoring
Institute of Scientific and Technical Information of China (English)
王立欣; 诸定秋; 蔡惟铮
2002-01-01
It is an important step in on-line monitoring of partial discharge (PD) to extract PD pulses from various background noises. An adaptive de-noising method is introduced for adaptive noise reduction during the detection of PD pulses. This method is based on the Wavelet Transform (WT), and in the wavelet domain the noises decomposed at the different levels are reduced by independent thresholds. Instead of the standard hard thresholding function, a new type of hard thresholding function with a continuous derivative is employed by this method. For the selection of thresholds, an unsupervised learning algorithm based on the gradient of the mean square error (MSE) is presented to search for the optimal threshold for noise reduction; the optimal threshold is selected when the minimum MSE is obtained. Processing simulated signals and on-site experimental data with this method shows that background noises such as narrowband noise can be reduced efficiently. Furthermore, in comparison with the conventional wavelet de-noising method, the adaptive de-noising method performs better at preserving the pulses and is more adaptive when suppressing the background noises of PD signals.
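The wavelet-domain thresholding idea can be sketched with a one-level Haar transform and a smooth gate as one possible hard-thresholding function with a continuous derivative; the paper's exact thresholding function and the MSE-gradient threshold search are not reproduced:

```python
import numpy as np

def haar_dwt(x):
    # one-level Haar decomposition (len(x) must be even)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def smooth_hard_threshold(d, lam, n=4):
    # hard-style thresholding with a continuous derivative: a smooth
    # gate that kills |d| << lam and leaves |d| >> lam almost unchanged
    return d * (1.0 - np.exp(-(np.abs(d) / lam) ** (2 * n)))

def denoise(x, lam):
    a, d = haar_dwt(x)
    return haar_idwt(a, smooth_hard_threshold(d, lam))
```

On a PD-pulse-like transient buried in Gaussian noise, thresholding the detail band reduces the error against the clean signal while keeping the pulse, which is the preservation behaviour the abstract emphasizes.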
Directory of Open Access Journals (Sweden)
J. Anitha
2015-06-01
Full Text Available Data mining has been a popular research area for more than a decade due to its vast spectrum of applications. However, the popularity and wide availability of data mining tools have also raised concerns about the privacy of individuals. The burden of data privacy protection thus falls on the shoulders of the data holder; moreover, when a data disambiguation problem occurs in the data matrix, the anonymized data become less secure. Existing privacy-preserving clustering methods perform clustering based on a single viewpoint, the origin, whereas the proposed approach utilizes many different viewpoints: objects assumed not to be in the same cluster as the two objects being measured. To solve the above problems, this study presents a multi-viewpoint-based clustering method for anonymized data. First, the data disambiguation problem is solved using the Ramon-Gartner Subtree Graph Kernel (RGSGK), in which weight values are assigned and the kernel value is determined for the disambiguated data. Privacy is then obtained by anonymization, where the data are encrypted with a secure key obtained by Ring-Based Fully Homomorphic Encryption (RBFHE). To group the anonymized data, a BAT clustering method based on multi-viewpoint similarity measurement is proposed, called MVBAT, in which a distance matrix is first calculated and similarity and dissimilarity matrices are formed from it. The experimental results of the proposed MVBAT clustering algorithm are compared with conventional methods in terms of F-measure, running time, privacy loss, and utility loss. RBFHE encryption results are also compared with existing methods in terms of communication cost for UCI machine learning datasets such as the adult and house datasets.
Directory of Open Access Journals (Sweden)
Qing Guo
2015-04-01
Full Text Available A gait identification method for a lower extremity exoskeleton is presented in order to identify the gait sub-phases in human-machine coordinated motion. First, a sensor layout for the exoskeleton is introduced. Taking the difference between human lower limb motion and human-machine coordinated motion into account, the walking gait is divided into five sub-phases, which are ‘double standing’, ‘right leg swing and left leg stance’, ‘double stance with right leg front and left leg back’, ‘right leg stance and left leg swing’, and ‘double stance with left leg front and right leg back’. The sensors include shoe pressure sensors, knee encoders, and thigh and calf gyroscopes, and are used to measure the contact force of the foot, and the knee joint angle and its angular velocity. Then, five sub-phases of walking gait are identified by a C4.5 decision tree algorithm according to the data fusion of the sensors’ information. Based on the simulation results for the gait division, identification accuracy can be guaranteed by the proposed algorithm. Through the exoskeleton control experiment, a division of five sub-phases for the human-machine coordinated walk is proposed. The experimental results verify this gait division and identification method. They can make hydraulic cylinders retract ahead of time and improve the maximal walking velocity when the exoskeleton follows the person’s motion.
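The heart of a C4.5-style gait classifier is information-gain split selection over the fused sensor features; a minimal split finder is sketched below (the gain-ratio correction and the tree recursion are omitted, and the synthetic two-feature data are assumptions, not the exoskeleton's sensor readings):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a label vector, in bits
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_split(X, y):
    # C4.5-style split selection: scan midpoint thresholds of every
    # feature and keep the one with maximal information gain
    base = entropy(y)
    best = (0, 0.0, -1.0)
    for feat in range(X.shape[1]):
        vals = np.unique(X[:, feat])
        for thr in (vals[:-1] + vals[1:]) / 2.0:
            left, right = y[X[:, feat] <= thr], y[X[:, feat] > thr]
            gain = base - (len(left) * entropy(left)
                           + len(right) * entropy(right)) / len(y)
            if gain > best[2]:
                best = (feat, thr, gain)
    return best
```

Applied recursively, this split rule grows the decision tree that maps foot pressure, knee angle, and angular velocity readings to the five gait sub-phases.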
3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm
Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia
2015-01-01
Positron emission tomographs (PET) do not measure an image directly. Instead, they measure, at the boundary of the field-of-view (FOV) of the PET tomograph, a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As there is a multitude of detectors built into a typical PET tomograph structure, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called image reconstruction). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
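The MLEM update itself is compact: each iteration forward-projects the current image estimate, compares it with the measured sinogram, and back-projects the ratio. The tiny dense system matrix below is an illustrative assumption; real scanners use sparse matrices over millions of LORs:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    # MLEM update: lam <- lam * (A^T (y / (A lam))) / (A^T 1);
    # A[i, j] is the probability that an emission in voxel j is
    # detected along LOR i, and y is the measured sinogram
    lam = np.ones(A.shape[1])          # uniform nonnegative start image
    sens = A.sum(axis=0)               # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ lam                 # forward projection
        lam *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return lam
```

The multiplicative form keeps the image nonnegative at every iteration, and for consistent noise-free data the forward projection of the reconstruction converges to the measured sinogram.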
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools, which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
Holmes, Tim; Zanker, Johannes M
2013-01-01
Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for a clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the
Directory of Open Access Journals (Sweden)
Tim Holmes
2013-12-01
Full Text Available Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioural measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been used as a tool to identify aesthetic preferences (Holmes & Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of colour and shape which have been promoted in the Bauhaus arts school. We used the same 3 shapes (square, circle, triangle) used by Kandinsky (1923), with the 3 colour palette from the original experiment (A), an extended 7 colour palette (B), and 8 different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with 8 stimuli of different shapes, colours and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested 6 participants extensively on the different conditions and found consistent preferences for individuals, but little evidence at the group level for preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of colour and shapes, but also that these associations are robust within a single individual. These individual differences go some way towards challenging the claims of the universal preference for colour/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics
Different motor models based on parameter variation using method of genetic algorithms
Sarac, Vasilija; Cvetkovski, Goga
2010-01-01
Three new motor models of a Single Phase Shaded Pole Motor were developed using the method of genetic algorithms for optimisation of the motor design. In each newly developed motor model the number of varied parameters was gradually increased, which resulted in a gradual increase of electromagnetic torque as the target function for optimisation. The increase of electromagnetic torque was followed by an increase of the efficiency factor. Finite Element Method Analysis was performed in order to obtain ma...
Comparison between the conventional methods and PSO based MPPT algorithm for photovoltaic systems
Koad, RBA; Zobaa, AF
2014-01-01
Since the output characteristics of a photovoltaic (PV) system depend on the ambient temperature, solar radiation and load impedance, its maximum power point (MPP) is not constant. Under each condition the PV module has a point at which it can produce its MPP. Therefore, a maximum power point tracking (MPPT) method is needed to keep the PV panel operating at its MPP. This paper presents a comparative study between the conventional MPPT methods used in PV systems: Perturb and Observe (P&O), Increm...
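For reference, the P&O baseline mentioned above can be sketched in a few lines; the power-voltage curve and step size here are invented illustrative values, not measured panel data.

```python
def perturb_and_observe(power, v_start, step, iterations):
    """Perturb and Observe MPPT: perturb the operating voltage and keep the
    perturbation direction while the measured power increases; reverse it
    when the power drops. The operating point oscillates around the MPP."""
    v, direction = v_start, 1.0
    p_prev = power(v)
    for _ in range(iterations):
        v += direction * step
        p = power(v)
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# toy concave P-V curve with its maximum power point at 17 V (assumed values)
pv_curve = lambda v: 100.0 - (v - 17.0) ** 2
v_mpp = perturb_and_observe(pv_curve, v_start=10.0, step=0.5, iterations=100)
```

The steady-state oscillation within one step of the true MPP is the characteristic (and well-known) limitation of P&O that adaptive and PSO-based trackers try to remove.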
HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM
Directory of Open Access Journals (Sweden)
Narendran Rajagopalan
2012-01-01
Full Text Available The performance of a Wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The Media Access Control layer is responsible for key functions like access control and flow control. During contention, a backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying different variations of backoff algorithms that have been proposed, a new variant called the History based Probabilistic Backoff algorithm is proposed. Through mathematical analysis and simulation results using NS-2, it is seen that the proposed History based Probabilistic Backoff algorithm performs better than the Binary Exponential Backoff algorithm.
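The Binary Exponential Backoff baseline that the proposed variant is compared against can be sketched as follows; the contention-window bounds are assumed 802.11-style defaults, not values taken from the paper.

```python
import random

CW_MIN, CW_MAX = 16, 1024   # contention window bounds (802.11-style, assumed)

def beb_window(collisions):
    """Binary Exponential Backoff: the contention window doubles after each
    successive collision, capped at CW_MAX."""
    return min(CW_MAX, CW_MIN << collisions)

def backoff_slots(collisions, rng=random):
    """Draw a uniform random slot count within the current window."""
    return rng.randrange(beb_window(collisions))
```

A history-based variant would replace the uniform draw with a distribution shaped by past contention outcomes.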
Institute of Scientific and Technical Information of China (English)
WU Jing-min; ZUO Hong-fu; CHEN Yong
2005-01-01
A particle swarm optimization (PSO) algorithm improved by an immunity algorithm (IA) was presented. The memory and self-regulation mechanisms of IA were used to keep PSO from plunging into local optima. Vaccination and immune selection mechanisms were used to prevent oscillation during the evolutionary process. The algorithm was introduced through an application in the direct maintenance cost (DMC) estimation of aircraft components. Experimental results show that the algorithm is simple to compute and runs quickly. It resolves the combinatorial optimization problem of component DMC estimation with simple and available parameters, and it has higher accuracy than individual methods, such as PLS, BP and v-SVM, as well as better performance than other combined methods, such as basic PSO and BP neural network.
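For context, a plain PSO loop (without the immune-inspired additions described above) can be sketched as follows; the hyper-parameters are common textbook defaults, not the paper's settings.

```python
import random

def pso(f, dim, bounds, n_particles=30, iterations=200, seed=1,
        w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimisation: each particle tracks its personal
    best, the swarm tracks a global best, and velocity updates blend
    inertia, cognitive pull and social pull."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# minimise the 2-D sphere function as a smoke test
best_pos, best_val = pso(lambda p: sum(t * t for t in p), dim=2, bounds=(-5.0, 5.0))
```

The IA-enhanced variant described in the abstract would add vaccination (injecting known-good components) and immune selection on top of this loop.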
Predicting students’ grades using fuzzy non-parametric regression method and ReliefF-based algorithm
Directory of Open Access Journals (Sweden)
Javad Ghasemian
Full Text Available In this paper we introduce two new approaches to predict the grades that university students will acquire in the final exam of a course, and to improve the obtained results, based on some features extracted from logged data in an educational web-based system. First w ...
Directory of Open Access Journals (Sweden)
Xiaolei Yu
2014-10-01
Full Text Available Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic for global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from space. The Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor for the Landsat project, providing two adjacent thermal bands, which is of great benefit for LST inversion. In this paper, we compared three different approaches for LST inversion from TIRS, including the radiative transfer equation-based method, the split-window (SW) algorithm and the single channel (SC) method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combined with the MODIS 8-day emissivity product. For the investigated sites and scenes, results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy, with RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy.
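The radiative transfer equation-based inversion can be sketched as follows. The Planck-inversion constants K1 and K2 are commonly cited Landsat 8 TIRS band-10 calibration values and should be treated as assumptions here, as should the toy atmospheric parameters in the sanity check.

```python
import math

# commonly cited Landsat 8 TIRS band-10 calibration constants (assumed here)
K1 = 774.8853    # W / (m^2 sr um)
K2 = 1321.0789   # K

def lst_from_rte(l_sensor, tau, eps, l_up, l_down, k1=K1, k2=K2):
    """Radiative-transfer-equation LST inversion: remove the atmospheric
    path radiance and the reflected downwelling radiance, then invert the
    Planck function through the band calibration constants."""
    l_surface = (l_sensor - l_up - tau * (1.0 - eps) * l_down) / (tau * eps)
    return k2 / math.log(k1 / l_surface + 1.0)

# sanity check: a blackbody at 300 K seen through a transparent atmosphere
l_bb = K1 / (math.exp(K2 / 300.0) - 1.0)
t_recovered = lst_from_rte(l_bb, tau=1.0, eps=1.0, l_up=0.0, l_down=0.0)
```

In practice tau, l_up and l_down come from an atmospheric model (e.g. driven by reanalysis profiles) and eps from an emissivity product such as the MODIS one used above.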
Genetic algorithms as global random search methods
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
Edge Crossing Minimization Algorithm for Hierarchical Graphs Based on Genetic Algorithms
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
We present an edge crossing minimization algorithm for hierarchical graphs based on genetic algorithms, and compare it with some heuristic algorithms. The proposed algorithm is more efficient and has the following advantages: the frame of the algorithms is unified, the method is simple, and its implementation and revision are easy.
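The quantity being minimized, crossings between adjacent layers of a hierarchical graph, can be counted directly from the endpoint orderings. This is the simple quadratic pairwise count, not an optimized inversion-counting routine:

```python
def count_crossings(edges, top_pos, bottom_pos):
    """Count edge crossings between two adjacent layers of a hierarchical
    graph: edges (u1, v1) and (u2, v2) cross iff the order of their
    endpoints is inverted between the top and bottom layers."""
    crossings = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (u1, v1), (u2, v2) = edges[i], edges[j]
            if (top_pos[u1] - top_pos[u2]) * (bottom_pos[v1] - bottom_pos[v2]) < 0:
                crossings += 1
    return crossings

# two parallel edges plus one crossing pair
edges = [("a", "x"), ("b", "y"), ("a", "y"), ("b", "x")]
n_cross = count_crossings(edges, {"a": 0, "b": 1}, {"x": 0, "y": 1})
```

A GA for this problem typically encodes the vertex ordering of each layer as the chromosome and uses this count (summed over layer pairs) as the fitness to minimize.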
Directory of Open Access Journals (Sweden)
U. Jeong
2015-06-01
Full Text Available An online version of the OMI (Ozone Monitoring Instrument near-ultraviolet (UV aerosol retrieval algorithm was developed to retrieve aerosol optical thickness (AOT and single scattering albedo (SSA based on the optimal estimation (OE method. Instead of using the traditional look-up tables for radiative transfer calculations, it performs online radiative transfer calculations with the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT model to eliminate interpolation errors and improve stability. The OE-based algorithm has the merit of providing useful estimates of uncertainties simultaneously with the inversion products. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in Northeast Asia (DRAGON NE-Asia 2012 were used to validate the retrieved AOT and SSA. The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET products that is comparable to or better than the correlation with the operational product during the campaign. The estimated retrieval noise and smoothing error perform well in representing the envelope curve of actual biases of AOT at 388 nm between the retrieved AOT and AERONET measurements. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface albedo at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine mode fraction (FMF were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for future studies.
Genetic algorithm and particle swarm optimization combined with Powell method
Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui
2013-10-01
In recent years, population-based algorithms have become increasingly robust and easy to use. Based on Darwin's theory of evolution, they search for the best solution using a population that progresses over several generations. This paper presents variants of a hybrid genetic algorithm (Genetic Algorithm) and a bio-inspired hybrid algorithm (Particle Swarm Optimization), both combined with a local method, the Powell method. The developed methods were tested with twelve test functions from the unconstrained optimization context.
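The hybrid idea, a population-based global phase followed by a derivative-free local method, can be sketched as follows. Plain random sampling stands in for the evolutionary phase and a coordinate-wise shrinking-step search stands in for Powell's direction-set method; both are simplifications, not the paper's implementations.

```python
import random

def local_polish(f, x, step=1.0, tol=1e-8):
    """Derivative-free local refinement by coordinate-wise search with a
    shrinking step: a minimal stand-in for the Powell direction-set method."""
    x = list(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                # keep stepping along this coordinate while it helps
                while f(x[:d] + [x[d] + delta] + x[d + 1:]) < f(x):
                    x[d] += delta
                    improved = True
        if not improved:
            step *= 0.5
    return x

def hybrid_search(f, dim, bounds, n_samples=40, seed=2):
    """Hybrid scheme in the spirit of the paper: a global sampling phase
    followed by local refinement of the best candidate found."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_samples)]
    return local_polish(f, min(pop, key=f))

# smoke test on the 2-D sphere function
x_opt = hybrid_search(lambda p: sum(t * t for t in p), dim=2, bounds=(-5.0, 5.0))
```

The division of labour is the point: the population phase supplies a good basin, the local phase supplies precision that population methods reach only slowly.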
Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai
2016-01-01
Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709
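Building the NDSM from four-point-scale pairwise ratings, together with the intra-cluster weight a GA fitness could reward, can be sketched as follows; the adjectives and ratings are invented for illustration.

```python
def build_ndsm(adjectives, ratings):
    """Numerical design structure matrix for Kansei adjectives: cell (i, j)
    holds the four-point-scale link weight (0-3) judged between adjectives
    i and j; the matrix is symmetric with a zero diagonal."""
    n = len(adjectives)
    ndsm = [[0] * n for _ in range(n)]
    for (i, j), weight in ratings.items():
        ndsm[i][j] = ndsm[j][i] = weight
    return ndsm

def cluster_weight(ndsm, cluster):
    """Intra-cluster link weight: a natural fitness contribution for a
    candidate Kansei cluster (higher means more coherent)."""
    return sum(ndsm[i][j] for i in cluster for j in cluster if i < j)

# hypothetical adjectives and pairwise four-point-scale judgements
kansei = ["sporty", "dynamic", "elegant"]
ndsm = build_ndsm(kansei, {(0, 1): 3, (0, 2): 1, (1, 2): 0})
```

A GA clustering run would encode a partition of the adjectives as the chromosome and maximize the summed intra-cluster weight (possibly penalizing cluster count or size).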
Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein
2010-01-01
In this paper degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and the blur function using the Newton optimization method, and then the estimation is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and the blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimation using a maximum a posteriori (MAP) estimator and a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-Files. The MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the code written in MATLAB is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the complete running time) are selected. Then these slow sections are translated to C++ and linked to MATLAB. In fact, the high load of information in images and processed data in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The code written for our video deblurring algorithm in MATLAB contains eight "for" loops. These eight "for" loops use 60% of the total execution time of the entire program and so the runtime should be
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method named the eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and the conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
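The construction can be made concrete for a symmetric 2x2 Hessian, where the eigendecomposition is available in closed form; this is an illustrative sketch, not the paper's code.

```python
import math

def modified_newton_direction(H, g):
    """Eigenvalue-decomposition-based modified Newton direction for a
    symmetric 2x2 Hessian H: decompose H, replace negative eigenvalues by
    their absolute values, and return d = -H_mod^{-1} g, which is always a
    descent direction for a nonzero gradient g."""
    a, b, c = H[0][0], H[0][1], H[1][1]
    mean, half_diff = (a + c) / 2.0, (a - c) / 2.0
    r = math.hypot(half_diff, b)
    lam1, lam2 = mean + r, mean - r                   # eigenvalues of H
    if abs(b) < 1e-15:                                # H already diagonal
        V = [[1.0, 0.0], [0.0, 1.0]] if a >= c else [[0.0, 1.0], [1.0, 0.0]]
    else:                                             # eigenvector (b, lam-a)
        n1 = math.hypot(b, lam1 - a)
        n2 = math.hypot(b, lam2 - a)
        V = [[b / n1, b / n2], [(lam1 - a) / n1, (lam2 - a) / n2]]
    inv1 = 1.0 / max(abs(lam1), 1e-12)                # 1/|lambda|, guarded
    inv2 = 1.0 / max(abs(lam2), 1e-12)
    p1 = V[0][0] * g[0] + V[1][0] * g[1]              # components of V^T g
    p2 = V[0][1] * g[0] + V[1][1] * g[1]
    return [-(V[0][0] * inv1 * p1 + V[0][1] * inv2 * p2),
            -(V[1][0] * inv1 * p1 + V[1][1] * inv2 * p2)]

# indefinite Hessian diag(1, -2): the plain Newton direction would ascend
# along the second coordinate, the modified direction descends
d = modified_newton_direction([[1.0, 0.0], [0.0, -2.0]], [1.0, 1.0])
```

For general n x n Hessians the same recipe applies with a numerical symmetric eigensolver in place of the closed form.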
An Experimental Method for the Active Learning of Greedy Algorithms
Velazquez-Iturbide, J. Angel
2013-01-01
Greedy algorithms constitute an apparently simple algorithm design technique, but its learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of the selection function, and is based on explicit learning goals. It mainly consists of an…
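The didactic focus on the selection function can be made concrete with a generic greedy schema in which the selection function is an explicit parameter; the activity-selection instantiation below is an assumed example, not necessarily one from the paper.

```python
def greedy(candidates, select, feasible):
    """Generic greedy schema: repeatedly apply the selection function to the
    remaining candidates and keep each choice only if it stays feasible."""
    candidates, solution = list(candidates), []
    while candidates:
        best = select(candidates)
        candidates.remove(best)
        if feasible(solution + [best]):
            solution.append(best)
    return solution

# instantiation: activity selection; the selection function picks the
# activity with the earliest finish time, feasibility means no overlap
def compatible(chosen):
    return all(a[1] <= b[0] for a, b in zip(chosen, chosen[1:]))

intervals = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
             (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
chosen = greedy(intervals,
                select=lambda cs: min(cs, key=lambda iv: iv[1]),
                feasible=compatible)
```

Swapping in a different `select` (e.g. shortest duration, earliest start) on the same schema is exactly the kind of experiment such a didactic method invites, since only earliest-finish is optimal here.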
Congestion Rate Control Algorithm Based on Damped Newton Method
Institute of Scientific and Technical Information of China (English)
黄玉涛
2014-01-01
This paper proposes a congestion rate control algorithm that uses the damped Newton method to compute link prices. Simulation results show that the proposed algorithm converges faster and outperforms the link-price algorithm based on the gradient descent method.
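A damped Newton iteration for a single link price can be sketched as follows; the demand curve, capacity and damping rule are invented illustrative choices, not the paper's model.

```python
def damped_newton_price(demand, d_demand, capacity, p0, iters=25):
    """Damped Newton iteration for the link price p at which aggregate
    demand matches link capacity: solve f(p) = demand(p) - capacity = 0.
    The Newton step is halved (damping) until the residual shrinks."""
    p = p0
    for _ in range(iters):
        f = demand(p) - capacity
        step = -f / d_demand(p)
        alpha = 1.0
        while abs(demand(p + alpha * step) - capacity) >= abs(f) and alpha > 1e-6:
            alpha *= 0.5                 # damp the Newton step
        p += alpha * step
    return p

# toy elastic demand 10/p against capacity 2 (illustrative numbers): p* = 5
p_star = damped_newton_price(lambda p: 10.0 / p, lambda p: -10.0 / (p * p),
                             capacity=2.0, p0=1.0)
```

Gradient descent on the same residual takes small fixed steps and converges linearly; the Newton step uses curvature and converges quadratically near the solution, which is the speed advantage the abstract reports.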
Orecchia, Lorenzo
2010-01-01
In this paper, we consider the following graph partitioning problem: The input is an undirected graph $G=(V,E),$ a balance parameter $b \in (0,1/2]$ and a target conductance value $\gamma \in (0,1).$ The output is a cut which, if non-empty, is of conductance at most $O(f),$ for some function $f(G, \gamma),$ and which is either balanced or well correlated with all cuts of conductance at most $\gamma.$ Spielman and Teng gave an $\tilde{O}(|E|/\gamma^{2})$-time algorithm for $f= \sqrt{\gamma \log^{3}|V|}$ and used it to decompose graphs into a collection of near-expanders. We present a new spectral algorithm for this problem which runs in time $\tilde{O}(|E|/\gamma)$ for $f=\sqrt{\gamma}.$ Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming (SDP) approach. We first conside...
An image segmentation method based on an accelerated Dijkstra algorithm
Institute of Scientific and Technical Information of China (English)
戴虹
2011-01-01
The optimal path searching algorithm known as Dijkstra's algorithm is used for image segmentation. An accelerated Dijkstra algorithm is presented to reduce the computational load of the classical Dijkstra algorithm and to speed up its operation. A Live-Wire image segmentation method based on the accelerated Dijkstra algorithm is presented to sketch the contour of the object of interest in an image, and an area filling method is used to segment the object. The experimental results show that the algorithm performs image segmentation correctly and has good anti-noise ability; in addition, it requires fewer interactions than manual segmentation and runs faster than the original Live-Wire algorithm.
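One common way to cut the work of the classical algorithm when only a single source-target path is needed (as in Live-Wire contour tracing between seed points) is heap-based Dijkstra with early termination; this generic sketch is not necessarily the paper's specific acceleration, and it assumes the target is reachable.

```python
import heapq

def dijkstra(graph, source, target):
    """Dijkstra shortest path with a binary heap, stopping as soon as the
    target is settled. graph maps a node to a list of (neighbour, weight)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:                   # early termination
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target               # walk predecessors back to source
    while node != source:
        path.append(node)
        node = prev[node]
    return dist[target], [source] + path[::-1]

graph = {"a": [("b", 1.0), ("c", 4.0)],
         "b": [("c", 2.0), ("d", 6.0)],
         "c": [("d", 3.0)]}
cost, path = dijkstra(graph, "a", "d")
```

In Live-Wire, the nodes are pixels, edge weights come from gradient-based cost features, and the recovered path is the object contour segment between two user seeds.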
Variables Bounding Based Retiming Algorithm
Institute of Scientific and Technical Information of China (English)
宫宗伟; 林争辉; 陈后鹏
2002-01-01
Retiming is a technique for optimizing sequential circuits. In this paper, we discuss this problem and propose an improved retiming algorithm based on variable bounding. Through the computation of lower and upper bounds on the variables, the algorithm can significantly reduce the number of constraints and speed up the execution of retiming. Furthermore, the elements of the matrices D and W are computed in a demand-driven way, which reduces the memory requirements. Experimental results on the ISCAS89 benchmarks show that our algorithm is very effective for large-scale sequential circuits.
Evolutionary algorithm based configuration interaction approach
Chakraborty, Rahul
2016-01-01
A stochastic configuration interaction method based on an evolutionary algorithm is designed as an affordable approximation to full configuration interaction (FCI). The algorithm comprises initiation, propagation and termination steps, where the propagation step is performed with cloning, mutation and cross-over, taking inspiration from the genetic algorithm. We have tested its accuracy on the 1D Hubbard problem and a molecular system (symmetric bond breaking of the water molecule). We have tested two different fitness functions, based on the energy of the determinants and on the CI coefficients of the determinants. We find that the absolute value of the CI coefficients is a more suitable fitness function when combined with a fixed selection scheme.
Research of the Kernel Operator Library Based on Cryptographic Algorithm
Institute of Scientific and Technical Information of China (English)
王以刚; 钱力; 黄素梅
2001-01-01
Conventional encryption mechanisms and algorithms have some limitations. A kernel operator library based on cryptographic algorithms is put forward. Owing to the impenetrability of its algorithms, a data transfer system using the cryptographic algorithm library has remarkable advantages over traditional approaches in algorithm rebuilding and optimization, in easily adding and deleting algorithms, and in improving security. Because the cryptographic algorithm library is extensible, the user can choose among all available algorithms to counter a given attack.
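The extensibility argument amounts to a registry of interchangeable algorithm pairs that can be added or removed without touching the transfer-system code; a minimal sketch (the XOR "cipher" is a placeholder for demonstration only and is not cryptographically secure):

```python
from typing import Callable, Dict, Tuple

# extensible cipher registry: name -> (encrypt, decrypt) callables
CIPHERS: Dict[str, Tuple[Callable[[bytes, bytes], bytes],
                         Callable[[bytes, bytes], bytes]]] = {}

def register(name, encrypt_fn, decrypt_fn):
    CIPHERS[name] = (encrypt_fn, decrypt_fn)

def unregister(name):
    CIPHERS.pop(name, None)

def encrypt(name, key, data):
    return CIPHERS[name][0](key, data)

def decrypt(name, key, data):
    return CIPHERS[name][1](key, data)

# toy XOR "cipher" for demonstration only -- NOT cryptographically secure
def xor_bytes(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

register("xor-demo", xor_bytes, xor_bytes)
```

A production version would register vetted primitives from an established cryptographic library behind the same interface; the point illustrated is only the pluggable structure.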
Energy Technology Data Exchange (ETDEWEB)
Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)
2015-10-11
The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by taking the ratio of the net peak areas between the two detectors, referenced to the accuracy of the detection spectrum of the HPGe detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2=0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.
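The reported linear relation between correction coefficient and energy is an ordinary least-squares fit with an R-squared quality measure; a self-contained sketch with invented data points (the energies and coefficients below are not the paper's measurements):

```python
def linear_fit(energies, coefficients):
    """Ordinary least-squares line fit of spectrum-correction coefficient
    against energy; returns slope, intercept and the coefficient of
    determination R^2 used to judge linearity."""
    n = len(energies)
    mx = sum(energies) / n
    my = sum(coefficients) / n
    sxx = sum((x - mx) ** 2 for x in energies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(energies, coefficients))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(energies, coefficients))
    ss_tot = sum((y - my) ** 2 for y in coefficients)
    return slope, intercept, 1.0 - ss_res / ss_tot

# hypothetical gamma-line energies (keV) and coefficients on an exact line
energies = [100.0, 300.0, 662.0, 1173.0, 1332.0]
coeffs = [0.002 * e + 0.5 for e in energies]
slope, intercept, r2 = linear_fit(energies, coeffs)
```

With real peak-area ratios the residual scatter lowers R^2; the abstract's R^2 = 0.9765 would come out of exactly this computation on the measured pairs.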
Seizure detection algorithms based on EMG signals
DEFF Research Database (Denmark)
Conradsen, Isa
Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was based ... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: our results suggest that EMG signals could be used to develop an automatic seizure detection system. However, different patients might require different types of algorithms/approaches.
Ramos, A; Talaia, P; Queirós de Melo, F J
2016-01-01
The main goal of this work was to develop an approximate model to study the dynamic behavior and predict the stress distribution in an in vitro Charnley cemented hip arthroplasty. An alternative version of the described pseudo-dynamic procedure is proposed, using the Newmark time integration algorithm. An internal restoring force vector is numerically calculated from the displacement, velocity, and acceleration vectors. A numerical model of hip replacement was developed to analyze the deformation of a dynamically stressed structure for all time steps. The experimental measurement of the resulting internal forces generated in the structure (the internal restoring force vector) is the second fundamental step of the pseudo-dynamic procedure. These data are used as feedback by the time integration algorithm, which updates the structure's shape for the next displacement, velocity, and acceleration vectors. In the field of biomechanics, this method contributes to the determination of a dynamically equivalent in vitro stress field of a cemented hip prosthesis, for implants fitted in patients with normal mobility or who practice sports. Consequences of the stress distribution in the implant zone subjected to cyclic fatigue loads were also discussed using a finite element model. Application of this method in biomechanics appears to be a useful tool for approximate characterization of the peak stress state. Results show a peak value of around two times that of the static situation, making possible the prediction of future damage and a programmed clinical examination in patients using hip prostheses. PMID:25483822
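The Newmark time integration at the core of the pseudo-dynamic procedure can be sketched for a linear single-degree-of-freedom system; in an actual pseudo-dynamic test the k*x term would be replaced by the measured restoring force fed back from the specimen. All parameters below are illustrative.

```python
def newmark_sdof(m, c, k, x0, v0, force, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*x'' + c*x' + k*x = f(t). The default
    average-acceleration variant (beta = 1/4, gamma = 1/2) is
    unconditionally stable for linear systems."""
    x, v = x0, v0
    a = (force(0.0) - c * v - k * x) / m
    trajectory = [x]
    for n in range(1, nsteps + 1):
        xp = x + dt * v + dt * dt * (0.5 - beta) * a   # displacement predictor
        vp = v + dt * (1.0 - gamma) * a                # velocity predictor
        # solve the linearly implicit update for the new acceleration
        a = (force(n * dt) - c * vp - k * xp) / (m + gamma * dt * c
                                                 + beta * dt * dt * k)
        x = xp + beta * dt * dt * a
        v = vp + gamma * dt * a
        trajectory.append(x)
    return trajectory

# free vibration of an undamped oscillator (m=1, k=4): exact x(t) = cos(2t)
traj = newmark_sdof(1.0, 0.0, 4.0, x0=1.0, v0=0.0,
                    force=lambda t: 0.0, dt=0.001, nsteps=3142)
```

The check against the analytic solution cos(2t) over one period (t close to pi) confirms the scheme's accuracy at this step size.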
A New Method of Detecting Pulmonary Nodules with PET/CT Based on an Improved Watershed Algorithm
Zhao, Juanjuan; Ji, Guohua; Qiang, Yan; Han, Xiaohong; Pei, Bo; Shi, Zhenghao
2015-01-01
Background: Integrated 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) is widely performed for staging solitary pulmonary nodules (SPNs). However, the diagnostic efficacy for SPNs based on PET/CT is not optimal. Here, we propose a detection method based on PET/CT that can differentiate malignant and benign SPNs with few false positives. Method: Our proposed method combines the features of positron-emission tomography (PET) and computed tomography (CT)....
Zare Hosseinzadeh, Ali; Bagheri, Abdollah; Ghodrati Amiri, Gholamreza; Koo, Ki-Young
2014-04-01
In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the existence of some limitations in the number of attached sensors on structures. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors.
Bechet, P.; Mitran, R.; Munteanu, M.
2013-08-01
Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s observation interval. The performance of the processing algorithm was validated by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. To calculate the error, the reference heart rate was measured using a classic contact-based measurement system.
Evolutionary algorithm based index assignment algorithm for noisy channel
Institute of Scientific and Technical Information of China (English)
李天昊; 余松煜
2004-01-01
A globally optimal solution to vector quantization (VQ) index assignment on a noisy channel, the evolutionary algorithm based index assignment algorithm (EAIAA), is presented. The algorithm yields a significant reduction in average distortion due to channel errors over conventional arbitrary index assignment, as confirmed by experimental results over the memoryless binary symmetric channel (BSC) for any bit error rate.
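As a rough illustration of the objective an index-assignment optimizer like EAIAA works against, the sketch below computes the expected channel-induced distortion of a VQ index assignment over a memoryless BSC. The codebook, bit-error probability, and the Gray-style versus scrambled assignments are illustrative values, not taken from the paper.

```python
def expected_distortion(codebook, assignment, p, n_bits):
    """Average channel-induced distortion of a VQ codebook over a memoryless BSC.

    codebook   : list of codevectors (tuples of floats)
    assignment : assignment[i] = binary index sent for codevector i
                 (a permutation of 0 .. 2**n_bits - 1)
    p          : bit-error probability of the BSC
    """
    inv = {idx: i for i, idx in enumerate(assignment)}  # received index -> codevector
    total = 0.0
    m = len(codebook)
    for i in range(m):                       # transmitted codevector (uniform prior)
        sent = assignment[i]
        for recv in range(2 ** n_bits):      # every possible received index
            d = bin(sent ^ recv).count("1")  # Hamming distance = number of bit flips
            prob = (p ** d) * ((1 - p) ** (n_bits - d))
            ci, cj = codebook[i], codebook[inv[recv]]
            total += prob * sum((a - b) ** 2 for a, b in zip(ci, cj))
    return total / m

# A 4-vector, 2-bit codebook: a good assignment maps close codevectors to
# indices differing in one bit; a scrambled one does not.
cb = [(0.0,), (1.0,), (2.0,), (3.0,)]
good = [0b00, 0b01, 0b11, 0b10]   # Gray-code-like ordering
bad = [0b00, 0b11, 0b01, 0b10]
print(expected_distortion(cb, good, 0.01, 2) < expected_distortion(cb, bad, 0.01, 2))  # → True
```

An evolutionary search in the spirit of EAIAA would use `expected_distortion` as its fitness function and evolve the permutation `assignment` to minimize it.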
Function Optimization Based on Quantum Genetic Algorithm
Directory of Open Access Journals (Sweden)
Ying Sun
2014-01-01
Optimization methods are important in engineering design and application. The quantum genetic algorithm combines quantum computation with the genetic algorithm and offers good population diversity, rapid convergence, and strong global search capability. A novel quantum genetic algorithm is proposed, called the Variable-boundary-coded Quantum Genetic Algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded chromosomes, so that much shorter chromosome strings can be obtained. The method of encoding and decoding chromosomes is first described, and then a new adaptive selection scheme for the angle parameters of the rotation gate is put forward based on the core ideas and principles of quantum computation. Eight typical functions are optimized to evaluate the effectiveness and performance of vbQGA against the standard Genetic Algorithm (sGA) and the Genetic Quantum Algorithm (GQA). The simulation results show that vbQGA is significantly superior to sGA in all aspects and outperforms GQA in robustness and solving velocity, especially for multidimensional and complicated functions.
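A minimal sketch of the quantum-inspired machinery this family of algorithms shares: qubit chromosomes stored as rotation angles, observation (collapse) into bit strings, and a rotation gate that pulls each qubit toward the best solution found so far. The fixed rotation step and the toy one-max objective are assumptions for illustration; vbQGA's variable-boundary coding and adaptive angle selection are not reproduced here.

```python
import math
import random

def observe(thetas):
    """Collapse a qubit chromosome (list of angles) into a bit string:
    P(bit = 1) = sin^2(theta)."""
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in thetas]

def rotate(thetas, observed, best, delta=0.05 * math.pi):
    """Rotate each qubit toward the corresponding bit of the best solution
    (fixed-step stand-in for the adaptive angle lookup of a full QGA)."""
    out = []
    for t, b_obs, b_best in zip(thetas, observed, best):
        step = 0.0 if b_obs == b_best else (delta if b_best == 1 else -delta)
        out.append(t + step)
    return out

def fitness(bits):          # toy one-max objective: maximize the number of ones
    return sum(bits)

random.seed(1)
n_bits, pop_size, gens = 16, 8, 60
population = [[math.pi / 4] * n_bits for _ in range(pop_size)]  # equal superposition
best = observe(population[0])
for _ in range(gens):
    for k in range(pop_size):
        x = observe(population[k])
        if fitness(x) > fitness(best):
            best = x
        population[k] = rotate(population[k], x, best)
print(fitness(best))   # typically converges toward the all-ones string
```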
Directory of Open Access Journals (Sweden)
Ying-Chih Lai
2016-05-01
The demand for pedestrian navigation has increased along with the rapid progress in mobile and wearable devices. This study develops an accurate and usable Step Length Estimation (SLE) method for a Pedestrian Dead Reckoning (PDR) system, with features including a wide range of step lengths, a self-contained system, and real-time computing, based on multi-sensor fusion and Fuzzy Logic (FL) algorithms. The wide-range SLE developed in this study was achieved by using a knowledge-based method to model the walking patterns of the user. The input variables of the FL are step strength and frequency, and the output is the estimated step length. Moreover, a waist-mounted sensor module has been developed using low-cost inertial sensors. Since low-cost sensors suffer from various errors, a calibration procedure has been utilized to improve accuracy. The proposed PDR scheme demonstrates its ability to be implemented on waist-mounted devices in real time and is suitable for the indoor and outdoor environments considered in this study without the need for map information or any pre-installed infrastructure. The experimental results show that the maximum distance error was within 1.2% of 116.51 m in an indoor environment and 1.78% of 385.2 m in an outdoor environment.
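The FL step-length mapping described above can be sketched as a small Mamdani-style rule base over step frequency and strength, defuzzified by a weighted average. The membership ranges and rule outputs below are illustrative guesses, not the calibrated values from the study.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def step_length(freq_hz, strength_g):
    """Estimate step length (m) from step frequency (Hz) and acceleration
    strength (g). All ranges and rule outputs are illustrative values."""
    # Fuzzify inputs: slow/normal/fast cadence, weak/strong acceleration.
    slow = tri(freq_hz, 0.5, 1.0, 1.6)
    normal = tri(freq_hz, 1.0, 1.6, 2.2)
    fast = tri(freq_hz, 1.6, 2.2, 3.0)
    weak = tri(strength_g, 0.0, 0.2, 0.6)
    strong = tri(strength_g, 0.2, 0.6, 1.2)
    # Rule base: (firing strength, output step length in metres).
    rules = [
        (min(slow, weak), 0.45),
        (min(slow, strong), 0.55),
        (min(normal, weak), 0.60),
        (min(normal, strong), 0.70),
        (min(fast, weak), 0.75),
        (min(fast, strong), 0.85),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.65   # fall back to a nominal stride

print(round(step_length(1.6, 0.6), 2))   # → 0.7
```

Longer strides come out of faster, stronger steps, matching the knowledge-based walking-pattern idea in the abstract.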
Application of detecting algorithm based on network
Institute of Scientific and Technical Information of China (English)
张凤斌; 杨永田; 江子扬; 孙冰心
2004-01-01
Because current intrusion detection systems cannot detect undefined intrusion behavior effectively, this paper exploits the robustness and adaptability of genetic algorithms by integrating them into an intrusion detection system, and a detection algorithm based on network traffic is proposed. This algorithm is a real-time, self-learning algorithm that can detect undefined intrusion behaviors effectively.
Knowledge-based tracking algorithm
Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.
1990-10-01
This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate, so additional signal processing, including spectral filtering, CFAR, and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-associations-out-of-N-scans rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beam-splitting, and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance, with a nominal real-time delay of less than one second between illumination and display.
Directory of Open Access Journals (Sweden)
Lüdtke Rainer
2008-08-01
Background: Regression to the mean (RTM) occurs in situations of repeated measurements when extreme values are followed by measurements in the same subjects that are closer to the mean of the underlying population. In uncontrolled studies such changes are likely to be interpreted as a real treatment effect. Methods: Several statistical approaches have been developed to analyse such situations, including the algorithm of Mee and Chua, which assumes a known population mean μ. We extend this approach to a situation where μ is unknown and suggest varying it systematically over a range of reasonable values. Using differential calculus we provide formulas to estimate the range of μ where treatment effects are likely to occur when RTM is present. Results: We successfully applied our method to three real-world examples denoting situations when (a) no treatment effect can be confirmed regardless of which μ is true, (b) a treatment effect must be assumed independent of the true μ, and (c) in the appraisal of results of uncontrolled studies. Conclusion: Our method can be used to separate the wheat from the chaff in situations when one has to interpret the results of uncontrolled studies. In meta-analyses, health-technology reports or systematic reviews this approach may be helpful to clarify the evidence given from uncontrolled observational studies.
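The core of the approach, varying an unknown population mean μ and asking when an observed change could be explained by RTM alone, can be sketched under a bivariate-normal model. The correlation, baseline, and observed change below are invented for illustration, and Mee and Chua's full test statistic is not reproduced.

```python
def rtm_change(baseline, mu, rho):
    """Expected change from baseline attributable purely to regression to
    the mean, under a bivariate-normal model with test-retest correlation rho:
        E[follow-up | baseline = x] = mu + rho * (x - mu)
    so the expected RTM-only change is (rho - 1) * (x - mu).
    """
    return (rho - 1.0) * (baseline - mu)

# Vary the unknown population mean mu over a plausible range, as the abstract
# suggests, to see where an observed drop of -8 units in subjects selected at
# baseline = 160 could be explained by RTM alone (rho = 0.7, illustrative).
observed_change = -8.0
for mu in range(120, 165, 10):
    pure_rtm = rtm_change(160.0, mu, 0.7)
    explained = pure_rtm <= observed_change   # RTM alone predicts a drop this large
    print(mu, round(pure_rtm, 1), explained)
```

With these numbers, RTM alone accounts for the observed drop only when the true μ lies below about 133, so a genuine treatment effect would have to be assumed for larger values of μ.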
A Hybrid Algorithm for Satellite Data Transmission Schedule Based on Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
LI Yun-feng; WU Xiao-yue
2008-01-01
A hybrid scheduling algorithm based on a genetic algorithm is proposed in this paper for reconnaissance satellite data transmission. First, based on the description of satellite data transmission requests, a satellite data transmission task model and a satellite data transmission scheduling problem model are established. Second, the conflicts in scheduling are discussed; according to the meaning of possible conflict, a method to divide the possible-conflict task set is given. Third, a hybrid algorithm combining a genetic algorithm with heuristic information is presented. The heuristic information comes from two concepts: conflict degree and conflict number. Finally, an example demonstrates the algorithm's feasibility and its better performance than traditional algorithms.
Spectral element methods: Algorithms and architectures
Fischer, Paul; Ronquist, Einar M.; Dewey, Daniel; Patera, Anthony T.
1988-01-01
Spectral element methods are high-order weighted residual techniques for partial differential equations that combine the geometric flexibility of finite element methods with the rapid convergence of spectral techniques. Spectral element methods are described for the simulation of incompressible fluid flows, with special emphasis on implementation of spectral element techniques on medium-grained parallel processors. Two parallel architectures are considered: the first, a commercially available message-passing hypercube system; the second, a developmental reconfigurable architecture based on Geometry-Defining Processors. High parallel efficiency is obtained in hypercube spectral element computations, indicating that load balancing and communication issues can be successfully addressed by a high-order technique/medium-grained processor algorithm-architecture coupling.
Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2015-10-01
The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time at a fixed location. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by taking the ratio of the net peak areas between the two detectors, using the detection spectrum of the HPGe detector as the accuracy reference for the detection spectrum of the LaBr3 detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2 = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was thus verified as feasible.
Directory of Open Access Journals (Sweden)
Bisheng He
2014-01-01
A time-space network based optimization method is designed for the high-speed rail train timetabling problem to improve the service level of high-speed rail. A general time-space path cost is presented that considers both the train travel time and the high-speed rail operation requirements: (1) service frequency requirement; (2) stopping plan adjustment; and (3) priority of train types. The train timetabling problem based on time-space paths aims to minimize the total general time-space path cost of all trains. An improved branch-and-price algorithm is applied to solve the large-scale integer programming problem. Within the algorithm, rapid branching and node selection for the branch-and-price tree and a heuristic train time-space path generation for column generation are adopted to speed up the computation. The computational results of a set of experiments on China's high-speed rail system are presented, with discussions of the model validation, the effectiveness of the general time-space path cost, and the improved branch-and-price algorithm.
Study on Pattern Recognition Method of Osteoma Based on Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
余鹏; 吴朝霞; 马林; 王波; 程敬之
2001-01-01
In order to classify the pathological characteristics of osteoma more accurately and correctly by using combined fractal parameters, a genetic algorithm based on continuous variables was applied to the pattern classification of osteoma, together with the corresponding crossover and mutation operators. To address the oscillation and non-convergence observed in experiments with the initial algorithm, it was improved with an adaptive technique. By comparing the accuracy and speed of the two algorithms, it is proved that the improved adaptive genetic algorithm based on continuous variables is more robust and faster. Using this algorithm, osteomas can be effectively classified by their fractal parameters, as expected.
Institute of Scientific and Technical Information of China (English)
XIA Yuan-yuan; SHAO He-song; LI Shi-xiong; LU Jing-yu
2012-01-01
Fast and accurate calculation of the seismic source location is essential for microseismic monitoring. Most traditional microseismic monitoring processes in mines use the TDOA location method in two-dimensional space to position microseismic events; the two-dimensional model and the simplicity of the method may reduce the positioning accuracy, and the ill-conditioned equations produced by the TDOA location method increase the positioning error. Based on inversion theory, this article studies the mathematical models of the TDOA location method, the polarization analysis location method, and a comprehensive difference location method that adds an angle factor to the traditional TDOA location method. The feasibility of the three methods is verified by numerical simulation and analysis of their positioning errors. The results show that the comprehensive location method with the added angle difference has strong positioning stability and high positioning accuracy, and it may effectively reduce the impact of ill-conditioned equations on the positioning results. Applied to actual measurement data, the comprehensive location method may yield better positioning results.
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the most accurate human sensation compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned by the worst-case scenario tuning method. This is based on trial and error, and is affected by the driving and programming experience of the designers, making it the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues, and a resulting simulator that fails to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The false matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibrating amplitudes of the video become increasingly large.
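A minimal stand-in for the filtering stage of such a pipeline: smoothing a one-dimensional camera trajectory with a scalar Kalman filter and deriving per-frame compensation offsets. The noise variances are illustrative, and the paper's full model (which also tracks scaling, after SURF feature matching and modified RANSAC) is not reproduced.

```python
import numpy as np

def kalman_smooth(trajectory, q=1e-3, r=0.25):
    """Causally filter a 1-D camera-motion trajectory (e.g. cumulative dx per
    frame) with a scalar Kalman filter; q and r are illustrative process and
    measurement noise variances."""
    x_est, p_est = trajectory[0], 1.0
    out = [x_est]
    for z in trajectory[1:]:
        p_pred = p_est + q             # predict (static state model)
        k = p_pred / (p_pred + r)      # Kalman gain
        x_est = x_est + k * (z - x_est)
        p_est = (1.0 - k) * p_pred
        out.append(x_est)
    return np.array(out)

# Jittery horizontal trajectory: intended pan plus hand shake. Smooth it, then
# compensate each frame by (smoothed - raw), in the spirit of adjacent-frame
# compensation.
rng = np.random.default_rng(0)
raw = np.cumsum(rng.normal(1.0, 0.8, 120))
smooth = kalman_smooth(raw)
compensation = smooth - raw            # per-frame shift to apply
print(np.std(np.diff(smooth)) < np.std(np.diff(raw)))   # → True: less jitter
```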
Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui
2016-10-01
A novel particle swarm optimization algorithm based on an adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. Firstly, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the current iteration number and the fitness value of each particle, the algorithm changes the weight coefficient and adjusts the speed at which particles search the space, so the local optimization ability is enhanced. Secondly, a logistic self-mapping chaotic search is carried out within the particle swarm optimization algorithm, which makes the algorithm jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg Marquardt algorithm, the particle swarm optimization-Levenberg Marquardt algorithm, and the plain particle swarm optimization algorithm by changing the linewidth, the signal-to-noise ratio (SNR) and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflectometry, effectively improving the accuracy of Brillouin frequency shift extraction.
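The two ingredients named in the abstract, an inertia weight that adapts over the run and a logistic-map chaotic search around the global best, can be sketched as follows. The sphere objective stands in for the Brillouin-spectrum fitting error, and all coefficients are illustrative; the paper adapts the weight using fitness values, which is simplified here to a linear schedule.

```python
import random

def sphere(x):                        # toy objective standing in for the
    return sum(v * v for v in x)      # Brillouin-spectrum fitting error

def pso_chaos(f, dim=2, n=20, iters=100, lo=-5.0, hi=5.0, seed=7):
    random.seed(seed)
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in X]
    gbest = min(pbest, key=f)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters             # inertia: explore early, exploit late
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + 2.0 * random.random() * (pbest[i][d] - X[i][d])
                           + 2.0 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(pbest[i]):
                pbest[i] = list(X[i])
        gbest = min(pbest, key=f)
        # Chaotic local search around gbest via a logistic map.
        z = random.random()
        for _ in range(10):
            z = 4.0 * z * (1.0 - z)           # logistic map, chaotic at r = 4
            cand = [min(hi, max(lo, g + 0.1 * (hi - lo) * (z - 0.5)))
                    for g in gbest]
            if f(cand) < f(gbest):
                gbest = cand
    return gbest

best = pso_chaos(sphere)
print(sphere(best))
```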
Differential Search Algorithm Based Edge Detection
Gunen, M. A.; Civicioglu, P.; Beşdok, E.
2016-06-01
In this paper, a new method is presented for the extraction of edge information by using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.
DOA estimation method based on blind source separation algorithm
Institute of Scientific and Technical Information of China (English)
徐先峰; 刘义艳; 段晨东
2012-01-01
A new DOA (direction-of-arrival) estimation method based on a fast blind source separation algorithm (FBSS-DOA) is proposed in this paper. A group of correlation matrices possessing a diagonal structure is generated, and a joint-diagonalization cost function for blind source separation is introduced. To solve this cost function, a fast multiplicative iterative algorithm in the complex-valued domain is utilized. The demixing matrix is thereby estimated, and DOA estimation is realized. Compared with similar algorithms, this algorithm has wider applicability and better estimation performance. Simulation results verify its fast convergence and superior estimation performance.
Structure-Based Algorithms for Microvessel Classification
Smith, Amy F.
2015-02-01
© 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.
Graphical model construction based on evolutionary algorithms
Institute of Scientific and Technical Information of China (English)
Youlong YANG; Yan WU; Sanyang LIU
2006-01-01
Using Bayesian networks to model promising solutions from the current population of an evolutionary algorithm can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem, and it also consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by studying the local metric relationship of networks matching the dataset. This paper presents an algorithm to construct a tree model from a set of potential solutions using the above approach. This method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.
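The Bayesian Dirichlet metric itself is beyond a short sketch, but the idea of building a tree-structured model from a set of samples can be illustrated with a Chow-Liu-style construction: a maximum-weight spanning tree over pairwise empirical mutual information. This is a stand-in for, not a reproduction of, the paper's metric-based method; the data-generating process is invented.

```python
import math
import random
from collections import Counter

def mutual_information(data, i, j):
    """Empirical mutual information (nats) between variables i and j,
    where data is a list of tuples of discrete values."""
    n = len(data)
    pi = Counter(r[i] for r in data)
    pj = Counter(r[j] for r in data)
    pij = Counter((r[i], r[j]) for r in data)
    mi = 0.0
    for (a, b), c in pij.items():
        mi += (c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
    return mi

def chow_liu_tree(data, n_vars):
    """Maximum-weight spanning tree over MI scores (Prim's algorithm)."""
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        u, v, _ = max(((u, v, mutual_information(data, u, v))
                       for u in in_tree for v in range(n_vars)
                       if v not in in_tree),
                      key=lambda e: e[2])
        edges.append((u, v))
        in_tree.add(v)
    return edges

# Chain-dependent binary data: X1 is a noisy copy of X0, X2 of X1.
random.seed(3)
data = []
for _ in range(500):
    x0 = random.randint(0, 1)
    x1 = x0 if random.random() < 0.95 else 1 - x0
    x2 = x1 if random.random() < 0.9 else 1 - x1
    data.append((x0, x1, x2))
tree = chow_liu_tree(data, 3)
print(sorted(tuple(sorted(e)) for e in tree))   # recovers the chain structure
```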
ALGORITHM AND METHODS OF HUMAN RESOURCES EVALUATION
Gontiuk, Viktoriia
2014-01-01
The paper deals with the scientific positions and methodical approaches of human resource evaluation and indicates its importance in organization management. This study provides an algorithm for human resources evaluation. The author argues that human resources evaluation is realized most fully through the combined use of different methods (qualitative, quantitative and combined).
Institute of Scientific and Technical Information of China (English)
Menghan Wang; Zongmin Yue; Lie Meng
2016-01-01
In order to prevent cracking in the workpiece during the hot stamping operation, this paper proposes a hybrid optimization method based on Hammersley sequence sampling (HSS), finite element analysis, a back-propagation (BP) neural network and a genetic algorithm (GA). The mechanical properties of high-strength boron steel are characterized on the basis of uniaxial tensile tests at elevated temperatures. The samples of process parameters are chosen via HSS, which encourages exploration throughout the design space and hence achieves better discovery of a possible global optimum in the solution space. Meanwhile, numerical simulation is carried out to predict the forming quality of the optimized design. A BP neural network model is developed to obtain the mathematical relationship between the optimization goal and the design variables, and the genetic algorithm is used to optimize the process parameters. Finally, the results of numerical simulation are compared with those of a production experiment to demonstrate that the optimization strategy proposed in the paper is feasible.
Algorithmic Differentiation for Calculus-based Optimization
Walther, Andrea
2010-10-01
For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
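The derivative computation described above can be illustrated with the simplest form of Algorithmic Differentiation, forward mode via dual numbers: each value carries its derivative, so a function and its exact derivative are evaluated together at working precision. This is a minimal sketch of the general technique, not the presenter's implementation.

```python
# Forward-mode AD with dual numbers: (val, dot) pairs propagate the
# value and its derivative simultaneously, with no truncation error.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot  # function value and derivative part

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x) and f'(x) in one pass by seeding the dual part with 1."""
    y = f(Dual(x, 1.0))
    return y.val, y.dot

# f(x) = x^2 + 3x  ->  f'(x) = 2x + 3, so f(2) = 10 and f'(2) = 7
val, grad = derivative(lambda x: x * x + 3 * x, 2.0)
```

Higher-order derivatives and reverse mode follow the same operator-overloading idea with richer bookkeeping.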
Method of identifying overlapping communities based on GN algorithm
Institute of Scientific and Technical Information of China (English)
高庆一; 李牧
2015-01-01
The overlapping community detection problem in complex networks was studied. The notion of degree of membership was first presented to express how strongly a node belongs to a community, and the definition of modularity was then extended to undirected graphs with overlapping communities. An overlapping community detection algorithm was obtained by extending the classical algorithm presented by Girvan and Newman (the GN algorithm) for identifying disjoint communities. In order to improve running speed, a parallel algorithm based on MapReduce was also given. Experimental results on DBLP (digital bibliography and library project) data demonstrate the effectiveness of the proposed algorithms and show that they outperform other methods in efficiency.
Kostrzewa, Daniel; Josiński, Henryk
2016-06-01
The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions, Griewank, Rastrigin, and Rosenbrock, are frequently used as benchmarks because of their characteristics.
Directory of Open Access Journals (Sweden)
Fulgencio Cánovas-García
2015-04-01
Object-based image analysis allows several different features to be calculated for the resulting objects. However, a large number of features means longer computing times and might even result in a loss of classification accuracy. In this study, we use four feature ranking methods (maximum correlation, average correlation, Jeffries–Matusita distance and mean decrease in the Gini index) and five classification algorithms (linear discriminant analysis, naive Bayes, weighted k-nearest neighbors, support vector machines and random forest). The objective is to discover the optimal algorithm and feature subset to maximize accuracy when classifying a set of 1,076,937 objects, produced by the prior segmentation of a 0.45-m resolution multispectral image, with 356 features calculated on each object. The study area is both large (9070 ha) and diverse, which increases the possibility of generalizing the results. The mean decrease in the Gini index was found to be the feature ranking method that provided the highest accuracy for all of the classification algorithms. In addition, support vector machines and random forest obtained the highest accuracy in the classification, both using their default parameters. This is a useful result that could be taken into account in the processing of high-resolution images of large and diverse areas to obtain a land cover classification.
A Simplification Algorithm Based On Appearance Maintenance
Directory of Open Access Journals (Sweden)
Fang Wan
2010-12-01
This paper presents a new simplification algorithm named EQEM, based on the QEM algorithm, for simplifying geometry models with texture. The algorithm uses a new framework to integrate geometry and texture factors into the simplification process, within which an error metric descriptor is described in detail. First, the Gaussian curvature of each vertex is calculated to ensure good geometric similarity. The descriptor then takes visual importance into account during simplification: we introduce an edge detection method from image processing that uses the Mallat wavelet method to quickly extract the distinct edges from a texture image, and we compute the color distance of vertices that do not belong to any edge as a supplement to the metric descriptor. Experiments prove that the simplification keeps the sharply simplified model similar in appearance to the original. Because the descriptor is extended from the QEM error metric and can also be computed by matrix multiplication, the algorithm is highly efficient. Finally, since all the parameters in the error metric formula are adjustable, the algorithm can fit different types of mesh models and simplification scales.
FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM
VIPINKUMAR TIWARI
2012-01-01
Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy and redundant data, thus leading to more accurate recognition of faces from the database. The Cuckoo algorithm is one of the recent optimization algorithms in the league of nature-based algorithms, and its optimization results are better than those of the PSO and ACO optimization algorithms. The proposal of applying the Cuckoo algorithm for feature selection in the process of face ...
Institute of Scientific and Technical Information of China (English)
许永达
2013-01-01
The traditional method of generating examination papers has the disadvantages of large time and space consumption and a low success rate, while the simple genetic algorithm has the disadvantages of slow convergence and poor stability. An intelligent test paper construction method based on an improved genetic algorithm is therefore proposed. The method adaptively selects individuals and adjusts the crossover and mutation probabilities according to individual fitness, which accelerates the approach to the optimal solution and improves the efficiency and success rate of paper construction. The paper introduces the construction strategies, the mathematical model, and the detailed design of each module.
The Improved Cuckoo Search Algorithm Based on Quadratic Interpolation Method
Institute of Scientific and Technical Information of China (English)
刘佳; 冯震; 徐越群
2015-01-01
The Cuckoo Search (CS) algorithm was studied, and in order to remedy the shortcomings of the basic CS algorithm, such as low optimization precision, slow convergence and poor local search ability in late evolution, an improved CS algorithm based on the quadratic interpolation method (QI_CS) is proposed in this paper. The new algorithm makes full use of local information at each nest, enhances the local search ability of the algorithm, and speeds up convergence toward the global optimal solution. The feasibility and effectiveness of the new approach were verified through tests on benchmark functions. The experimental results show that the QI_CS algorithm is significantly superior to the original CS algorithm and, while retaining its strong global search ability, greatly improves its convergence and solution accuracy, making it a feasible and effective method for solving multimodal function optimization problems.
Improved PSO algorithm based network community detection method
Institute of Scientific and Technical Information of China (English)
张钰莎; 蒋盛益; 谢柏林; 唐凯
2013-01-01
Network community detection is a focus of the complex network research field. The time complexity of existing complex network community detection methods is high, and their accuracy depends too heavily on prior knowledge, so many existing community detection methods are unfit for practical network community structure analysis. This paper improves the PSO algorithm so that its parameter configuration becomes simpler. Based on the improved PSO algorithm, the paper proposes a complex network community detection method whose time complexity is low and which requires no prior knowledge of the network's community count or community node counts. Experimental results illustrate that the method performs well.
A New Page Ranking Algorithm Based On WPRVOL Algorithm
Directory of Open Access Journals (Sweden)
Roja Javadian Kootenae
2013-03-01
The amount of information on the web is always growing, so powerful tools are needed to search such a large collection. Search engines help users find their desired information among this massive volume more easily, and what matters in a search engine, and distinguishes one from another, is the page ranking algorithm it uses. In this paper a new page ranking algorithm based on the Weighted Page Rank based on Visits of Links (WPRVOL) algorithm is proposed, called WPR'VOL for short. The proposed algorithm considers the number of visits of first- and second-level in-links. The original WPRVOL algorithm takes into account the number of visits of the first-level in-links of a page and distributes rank scores based on page popularity, whereas the proposed algorithm considers both the in-links of the page itself (first-level in-links) and the in-links of the pages that point to it (second-level in-links) when calculating its rank, so more relevant pages are displayed at the top of the search result list. In summary, the proposed algorithm assigns a higher rank to pages when both the pages themselves and the pages that point to them are important.
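The visits-weighted ranking idea underlying WPRVOL can be sketched as follows: instead of distributing a page's rank uniformly over its out-links, each link receives a share proportional to how often it is visited. The graph, visit counts and damping factor below are illustrative assumptions, not the paper's data or exact formula.

```python
# Hedged sketch of a visits-weighted page rank in the spirit of WPRVOL.
def weighted_page_rank(links, visits, d=0.85, iters=50):
    pages = set(links) | {p for outs in links.values() for p in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for src, outs in links.items():
            total = sum(visits[(src, t)] for t in outs)
            for t in outs:
                # share of src's rank proportional to visits of this link
                new[t] += d * rank[src] * visits[(src, t)] / total
        rank = new
    return rank

# Toy web graph: A links to B and C, the A->C link is visited 3x more often.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
visits = {("A", "B"): 1, ("A", "C"): 3, ("B", "C"): 1, ("C", "A"): 1}
rank = weighted_page_rank(links, visits)
```

With these counts, C ends up ranked above B because it receives the more frequently visited link from A plus all of B's rank.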
DNA Coding Based Knowledge Discovery Algorithm
Institute of Scientific and Technical Information of China (English)
LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang
2002-01-01
A novel DNA-coding-based knowledge discovery algorithm is proposed, and an example verifying its validity is given. It is proved that this algorithm can efficiently discover new simplified rules from the original rule set.
Generalized Rule Induction Based on Immune Algorithm
Institute of Scientific and Technical Information of China (English)
郑建国; 刘芳; 焦李成
2002-01-01
A generalized rule induction mechanism for knowledge bases, based on an immune algorithm, builds an inheritance hierarchy of classes from the content of their knowledge objects. This hierarchy facilitates group-related processing tasks such as answering set queries, discriminating between objects, and finding similarities among objects. Building this hierarchy is a difficult task for knowledge engineers, and conceptual induction may be used to automate it or to assist engineers in creating such a classification structure. This paper introduces a new conceptual rule induction method that addresses the problem of clustering large numbers of structured objects. The conditions under which the method is applicable are discussed.
Opposition-Based Adaptive Fireworks Algorithm
Directory of Open Access Journals (Sweden)
Chibing Gong
2016-07-01
The fireworks algorithm (FWA) is a recent swarm intelligence algorithm inspired by observing firework explosions. The adaptive fireworks algorithm (AFWA) adds adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using the resulting opposition-based adaptive fireworks algorithm (OAFWA). The results show that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and the standard particle swarm optimization 2011 (SPSO2011) algorithm. The results indicate that OAFWA ranks highest of the six algorithms for both solution accuracy and runtime cost.
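The opposition-based learning idea used above is simple to state: for a candidate x in a search interval [a, b], its opposite is a + b - x, and evaluating both roughly doubles the chance of starting near the optimum. The sketch below shows OBL applied to population initialization only; the objective and bounds are illustrative, and this is not the paper's OAFWA implementation.

```python
import random

# Minimal sketch of opposition-based learning (OBL): for each random
# candidate x, also evaluate its opposite a + b - x and keep the better.
def obl_init(f, a, b, n, seed=0):
    rng = random.Random(seed)
    pop = []
    for _ in range(n):
        x = rng.uniform(a, b)
        x_opp = a + b - x          # the opposite point of x in [a, b]
        pop.append(min(x, x_opp, key=f))
    return pop

f = lambda x: (x - 2.0) ** 2       # toy objective, minimum at x = 2
pop = obl_init(f, -10.0, 10.0, 20)
```

The same opposite-point trick can be applied each generation (as OAFWA does with the firework sparks) rather than only at initialization.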
Distance Concentration-Based Artificial Immune Algorithm
Institute of Scientific and Technical Information of China (English)
LIU Tao; WANG Yao-cai; WANG Zhi-jie; MENG Jiang
2005-01-01
The diversity, adaptation and memory of the biological immune system attract much attention from researchers, and several optimization algorithms based on the immune system have been proposed. The distance concentration-based artificial immune algorithm (DCAIA) is proposed in this paper to overcome defects of the classical artificial immune algorithm (CAIA). Compared with the genetic algorithm (GA) and CAIA, DCAIA is good at avoiding premature convergence, maintaining the diversity of antibodies, and enhancing the convergence rate.
Honey Bees Inspired Optimization Method: The Bees Algorithm
Directory of Open Access Journals (Sweden)
Ernesto Mastrocinque
2013-11-01
Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, in order to satisfy one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex, emerging from the behaviors of the individuals of the swarm. Researchers have developed computational optimization methods based on biology such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees. The algorithm combines an exploitative neighborhood search with a random explorative search. After an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers an advantage over other optimization methods, depending on the nature of the problem.
Honey Bees Inspired Optimization Method: The Bees Algorithm.
Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo
2013-01-01
PMID:26462528
Zhang, Yanjun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong
2016-02-01
Given that traditional signal processing methods cannot effectively distinguish different vibration intrusion signals, a feature extraction and recognition method for vibration information is proposed based on EMD-AWPP and HOSA-SVM, for high-precision signal recognition in a distributed fiber optic intrusion detection system. When dealing with different types of vibration, the method first utilizes an adaptive wavelet processing algorithm based on empirical mode decomposition to reduce the influence of abnormal values in the sensing signal and improve the accuracy of signal feature extraction; not only is the low-frequency part of the signal decomposed, but the details in the high-frequency part are also better handled by time-frequency localization. Second, the bispectrum and bicoherence spectrum are used to accurately extract feature vectors for the different types of intrusion vibration. Finally, starting from a BPNN reference model, the recognition parameters of the SVM, tuned by particle swarm optimization, can distinguish the signals of different intrusion vibrations, which gives the identification model stronger adaptivity and self-learning ability and overcomes shortcomings such as easily falling into local optima. The simulation results show that the new method can effectively extract the feature vectors of the sensing information, eliminate the influence of random noise and reduce the effect of outliers for different types of intrusion source. The predicted category agrees with the output category, and the accuracy of vibration identification can reach above 95%, so the method is better than the BPNN recognition algorithm and effectively improves the accuracy of the information analysis. PMID:27209772
RED Algorithm based Iris Recognition
Directory of Open Access Journals (Sweden)
Mayuri M. Memane
2012-07-01
The human iris is one of the most reliable biometrics because of its uniqueness, stability and non-invasive nature. Biometrics-based human authentication systems are becoming more important as governments and corporations worldwide deploy them in schemes such as access and border control, time and attendance records, driving license registration and national ID cards. Various preprocessing steps, including segmentation, are carried out on the iris image; normalization handles the polar-to-rectangular conversion, and edges are detected using a Canny edge detector. Features are extracted using the ridge energy direction algorithm, which uses two directional filters, one horizontally and one vertically oriented. The final template is generated by comparing the two filtered templates and keeping the predominant edge. This final template is matched against the stored one using the Hamming distance, and the matching ID is displayed.
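The final matching step described above reduces to a fractional Hamming distance between two binary templates: the fraction of differing bits, compared against a decision threshold. The sketch below illustrates just this step; the 8-bit templates and the threshold value are illustrative, not the paper's parameters.

```python
# Template matching by fractional Hamming distance: two binary iris
# templates match when the fraction of differing bits is small enough.
def hamming_distance(t1, t2):
    assert len(t1) == len(t2)
    return sum(b1 != b2 for b1, b2 in zip(t1, t2)) / len(t1)

def is_match(t1, t2, threshold=0.32):
    # threshold here is an illustrative assumption, not a tuned value
    return hamming_distance(t1, t2) < threshold

stored = [1, 0, 1, 1, 0, 0, 1, 0]
probe  = [1, 0, 1, 0, 0, 0, 1, 0]   # one bit differs -> distance 1/8
```

Real systems also rotate one template over a small range of shifts and keep the minimum distance, to compensate for eye rotation.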
A generalized GPU-based connected component labeling algorithm
Komura, Yukihiro
2016-01-01
We propose a generalized GPU-based connected component labeling (CCL) algorithm that can be applied to both various lattices and to non-lattice environments in a uniform fashion. We extend our recent GPU-based CCL algorithm without the use of conventional iteration to the generalized method. As an application of this algorithm, we deal with the bond percolation problem. We investigate bond percolation on the honeycomb and triangle lattices to confirm the correctness of this algorithm. Moreover, we deal with bond percolation on the Bethe lattice as a substitute for a network structure, and demonstrate the performance of this algorithm on those lattices.
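The label-equivalence idea at the heart of connected component labeling can be sketched on a CPU with a union-find structure over a 2-D lattice; the GPU memory layout and the iteration-free scheme of the paper are not modeled here, only the underlying CCL computation on an illustrative grid.

```python
# Connected component labeling on a 2-D lattice via union-find
# (4-connectivity). A CPU analogue of the label-equivalence approach.
def ccl(grid):
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                parent[(r, c)] = (r, c)
                # merge with the already-visited north and west neighbors
                if r and grid[r - 1][c]:
                    union((r, c), (r - 1, c))
                if c and grid[r][c - 1]:
                    union((r, c), (r, c - 1))
    return len({find(p) for p in parent})

grid = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]   # two separate clusters
```

For percolation studies, the same routine run on randomly occupied bonds or sites yields the cluster statistics.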
Research on a shadow table method incorporating the MD5 algorithm
Institute of Scientific and Technical Information of China (English)
袁满; 康峰峰; 黄刚
2013-01-01
To rapidly extract source data, rapidly identify changed data and implement rapid incremental extraction, this paper analyses the working principle of the traditional shadow table method and proposes an improved linear algorithm that merges the MD5 algorithm into it. The comparison tables are scanned linearly, eliminating unnecessary rescanning; during the scan, the MD5 algorithm computes a "fingerprint" of each whole record, which reduces the number and duration of string comparisons and allows changed records to be recognized quickly. The proposed algorithm was verified in practice, and the results show that it improves the efficiency of data extraction from the database. Incremental extraction based on a shadow table is a general change capture method that can be implemented on any database, and applications using it can easily be ported across platforms, so it is well suited to heterogeneous database replication.
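The record-fingerprint idea can be sketched with Python's standard `hashlib`: an MD5 digest of the whole record replaces field-by-field string comparison, so one hash comparison per row tells whether it changed. The in-memory tables below are illustrative; in the paper's setting the shadow table lives in the database.

```python
import hashlib

# Shadow-table change capture via MD5 record fingerprints (sketch).
def fingerprint(record):
    # one digest over the whole record replaces per-field comparison
    return hashlib.md5("|".join(map(str, record)).encode()).hexdigest()

def changed_rows(source, shadow):
    """Return keys of rows that are new or whose fingerprint differs."""
    return [k for k, rec in source.items()
            if shadow.get(k) != fingerprint(rec)]

# Shadow table from the previous extraction run.
shadow = {1: fingerprint(("alice", 30)), 2: fingerprint(("bob", 25))}
# Current source data: row 2 changed, row 3 is new.
source = {1: ("alice", 30), 2: ("bob", 26), 3: ("carol", 40)}
```

A linear scan over `source` with a keyed lookup into `shadow` gives exactly the one-pass behavior the abstract describes.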
A graph spectrum based geometric biclustering algorithm.
Wang, Doris Z; Yan, Hong
2013-01-21
Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285
Li, Chuan; Li, Lin; Jie ZHANG; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its use in parallel computing because it requires updated neighboring values (i.e., from the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptio...
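The in-sweep data dependence the abstract refers to is visible in a minimal serial Gauss-Seidel sketch: each updated component is used immediately within the same sweep, which is exactly what complicates parallelization. The small system below is illustrative.

```python
# Serial Gauss-Seidel iteration for Ax = b. Unlike Jacobi, x[j] for
# j < i is already the fresh value from this sweep when x[i] is computed.
def gauss_seidel(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]  # uses freshly updated values
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # diagonally dominant -> convergence
b = [9.0, 13.0]                # exact solution: x = 14/11, y = 43/11
x = gauss_seidel(A, b)
```

Parallel variants typically recover concurrency by reordering the unknowns (e.g., red-black coloring) so that same-color updates are independent.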
ALGORITHM FOR GENERATING DEM BASED ON CONE
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
The digital elevation model (DEM) has a variety of applications in GIS and CAD and is the basic model for generating three-dimensional terrain features. Generally speaking, there are two methods for building a DEM. One is based upon a digital terrain model of discrete points and is characterized by fast speed and low precision; the other is based upon a triangular digital terrain model, whose features are slow speed and high precision. Combining the advantages of the two methods, an algorithm for generating a DEM from discrete points is presented in this paper. When interpolating elevation, the method creates a triangle containing the interpolation point, and the elevation of that point is obtained from the triangle. The method has the advantages of fast speed, high precision and low memory use.
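Once the triangle containing the query point is known, the elevation step described above reduces to barycentric interpolation of the three vertex elevations. The sketch below shows only that interpolation; the triangle coordinates are illustrative, and triangle location itself is not modeled.

```python
# Elevation of point p inside a triangle with vertices (x, y, z) by
# barycentric interpolation: z = w1*z1 + w2*z2 + w3*z3, w1+w2+w3 = 1.
def interpolate_elevation(p, tri):
    (x, y) = p
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3

# Right triangle rising from 10 m along one edge to 20 m at a vertex.
tri = [(0.0, 0.0, 10.0), (4.0, 0.0, 10.0), (0.0, 4.0, 20.0)]
z = interpolate_elevation((1.0, 1.0), tri)
```

The interpolation is exact for planar terrain patches, which is why triangle-based DEMs achieve the higher precision mentioned above.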
Indian Academy of Sciences (India)
Sangeetha S; S Jeevananthan
2015-12-01
Genetic Algorithms (GAs) have always done justice to the art of optimization. One such endeavor is made here by employing GA in a proficient way to determine the switching moments of a cascaded H-bridge seven-level inverter with equal DC sources; evolutionary techniques have proved efficient at solving such a problem through biological mimicry. The crossover operator is exploited using Random 3-Point Neighbourhood Crossover (RPNC) and Multi Midpoint Selective Bit Neighbourhood Crossover (MMSBNC). This paper deals with solving the selective harmonic elimination (SHE) equations using a binary-coded GA with a knowledge-based neighbourhood multipoint crossover technique, which directly determines the switching moments of the multilevel inverter under consideration. Although previous root-finding techniques such as Newton-Raphson or resultant-based methods attempt the same, the proposed approach offers faster convergence, better program reliability and a wider range of solutions. With an algorithm developed in Turbo C, the switching moments are calculated offline. The simulation results closely agree with the hardware results.
Parallel algorithms for the spectral transform method
Energy Technology Data Exchange (ETDEWEB)
Foster, I.T. [Argonne National Lab., IL (United States); Worley, P.H. [Oak Ridge National Lab., TN (United States)
1994-04-01
The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional FFTs and other parallel transforms.
Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods
Bhatnagar, S; Prashanth, L A
2013-01-01
Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained, with the necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment, so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for readers from sim...
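The key property of simultaneous perturbation stochastic approximation (SPSA), one of the methods covered above, is that the whole gradient is estimated from just two function evaluations per step, regardless of dimension. The sketch below shows the basic first-order recursion with fixed step sizes; the objective and gains are illustrative, and the decaying gain sequences of the full method are omitted.

```python
import random

# Minimal SPSA sketch: two evaluations under a simultaneous Bernoulli
# perturbation estimate every component of the gradient at once.
def spsa(f, theta, iters=200, a=0.1, c=0.1, seed=1):
    rng = random.Random(seed)
    theta = list(theta)
    for _ in range(iters):
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + c * d for t, d in zip(theta, delta)]
        minus = [t - c * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)           # only 2 evaluations per step
        theta = [t - a * diff / (2 * c * d)  # per-component gradient est.
                 for t, d in zip(theta, delta)]
    return theta

f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2  # min at (1, -2)
theta = spsa(f, [5.0, 5.0])
```

Because no explicit system model is needed, the same recursion applies when f is a noisy simulation output rather than a closed-form function.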
Zhang, Mei; Gu, Xiaoyun; Chen, Zhenjie; Li, Xue; Su, Mo
2007-06-01
The optical cable network now covers many cities of China, but there are still many medium and small sized cities that are not connected to the country's grid backbone optical cable network, and it is urgent to connect them as soon as possible. However, little work has been done to find a better way to choose routes for main optical cables, including work based on GIS methods. This paper proposes a new method for the route choice of main optical cables based on Dijkstra's algorithm. A model for route choice is built, the influencing factors are chosen and quantified according to the specific situation of Guanyun County, and the route of the county's main optical cables is chosen and drawn on the map. The result shows that the proposed method has more potential than the traditional method used in Guanyun County.
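The shortest-route core of the method above is the classical Dijkstra algorithm over a weighted graph whose edge costs would encode the quantified influencing factors. The sketch below uses Python's standard `heapq`; the small graph is illustrative, not Guanyun County data.

```python
import heapq

# Dijkstra's algorithm: minimum-cost route in a weighted directed graph.
def dijkstra(graph, src, dst):
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst          # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Toy network: edge weights stand in for quantified route-cost factors.
graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0), ("D", 4.0)],
         "C": [("D", 1.0)], "D": []}
cost, route = dijkstra(graph, "A", "D")
```

In the GIS setting, nodes would be candidate junction points and each weight a composite of terrain, land-use and construction-cost factors.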
Institute of Scientific and Technical Information of China (English)
吴振军; 冯为民; 胡智宏; 崔光照
2012-01-01
Leakage from asynchronous sampling causes errors in electric energy measurement. A new electric energy measurement method with a fixed sampling frequency is proposed. The method is based on the composite Simpson integration rule and accounts for the energy leakage of asynchronous sampling when the system frequency shifts, so it achieves accurate results. The modified composite Simpson integration formula is given. The electric energy of both an ideal and a real power system is simulated with the algorithm and compared with other measurement methods. The simulation results illustrate that the new algorithm reduces measurement errors dramatically while the computational complexity increases only slightly, so the modified composite Simpson integration method has good practical value.
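The numerical core of the method above is composite Simpson integration of sampled instantaneous power over the metering window. The plain (unmodified) rule is sketched below on an analytically checkable integrand; the paper's leakage correction is not reproduced.

```python
import math

# Composite Simpson rule: integral of f over [a, b] with n subintervals.
def simpson(f, a, b, n):
    """n must be even; error is O(h^4)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even nodes
    return s * h / 3

# Sanity check on a known integral: ∫_0^π sin(t) dt = 2.
energy = simpson(math.sin, 0.0, math.pi, 100)
```

In the metering application, f would be the sampled p(t) = u(t)i(t), and the modified formula adds a correction for the fractional period left over when sampling is not synchronized with the line frequency.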
Directory of Open Access Journals (Sweden)
S. Vijaya Kumar
2010-07-01
This paper presents an automated system for detecting masses in mammogram images. Breast cancer is one of the leading causes of female mortality in the world, and since its causes are unknown, it cannot be prevented. It is difficult for radiologists to provide both accurate and uniform evaluation of the enormous number of mammograms generated in widespread screening. Microcalcifications (calcium deposits) and masses are the earliest signs of breast carcinoma, and their detection is one of the key issues for breast cancer control; computer-aided detection of microcalcifications and masses is therefore an important and challenging task. This paper presents a novel approach for detecting microcalcification clusters. First, a digitized mammogram is taken from the Mammography Image Analysis Society (MIAS) database and preprocessed with an adaptive median filter. Next, the microcalcification clusters are identified using marker extraction on the gradient images obtained by multiscale morphological reconstruction, which avoids the over-segmentation typical of the watershed algorithm. Experimental results show that microcalcifications can be accurately and efficiently detected using the proposed approach.
Moving target detection method based on artificial bee colony algorithm
Institute of Scientific and Technical Information of China (English)
陈雷; 张立毅; 郭艳菊; 刘婷; 李锵
2012-01-01
A moving target detection method based on the artificial bee colony algorithm is proposed. The moving target detection problem is transformed into an independent component analysis problem. Kurtosis is selected as the criterion for independent component analysis, and the artificial bee colony algorithm is used to optimize the kurtosis-based objective function. The separated signal component is removed from the sequence images by a decorrelation method, and the moving target trajectory can then be extracted successfully. Computer experiments on both simulated and real moving targets show that the proposed method can clearly extract the trajectory of a moving object from sequence images.
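Kurtosis, the non-Gaussianity criterion named in the abstract, can be sketched in a few lines; the sample signals below are illustrative, not data from the paper.

```python
def kurtosis(x):
    """Excess kurtosis: ~0 for Gaussian data, negative for sub-Gaussian
    signals (e.g. uniform), strongly positive for spiky signals --
    which is why it serves as an independence/non-Gaussianity criterion
    in ICA-style objective functions."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / var ** 2 - 3.0

k_uniform = kurtosis([i / 999 for i in range(1000)])   # close to -1.2
k_spiky = kurtosis([0.0] * 99 + [10.0])                # strongly positive
```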
Social emotional optimization algorithm based on quadratic interpolation method
Institute of Scientific and Technical Information of China (English)
武建娜; 崔志华; 刘静
2011-01-01
The Social Emotional Optimization Algorithm (SEOA) is a new swarm intelligence population-based optimization algorithm that simulates human social behaviors. Individual decision-making ability and individual emotion, both of which affect the optimization results, are taken into account, so the diversity of the algorithm is much better than that of common swarm intelligence algorithms; however, its local search capacity still needs improvement. The quadratic interpolation method has strong local search ability, so introducing it into SEOA improves the search performance. Tests of the optimization performance on benchmark functions prove that introducing the quadratic interpolation method into SEOA strengthens its local search ability and thereby enhances its global search capability.
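The quadratic interpolation step itself can be sketched minimally: fit a parabola through three points and jump to its vertex, which is exact when the objective is locally quadratic. The test function below is illustrative, not one of the paper's benchmarks.

```python
def quadratic_interpolation_step(x1, x2, x3, f):
    """Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3) -- the
    classic one-dimensional quadratic interpolation local-search step."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / den

f = lambda x: (x - 1.7) ** 2 + 0.5     # illustrative quadratic objective
xstar = quadratic_interpolation_step(0.0, 1.0, 3.0, f)   # exact: 1.7
```

In a hybrid scheme such as the one the abstract describes, this step would refine the best candidate found by the population-based global search.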
Sensor Optimization Method Based on Improved Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
耿飞; 龙海辉; 赵健康; 丁鹏
2014-01-01
In this paper, an optimization method based on system observability is proposed for the optimized configuration of satellite solar panel sensors. The block analytical form of the observability Gramian is used to avoid solving a high-order Lyapunov matrix equation, and a sensor optimal allocation principle is proposed based on analysis of the particularity of observability. To quickly search for the optimal sensor locations and overcome the limitations of the traditional genetic algorithm, an improved adaptive genetic algorithm is presented: adaptive adjustment of the crossover and mutation probabilities, together with protection of excellent individuals, overcomes the premature convergence and divergence of the traditional genetic algorithm. Experimental results show that the improved genetic algorithm is effective for sensor placement optimization.
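The adaptive-probability idea can be sketched as below. The linear scaling and the floor value `pc_min` are assumptions in the spirit of classic adaptive GAs (Srinivas and Patnaik), not the paper's exact formulas.

```python
def adaptive_pc(f, f_max, f_avg, pc_max=0.9, pc_min=0.6):
    """Crossover probability that shrinks linearly for above-average
    individuals (protecting good genes) but never drops below pc_min,
    avoiding the stagnation of a zero-probability best individual.
    f is the individual's fitness; f_max/f_avg are the population's
    maximum and average fitness."""
    if f < f_avg or f_max <= f_avg:
        return pc_max                      # below-average: full disruption
    return max(pc_min, pc_max * (f_max - f) / (f_max - f_avg))

pc_best = adaptive_pc(10.0, 10.0, 5.0)     # best individual: floor value
pc_poor = adaptive_pc(3.0, 10.0, 5.0)      # below average: maximum value
```

The mutation probability would be adapted with the same shape, just with smaller constants.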
A Multi-Scale Gradient Algorithm Based on Morphological Operators
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Watershed transformation is a powerful morphological tool for image segmentation. However, the performance of the image segmentation methods based on watershed transformation depends largely on the algorithm for computing the gradient of the image to be segmented. In this paper, we present a multi-scale gradient algorithm based on morphological operators for watershed-based image segmentation, with effective handling of both step and blurred edges. We also present an algorithm to eliminate the local minima produced by noise and quantization errors. Experimental results indicate that watershed transformation with the algorithms proposed in this paper produces meaningful segmentations, even without a region-merging step.
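One way to sketch a multi-scale morphological gradient in the spirit of the paper is to sum the gradients at increasing structuring-element sizes, eroding each by the next-smaller element to thin the thick edges produced at large scales. The exact operator chain here is an assumption for illustration; `scipy.ndimage` supplies the flat grey-scale dilation/erosion.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def multiscale_gradient(img, n=3):
    """Average over scales i=1..n of erode(dilate(f,B_i) - erode(f,B_i),
    B_{i-1}), where B_i is a flat square structuring element of size
    2*i + 1. Large scales respond to blurred edges; the trailing erosion
    thins their thick responses back toward the true edge position."""
    img = img.astype(float)
    acc = np.zeros_like(img)
    for i in range(1, n + 1):
        size = 2 * i + 1
        grad = grey_dilation(img, size=size) - grey_erosion(img, size=size)
        acc += grey_erosion(grad, size=2 * (i - 1) + 1)
    return acc / n

step = np.zeros((9, 9))
step[:, 5:] = 1.0                  # vertical step edge
g = multiscale_gradient(step)      # strong response near the edge, zero far away
```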
Tang, Guang-hua; Xu, Chuan-long; Shao, Li-tang; Yang, Dao-ye; Zhou, Bin; Wang, Shi-min
2009-04-01
Valuable achievements on differential optical absorption spectroscopy (DOAS) for monitoring atmospheric pollutant gases have been made in the past decades. Based on the idea of setting a threshold according to the maximum value of the differential optical density, symbolized as OD'm, the traditional DOAS algorithm is combined in the present article with a Kalman-filtering-based DOAS algorithm to improve the detection limit without losing measurement accuracy. The two algorithms have different inversion accuracies at the same signal-to-noise ratio, and the inversion accuracy problem at short optical path lengths is well resolved by combining them. Theoretical and experimental research on the concentration measurement of SO2 in flue gases was carried out at normal temperature and atmospheric pressure. The results show that when OD'm is less than 0.0481, the measurement precision of the improved DOAS algorithm for SO2 is very high: the lower measurement limit of SO2 is less than 28.6 mg x m(-3) and the zero drift of the system is less than 2.9 mg x m(-3). If OD'm is between 0.0481 and 0.9272, the measurement precision of the traditional DOAS algorithm is high. However, if OD'm is more than 0.9272, the errors of both DOAS algorithms are very large and a linearity correction must be performed. PMID:19626898
A Method of Facial Expression Recognition Based on FKT Algorithm
Institute of Scientific and Technical Information of China (English)
邱伟
2011-01-01
Facial expression recognition has been one of the important research themes in the fields of pattern recognition and computer vision. In this paper, a face detection system based on the dynamic cascade learning algorithm is implemented in C++. With the obtained face detector, face samples are extracted to form the data sets for expression recognition. Finally, the FKT (Fukunaga-Koontz Transform) algorithm is applied to solve the expression recognition problem. The experimental results demonstrate the effectiveness of the proposed method.
Camera Calibration Method Based on OpenCV Algorithm Library
Institute of Scientific and Technical Information of China (English)
刘国平; 蔡建平
2011-01-01
By analyzing the perspective projection imaging model of the camera and the transformations between four Cartesian coordinate systems, it is made clear that the purpose of camera calibration is to solve the intrinsic and extrinsic parameters. After comparing the advantages and disadvantages of commonly used calibration methods, a camera calibration program based on the OpenCV algorithm library was developed in the VC++ environment. Experimental results show that the program can calibrate a camera automatically, quickly, and accurately.
Accuracy verification methods: theory and algorithms
Mali, Olli; Repin, Sergey
2014-01-01
The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis, and recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control. The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from the theory; and (3) discuss their advantages and drawbacks and areas of applicability, and give recommendations and examples.
Continuous Attributes Discretization Algorithm based on FPGA
Directory of Open Access Journals (Sweden)
Guoqiang Sun
2013-07-01
Full Text Available The paper addresses the problem of discretization of continuous attributes in rough set theory. Discretization of continuous attributes is an important part of rough set theory because most of the data we usually obtain are continuous. In order to improve the processing speed of discretization, we propose an FPGA-based discretization algorithm for continuous attributes that makes use of the speed advantage of FPGAs. Combined with the attribute dependency degree of rough set theory, the discretization system was divided into eight modules in a block design. This method can save much pretreatment time in rough set analysis and improve operational efficiency. Extensive experiments on fault diagnosis data for a certain fighter aircraft validate the effectiveness of the algorithm.
Algorithmic and analytical methods in network biology.
Koyutürk, Mehmet
2010-01-01
During the genomic revolution, algorithmic and analytical methods for organizing, integrating, analyzing, and querying biological sequence data proved invaluable. Today, increasing availability of high-throughput data pertaining to functional states of biomolecules, as well as their interactions, enables genome-scale studies of the cell from a systems perspective. The past decade witnessed significant efforts on the development of computational infrastructure for large-scale modeling and analysis of biological systems, commonly using network models. Such efforts have led to novel insights into the complexity of living systems, through development of sophisticated abstractions, algorithms, and analytical techniques that address a broad range of problems, including the following: (1) inference and reconstruction of complex cellular networks; (2) identification of common and coherent patterns in cellular networks, with a view to understanding the organizing principles and building blocks of cellular signaling, regulation, and metabolism; and (3) characterization of cellular mechanisms that underlie the differences between living systems, in terms of evolutionary diversity, development and differentiation, and complex phenotypes, including human disease. These problems pose significant algorithmic and analytical challenges because of the inherent complexity of the systems being studied; limitations of data in terms of availability, scope, and scale; intractability of resulting computational problems; and limitations of reference models for reliable statistical inference. This article provides a broad overview of existing algorithmic and analytical approaches to these problems, highlights key biological insights provided by these approaches, and outlines emerging opportunities and challenges in computational systems biology.
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
A novel method of global optimal path planning for a mobile robot is proposed based on an improved Dijkstra algorithm and the ant system algorithm. The method has three steps: first, MAKLINK graph theory is adopted to establish the free-space model of the mobile robot; second, the improved Dijkstra algorithm is used to find a sub-optimal collision-free path; and third, the ant system algorithm adjusts and optimizes the locations along the sub-optimal path to generate the globally optimal path for the mobile robot. Computer simulation experiments show that this method is correct and effective, and a comparison of results confirms that the proposed method outperforms the hybrid genetic algorithm in global optimal path planning.
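The Dijkstra stage of such a pipeline can be sketched as a plain shortest-path search on a weighted graph (the MAKLINK free-space construction and the ant-system refinement are beyond a few lines). The toy graph below is illustrative.

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, weight), ...]}. Returns (cost, path).
    A priority queue always expands the cheapest frontier node first."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float('inf'), []

g = {'A': [('B', 2), ('C', 5)], 'B': [('C', 1), ('D', 4)], 'C': [('D', 1)]}
cost, path = dijkstra(g, 'A', 'D')    # A-B-C-D at total cost 4
```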
FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM
Directory of Open Access Journals (Sweden)
VIPINKUMAR TIWARI
2012-07-01
Full Text Available Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy, and redundant data, thus leading to more accurate recognition of faces from the database. The Cuckoo algorithm is one of the recent optimization algorithms in the league of nature-based algorithms, and its optimization results are better than those of the PSO and ACO optimization algorithms. A proposal for applying the Cuckoo algorithm to feature selection in the face recognition process is presented in this paper.
A solution quality assessment method for swarm intelligence optimization algorithms.
Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua
2014-01-01
Nowadays, swarm intelligence optimization has become an important optimization tool widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak, so there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solution obtained by an algorithm for a practical problem; this greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered according to distance to divide the solution samples into several parts; the solution space and the "good enough" set can then be decomposed based on the clustering results, and the evaluation result is obtained using standard statistical techniques. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845
Dynamic route guidance algorithm based on artificial immune system
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
To improve the performance of K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it achieve better parallelism, the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the algorithm.
A new optimization algorithm based on chaos
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
In this article, some methods are proposed for enhancing the convergence velocity of the chaos optimization algorithm (COA) based on using the carrier wave twice: the first carrier wave searches for the neighborhood of the optimal point over the whole range, and the second carrier wave then performs a refined search within that neighborhood, which is faster and more accurate. In addition, the concept of using the carrier wave three times is proposed and put into practice to tackle multi-variable optimization problems, where the search for the optimal point of the last several variables is frequently worse than that of the first several.
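The two-carrier-wave idea can be sketched as below: the logistic map generates an ergodic sequence that first scans the whole interval, then rescans a shrunken interval around the stage-1 best. All constants (iteration counts, shrink factor, initial chaos value) are illustrative assumptions, not the article's settings.

```python
def chaos_optimize(f, lo, hi, n1=2000, n2=2000, shrink=0.01, x0=0.345):
    """Two-stage chaos search: the logistic map x <- 4x(1-x) produces an
    ergodic sequence in (0, 1). Stage 1 (first carrier wave) maps it onto
    [lo, hi]; stage 2 (second carrier wave) maps it onto a small interval
    around the stage-1 best point for a refined search."""
    x, best_v, best_p = x0, float('inf'), lo
    for _ in range(n1):
        x = 4.0 * x * (1.0 - x)
        p = lo + (hi - lo) * x
        if f(p) < best_v:
            best_v, best_p = f(p), p
    r = shrink * (hi - lo)                       # shrunken search radius
    a, b = max(lo, best_p - r), min(hi, best_p + r)
    for _ in range(n2):
        x = 4.0 * x * (1.0 - x)
        p = a + (b - a) * x
        if f(p) < best_v:
            best_v, best_p = f(p), p
    return best_p, best_v

xm, fm = chaos_optimize(lambda x: (x - 0.7) ** 2, 0.0, 2.0)
```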
Smell Detection Agent Based Optimization Algorithm
Vinod Chandra S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is employed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied under different computational constraints that incorporate path-based problems, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems.
Cryptography Based MSLDIP Watermarking Algorithm
Directory of Open Access Journals (Sweden)
Abdelmgeid A. Ali
2015-08-01
Full Text Available In recent years, the internet revolution has resulted in an explosive growth of multimedia applications. The rapid advancement of the internet has made it easier to send data accurately and quickly to the destination; at the same time, it is also easier to modify and misuse valuable information through hacking. Digital watermarking is one of the proposed solutions for copyright protection of multimedia data. In this paper a cryptography-based MSLDIP (Modified Substitute Last Digit in Pixel) watermarking method is proposed. The main goal of this method is to increase the security of the MSLDIP technique while hiding the watermark in the pixels of a digital image in such a manner that the human visual system is not able to differentiate between the cover image and the watermarked image. The experimental results show that this method can be used effectively in the field of watermarking.
Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam
2016-07-01
Inventory has been a major concern in supply chains, and numerous recent studies on inventory control have brought forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations for chemical raw materials of textile industries in Bangladesh, where industries are assumed to currently pursue individual replenishment. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy is used, since indirect grouping is suggested to outperform direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each individual item, so the replenishment cycle time for each product is T×ki. First, based on the data, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in total cost of replenishment. Second, discrepancies in demand are corrected by using Holt's method; however, demands can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, application of RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
Product design optimization method based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
杨延璞; 余隋怀; 陈登凯
2012-01-01
Starting from product semantics, this research presents a product design optimization method based on a genetic algorithm and constructs a semantics-based product design evolution model. By constructing product gene codes, gene strings, and a fitness function, original product design programs are optimized into new programs. Application to panel furniture design proves that the method is suitable.
A Global Optimization Algorithm Based on Incremental Metamodel Method
Institute of Scientific and Technical Information of China (English)
魏昕; 吴义忠; 陈立平
2013-01-01
A new global optimization method based on an incremental metamodel is proposed for complex simulation model problems. First, to overcome the defects of the existing incremental Latin hypercube design, in which the number of new sampling points is hard to control and must be an integer multiple of the original number, an improved incremental Latin hypercube sampling method based on a subtraction rule is proposed. Second, combining the incremental Latin hypercube design with an incremental update method for radial basis functions, a new efficient global optimization algorithm is proposed. Finally, the method is applied to a pressure vessel design problem, and the results demonstrate the efficiency and engineering practicability of the presented method.
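Plain (non-incremental) Latin hypercube sampling, the design the paper extends, can be sketched as below; the incremental, subtraction-rule variant described in the abstract is more involved.

```python
import random

def latin_hypercube(n, d, seed=42):
    """n points in [0, 1]^d such that in every dimension each of the n
    equal strata contains exactly one point: take a random permutation
    of the strata per dimension and jitter each point within its stratum."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(c[i] for c in cols) for i in range(n)]

pts = latin_hypercube(10, 2)    # 10 stratified samples in the unit square
```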
Enterprise Human Resources Information Mining Based on Improved Apriori Algorithm
Directory of Open Access Journals (Sweden)
Lei He
2013-05-01
Full Text Available With the continuing development of information technology in today's modern society, enterprises' demand for human resources information mining is growing. Based on the current situation of enterprise human resources information mining, this paper puts forward a model based on an improved Apriori algorithm. The model introduces data mining technology and the traditional Apriori algorithm and improves on it, dividing the association-rule mining task of the original algorithm into two subtasks: producing frequent itemsets and producing rules. SQL technology is used to generate frequent itemsets directly, and a table-building method is used to extract the information of interest to users. The experimental results show that the improved model is more efficient than the original algorithm, and practical application tests show that the improved algorithm is practical and effective.
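For reference, the classic level-wise Apriori that the paper improves on can be sketched in pure Python. The SQL-based frequent-itemset generation of the paper is replaced here by an in-memory support count, and the toy transactions are illustrative.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent itemsets by level-wise candidate generation: join frequent
    k-itemsets into (k+1)-candidates, then prune any candidate with an
    infrequent k-subset (the Apriori property)."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    frequent, k_sets = {}, [frozenset([i]) for i in items]
    while k_sets:
        k_sets = [s for s in k_sets if support(s) >= min_support]
        for s in k_sets:
            frequent[s] = support(s)
        # join step: unions of pairs that form a (k+1)-itemset
        nxt = {a | b for a, b in combinations(k_sets, 2)
               if len(a | b) == len(a) + 1}
        # prune step: every k-subset must itself be frequent
        k_sets = [c for c in nxt
                  if all(frozenset(s) in frequent
                         for s in combinations(c, len(c) - 1))]
    return frequent

tx = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
freq = apriori(tx, 0.6)    # all singletons and pairs; {'a','b','c'} is pruned
```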
Multicast Routing Based on Hybrid Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
CAO Yuan-da; CAI Gui
2005-01-01
A new multicast routing algorithm based on a hybrid genetic algorithm (HGA) is proposed. A coding pattern based on the number of routing paths is used, and a fitness function that is easy to compute and makes the algorithm converge quickly is proposed, together with a new approach for setting the HGA's parameters. The simulation shows that the approach can greatly increase the convergence ratio and that the best parameter values of this algorithm differ from those of the original algorithms: the optimal mutation probability of the HGA was 0.50 in the experiment, versus 0.07 for the SGA. It is concluded that the population size has a significant influence on the HGA's convergence ratio when the mutation probability is larger: an algorithm with a small population size has a high average convergence rate, while the population size has little influence on the HGA at lower mutation probabilities.
FMS Scheduling Simulation Based on an Evolution Algorithm
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
An FMS (flexible manufacturing system) scheduling algorithm based on an evolution algorithm (EA) is developed in this paper by intensively analyzing and researching the scheduling method. Many factors related to FMS scheduling are considered. New explanations for a common kind of encoding model are given, and the rationality of the encoding model is ensured by designing a set of new encoding methods; a simulation experiment is then performed. The results show that an FMS scheduling optimization problem with multiple constraint conditions can be effectively solved by an FMS scheduling simulation model based on the EA. Compared with other methods, this algorithm has the advantages of good stability and quick convergence.
Video segmentation using multiple features based on EM algorithm
Institute of Scientific and Technical Information of China (English)
张风超; 杨杰; 刘尔琦
2004-01-01
Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We consider video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain the maximum-likelihood estimates of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum-likelihood framework to obtain accurate segmentation results, and use the temporal consistency among video frames to improve the speed of the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method achieves attractive accuracy and robustness.
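The EM iteration for a Gaussian model can be sketched in one dimension with two components (the paper's pixel feature vectors are multi-dimensional and combine motion and color; the two-cluster data below are a toy set).

```python
import math

def em_gmm_1d(data, iters=200):
    """EM for a two-component 1-D Gaussian mixture. E-step: posterior
    responsibility of each component for each point. M-step: re-estimate
    weights, means, and variances from the responsibilities."""
    mu = [min(data), max(data)]            # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:                     # E-step
            w = [pi[k] / math.sqrt(2 * math.pi * var[k]) *
                 math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        for k in (0, 1):                   # M-step
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
    return mu, var, pi

# two well-separated clusters around 0 and 10
data = [0.1 * i for i in range(-10, 11)] + [10 + 0.1 * i for i in range(-10, 11)]
mu, var, pi = em_gmm_1d(data)
```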
Cosmic Web Reconstruction through Density Ridges: Method and Algorithm
Chen, Yen-Chi; Freeman, Peter E; Genovese, Christopher R; Wasserman, Larry
2015-01-01
The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the Subspace Constrained Mean Shift (SCMS) algorithm (Ozertem and Erdogmus (2011); Genovese et al. (2012)) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS method to datasets sampled from the P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA and to LOWZ and CMASS data fro...
FAST NAS-RIF ALGORITHM USING ITERATIVE CONJUGATE GRADIENT METHOD
Directory of Open Access Journals (Sweden)
A.M.Raid
2014-04-01
Full Text Available Many improvements in image enhancement have been achieved by the Non-negativity And Support constraints Recursive Inverse Filtering (NAS-RIF) algorithm. Deterministic constraints such as non-negativity, known finite support, and the existence of blur-invariant edges are assumed for the true image. The NAS-RIF algorithm iteratively and simultaneously estimates the pixels of the true image and the point spread function (PSF) based on the conjugate gradient method. Since NAS-RIF does not assume parametric models for either the image or the blur, we update the parameters of the conjugate gradient method and the objective function to improve the minimization of the cost function and the execution time. We propose different versions of linear and nonlinear conjugate gradient methods to obtain better image restoration results with high PSNR.
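The conjugate gradient core that NAS-RIF's minimization relies on can be sketched for the linear symmetric positive-definite case; NAS-RIF applies nonlinear variants to its own cost function, so this shows only the underlying iteration. The 2x2 system is illustrative.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient for A x = b with A symmetric positive-definite.
    Search directions are A-conjugate, so an n-dimensional SPD system
    converges in at most n iterations in exact arithmetic."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                     # residual for x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)     # solves the 2x2 SPD system in two steps
```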
Clonal Selection Algorithm Based Iterative Learning Control with Random Disturbance
Directory of Open Access Journals (Sweden)
Yuanyuan Ju
2013-01-01
Full Text Available The clonal selection algorithm is improved and proposed as a method to solve optimization problems in iterative learning control, and a clonal-selection-based optimal iterative learning control algorithm with random disturbance is proposed. In the algorithm, the size of the search space is decreased while the convergence speed is increased, and a model-modifying device is used to cope with the uncertainty in the plant model. Simulations show that the convergence speed is satisfactory for nonlinear plants regardless of whether or not the plant model is precise. The simulation tests verify that a controlled system with random disturbance can reach stability using the improved iterative learning control law but not the traditional control law.
Speech Enhancement based on Compressive Sensing Algorithm
Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel
2013-12-01
Various methods for speech enhancement have been proposed over the years, and accurate speech enhancement design focuses mainly on quality and intelligibility. A novel speech enhancement method using compressive sensing (CS) is presented. CS is a new paradigm for acquiring signals, fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models; it therefore provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using an informal listening test and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well over a wide range of speech tests, giving good noise suppression compared with conventional approaches without obvious degradation of speech quality.
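One standard CS reconstruction routine, Orthogonal Matching Pursuit, can be sketched as below; the abstract does not name its recovery algorithm, so OMP here is an assumption for illustration, and the identity sensing matrix in the example is only a sanity check (a real CS system would use a random measurement matrix with fewer rows than columns).

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the current residual, then re-fit all picked columns
    to y by least squares and update the residual."""
    residual, support = y.astype(float), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

A = np.eye(8)                       # trivial sensing matrix (sanity check)
x_true = np.zeros(8)
x_true[2], x_true[5] = 3.0, -2.0    # 2-sparse signal
x_rec = omp(A, x_true.copy(), 2)    # recovers the two nonzero entries
```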
A PSO-Based Subtractive Data Clustering Algorithm
Directory of Open Access Journals (Sweden)
Gamal Abdel-Azeem
2013-03-01
Full Text Available There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast and high-quality clustering algorithms play an important role in helping users to effectively navigate, summarize, and organize this information. Recent studies have shown that partitional clustering algorithms such as the k-means algorithm are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and are prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data; the cluster estimates can be used to initialize iterative optimization-based clustering and model identification methods. In this paper, we present a hybrid Subtractive+PSO (Particle Swarm Optimization) clustering algorithm that performs fast clustering. For comparison, we applied the Subtractive+PSO clustering algorithm, PSO, and Subtractive clustering on three different datasets. The results illustrate that the Subtractive+PSO clustering algorithm generates the most compact clustering results compared to the other algorithms.
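Subtractive clustering, the initializer the hybrid builds on, can be sketched as below in a simplified form (a single stopping ratio eps instead of Chiu's separate accept/reject thresholds); the radius ra and the toy points are illustrative.

```python
import math

def subtractive_clustering(points, ra=1.0, eps=0.15):
    """Simplified subtractive clustering: each point's potential is the
    sum of Gaussian contributions from all points; the highest-potential
    point becomes a center, its influence (at the wider radius 1.5*ra)
    is subtracted, and the process repeats until the remaining maximum
    potential falls below eps times the first one."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2
    pot = [sum(math.exp(-alpha * sum((a - b) ** 2 for a, b in zip(p, q)))
               for q in points) for p in points]
    centers, p_first = [], max(pot)
    while True:
        i = max(range(len(points)), key=lambda k: pot[k])
        if pot[i] < eps * p_first:
            break
        c, pc = points[i], pot[i]
        centers.append(c)
        pot = [pot[k] - pc * math.exp(-beta * sum((a - b) ** 2
               for a, b in zip(points[k], c))) for k in range(len(points))]
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centers = subtractive_clustering(pts)   # one center per cluster
```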
An intersection algorithm based on transformation
Institute of Scientific and Technical Information of China (English)
CHEN Xiao-xia; YONG Jun-hai; CHEN Yu-jian
2006-01-01
How to obtain the intersection of curves and surfaces is a fundamental problem in many areas such as computer graphics, CAD/CAM, computer animation, and robotics. In particular, how to deal with singular cases, such as tangency or superposition, is key to obtaining correct intersection results. A method for solving the intersection problem based on coordinate transformation is presented. Using the Lagrange multiplier method, the minimum distance between the center of a circle and a quadric surface is derived as well. Experience shows that the coordinate transformation can significantly simplify the intersection calculation in the tangency case and improve the stability of curve/surface intersection in singular cases. The new algorithm has been applied in a three-dimensional CAD system (GEMS) produced by Tsinghua University.
Adaptive scale-space construction algorithm based on Weber's law
Institute of Scientific and Technical Information of China (English)
刘立; 张瑞军; 万亚平; 黄欣阳; 彭复员
2012-01-01
Inspired by Weber's law, we propose an adaptive method for building a scale-space based on the "just noticeable difference". Weber's law holds that human perception of a pattern depends not only on the change of a stimulus but also on the original intensity of the stimulus. Following this view, we first compute the amount of information in an image from the sum of the roof-edge and step-edge features of Marr's visual theory. The change in image information at which human vision perceives a "just noticeable difference" between adjacent scale layers is then obtained experimentally. Finally, the scale-space is constructed adaptively by curve fitting. Experimental results show that the method reflects the characteristics of human visual perception: in target matching experiments the number of matched points increases by more than 25% relative to other scale-space construction methods, and the algorithm also performs well in denoising experiments.
A high-resolution DOA estimation method based on a greedy algorithm
Institute of Scientific and Technical Information of China (English)
王晓庆; 陶荣辉; 甘露
2012-01-01
Determining the direction of arrival (DOA) of radiation sources is an important research topic in array signal processing, widely used in radar, sonar, and wireless communication systems. This paper studies the DOA estimation problem for far-field narrowband signal sources. Using the spatial sparsity of the incident sources' directions, we build a sparse representation model of far-field narrowband sources. Based on the eigenvalue decomposition of the array output covariance matrix and the theory of greedy matching pursuit, we propose a DOA estimation method that combines eigenvalue decomposition with a multiple orthogonal matching pursuit algorithm (EIG-MOMP). First, eigenvalue decomposition reduces the dimension of the received array data; this reduction transforms the DOA estimation problem into a multiple measurement vectors (MMV) problem suitable for the MOMP algorithm. Then the MOMP algorithm estimates the DOA from the dimension-reduced data. High-resolution DOA estimation is achieved at low signal-to-noise ratio (SNR) with low computational complexity. The performance is compared with representative DOA estimation methods including MUSIC, Capon, and OMP. Simulation experiments validate the effectiveness of the proposed method regardless of the coherence of the incident signals.
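The greedy MMV step can be sketched as an orthogonal matching pursuit over a grid of candidate angles, selecting atoms by their joint correlation with all snapshots. The half-wavelength ULA steering model and all parameter names are illustrative assumptions, not the authors' EIG-MOMP implementation (in particular, the eigen-decomposition dimension reduction is omitted):

```python
import numpy as np

def momp_doa(Y, angles, n_sensors, n_sources):
    # Hypothetical steering dictionary for a half-wavelength ULA
    A = np.exp(-1j * np.pi * np.outer(np.arange(n_sensors),
                                      np.sin(np.deg2rad(angles))))
    support, R = [], Y.copy()
    for _ in range(n_sources):
        # Joint correlation of each atom with all measurement vectors
        corr = np.linalg.norm(A.conj().T @ R, axis=1)
        k = int(np.argmax(corr))
        support.append(k)
        # Least-squares fit on the selected atoms, then update residual
        As = A[:, support]
        X, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ X
    return sorted(angles[k] for k in support)
```

With noiseless on-grid sources this recovers the true angles exactly; the paper's contribution is making the same greedy recovery work at low SNR after dimension reduction.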
Yang, Qidong; Zuo, Hongchao; Li, Weidong
2016-01-01
Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there are uncertainties in land-surface parameters that can cause large offsets between the simulated results of land-surface process models and the observational soil moisture data. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to observe were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau; the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulations of soil moisture and latent heat flux in all tests, and differences between simulated results and observational data were clearly reduced; however, tests using the optimized parameters could not simultaneously improve the simulated net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, while the vegetation parameters vary over a large range. PMID:26991786
New Iterative Learning Control Algorithms Based on Vector Plots Analysis
Institute of Scientific and Technical Information of China (English)
XIE Sheng-Li; TIAN Sen-Ping; XIE Zhen-Dong
2004-01-01
Based on vector plot analysis, this paper investigates the geometric framework of the iterative learning control method. A new structure of iterative learning algorithms is obtained by analyzing the vector plots of some general algorithms. The structure of the new algorithm differs from those of existing algorithms, and it offers faster convergence and higher accuracy. Simulations presented here illustrate the effectiveness and advantages of the new algorithm.
Obstacle Detection Algorithm Based on Three Inter-Frame Difference Method
Institute of Scientific and Technical Information of China (English)
郭文俊; 乔世东
2015-01-01
The continuous rise in private motor vehicle ownership has led to a year-on-year increase in traffic accidents; how to reduce traffic accidents and the losses they cause has therefore become a focus of attention. Taking machine-vision-based road recognition and obstacle detection as its subject, this paper puts forward a three inter-frame difference method. The method differences each pair of three neighboring frames, combines the two difference results with a logical AND, and then binarizes the result to locate the moving target in the image.
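The difference-AND-binarize pipeline described above can be sketched directly; the threshold value is an illustrative assumption:

```python
import numpy as np

def three_frame_diff(f1, f2, f3, thresh=25):
    # Difference the middle frame against both neighbours
    d1 = np.abs(f2.astype(int) - f1.astype(int))
    d2 = np.abs(f3.astype(int) - f2.astype(int))
    # Binarize each difference, then AND them: only pixels that
    # changed over both intervals (the moving target) survive
    b1 = d1 > thresh
    b2 = d2 > thresh
    return (b1 & b2).astype(np.uint8) * 255
```

For a small textured target moving across three frames, the AND of the two binarized differences marks the target at its middle-frame position.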
Optical Sensor Based Corn Algorithm Evaluation
Optical sensor based algorithms for corn fertilization have been developed by researchers in several states. The goal of this international research project was to evaluate these different algorithms and determine their robustness over a large geographic area. Concurrently, the goal of this project was to...
A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems
Ali, Ahmed F.; Mohamed A. Tawhid
2016-01-01
Cuckoo search is a promising population-based metaheuristic that has been applied to many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines cuckoo search with the Nelder–Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder–Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for a number of iterations ...
Efficient mining of association rules based on gravitational search algorithm
Directory of Open Access Journals (Sweden)
Fariba Khademolghorani
2011-07-01
Association rule mining is one of the most widely used tools for discovering relationships among attributes in a database, and many algorithms have been introduced for discovering these rules. Most of them mine the rules in two separate stages, and most find frequently occurring rules that are easily predictable by users. This paper therefore discusses the application of the gravitational search algorithm, an evolutionary algorithm based on Newtonian gravity and the laws of motion, to discovering interesting association rules. Contrary to previous methods, the proposed method mines the best association rules without generating frequent itemsets and is independent of the minimum support and confidence values. Compared with mining association rules based on particle swarm optimization, the results show that our method is successful.
Agent-based Algorithm for Spatial Distribution of Objects
Collier, Nathan
2012-06-02
In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.
Research on Algorithms for Mining Distance-Based Outliers
Institute of Scientific and Technical Information of China (English)
WANGLizhen; ZOULikun
2005-01-01
Outlier detection is an important and valuable research topic in KDD (Knowledge Discovery in Databases). Identifying outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even weather forecasting. In existing methods for finding outliers, the notion of DB- (distance-based) outliers is not computationally restricted to small numbers of dimensions k and goes beyond the data space. Here we study algorithms for mining DB-outliers, focusing on algorithms unlimited by k. First, we present a partition-based algorithm (PBA), whose key idea is to gain efficiency by divide-and-conquer. Second, we present an optimized object-class-based algorithm (OCBA), whose computation is independent of k and whose efficiency matches that of the cell-based algorithm. We provide experimental results showing that the two new algorithms have better execution efficiency.
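For reference, the DB(p, D)-outlier notion that these algorithms accelerate can be stated as a naive nested-loop check, which is exactly the baseline the partition- and object-class-based algorithms are designed to beat; the parameter values below are illustrative:

```python
import numpy as np

def db_outliers(X, p=0.95, D=2.0):
    # An object is a DB(p, D)-outlier if at least a fraction p of the
    # other objects lie farther than distance D from it
    n = len(X)
    out = []
    for i in range(n):
        far = sum(np.linalg.norm(X[i] - X[j]) > D
                  for j in range(n) if j != i)
        if far >= p * (n - 1):
            out.append(i)
    return out
```

This naive check is O(n²) in the number of objects, which motivates the more efficient algorithms in the paper.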
Drilling Path Optimization Based on Particle Swarm Optimization Algorithm
Institute of Scientific and Technical Information of China (English)
ZHU Guangyu; ZHANG Weibo; DU Yuexiang
2006-01-01
This paper presents a new approach based on the particle swarm optimization (PSO) algorithm for solving the drilling path optimization problem, which belongs to a discrete space. Because the standard PSO algorithm is not guaranteed to converge globally or locally, the algorithm is improved, on the basis of its mathematical model, by regenerating particles whose evolution has stopped, giving it the ability to converge to the global optimum. The operators are also improved by establishing a duality transposition method and a handling manner for the operator elements, so that the improved operators satisfy the integer coding needed in drilling path optimization. Experiments with small node numbers indicate that the improved algorithm is easy to implement, converges fast, and has better global convergence characteristics; hence the new PSO can help solve the drilling path optimization problem.
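For context, the standard continuous PSO update that the paper adapts to discrete drilling-path coding looks like the following sketch; the inertia and acceleration constants are common textbook values, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Minimal continuous PSO minimizing f over [-5, 5]^dim
    rng = random.Random(0)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pb = [x[:] for x in xs]              # personal best positions
    gb = min(pb, key=f)[:]               # global best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                # Velocity: inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pb[i][d] - x[d])
                            + c2 * rng.random() * (gb[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) < f(pb[i]):
                pb[i] = x[:]
                if f(x) < f(gb):
                    gb = x[:]
    return gb
```

The paper's modifications (regenerating stalled particles, integer-coded operators) replace the real-valued update above with moves valid on drilling-path permutations.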
Automatic Image Registration Algorithm Based on Wavelet Transform
Institute of Scientific and Technical Information of China (English)
LIU Qiong; NI Guo-qiang
2006-01-01
An automatic image registration approach based on wavelet transform is proposed. The method uses multiscale wavelet transform to extract feature points and a coarse-to-fine strategy in the feature matching phase: a two-way matching method based on cross-correlation produces candidate point pairs, and a fine matching based on support strength refines them. Finally, based on an affine transformation model, the parameters are iteratively refined using least-squares estimation. Experimental results verify that the proposed algorithm can register various kinds of images rapidly and effectively.
Daylighting simulation: methods, algorithms, and resources
Energy Technology Data Exchange (ETDEWEB)
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, a complete analysis tool needs a wide range of methods, covering: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many of these basic methods, driven in part by the commercial computer graphics community (commerce, entertainment), the lighting industry, architectural rendering and visualization for projects, and academia (course materials, research). This has produced a very rich set of information resources with direct applicability to the small daylighting analysis community, much of it available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and a printed form is produced only secondarily. The electronic forms include online WWW pages and a downloadable PDF file with the same appearance and content; both include live primary and indirect links to actual information sources on the WWW. In most cases little additional commentary is provided regarding the information links or citations, which keeps the report very concise; the links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but
Collaborative Filtering Algorithms Based on Kendall Correlation in Recommender Systems
Institute of Scientific and Technical Information of China (English)
YAO Yu; ZHU Shanfeng; CHEN Xinmeng
2006-01-01
In this work, Kendall correlation based collaborative filtering algorithms for recommender systems are proposed. The Kendall correlation is used to measure the correlation among users by considering the relative order of their ratings. The Kendall-based algorithm rests on a more general model and could thus be more widely applied in e-commerce. Another finding of this work is that considering only positively correlated neighbors in prediction, in both the Pearson and Kendall algorithms, achieves higher accuracy than considering all neighbors, at only a small loss of coverage.
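The relative-order correlation used above is the Kendall tau coefficient; a minimal sketch over two users' co-rated items (names illustrative, ties handled in the simple tau-a style):

```python
from itertools import combinations

def kendall_tau(a, b):
    # Count concordant and discordant pairs of co-rated items:
    # a pair is concordant when both users order the two items the same way
    conc = disc = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n_pairs = len(a) * (len(a) - 1) // 2
    return (conc - disc) / n_pairs
```

Because only the ordering of the ratings matters, the measure is insensitive to each user's rating scale, which is the generality the abstract refers to.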
Fast image matching algorithm based on projection characteristics
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches through one-dimensional correlation. Because normalization is applied, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
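A minimal sketch of the idea: collapse the 2-D search to a 1-D normalized-correlation scan over column projections. The function and parameter names are illustrative assumptions, and only horizontal search is shown:

```python
import numpy as np

def projection_match(image, template):
    # Project both images onto one dimension by summing columns
    pi = image.sum(axis=0).astype(float)
    pt = template.sum(axis=0).astype(float)
    # Normalize the template projection (zero mean, unit variance),
    # making the score invariant to proportional brightness changes
    pt = (pt - pt.mean()) / (pt.std() + 1e-12)
    best, best_x = -np.inf, 0
    for x in range(image.shape[1] - template.shape[1] + 1):
        w = pi[x:x + template.shape[1]]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = (w * pt).mean()          # 1-D normalized correlation
        if score > best:
            best, best_x = score, x
    return best_x
```

The scan touches only the 1-D projections, so each candidate offset costs O(template width) instead of O(template area), which is the source of the speed-up.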
A hybrid features based image matching algorithm
Tu, Zhenbiao; Lin, Tao; Sun, Xiao; Dou, Hao; Ming, Delie
2015-12-01
In this paper, we present a novel image matching method to find the correspondences between two sets of image interest points. The proposed method is based on a revised third-order tensor graph matching method and introduces an energy function with four kinds of energy term. The third-order tensor method can hardly cope when the number of interest points is huge; to deal with this, we use a potential matching set and a vote mechanism to decompose the matching task into several sub-tasks. Moreover, the third-order tensor method sometimes finds only a local optimum, so we use a clustering method to divide the feature points into groups and sample feature triangles only between different groups, which makes it much easier for the algorithm to find the global optimum. Experiments on different image databases show that our method obtains correct matching results with relatively high efficiency.
QPSO-Based Adaptive DNA Computing Algorithm
Directory of Open Access Journals (Sweden)
Mehmet Karakose
2013-01-01
DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, DNA computing algorithms have limitations in convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed that runs the DNA computing algorithm with parameters adapted toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously and adaptively; (2) the adaptation is performed by the QPSO algorithm for goal-driven progress, faster operation, and flexibility with data; and (3) the DNA computing algorithm with the proposed approach is realized numerically in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the plain DNA computing algorithm.
Institute of Scientific and Technical Information of China (English)
(author not listed)
2003-01-01
A new FFT algorithm, the base-6 FFT algorithm, has been derived. The number of real multiplications for calculating the DFT of a complex sequence of length N=6^r by the base-6 FFT algorithm is Mr(N)=(14/3)Nlog6N-4N+4, and the number of real additions is Ar(N)=(23/3)Nlog6N-2N+2. For a real sequence, the operation counts are half those for a complex sequence.
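Taking the quoted operation counts at face value, they can be evaluated directly; the function below simply restates the abstract's Mr and Ar formulas (N must be a power of 6):

```python
import math

def base6_fft_cost(N):
    # Real multiplications and additions for a complex length-N DFT,
    # using the base-6 FFT operation counts quoted in the abstract
    r = math.log(N, 6)                   # log6(N)
    mults = 14 / 3 * N * r - 4 * N + 4   # Mr(N)
    adds = 23 / 3 * N * r - 2 * N + 2    # Ar(N)
    return mults, adds
```

For example, N = 36 = 6² gives Mr = 196 real multiplications and Ar = 482 real additions.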
Vehicle parameter identification using population based algorithms
GÖKDAĞ, Hakan
2015-01-01
This work deals with parameter identification of a vehicle using population-based algorithms such as particle swarm optimization (PSO), artificial bee colony optimization (ABC), and the genetic algorithm (GA). A full vehicle model with seven degrees of freedom (DoF) is employed, and two objective functions based on reference and computed responses are proposed. By solving the optimization problem, the vehicle mass, moments of inertia, and center-of-gravity parameters, which are necessary for later app...
A PRESSURE-BASED ALGORITHM FOR CAVITATING FLOW COMPUTATIONS
Institute of Scientific and Technical Information of China (English)
ZHANG Ling-xin; ZHAO Wei-guo; SHAO Xue-ming
2011-01-01
A pressure-based algorithm for the prediction of cavitating flows is presented. The algorithm employs a set of equations including the Navier-Stokes equations and a cavitation model describing the phase change between liquid and vapor. A pressure-based method is used to construct the algorithm, and the coupling between pressure and velocity is considered. The pressure correction equation is derived from a new continuity equation that employs a source term related to the phase change rate instead of the material derivative of density Dρ/Dt. This pressure-based algorithm allows for the computation of steady or unsteady, 2-D or 3-D cavitating flows. Two 2-D cases, the flow around a flat-nose cylinder and the flow around a NACA0015 hydrofoil, are simulated, and the periodic cavitation behaviors associated with re-entrant jets are captured. The algorithm shows good capability for computing time-dependent cavitating flows.
A Practical Localization Algorithm Based on Wireless Sensor Networks
Huang, Tao; Xia, Feng; Jin, Cheng; Li, Liang
2010-01-01
Many localization algorithms and systems have been developed by means of wireless sensor networks for both indoor and outdoor environments. To achieve higher localization accuracy, most existing localization algorithms utilize extra hardware, which increases the cost and greatly limits the range of location-based applications. In this paper we present a method which can effectively meet the different localization accuracy requirements of most indoor and outdoor location services in realistic applications. Our algorithm is composed of two phases: a partition phase, in which the target region is split into small grids, and a localization refinement phase, in which a higher-accuracy location is generated by applying a trick algorithm. A realistic demo system using our algorithm has been developed to illustrate its feasibility and availability. The results show that our algorithm can improve the localization accuracy.
Analog Circuit Design Optimization Based on Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
Mansour Barari
2014-01-01
This paper investigates an evolutionary designing system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and a particle swarm optimization (PSO) algorithm, are proposed to design analog ICs with practical user-defined specifications. By combining HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.
Evolutionary algorithm based on schemata theory
Institute of Scientific and Technical Information of China (English)
Takashi MARUYAMA; Eisuke KITA
2009-01-01
The stochastic schemata exploiter (SSE), one of the evolutionary algorithms based on schemata theory, was presented by Aizawa. The convergence speed of SSE is much faster than that of the simple genetic algorithm, at some sacrifice of global search performance. This paper describes an improved SSE algorithm named cross-generational elitist selection SSE (cSSE). In cSSE, cross-generational elitist selection enhances the diversity of individuals in the population and therefore improves global search performance. In the numerical examples, cSSE is compared with a genetic algorithm with minimum generation gap (MGG), the Bayesian optimization algorithm (BOA), and SSE. The results show that cSSE has fast convergence and good global search performance.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
Lee Chien-Cheng; Huang Shin-Sheng; Shih Cheng-Yuan
2010-01-01
This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RD...
Discovering simple DNA sequences by the algorithmic significance method.
Milosavljević, A; Jurka, J
1993-08-01
A new method, 'algorithmic significance', is proposed as a tool for the discovery of patterns in DNA sequences. The main idea is that patterns can be discovered by finding ways to encode the observed data concisely; in this sense, the method can be viewed as a formal version of the Occam's Razor principle. In this paper the method is applied to discover significantly simple DNA sequences. We define DNA sequences to be simple if they contain repeated occurrences of certain 'words' and thus can be encoded in a small number of bits; this definition includes minisatellites and microsatellites. A standard dynamic programming algorithm for data compression is applied to compute the minimal encoding lengths of sequences in linear time. An electronic mail server for the identification of simple sequences based on the proposed method has been installed at the Internet address pythia@anl.gov. PMID:8402207
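The dynamic-programming idea (encode each position either as a literal base or as a reference to a repeated word, keeping the cheaper option) can be sketched as follows. The specific code lengths are illustrative assumptions, not the paper's coding scheme, and this naive scan is not the paper's linear-time algorithm:

```python
import math

def min_encoding_bits(seq, words):
    # Hypothetical code lengths: 2 bits per literal base, and
    # ceil(log2(#words)) + 2 bits per dictionary-word reference
    word_bits = math.ceil(math.log2(len(words))) + 2 if words else None
    n = len(seq)
    dp = [0] + [math.inf] * n            # dp[i] = min bits for seq[:i]
    for i in range(1, n + 1):
        dp[i] = dp[i - 1] + 2            # encode one literal base
        for w in words:                  # or end with a dictionary word
            if i >= len(w) and seq[i - len(w):i] == w:
                dp[i] = min(dp[i], dp[i - len(w)] + word_bits)
    return dp[n]
```

A sequence is then "algorithmically significant" when its minimal encoding is much shorter than the naive 2 bits per base, e.g. a microsatellite like "ACACACAC" compresses to half its naive length under this toy scheme.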
Alternative Method for Solving Traveling Salesman Problem by Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
Zuzana Čičková
2008-06-01
This article describes the application of the Self-Organizing Migrating Algorithm (SOMA) to the well-known Traveling Salesman Problem (TSP). SOMA is a relatively new optimization method based on evolutionary algorithms, originally focused on solving non-linear programming problems with continuous variables. Because of its computational complexity, the TSP serves as a model problem in many branches of operations research; the use of an evolutionary algorithm therefore requires special approaches to guarantee feasibility of solutions. Two concrete TSP examples, with 8 cities and 25 cities, are given to demonstrate the practical use of SOMA. First, a penalty approach is applied as a simple way to guarantee feasibility of solutions; then a new approach that works only on feasible solutions is presented.
Paduszyński, Kamil
2016-08-22
The aim of the paper is to address the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs), a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data of the models so that they can be used to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation problems of benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be the method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit improved accuracy compared to the state-of-the-art model, namely the temperature-dependent group contribution linear solvation energy relationship published in 2011 [J. Chem
Institute of Scientific and Technical Information of China (English)
苏义鑫; 向炉阳; 张丹红
2012-01-01
To improve the efficiency of a photovoltaic system, it is necessary to track the maximum power point of the solar cell array rapidly and accurately. A simulation model of photovoltaic cell arrays was established according to the electrical characteristics of the solar cell array; the model can simulate the output characteristics of the array under different sunshine and temperature conditions. To address the shortcomings of the duty-ratio perturbation method, a duty-ratio perturbation based on fuzzy-PI control was proposed for maximum power point tracking (MPPT) and simulated in the Matlab environment. Simulation results show that, compared with plain duty-ratio perturbation, the fuzzy-PI controlled method tracks the maximum power point of the photovoltaic cells rapidly when the external environment changes, effectively improves the tracking accuracy, and has good static and dynamic performance.
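For reference, the plain duty-ratio perturbation (perturb-and-observe) baseline that the fuzzy-PI scheme improves on can be sketched as follows; the fixed step size is exactly the limitation the fuzzy-PI controller addresses, and all names and values are illustrative:

```python
def perturb_and_observe(measure_power, d0=0.5, step=0.01, n_steps=50):
    # Classic duty-ratio perturbation (P&O): keep moving the duty cycle
    # in the direction that increased the measured PV power, and
    # reverse direction whenever the power drops
    d, p_prev, direction = d0, measure_power(d0), 1
    for _ in range(n_steps):
        d = min(max(d + direction * step, 0.0), 1.0)
        p = measure_power(d)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return d
```

With a fixed step the duty cycle oscillates around the maximum power point by roughly one step size at steady state; the fuzzy-PI variant in the abstract adapts the step to reduce this oscillation and speed up tracking.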
Fisher, Jason C.
2013-01-01
Long-term groundwater monitoring networks can provide essential information for the planning and management of water resources. Budget constraints in water resource management agencies often mean a reduction in the number of observation wells included in a monitoring network. A network design tool, distributed as an R package, was developed to determine which wells to exclude from a monitoring network because they add little or no beneficial information. A kriging-based genetic algorithm method was used to optimize the monitoring network. The algorithm was used to find the set of wells whose removal leads to the smallest increase in the weighted sum of the (1) mean standard error at all nodes in the kriging grid where the water table is estimated, (2) root-mean-squared-error between the measured and estimated water-level elevation at the removed sites, (3) mean standard deviation of measurements across time at the removed sites, and (4) mean measurement error of wells in the reduced network. The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The network design tool was applied to optimize two observation well networks monitoring the water table of the eastern Snake River Plain aquifer, Idaho; these networks include the 2008 Federal-State Cooperative water-level monitoring network (Co-op network) with 166 observation wells, and the 2008 U.S. Geological Survey-Idaho National Laboratory water-level monitoring network (USGS-INL network) with 171 wells. Each water-level monitoring network was optimized five times: by removing (1) 10, (2) 20, (3) 40, (4) 60, and (5) 80 observation wells from the original network. An examination of the trade-offs associated with changes in the number of wells to remove indicates that 20 wells can be removed from the Co-op network with a relatively small degradation of the estimated water table map, and 40 wells
Chaos-Based Multipurpose Image Watermarking Algorithm
Institute of Scientific and Technical Information of China (English)
ZHU Congxu; LIAO Xuefeng; LI Zhihua
2006-01-01
To achieve the goals of image content authentication and copyright protection simultaneously, this paper presents a novel dual image watermarking method based on a chaotic map. First, the host image is split into many non-overlapping small blocks, and the block-wise discrete cosine transform (DCT) is computed. Second, the robust watermarks, shuffled by the chaotic sequences, are embedded in the DC coefficients of the blocks for copyright protection, while the semi-fragile watermarks, generated by the chaotic map, are embedded in the AC coefficients of the blocks for image authentication. Both can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
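The DC-coefficient embedding step can be sketched with quantization index modulation (QIM), a common way to hide a robust bit in a DC value. The paper does not specify its embedding rule, so the quantization step q and the parity convention below are illustrative assumptions, not the authors' method:

```python
def embed_bit(dc, bit, q=16.0):
    """Embed one bit in a DC coefficient by quantization index modulation:
    even multiples of q carry a 0, odd multiples carry a 1 (assumed rule)."""
    k = round(dc / q)
    if k % 2 != bit:
        k += 1
    return k * q

def extract_bit(dc, q=16.0):
    """Recover the bit from the parity of the nearest multiple of q."""
    return int(round(dc / q)) % 2
```

Because extraction only needs the coefficient's parity class, the bit survives any perturbation smaller than q/2, which is the sense in which such watermarks are "robust".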
A Modularity Degree Based Heuristic Community Detection Algorithm
Directory of Open Access Journals (Sweden)
Dongming Chen
2014-01-01
Full Text Available A community in a complex network can be seen as a subgroup of densely connected nodes. Discovery of community structures is a basic research problem with uses in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse node partitions in order to optimize a given quality function; these optimization-function-based methods share the drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm achieves accuracy competitive with previous modularity optimization methods at lower computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally alleviates the resolution limit in community detection.
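Quality functions of this kind are typically variants of Newman modularity. As a reference point, a minimal computation of modularity Q for a given partition (plain edge-list representation; this is the measure, not the MDBH algorithm itself) might look like:

```python
def modularity(edges, community):
    """Newman modularity Q for an undirected graph given as an edge list.
    Q = (fraction of edges inside communities)
      - (expected fraction under the configuration model)."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # fraction of edges whose endpoints share a community
    inside = sum(1.0 for u, v in edges if community[u] == community[v])
    q = inside / m
    # subtract (d_c / 2m)^2 for each community's total degree d_c
    comm_deg = {}
    for node, d in deg.items():
        c = community[node]
        comm_deg[c] = comm_deg.get(c, 0) + d
    for d in comm_deg.values():
        q -= (d / (2.0 * m)) ** 2
    return q
```

For two triangles joined by a single bridge edge, partitioning by triangle gives Q = 6/7 - 1/2 ≈ 0.357, a typical "good partition" value.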
Fingerprint Image Segmentation Algorithm Based on Contourlet Transform Technology
Directory of Open Access Journals (Sweden)
Guanghua Zhang
2016-09-01
Full Text Available This paper briefly introduces two classic algorithms for fingerprint image processing: the wavelet-domain soft-threshold denoising algorithm and the fingerprint image enhancement algorithm based on the Gabor function. The Contourlet transform has good texture sensitivity and can be used for segmentation of the fingerprint image. The method proposed in this paper obtains the final segmented fingerprint image by applying modified denoising to the high-frequency coefficients after Contourlet decomposition, highlighting the fingerprint ridge lines through modulus maxima detection, and finally connecting broken ridge lines with a directional value filter. It captures richer directional information than methods based on the wavelet transform and the Gabor function, makes the positioning of minutiae more accurate, and yields more coherent ridge lines. Experiments show that this algorithm is clearly superior in fingerprint feature detection.
A novel bit-quad-based Euler number computing algorithm.
Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao
2015-01-01
The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and an analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by using the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
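The bit-quad idea goes back to Gray's method, which derives the Euler number purely from counts of 2x2 patterns: E4 = (Q1 - Q3 + 2*Qd)/4 and E8 = (Q1 - Q3 - 2*Qd)/4, where Q1, Q3, Qd count quads with one foreground pixel, three foreground pixels, and a diagonal pair. A straightforward (unoptimized) sketch, without the paper's 1.75-pixels-per-quad optimization, is:

```python
def euler_number(img, connectivity=8):
    """Euler number of a binary image (list of 0/1 rows) by counting
    2x2 bit-quad patterns (Gray's method)."""
    rows, cols = len(img), len(img[0])
    # zero-pad so every foreground pixel is covered by four quads
    padded = [[0] * (cols + 2)]
    for r in img:
        padded.append([0] + list(r) + [0])
    padded.append([0] * (cols + 2))
    q1 = q3 = qd = 0
    for i in range(rows + 1):
        for j in range(cols + 1):
            a, b = padded[i][j], padded[i][j + 1]
            c, d = padded[i + 1][j], padded[i + 1][j + 1]
            s = a + b + c + d
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and a == d:   # both diagonal configurations
                qd += 1
    if connectivity == 8:
        return (q1 - q3 - 2 * qd) // 4
    return (q1 - q3 + 2 * qd) // 4    # 4-connectivity
```

A diagonal pixel pair illustrates the connectivity difference: one 8-connected component (E8 = 1) but two 4-connected components (E4 = 2), while a 3x3 ring with a hole gives E = 0 under both.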
Duality based optical flow algorithms with applications
DEFF Research Database (Denmark)
Rakêt, Lars Lau
We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D
Cuckoo search algorithm based on conjugate gradient method
Institute of Scientific and Technical Information of China (English)
杜利敏; 阮奇; 冯登科
2013-01-01
The cuckoo search (CS) algorithm is a novel stochastic global optimization algorithm based on swarm intelligence, with the advantages of few control parameters, good search paths and strong global search capability, but it suffers from weak local search ability, slow convergence and limited convergence accuracy. To overcome these disadvantages, an improved cuckoo search algorithm based on the conjugate gradient method (CGCS) is proposed. After evolving through Lévy flights and the elimination mechanism, the cuckoo population descends rapidly along mutually conjugate directions, so that the convergence of the algorithm is strengthened significantly while the strong global search capability of CS is maintained. The CGCS and CS algorithms were tested on four typical benchmark functions. The results indicate that CGCS has faster convergence, higher convergence accuracy and more stable optimization results than CS. CGCS combines strong global search capability, convergence ability and robustness, and is particularly suitable for the optimization of multimodal and high-dimensional functions.
Efficient Satellite Scheduling Based on Improved Vector Evaluated Genetic Algorithm
Directory of Open Access Journals (Sweden)
Tengyue Mao
2012-03-01
Full Text Available Satellite scheduling is a typical multi-peak, multi-valley, nonlinear multi-objective optimization problem, and implementing it effectively is a crucial research topic in the space field. This paper discusses the performance of the Vector Evaluated Genetic Algorithm (VEGA), covering its basic principles, implementation and test functions, and then improves VEGA by introducing vector coding, new crossover and mutation operators, and new methods for assigning fitness and preserving good individuals. As a result, the diversity and convergence of the improved VEGA are significantly enhanced, and the algorithm is applied to Earth-Mars orbit optimization. The paper also analyzes the results of the improved VEGA; the performance analysis and evaluation show that although VEGA has had a profound impact on multi-objective evolutionary research, Pareto-based multi-objective evolutionary algorithms appear to be a more effective way to obtain non-dominated solutions from the perspective of the diversity and convergence of the experimental results. Finally, the improved vector evaluated algorithm was implemented for satellite scheduling in the Visual C++ integrated development environment.
Method for gesture based modeling
DEFF Research Database (Denmark)
2006-01-01
A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element.
Network Intrusion Detection based on GMKL Algorithm
Directory of Open Access Journals (Sweden)
Li Yuxiang
2013-06-01
Full Text Available According to the 31st statistical report of the China Internet Network Information Center (CNNIC), by the end of December 2012 the number of Chinese netizens had reached 564 million, and the number of mobile Internet users had reached 420 million. But while the network brings great convenience to people's lives, it also brings serious threats. By collecting and analyzing information in a computer system or network, we can detect behaviors that may damage the availability, integrity and confidentiality of computer resources and treat them in a timely fashion, which is of real significance for improving the operating environment of networks and network services. At present, neural networks, Support Vector Machines (SVM), Hidden Markov Models, fuzzy inference and genetic algorithms have been introduced into network intrusion detection research in an attempt to build a healthy and secure network operating environment. But most of these algorithms are based on the total sample and assume that the number of samples is infinite. In the network intrusion field the collected data often cannot meet these requirements; they typically exhibit high dimensionality, variability and small-sample characteristics, and traditional machine learning methods struggle to obtain ideal results on such data. In view of this, this paper applies a Generalized Multi-Kernel Learning (GMKL) method to network intrusion detection. Generalized Multi-Kernel Learning scales well to large sample sets, complex dimensionality and large amounts of heterogeneous information. The experimental results show that applying GMKL to network attack detection achieves high classification precision and a low false-alarm rate.
Methods in Logic Based Control
DEFF Research Database (Denmark)
Christensen, Georg Kronborg
1999-01-01
Design and theory of Logic Based Control systems: Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL and PLC- and Soft_PLC implementation. PLC
Fast parallel algorithm for slicing STL based on pipeline
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that both are significant factors for the speedup ratio: speedup versus thread count follows a positive relationship that agrees well with Amdahl's law, and speedup versus layer count follows a positive relationship agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study shows the strong performance of the new parallel algorithm: compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process, and compared with the data-parallel slicing algorithm it achieves a much higher speedup ratio and efficiency.
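The pipeline parallel mode described above can be sketched as one thread per stage connected by FIFO queues; the stage functions below are placeholders, not the paper's actual slicing stages:

```python
import threading
import queue

def pipeline_stage(fn, q_in, q_out):
    """Worker: pull items, apply this stage's function, pass results on."""
    while True:
        item = q_in.get()
        if item is None:          # poison pill: forward it and shut down
            q_out.put(None)
            break
        q_out.put(fn(item))

def run_pipeline(items, stages):
    """Run `stages` (a list of functions) as a thread-per-stage pipeline."""
    queues = [queue.Queue() for _ in range(len(stages) + 1)]
    threads = [threading.Thread(target=pipeline_stage,
                                args=(fn, queues[i], queues[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:            # feed the first stage
        queues[0].put(item)
    queues[0].put(None)
    results = []
    while (r := queues[-1].get()) is not None:
        results.append(r)
    for t in threads:
        t.join()
    return results
```

Because each stage is a single thread reading from a FIFO queue, item order is preserved end to end; throughput is bounded by the slowest stage, which is the pipeline-parallel analogue of Amdahl's serial fraction.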
A new method for constructing total energy conservation algorithms
Institute of Scientific and Technical Information of China (English)
(author not listed)
2003-01-01
Based on a basic principle of research on numerical methods, which requires that the properties of the original problem be preserved as much as possible under discretization, a new method for constructing total energy conservation algorithms is presented. With this method, all kinds of implicit schemes with energy conservation laws, including many classical conservation schemes, can be constructed from a kind of special function. A concrete criterion for judging total energy conservation schemes is also given. Numerical tests show that these new schemes are effective.
Wavelet Image Encryption Algorithm Based on AES
Institute of Scientific and Technical Information of China (English)
(author not listed)
2002-01-01
Traditional encryption techniques have limits for multimedia information, especially images and video, which they treat as ordinary data. In this paper, we propose a wavelet-based image encryption algorithm built on the Advanced Encryption Standard, which encrypts only the low-frequency coefficients of the image's wavelet decomposition. The experimental results are satisfactory.
Genetic Algorithm based PID controller for Frequency Regulation Ancillary services
Directory of Open Access Journals (Sweden)
Sandeep Bhongade
2010-12-01
Full Text Available In this paper, the parameters of a Proportional, Integral and Derivative (PID) controller for Automatic Generation Control (AGC), suitable for a restructured power system, are tuned according to Genetic Algorithm (GA) based performance indices. The key idea of the proposed method is to use a fitness function based on the Area Control Error (ACE). The functioning of the proposed Genetic Algorithm based PID (GAPID) controller has been demonstrated on a 75-bus Indian power system network, and the results have been compared with those obtained by the Least Square Minimization method.
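A minimal sketch of GA-based PID tuning with an ACE-like integral-of-absolute-error fitness follows; the first-order plant, gain ranges and GA settings are toy assumptions standing in for the 75-bus system model:

```python
import random

def ace_cost(gains, steps=200, dt=0.05):
    """ACE-like integral-of-|error| cost for PID gains on a toy
    first-order plant (a stand-in for the real AGC system model)."""
    kp, ki, kd = gains
    y = integ = prev_e = cost = 0.0
    for _ in range(steps):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)               # plant: dy/dt = -y + u
        if abs(y) > 1e6:                 # unstable gains: large penalty
            return 1e6
        cost += abs(e) * dt
    return cost

def ga_tune(pop_size=20, gens=30, seed=1):
    """Elitist GA over (kp, ki, kd): averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 5.0) for _ in range(3)] for _ in range(pop_size)]
    pop[0] = [1.0, 0.5, 0.0]             # seed one conventional hand guess
    for _ in range(gens):
        pop.sort(key=ace_cost)
        elite = pop[:pop_size // 2]      # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 + rng.gauss(0.0, 0.2) for x, y in zip(a, b)]
            children.append([max(0.0, g) for g in child])
        pop = elite + children
    return min(pop, key=ace_cost)
```

Elitism guarantees the returned gains are never worse than the seeded hand guess; in the real paper the fitness is built from ACE measurements of the power network rather than this toy plant.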
Community Detection Algorithm Based on Cross-Entropy Method
Institute of Scientific and Technical Information of China (English)
于海; 赵玉丽; 崔坤; 朱志良
2015-01-01
Community detection is a significant research topic in complex network theory, with applications in searching for and discovering community structures. In this paper, the concept of cross-entropy from the field of signal processing is introduced and a community detection algorithm based on cross-entropy is proposed. The algorithm uses modularity as the quality function and employs the importance sampling of the cross-entropy method to speed up convergence, improving the efficiency and the accuracy of community detection at the same time. Compared with the Girvan-Newman algorithm on computer-generated networks, the proposed algorithm achieves a higher NMI and a higher proportion of correctly assigned nodes. Simulation results on real-world networks further show that the proposed algorithm attains higher modularity than the Girvan-Newman algorithm and no less than the extremal optimization algorithm, verifying that it is more accurate than both.
Scalable force directed graph layout algorithms using fast multipole methods
Yunis, Enas
2012-06-01
We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|^2 + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
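For contrast with the FMM-accelerated version, the direct O(|V|^2 + |E|) update that the library replaces looks like the following 2D sketch (the force constants are assumed values, not ExaFMM parameters):

```python
def layout_step(pos, edges, dt=0.01, k_rep=1.0, k_spring=0.1):
    """One direct step of force-directed layout: all-pairs Coulomb-style
    repulsion between vertices plus linear springs along edges. The FMM
    replaces the O(|V|^2) repulsion loop with an O(|V|) far-field sum."""
    n = len(pos)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):                      # all-pairs repulsion
        for j in range(i + 1, n):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d2 = dx * dx + dy * dy + 1e-9   # avoid division by zero
            f = k_rep / d2
            forces[i][0] += f * dx; forces[i][1] += f * dy
            forces[j][0] -= f * dx; forces[j][1] -= f * dy
    for u, v in edges:                      # springs pull endpoints together
        dx = pos[v][0] - pos[u][0]
        dy = pos[v][1] - pos[u][1]
        forces[u][0] += k_spring * dx; forces[u][1] += k_spring * dy
        forces[v][0] -= k_spring * dx; forces[v][1] -= k_spring * dy
    return [[p[0] + dt * f[0], p[1] + dt * f[1]]
            for p, f in zip(pos, forces)]
```

Since every repulsive interaction is applied symmetrically, the net force over all vertices is zero, so the layout's centroid stays fixed between steps.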
Directory of Open Access Journals (Sweden)
Xiangwei Guo
2016-02-01
Full Text Available An estimation of the power battery state of charge (SOC) is related to the energy management, the battery cycle life and the usage cost of electric vehicles. When a lithium-ion power battery is used in an electric vehicle, the SOC displays a very strong time-dependent nonlinearity under the influence of random factors such as the working conditions and the environment. Hence, research on estimating the SOC of a power battery for an electric vehicle is of great theoretical significance and application value. In this paper, according to the dynamic response of the power battery terminal voltage during a discharging process, a second-order RC circuit is first used as the equivalent model of the power battery. On the basis of this model, the least squares method (LS) with a forgetting factor and the adaptive unscented Kalman filter (AUKF) algorithm are then used jointly to estimate the power battery SOC. Simulation experiments show that the joint estimation algorithm proposed in this paper achieves higher precision and better convergence from an initial value error than a single AUKF algorithm.
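The least-squares-with-forgetting-factor half of the joint estimator can be illustrated with recursive least squares (RLS) on a two-parameter linear model; the second-order RC battery model and the AUKF step are omitted, so the straight-line regressor below is a simplified stand-in for the battery parameter fit:

```python
def rls_forgetting(samples, lam=0.98):
    """Recursive least squares with forgetting factor lam for y = a*x + b.
    theta = [a, b]; P is the 2x2 inverse-correlation matrix."""
    theta = [0.0, 0.0]
    P = [[1000.0, 0.0], [0.0, 1000.0]]    # large P0: low initial confidence
    for x, y in samples:
        phi = [x, 1.0]                    # regressor vector
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]        # gain K = P*phi/denom
        err = y - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        phiP = [phi[0] * P[0][0] + phi[1] * P[1][0],  # row vector phi^T * P
                phi[0] * P[0][1] + phi[1] * P[1][1]]
        P = [[(P[r][c] - K[r] * phiP[c]) / lam for c in range(2)]
             for r in range(2)]
    return theta
```

Dividing P by lam < 1 at each step keeps the filter from "falling asleep": old samples are discounted geometrically, which is what lets the estimator track slowly drifting battery parameters.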
Maximum Power Point Tracking Method Based on Differential Evolution Algorithm
Institute of Scientific and Technical Information of China (English)
高建超; 程若发
2016-01-01
Given the nonlinear and time-varying nature of the photovoltaic cell P-V characteristic curve, this paper presents a maximum power point tracking (MPPT) algorithm based on the standard differential evolution (DE) algorithm, and verifies its effectiveness in tracking the maximum power through MATLAB simulation. An adaptive differential evolution algorithm (DDE), improved on the basis of standard DE, is also compared with particle swarm optimization (PSO) and standard DE; extensive simulation experiments show that it converges considerably faster while maintaining the same tracking accuracy.
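A minimal DE-based MPPT sketch on an assumed toy P-V curve (rand/1 mutation, greedy selection) might look like the following; the curve shape, population size and DE constants are illustrative, not taken from the paper:

```python
import random

def pv_power(v):
    """Toy P-V curve (assumed, for illustration): current collapses
    sharply near an open-circuit voltage Voc = 20 V."""
    if v < 0.0 or v > 20.0:
        return 0.0
    return v * 5.0 * (1.0 - (v / 20.0) ** 10)

def de_mppt(pop_size=15, gens=40, F=0.5, CR=0.9, seed=7):
    """Differential evolution search for the voltage maximizing pv_power."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 20.0) for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = a + F * (b - c)                 # rand/1 mutation
            if rng.random() > CR:                   # crossover (1-D case)
                trial = pop[i]
            if pv_power(trial) > pv_power(pop[i]):  # greedy selection
                pop[i] = trial
    return max(pop, key=pv_power)
```

Out-of-range trial voltages score zero power and are rejected by the greedy selection, which acts as a simple bound-handling rule.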
SIMULATED ANNEALING BASED POLYNOMIAL TIME QOS ROUTING ALGORITHM FOR MANETS
Institute of Scientific and Technical Information of China (English)
Liu Lianggui; Feng Guangzeng
2006-01-01
Multi-constrained Quality-of-Service (QoS) routing is a big challenge for Mobile Ad hoc Networks (MANETs), where the topology may change constantly. In this paper a novel QoS Routing Algorithm based on Simulated Annealing (SA_RA) is proposed. This algorithm first uses an energy function to translate multiple QoS weights into a single mixed metric and then seeks a feasible path by simulated annealing. The paper outlines the simulated annealing algorithm and analyzes the problems met when applying it to QoS Routing (QoSR) in MANETs. Theoretical analysis and experimental results demonstrate that the proposed method is an effective approximation algorithm, showing better performance than other pertinent algorithms in seeking the (approximately) optimal configuration within polynomial time.
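The two ingredients, an energy function mixing QoS weights into one metric and Metropolis acceptance, can be sketched on a toy topology. Enumerating all simple paths below replaces the paper's neighborhood construction and only works for small graphs; the weights and cooling schedule are assumptions:

```python
import math
import random

def all_paths(adj, src, dst, path=None):
    """Enumerate simple src->dst paths by DFS (fine for a tiny topology)."""
    path = path or [src]
    if path[-1] == dst:
        yield list(path)
        return
    for nxt in adj[path[-1]]:
        if nxt not in path:
            yield from all_paths(adj, src, dst, path + [nxt])

def energy(path, metrics, w_delay=0.6, w_cost=0.4):
    """Mix multiple QoS weights (here delay and cost) into a single metric."""
    d = sum(metrics[(u, v)][0] for u, v in zip(path, path[1:]))
    c = sum(metrics[(u, v)][1] for u, v in zip(path, path[1:]))
    return w_delay * d + w_cost * c

def sa_route(adj, metrics, src, dst, iters=500, t0=10.0, alpha=0.99, seed=3):
    """Anneal over candidate paths with Metropolis acceptance."""
    rng = random.Random(seed)
    paths = list(all_paths(adj, src, dst))
    cur = rng.choice(paths)
    best, t = cur, t0
    for _ in range(iters):
        cand = rng.choice(paths)            # simplified neighborhood move
        dE = energy(cand, metrics) - energy(cur, metrics)
        if dE < 0 or rng.random() < math.exp(-dE / t):
            cur = cand                       # accept downhill, or uphill w.p.
        if energy(cur, metrics) < energy(best, metrics):
            best = cur
        t *= alpha                           # geometric cooling
    return best
```

On a diamond graph where one route is low-delay/low-cost and the other is not, the annealer settles on the route with the lower mixed metric.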
Distribution network planning algorithm based on Hopfield neural network
Institute of Scientific and Technical Information of China (English)
GAO Wei-xin; LUO Xian-jue
2005-01-01
This paper presents a new algorithm based on the Hopfield neural network to find the optimal solution for an electric distribution network. The algorithm transforms the distribution network planning problem into a directed graph planning problem. The Hopfield neural network is designed to decide the in-degree of each node and is used in combination with an energy function. The new algorithm doesn't need to code city streets or normalize data, so the program is easier to implement. A case study applying the method to a district of 29 streets showed that an optimal solution for the planning of such a power system could be obtained in only 26 iterations. The energy function and algorithm developed in this work have the following advantages over many existing algorithms for electric distribution network planning: fast convergence, and no need to code all possible lines.
Institute of Scientific and Technical Information of China (English)
秦岭; 阚树林
2013-01-01
To address the dynamic and complex nature of shop scheduling in manufacturing systems, this paper constructs a fitness function for the particle swarm algorithm and an action strategy for multi-agent shop scheduling by combining particle swarm optimization with multi-agent cooperative optimization, and proposes a particle swarm improved algorithm based on multi-agent (PSIMA), together with its overall workflow. Finally, a concrete example, simulated and validated with QUEST software, shows that the proposed method clearly improves on the traditional particle swarm algorithm in computation time and computational complexity, providing a new method for solving shop scheduling problems in manufacturing systems.
Genetic algorithm based separation cascade optimization
International Nuclear Information System (INIS)
The conventional separation cascade design procedure does not give an optimum design because of squaring-off and the variation of the flow rates and separation factor of the element with respect to stage location. Multi-component isotope separation further complicates the design procedure. Cascade design can be stated as a constrained multi-objective optimization. The cascade's expectation from the separating element is multi-objective, i.e., overall separation factor, cut, optimum feed and separative power. The decision maker may aspire to more comprehensive multi-objective goals where optimization of the cascade is coupled with exploration of the separating element's optimization vector space. In real life there are many issues which make it important to understand the decision maker's perception of the cost-quality-speed trade-off and the consistency of preferences. The genetic algorithm (GA) is one such evolutionary technique that can be used for cascade design optimization. This paper addresses various issues involved in GA based multi-objective optimization of the separation cascade. A reference point based optimization methodology with a GA based Pareto optimality concept for the separation cascade was found pragmatic and promising. This method should be explored, tested, examined and further developed for binary as well as multi-component separations. (author)
Ant colony algorithm based on genetic method for continuous optimization problem%基于遗传机制的蚁群算法求解连续优化问题
Institute of Scientific and Technical Information of China (English)
朱经纬; 蒙陪生; 王乘
2007-01-01
A new algorithm is presented by using an ant colony algorithm based on a genetic method (ACG) to solve the continuous optimization problem. Each component has a seed set, and each seed in the set carries the value of the component, trail information and fitness. An ant chooses a seed from the seed set with a probability determined by the trail information and fitness of the seed. The genetic method is used to form new solutions from the solutions obtained by the ants, and the best solutions are selected to update the seeds in the sets and their trail information. In updating the trail information, a diffusion function is used to achieve diffusion of the trail information. The new algorithm is tested on 8 different benchmark functions.
Image fusion based on expectation maximization algorithm and steerable pyramid
Institute of Scientific and Technical Information of China (English)
Gang Liu(刘刚); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Jianxun Li(李建勋); Zhenhua Li(李振华); Henry Leung
2004-01-01
In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and steerable pyramid is proposed. The registered images are first decomposed by using the steerable pyramid. The EM algorithm is used to fuse the image components in the low frequency band. The selection method involving the informative importance measure is applied to those in the high frequency band. The final fused image is then computed by taking the inverse transform on the composite coefficient representations. Experimental results show that the proposed method outperforms conventional image fusion methods.
Secure OFDM communications based on hashing algorithms
Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele
2007-10-01
In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.
A Novel Image Encryption Algorithm Based on DNA Subsequence Operation
Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng
2012-01-01
We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply uses the idea of DNA subsequence operations (such as elongation, truncation, and deletion) combined with the logistic chaotic map to scramble the locations and values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a wide secret-key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks. PMID:23093912
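A much-simplified sketch of the idea, 2-bit DNA encoding plus a logistic-map-driven permutation (the permutation stands in for the paper's full set of subsequence operations), might look like:

```python
BASES = "ACGT"   # 2-bit DNA encoding: A=00, C=01, G=10, T=11

def dna_encode(byte_vals):
    """Encode each byte as four DNA bases (2 bits per base)."""
    seq = []
    for b in byte_vals:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(b >> shift) & 3])
    return seq

def dna_decode(seq):
    """Inverse of dna_encode: four bases back to one byte."""
    out = []
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return out

def logistic_perm(n, x0=0.41, mu=3.97):
    """Key-dependent permutation: rank n logistic-map iterates.
    (x0, mu) play the role of the secret key."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

def encrypt(pixels, x0=0.41):
    seq = dna_encode(pixels)
    perm = logistic_perm(len(seq), x0)
    return [seq[p] for p in perm]

def decrypt(cipher, x0=0.41):
    perm = logistic_perm(len(cipher), x0)
    seq = [None] * len(cipher)
    for i, p in enumerate(perm):
        seq[p] = cipher[i]
    return dna_decode(seq)
```

The key sensitivity claimed in the abstract comes from the chaotic map: a tiny change in x0 yields a completely different permutation after a few iterations, so decryption with a near-miss key fails.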
A motion retargeting algorithm based on model simplification
Institute of Scientific and Technical Information of China (English)
(author not listed)
2005-01-01
A new motion retargeting algorithm is presented, which adapts motion capture data to a new character. To make the resulting motion realistic, a physically-based optimization method is adopted. However, the optimization process has difficulty converging to the optimal value because of the high complexity of the physical human model. To address this problem, an appropriate simplified model, automatically determined by a motion analysis technique, is utilized, and motion retargeting is implemented with this simplified model as an intermediate agent. The entire motion retargeting algorithm involves three steps of nonlinearly constrained optimization: forward retargeting, motion scaling and inverse retargeting. Experimental results show the validity of this algorithm.
Knowledge Automatic Indexing Based on Concept Lexicon and Segmentation Algorithm
Institute of Scientific and Technical Information of China (English)
WANG Lan-cheng; JIANG Dan; LE Jia-jin
2005-01-01
This paper builds on two existing theories of automatic indexing of thematic knowledge concepts. A prohibit-word (stop-word) table with position information has been designed, and an improved Maximum Matching-Minimum Backtracking method has been researched. Moreover, an improved indexing algorithm and its application technology, based on rules and a thematic concept word table, are studied.
An Improved Convexity Based Segmentation Algorithm for Heavily Camouflaged Images
Directory of Open Access Journals (Sweden)
Amarjot Singh
2013-03-01
Full Text Available The paper proposes an advanced convexity-based segmentation algorithm for heavily camouflaged images. The convexity of the intensity function is used to detect camouflaged objects in complex environments. We take advantage of an operator for detecting 3D concave or convex gray levels to demonstrate the effectiveness of convexity-based camouflage breaking. The biological motivation behind the operator and its high robustness make it suitable for camouflage breaking. The traditional convexity-based algorithm identifies the desired targets but also identifies sub-targets due to their three-dimensional behavior. This problem is overcome by combining the conventional algorithm with thresholding. The proposed method eliminates the sub-targets, leaving behind only the target of interest in the input image. For performance evaluation, the proposed method is compared with the conventional operator as well as with some conventional edge-based operators.
Second Attribute Algorithm Based on Tree Expression
Institute of Scientific and Technical Information of China (English)
Su-Qing Han; Jue Wang
2006-01-01
One view of finding a personalized solution of reduct in an information system is grounded on the viewpoint that an attribute order can serve as a kind of semantic representation of user requirements. Thus the problem of finding personalized solutions can be transformed into computing the reduct on an attribute order. The second attribute theorem describes the relationship between the set of attribute orders and the set of reducts, and can be used to transform the problem of searching for solutions that meet user requirements into the problem of modifying a reduct based on a given attribute order. An algorithm implied by the second attribute theorem computes on the discernibility matrix; its time complexity is O(n^2 × m), where n is the number of objects and m the number of attributes of an information system. This paper presents another effective second attribute algorithm that facilitates the use of the second attribute theorem, computing on the tree expression of an information system. The time complexity of the new algorithm is linear in n. This algorithm is proved to be equivalent to the algorithm on the discernibility matrix.
New intrusion detection method based on fuzzy kernel clustering algorithm
Institute of Scientific and Technical Information of China (English)
刘永芬; 陈志安
2012-01-01
To address the high cost of manually labeling data and the dimensionality effect of traditional clustering methods, this paper proposes a new fuzzy support vector clustering algorithm for unlabeled data. K-means and the DBSCAN algorithm are combined to generate an association matrix, a threshold on the constraint term yields the initial clustering, and the final result is obtained using fuzzy support vector domain description. A contrast experiment on network connection data shows the feasibility and effectiveness of this method.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
Directory of Open Access Journals (Sweden)
Cheng-Yuan Shih
2010-01-01
Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
Wireless Sensor Network Path Optimization Based on Hybrid Algorithm
Directory of Open Access Journals (Sweden)
Zeyu Sun
2013-09-01
Full Text Available One merit of the genetic algorithm is fast global search, but large quantities of redundant code usually make it inefficient. The ant colony algorithm offers strong adaptability and good robustness, but it tends to stagnate and converges slowly. The proposed path optimization approach for wireless sensor networks, based on an improved ant colony algorithm, first transmits data along the shortest path; if that path becomes congested, congestion information is sent back to the initial position so that subsequent data can choose another reasonable path, avoiding the defects of the traditional method. A genetic ant colony algorithm is proposed to avoid the faults of both algorithms above. The proposed algorithm determines the distribution of pheromones on the path through fast searching and through the selection, crossover, and mutation operators of the genetic algorithm, and then solves problems efficiently through the parallelism, positive feedback, and iteration of the ant colony algorithm. The faults of both algorithms are thereby overcome and the aim of combinatorial optimization is achieved. Finally, the validity and feasibility are demonstrated by a simulation experiment on the traveling salesman problem.
Algorithms for Quantum Branching Programs Based on Fingerprinting
Directory of Open Access Journals (Sweden)
Farid Ablayev
2009-11-01
Full Text Available In the paper we develop a method for constructing quantum algorithms that compute Boolean functions with quantum ordered read-once branching programs (quantum OBDDs). Our method is based on the fingerprinting technique and on representing Boolean functions by their characteristic polynomials. We use circuit notation for branching programs to present the desired algorithms. For several known functions our approach provides optimal QOBDDs; namely, we consider the functions Equality, Palindrome, and Permutation Matrix Test. We also propose a generalization of our method and apply it to the Boolean variant of the Hidden Subgroup Problem.
Analog Group Delay Equalizers Design Based on Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
M. Laipert
2006-04-01
Full Text Available This paper deals with a design method for an analog all-pass filter intended to equalize the group delay frequency response of an analog filter. The method is based on an evolutionary algorithm, the Differential Evolution algorithm in particular. We are able to design equalizers that achieve an equal-ripple group delay frequency response in the pass-band of the low-pass filter. The procedure works automatically without an initial estimate. The method is demonstrated on practical examples.
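Differential Evolution itself is compact enough to sketch. The following is a generic DE/rand/1/bin loop, not the authors' equalizer design code; the objective is left as a caller-supplied function, and all parameter values (`F`, `CR`, population size) are illustrative defaults.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, iters=200, seed=1):
    """Minimize f over box constraints `bounds` with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)          # guaranteed crossover position
            trial = []
            for j in range(dim):
                if j == jr or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            ft = f(trial)
            if ft <= fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

For the filter design task described above, `f` would measure the group-delay ripple of the cascaded all-pass sections over the pass-band.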
Algorithms for Quantum Branching Programs Based on Fingerprinting
Ablayev, Farid; 10.4204/EPTCS.9.1
2009-01-01
In the paper we develop a method for constructing quantum algorithms that compute Boolean functions with quantum ordered read-once branching programs (quantum OBDDs). Our method is based on the fingerprinting technique and on representing Boolean functions by their characteristic polynomials. We use circuit notation for branching programs to present the desired algorithms. For several known functions our approach provides optimal QOBDDs; namely, we consider the functions Equality, Palindrome, and Permutation Matrix Test. We also propose a generalization of our method and apply it to the Boolean variant of the Hidden Subgroup Problem.
QRS Detection Based on an Advanced Multilevel Algorithm
Directory of Open Access Journals (Sweden)
Wissam Jenkal
2016-01-01
Full Text Available This paper presents an advanced multilevel algorithm for QRS complex detection. The method is based on three levels. The first extracts the higher peaks using an adaptive thresholding technique. The second detects the QRS region. The last level detects the Q, R and S waves. The proposed algorithm shows interesting results compared with recently published methods. A perspective of this work is the implementation of the method on an embedded system for real-time ECG monitoring.
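The first level, peak extraction by adaptive thresholding, can be illustrated with a toy detector. This is a hedged sketch, not the published method: the window length and the threshold fraction `k` are invented parameters, and real ECG processing would add band-pass filtering and refractory-period logic.

```python
def detect_peaks(signal, window=8, k=0.6):
    """Return indices of local maxima that exceed an adaptive threshold:
    a fraction k of the running maximum over a sliding window."""
    peaks = []
    n = len(signal)
    for i in range(1, n - 1):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        threshold = k * max(signal[lo:hi])   # threshold adapts to local amplitude
        # >= on the left, strict > on the right: a plateau is counted once at most
        if signal[i] >= signal[i - 1] and signal[i] > signal[i + 1] \
                and signal[i] >= threshold:
            peaks.append(i)
    return peaks
```

Because the threshold tracks the local maximum, a smaller R wave (like the 0.9-amplitude peak in the usage below) is still found even when a larger peak exists elsewhere in the record.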
Web mining based on chaotic social evolutionary programming algorithm
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
Since the K-means clustering algorithm usually ends in a local optimum and can hardly reach the global optimum, a new web clustering method based on the chaotic social evolutionary programming (CSEP) algorithm is presented. In this method, a cognitive agent inherits a clustering paradigm, which enables the cognitive agent to acquire a chaotic mutation operator in the betrayal step. As the experiments prove, this method not only effectively increases web clustering efficiency but also practically improves the precision of web clustering.
Numerical Algorithms Based on Biorthogonal Wavelets
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
An Image Identification Algorithm Based on Digital Signature Method
Institute of Scientific and Technical Information of China (English)
李晓飞; 申铉京; 陈海鹏; 吕颖达
2012-01-01
In view of the question of image authenticity, an effective and robust image identification algorithm based on a digital signature is presented. First, the image is preprocessed to enhance its resistance to JPEG compression. The image is then divided equally into two regions; from each region, the top five bits of the block average values are extracted and mapped to elements of GF(2^5) to construct an RS code. The generated check code, scrambled by an Arnold transform, serves as the digital signature used to authenticate the original image. Experimental results show that the algorithm not only detects whether the image content has been tampered with but also accurately locates the tampered position; it improves the image's robustness to JPEG compression and effectively distinguishes JPEG compression operations from deliberate tampering.
Co-evolution algorithm based on punctuated anytime learning and sampling method
Institute of Scientific and Technical Information of China (English)
肖喜丽
2012-01-01
In co-evolutionary algorithms, evaluating individuals requires selecting representatives and evaluating combinations of individuals and representatives, which demands heavy computation. The cooperative co-evolutionary genetic algorithm has a smaller computational cost, but it can only obtain a greedy solution. The multi-pattern symbiotic evolutionary algorithm overcomes this shortcoming, but its computational cost is too large. Using the punctuated anytime learning method, this paper proposes a punctuated anytime learning co-evolution algorithm in which information is exchanged every N generations. On this basis, a sampling method is applied to the co-evolutionary algorithm. Experimental results and mathematical analysis show that this approach effectively reduces the computational cost.
Relevance Feedback Algorithm Based on Collaborative Filtering in Image Retrieval
Directory of Open Access Journals (Sweden)
Yan Sun
2010-12-01
Full Text Available Content-based image retrieval is a very active field of study, in which improving retrieval speed and accuracy is a central issue. Retrieval performance can be improved by applying relevance feedback, which brings human participation into the retrieval process. However, many existing image retrieval methods do not fully store and reuse relevance feedback information, and their accuracy and flexibility are relatively poor. To address this, collaborative filtering is combined with relevance feedback in this study, and an improved relevance feedback algorithm based on collaborative filtering is proposed. In this method, collaborative filtering is used both to predict the semantic relevance between database images and the retrieval samples, and to analyze feedback log files, so that the historical relevance feedback data can be fully used by the image retrieval system, further improving feedback efficiency. The improved algorithm was tested on a content-based image retrieval database, and its performance was analyzed and compared with existing algorithms. The experimental results show that, compared with traditional feedback algorithms, this method clearly improves the efficiency of relevance feedback and effectively raises the recall and precision of image retrieval.
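The collaborative filtering step, predicting an image's relevance to the current query session from other sessions' feedback logs, can be sketched with a standard similarity-weighted average. The log structure (a dict of per-session image ratings) and the cosine similarity choice are assumptions for illustration, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length rating vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def predict_relevance(logs, session, image):
    """Predict `session`'s relevance score for `image` as a similarity-weighted
    average of other sessions' feedback on that image."""
    num = den = 0.0
    for other, ratings in logs.items():
        if other == session or image not in ratings:
            continue
        shared = [i for i in ratings if i in logs[session]]  # co-rated images
        if not shared:
            continue
        s = cosine([logs[session][i] for i in shared],
                   [ratings[i] for i in shared])
        num += s * ratings[image]
        den += abs(s)
    return num / den if den else 0.0
```

Sessions whose past feedback agrees with the current session contribute more to the prediction, which is how the feedback log's history gets reused across queries.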
Quantum Monte Carlo methods algorithms for lattice models
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
Agent Based Patient Scheduling Using Heuristic Algorithm
Directory of Open Access Journals (Sweden)
Juliet A Murali
2010-01-01
Full Text Available This paper describes an agent-based approach to patient scheduling using experience-based learning (EBL). A heuristic algorithm is also used in the proposed framework. Evaluation of different learning techniques shows that EBL gives the better solution: processing time decreases as experience increases. The heuristic algorithm makes use of EBL in calculating the processing time. The main objective of this patient scheduling system is to reduce patients' waiting time in hospitals and to complete their treatment in the minimum required time. The framework is implemented in JADE. In this approach the patients and resources are represented as patient agents (PA) and resource agents (RA), respectively. Even though a mathematical model gives the optimal solution, its computational complexity increases for large problems, for which heuristic solutions are better. Comparison of the proposed framework with other scheduling rules shows that an agent-based approach to patient scheduling using EBL is better.
Efficient Iris Recognition Algorithm Using Method of Moments
Directory of Open Access Journals (Sweden)
Bimi Jain
2012-10-01
Full Text Available This paper presents an efficient biometric algorithm for iris recognition using the Fast Fourier Transform and moments. A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. The Fast Fourier Transform converts the image from the spatial domain to the frequency domain and also filters noise in the image, giving more precise information. Moments are area descriptors used to characterize the shape and size of the image. The moment values are invariant to the scale and orientation of the object under study, and are also insensitive to rotation and scale transformation. Finally, the Euclidean distance formula is used for image matching. The CASIA database clearly demonstrates an efficient method for biometrics. As per the experimental result, the algorithm achieves a higher Correct Recognition Rate.
Efficient Iris Recognition Algorithm Using Method of Moments
Directory of Open Access Journals (Sweden)
Bimi Jain
2012-09-01
Full Text Available This paper presents an efficient biometric algorithm for iris recognition using the Fast Fourier Transform and moments. A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. The Fast Fourier Transform converts the image from the spatial domain to the frequency domain and also filters noise in the image, giving more precise information. Moments are area descriptors used to characterize the shape and size of the image. The moment values are invariant to the scale and orientation of the object under study, and are also insensitive to rotation and scale transformation. Finally, the Euclidean distance formula is used for image matching. The CASIA database clearly demonstrates an efficient method for biometrics. As per the experimental result, the algorithm achieves a higher Correct Recognition Rate.
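The moment descriptors mentioned in both versions of this abstract are easy to make concrete. The sketch below computes the centroid and a few second-order central moments of a grayscale image in pure Python; the specific moment orders kept are an illustrative choice, and a real iris pipeline would combine such descriptors with FFT features and further normalization.

```python
def central_moments(image):
    """Centroid and second-order central moments of a 2-D grayscale image
    given as a list of rows of intensity values."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00   # intensity centroid
    mu = {}
    for p, q in [(2, 0), (0, 2), (1, 1)]:
        mu[(p, q)] = sum(v * (x - cx) ** p * (y - cy) ** q
                         for y, row in enumerate(image)
                         for x, v in enumerate(row))
    return (cx, cy), mu
```

Central moments are translation-invariant by construction; the scale and rotation invariance claimed in the abstract requires further normalization (e.g. Hu's invariant moments).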
Institute of Scientific and Technical Information of China (English)
ZHENG Yu; CHEN Zhuang-zhuang; LI Ya-juan; DUAN Jian
2009-01-01
A novel automatic alignment algorithm for single-mode fiber-waveguide coupling, based on an improved genetic algorithm, is proposed. The genetic search uses a dynamic crossover operator and an adaptive mutation operator to overcome the premature convergence of the simple genetic algorithm. The improved genetic algorithm is combined with the hill-climbing method and a pattern search algorithm to overcome the low precision of the simple genetic algorithm in the later stage of the search. The simulation results indicate that the improved genetic algorithm can raise the alignment precision and reach a coupling loss of 0.01 dB after the platform moves through about 207 spatial points on average.
Web Based Genetic Algorithm Using Data Mining
Ashiqur Rahman; Asaduzzaman Noman; Md. Ashraful Islam; Al-Amin Gaji
2016-01-01
This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in a web-based education system. A combination of multiple classifiers leads to a significant improvement in classification performance. By weighting the feature vectors using a genetic algorithm, we can optimize the prediction accuracy and obtain a marked improvement over raw classification. It further shows that when the number of features is few; fea...
AN OPTIMIZATION ALGORITHM BASED ON BACTERIA BEHAVIOR
Directory of Open Access Journals (Sweden)
Ricardo Contreras
2014-09-01
Full Text Available Paradigms based on competition have proved useful for solving difficult problems. In this paper we present a new approach to solving hard problems using a collaborative philosophy, which can produce paradigms as interesting as those found in algorithms based on a competitive philosophy. Furthermore, we show that its performance on problems with combinatorial explosion is comparable to that obtained using a classic evolutionary approach.
A Novel Algorithm Based on 3D-MUSIC Algorithm for Localizing Near-Field Source
Institute of Scientific and Technical Information of China (English)
SHAN Zhi-yong; ZHOU Xi-lang; PEN Gen-jiang
2005-01-01
A novel 3D-MUSIC algorithm based on the classical 3D-MUSIC algorithm for locating near-field sources is presented. Under a far-field approximation of the actual near field, two algebraic relations between the location parameters of the actual near-field sources and the corresponding far-field ones are derived. With Fourier transformation and polynomial-rooting methods, the elevation and azimuth of the far field are obtained, the tracking paths can be developed, and the location parameters of the near-field source can be determined; more accurate results can then be estimated using an optimization method. The computer simulation results prove that the algorithm for locating near-field sources is more accurate, effective, and suitable for real-time applications.
A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.
Ali, Ahmed F; Tawhid, Mohamed A
2016-01-01
Cuckoo search is a promising population-based metaheuristic that has been applied to many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines cuckoo search with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts by applying the standard cuckoo search for a number of iterations; the best solution obtained is then passed to the Nelder-Mead algorithm as an intensification process to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of cuckoo search with the deep exploitation of the Nelder-Mead method. We test HCSNM on seven integer programming problems and ten minimax problems, and compare it against eight algorithms for integer programming and seven algorithms for minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time. PMID:27217988
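The structure of HCSNM, a global population phase followed by Nelder-Mead intensification on the best solution, can be sketched as below. This is not the authors' implementation: the cuckoo search's Lévy flights are replaced here with simple greedy Gaussian steps for brevity, the Nelder-Mead routine is a bare-bones textbook version, and all parameter values are illustrative.

```python
import random

def nelder_mead(f, x0, step=0.5, iters=200):
    """Bare-bones Nelder-Mead simplex minimization (reflection, expansion,
    inside contraction, shrink)."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            exp = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all points toward the best vertex
                simplex = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

def hybrid_search(f, bounds, n_nests=15, iters=100, seed=0):
    """Global phase (greedy Gaussian random walk standing in for Levy-flight
    cuckoo moves) followed by Nelder-Mead intensification on the best nest."""
    rng = random.Random(seed)
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
    for _ in range(iters):
        for k in range(n_nests):
            cand = [min(max(x + rng.gauss(0.0, 0.05 * (hi - lo)), lo), hi)
                    for x, (lo, hi) in zip(nests[k], bounds)]
            if f(cand) < f(nests[k]):
                nests[k] = cand
    return nelder_mead(f, min(nests, key=f))
```

The design point the abstract makes is visible here: the population phase only needs to land in the right basin, after which the simplex search supplies the fast local convergence the global method lacks.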
Institute of Scientific and Technical Information of China (English)
牛晓峰; 梁伟; 赵宇宏; 侯华; 穆彦青; 黄志伟; 杨伟明
2011-01-01
A new algorithm based on the projection method with an implicit finite difference technique was established to calculate the velocity fields and pressure. The calculation region can be divided into different regions according to the Reynolds number. In the far-wall region, the thermal melt flow is treated as Newtonian flow; in the near-wall region, it is treated as non-Newtonian flow. The correctness of the new algorithm was verified by a nonparametric statistical method and by experiment. The simulation results show that the new implicit projection algorithm computes faster than the solution algorithm-volume of fluid (SOLA-VOF) method using an explicit difference scheme.
Genetic Algorithm Based Transmission Expansion Planning System
Directory of Open Access Journals (Sweden)
Dike, Damian Obioma
2014-11-01
Full Text Available This paper presents an application of the genetic algorithm (GA) to static transmission expansion planning (STEP): determining the optimal number of transmission circuits required in each network corridor, and their network adequacy, while satisfying various economic and technical constraints. The uncertainties in generation and distribution networks that result from power system deregulation cannot be modeled effectively using conventional mathematical methods. The GA, being a probabilistic approach, is able to solve the transmission expansion planning problem in the face of such uncertainties. The model presented in this work helps identify overloaded lines using the Gauss-Seidel (GS) load flow technique; the GS result is then used as input to the GA simulation to determine the extra lines needed to accommodate the present load flow in the system. The model was tested on the IEEE 14-bus test network. The model and simulation results may be useful for electric power systems undergoing deregulation, as is presently the case in Nigeria, where generation and distribution are increasing significantly without any commensurate boost in the transmission sector, leading to line overloads that necessitate immediate expansion of the grid.
Cloud-based Evolutionary Algorithms: An algorithmic study
Merelo, Juan-J; Mora, Antonio M; Castillo, Pedro; Romero, Gustavo; Laredo, JLJ
2011-01-01
After a proof of concept using Dropbox(tm), a free storage and synchronization service, showed that an evolutionary algorithm using several dissimilar computers connected via WiFi or Ethernet had a good scaling behavior in terms of evaluations per second, it remains to be proved whether that effect also translates to the algorithmic performance of the algorithm. In this paper we will check several different, and difficult, problems, and see what effects the automatic load-balancing and asynchrony have on the speed of resolution of problems.
Genetic Algorithm-based Optimized Design Method for Casing String
Institute of Scientific and Technical Information of China (English)
何英明; 王瑞和; 雷杨; 臧艳彬; 何英君
2012-01-01
In light of the changeable external loads on casing under complex geologic conditions, an optimized casing string design method based on the genetic algorithm was established. Genetic algorithm theory was applied to determine the chromosome coding mode, the generation of the initial population, the chromosome evaluation function, and the genetic operations. The resulting optimized design model for casing strings overcomes the defects of the traditional casing string strength design method under complex geologic conditions, namely its complex calculation process and its neglect of casing costs and risk factors. Design verification on a casing string in the Puguang Gasfield shows that the design method is accurate and reliable; it improves the reliability of the casing string and helps to reduce costs. The present study offers a method for optimizing casing string design under complex geologic conditions.
A Method for Accelerating Conway's Doomsday Algorithm
Fong, Chamberlain
2010-01-01
We propose a simplification of a key component in the Doomsday Algorithm for calculating the day of the week of any given date. In particular, we propose to replace the calculation of the required term floor(x/12) + x mod 12 + floor((x mod 12)/4) with 2y + 3·odd(y) + z + leap, where y is the tens digit of x; z is the ones digit of x; leap is the number of leap years in the y decade less than or equal to x, excluding the start of the decade; and odd(y) is a decision function such that odd(y) = 1 if y is odd and odd(y) = 0 if y is even. We argue that this simplification makes the algorithm simpler and easier to calculate mentally.
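The term being simplified is the classical "doomsday" offset floor(x/12) + x mod 12 + floor((x mod 12)/4), where x is the two-digit year. A straightforward implementation of the full algorithm using that classical term (not the proposed 2y + 3·odd(y) + z + leap replacement) looks like this; the century-anchor table here covers Gregorian years 1600-2199.

```python
def doomsday_weekday(year, month, day):
    """Day of week (0 = Sunday ... 6 = Saturday) via Conway's Doomsday
    algorithm, using the classical term floor(x/12) + x%12 + floor((x%12)/4)."""
    anchors = {16: 2, 17: 0, 18: 5, 19: 3, 20: 2, 21: 0}  # Gregorian century anchors
    century, x = divmod(year, 100)
    dd = (anchors[century] + x // 12 + x % 12 + (x % 12) // 4) % 7
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    # one easy-to-remember "doomsday" date per month; all fall on the same weekday
    dooms = {1: 4 if leap else 3, 2: 29 if leap else 28, 3: 14, 4: 4, 5: 9,
             6: 6, 7: 11, 8: 8, 9: 5, 10: 10, 11: 7, 12: 12}
    return (dd + day - dooms[month]) % 7
```

For example, 20 July 1969 maps to weekday 0, a Sunday.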
Zhu, Liuhong; Guo, Gang
2012-01-01
This study tested an improved fiber tracking algorithm, which was based on fiber assignment using a continuous tracking algorithm and a two-tensor model. Different models and tracking decisions were used according to the type of estimation of each voxel. This method should solve the cross-track problem. The study included eight healthy subjects, two axonal injury patients and seven demyelinating disease patients. This new algorithm clearly exhibited a difference in nerve fiber direction betwee...
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Directory of Open Access Journals (Sweden)
H. Veladi
2014-01-01
Full Text Available A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find optimum seismic designs of frame structures. Two numerical examples from the literature are studied, and the results of the new algorithm are compared with conventional design methods to show the strengths and weaknesses of the algorithm.
Genetic algorithm-based evaluation of spatial straightness error
Institute of Scientific and Technical Information of China (English)
崔长彩; 车仁生; 黄庆成; 叶东; 陈刚
2003-01-01
A genetic algorithm (GA)-based approach is proposed to evaluate the straightness error of spatial lines. According to the mathematical definition of spatial straightness, a verification model is established for the straightness error, the fitness function of the GA is given, and the implementation techniques of the proposed algorithm are discussed in detail. These include real-number encoding, adaptive variable-range selection, roulette wheel and elitist combination selection strategies, heuristic crossover, and single-point mutation schemes. An application example is quoted to validate the proposed algorithm. The computation results show that the GA-based approach is a superior nonlinear parallel optimization method: the performance of the evolving population can be improved through genetic operations such as reproduction, crossover, and mutation until the optimum goal of the minimum zone solution is obtained. The quality of the solution is better and the efficiency of computation is higher than with other methods.
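A real-coded GA for minimum-zone spatial straightness evaluation can be sketched as follows. The chromosome (x0, y0, a, b) parameterizes a line through (x0, y0, 0) with direction (a, b, 1), and the fitness to minimize is the maximum point-to-line distance; the operator choices here (tournament selection, blend crossover, Gaussian mutation) are illustrative stand-ins for the roulette/elitist and heuristic operators described in the abstract.

```python
import math
import random

def point_line_distance(p, line):
    """Distance from 3-D point p to the line through (x0, y0, 0)
    with direction (a, b, 1)."""
    x0, y0, a, b = line
    dx, dy, dz = p[0] - x0, p[1] - y0, p[2]
    cx = dy - dz * b          # cross product (dx,dy,dz) x (a,b,1)
    cy = dz * a - dx
    cz = dx * b - dy * a
    return math.sqrt(cx * cx + cy * cy + cz * cz) / math.sqrt(a * a + b * b + 1.0)

def straightness_ga(points, pop_size=50, gens=300, seed=7):
    """Minimize the maximum point-to-line distance (minimum-zone straightness)."""
    rng = random.Random(seed)
    fitness = lambda line: max(point_line_distance(p, line) for p in points)
    pop = [[rng.uniform(-5, 5) for _ in range(4)] for _ in range(pop_size)]
    for gen in range(gens):
        pop.sort(key=fitness)
        sigma = 0.5 * (0.99 ** gen)          # decaying mutation step
        nxt = pop[:2]                        # elitism: keep the two best
        while len(nxt) < pop_size:
            pa = min(rng.sample(pop, 3), key=fitness)   # tournament selection
            pb = min(rng.sample(pop, 3), key=fitness)
            w = rng.random()
            child = [w * u + (1 - w) * v for u, v in zip(pa, pb)]  # blend crossover
            child = [c + rng.gauss(0.0, sigma) if rng.random() < 0.3 else c
                     for c in child]         # Gaussian mutation
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fitness)
    return best, fitness(best)
```

Because the max-distance objective is non-differentiable, a population method like this avoids the gradient computations a least-squares fit would rely on, which is the motivation the abstract gives for using a GA.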
Knowledge Template Based Multi-perspective Car Recognition Algorithm
Directory of Open Access Journals (Sweden)
Bo Cai
2010-12-01
Full Text Available In order to address problems of the vehicle-oriented society such as traffic jams and traffic accidents, intelligent transportation systems (ITS) have been proposed and have become a research focus, with the purpose of giving people better and safer driving conditions and assistance. The core of an intelligent transportation system is vehicle recognition and detection, a prerequisite for the other related problems. Many existing vehicle recognition algorithms target one specific viewing direction, mostly front/back or side views. To make the algorithm more robust, this paper presents a vehicle recognition algorithm for obliquely viewed vehicles that also covers front/back and side views. The algorithm is designed based on common knowledge about cars, such as shape and structure. Experimental results on many car images show that the method achieves good accuracy in car recognition.
Hybrid Collision Detection Algorithm based on Image Space
Directory of Open Access Journals (Sweden)
XueLi Shen
2013-07-01
Full Text Available Collision detection is an important operation in the field of virtual reality, and performing it efficiently has become a research focus. To address the poor real-time performance of collision detection, this paper presents a hybrid collision detection algorithm: potentially colliding object sets are detected quickly with a mixed bounding volume hierarchy tree, and a streaming-pattern collision detection algorithm then performs the accurate detection. These methods balance the load between the CPU and GPU and speed up the detection rate. The experimental results show that, compared with the classic RAPID algorithm, this algorithm can effectively improve the efficiency of collision detection.
Filter model based dwell time algorithm for ion beam figuring
Li, Yun; Xing, Tingwen; Jia, Xin; Wei, Haoming
2010-10-01
The process of Ion Beam Figuring (IBF) can be described by a two-dimensional convolution equation that includes the dwell time, and solving for the dwell time is a key problem in IBF. Theoretically, the dwell time can be obtained by a two-dimensional deconvolution; however, this problem is often ill-posed and a suitable solution is hard to obtain. In this article, a dwell time algorithm is proposed that exploits the characteristics of IBF. The Beam Removal Function (BRF) in IBF is usually Gaussian and can be regarded as an inverted Gaussian filter whose stop-band attenuates different frequencies by different amounts; the proposed dwell time algorithm is based on this observation. The Curved Surface Smooth Extension (CSSE) method and the Fast Fourier Transform (FFT) algorithm are also used. The simulation results show that this algorithm has high precision, is effective, and is suitable for practical application.
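The core step can be illustrated in one dimension: the removal map is the dwell time convolved with a Gaussian BRF, and the dwell time is recovered in the frequency domain with Tikhonov-style damping so that the ill-posed high-frequency division is regularized. This is a minimal sketch under assumed parameters (grid size, beam width, damping constant), not the paper's CSSE pipeline.

```python
import numpy as np

N = 256
x = np.arange(N)
# Gaussian beam removal function (BRF), wrapped so its peak sits at index 0
d = np.minimum(x, N - x)
sigma = 5.0
brf = np.exp(-d**2 / (2 * sigma**2))
brf /= brf.sum()

# A smooth "true" dwell-time map and the removal it would produce
dwell_true = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * x / N)
removal = np.fft.ifft(np.fft.fft(dwell_true) * np.fft.fft(brf)).real

# Frequency-domain inversion with damping: the Gaussian BRF acts as a
# low-pass filter, so near-zero frequency responses are regularized away
H = np.fft.fft(brf)
lam = 1e-8 * np.max(np.abs(H))**2
dwell_rec = np.fft.ifft(np.fft.fft(removal) * np.conj(H) /
                        (np.abs(H)**2 + lam)).real
```

With a smooth dwell map whose spectrum sits well inside the BRF pass-band, the damped inversion recovers it almost exactly; rougher maps would need stronger regularization.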
Physics-based signal processing algorithms for micromachined cantilever arrays
Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W
2013-11-19
A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The method utilizes the deflection of a micromachined cantilever, which represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever to produce a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
A New Algorithm for Total Variation Based Image Denoising
Institute of Scientific and Technical Information of China (English)
Yi-ping XU
2012-01-01
We propose a new algorithm for the total variation based image denoising problem. The split Bregman method is used to convert the unconstrained minimization denoising problem into a linear system in the outer iteration. An algebraic multigrid method is applied to solve the linear system in the inner iteration. Furthermore, Krylov subspace acceleration is adopted to improve convergence in the outer iteration. Numerical experiments demonstrate that this algorithm is efficient even for images with a large signal-to-noise ratio.
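To make the underlying model concrete, here is a 1D sketch of the TV denoising objective itself, minimized by plain gradient descent on a smoothed surrogate. This illustrates only what is being minimized; it is not the paper's split Bregman / algebraic multigrid / Krylov pipeline, and the smoothing constant and step size are assumptions chosen for stability.

```python
import numpy as np

def tv_denoise_1d(f, lam=1.0, eps=1e-2, step=0.05, iters=1000):
    """Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps)
    by gradient descent on a smoothed total variation surrogate."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps)   # derivative of the smoothed |du|
        grad = u - f
        grad[:-1] -= lam * w            # u_i appears as the left endpoint
        grad[1:] += lam * w             # u_{i+1} appears as the right endpoint
        u -= step * grad
    return u

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=0.5)
```

The TV term suppresses small oscillations while the saturating gradient weight leaves the large jump in the step signal largely intact, which is the edge-preserving property that makes TV attractive for denoising.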
Core Business Selection Based on Ant Colony Clustering Algorithm
Directory of Open Access Journals (Sweden)
Yu Lan
2014-01-01
Full Text Available The core business is the most important business of an enterprise with diversified operations. In this paper, we first introduce the definition and characteristics of the core business and then describe the ant colony clustering algorithm. In order to test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current state of the company's development, its core business can be identified by the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.
The optimal time-frequency atom search based on a modified ant colony algorithm
Institute of Scientific and Technical Information of China (English)
GUO Jun-feng; LI Yan-jun; YU Rui-xing; ZHANG Ke
2008-01-01
In this paper, a new optimal time-frequency atom search method based on a modified ant colony algorithm is proposed to improve the precision of traditional methods. First, the discretization formula for finite-length time-frequency atoms is derived in detail. Second, a modified ant colony algorithm in continuous space is proposed. Finally, the optimal time-frequency atom search algorithm based on the modified ant colony algorithm is described in detail and a simulation experiment is carried out. The results indicate that the developed algorithm is valid and stable, and that its precision is higher than that of the traditional method.
Measuring Disorientation Based on the Needleman-Wunsch Algorithm
Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel
2015-01-01
This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
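The Needleman-Wunsch dynamic program at the heart of the method can be sketched as follows: the visited page sequence is globally aligned against the ideal navigation path, and a lower alignment score indicates more straying. The scoring values and the page-sequence encoding below are illustrative assumptions, not the study's calibrated parameters.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the Needleman-Wunsch dynamic program."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap            # align a prefix of `a` against gaps
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

# Disorientation idea: align the visited page sequence with the ideal path;
# each page is encoded as one character here (an assumption for illustration).
ideal = "ABCDE"
visited = "ABXCDYE"          # two detour pages X and Y
sim = nw_score(ideal, visited)
```

Here the best alignment keeps all five matches and pays two gap penalties for the detours, so `sim` is 3 out of a maximum of 5.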
Institute of Scientific and Technical Information of China (English)
康杰红; 马苗
2012-01-01
In order to obtain a group of satisfactory thresholds for image segmentation quickly and accurately, this paper proposes a method based on the shuffled frog leaping (SFL) algorithm combined with the Otsu method for multilevel threshold image segmentation. The method regards the group of thresholds as a set of potential solutions to an objective function and employs the extended Otsu criterion as the fitness function for the SFL algorithm. The powerful search ability of the SFL algorithm, which combines global search over the whole swarm with local searches in sub-swarms, is then used to locate the thresholds in parallel. Experimental results show that, compared with a method based on the artificial fish swarm (AFS) algorithm, the suggested method obviously improves the speed and quality of image segmentation.
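The fitness function the SFL algorithm maximizes is the extended Otsu between-class variance. The sketch below implements that criterion and, for a small histogram, finds the best pair of thresholds by exhaustive search; the paper's contribution is to replace this exhaustive step with the SFL heuristic. The toy 16-level histogram is an assumption for illustration.

```python
def between_class_variance(hist, thresholds):
    """Otsu's multilevel criterion: sum over classes of w_k * (mu_k - mu)^2."""
    levels = len(hist)
    total = sum(hist)
    mu_total = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + [t + 1 for t in thresholds] + [levels]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi]) / total
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
        var += w * (mu - mu_total) ** 2
    return var

def best_two_thresholds(hist):
    """Exhaustive search over pairs t1 < t2 (SFL searches this heuristically)."""
    levels = len(hist)
    return max(((t1, t2) for t1 in range(levels - 1)
                for t2 in range(t1 + 1, levels - 1)),
               key=lambda ts: between_class_variance(hist, ts))

# A trimodal toy histogram: gray-level groups around 1-2, 7-8, and 13-14
hist = [0, 10, 10, 0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 10, 10, 0]
t1, t2 = best_two_thresholds(hist)
```

For this histogram the optimal thresholds fall between the three modes, exactly the behaviour the SFL search is meant to reproduce at a fraction of the cost on full 256-level histograms.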
Non-Recursive Algorithm Based on a Bottom-Up Record Method for the Hanoi Tower
Institute of Scientific and Technical Information of China (English)
戴莉萍; 黄龙军; 刘清华
2016-01-01
The code of the classical recursive algorithm for the Hanoi tower problem is simple, but its time complexity is O(2^n) and the code is difficult to understand. Building on the recursive idea of the Hanoi tower problem, a tree model of the recursion is constructed, and the relationship between the function's parameters and the execution result is analyzed carefully. The relationship obtained is then used to design a new bottom-up non-recursive algorithm: it records the moving paths for n plates (n = 1, 2, …), from which the moving result for n + 1 plates can be derived directly. The experimental results show that the corresponding code is easy to read and that its time complexity is only O(n), which is a further practical study of non-recursive algorithms for the Hanoi tower problem.
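The bottom-up record idea can be sketched as follows: the move list for k + 1 disks is built directly from the recorded list for k disks by relabelling pegs, with no recursion. This is an illustrative reconstruction of the record method, not the paper's exact code; note that emitting all 2^n - 1 moves still takes exponential output time, whatever the per-level bookkeeping cost.

```python
def hanoi_moves_bottom_up(n):
    """Build the move list for n disks (peg A to peg C) bottom-up:
    path(k+1) = relabelled path(k) + one move + relabelled path(k)."""
    SWAP_BC = str.maketrans("BC", "CB")   # A->C path becomes A->B path
    SWAP_AB = str.maketrans("AB", "BA")   # A->C path becomes B->C path
    moves = []                            # recorded path for 0 disks
    for _ in range(n):
        first = [(a.translate(SWAP_BC), b.translate(SWAP_BC)) for a, b in moves]
        last = [(a.translate(SWAP_AB), b.translate(SWAP_AB)) for a, b in moves]
        moves = first + [("A", "C")] + last
    return moves

# Simulate the recorded moves to check they are legal and complete
pegs = {"A": list(range(5, 0, -1)), "B": [], "C": []}
for src, dst in hanoi_moves_bottom_up(5):
    disk = pegs[src].pop()
    assert not pegs[dst] or pegs[dst][-1] > disk   # never larger on smaller
    pegs[dst].append(disk)
```

Each pass extends the previously recorded path rather than re-deriving it, which is the bottom-up record idea the abstract describes.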
LSB Based Quantum Image Steganography Algorithm
Jiang, Nan; Zhao, Na; Wang, Luo
2016-01-01
Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
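The plain-LSB variant has a direct classical analogue, sketched below: message bits overwrite the least significant bits of cover pixels, and extraction reads them back from the stego cover alone (the "blind" property). This is only the classical counterpart of the embedding rule; the paper's contribution is realizing it as quantum circuits over NEQR images.

```python
def embed_lsb(pixels, bits):
    """Plain LSB: overwrite the least significant bit of each cover pixel."""
    assert len(bits) <= len(pixels)
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(stego, n_bits):
    """Blind extraction: the message is recovered from the stego cover alone."""
    return [p & 1 for p in stego[:n_bits]]

cover = [10, 255, 0, 37, 128, 64]   # illustrative 8-bit gray pixels
msg = [1, 0, 1, 1]
stego = embed_lsb(cover, msg)
```

Each pixel changes by at most 1 gray level, which is why LSB embedding is visually invisible; the block-LSB variant trades some of this capacity for robustness by spreading one bit over a whole image block.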
Network-based recommendation algorithms: A review
Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš
2016-06-01
Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use-such as the possible influence of recommendation on the evolution of systems that use it-and finally discuss open research directions and challenges.
Network-based recommendation algorithms: A review
Yu, Fei; Gillard, Sebastien; Medo, Matus
2015-01-01
Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use - such as the possible influence of recommendation on the evolution of systems that use it - and finally discuss open research directions and challenges.
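One widely studied network-based algorithm in this family is probabilistic spreading (ProbS) on the user-item bipartite graph, sketched below: a unit of resource starts on the target user's items, spreads to users and back to items, dividing by node degree at each step, and unowned items are ranked by the resource they receive. The tiny three-user example is an assumption for illustration.

```python
def probs_scores(likes, user):
    """Probabilistic spreading (ProbS): items -> users -> items,
    dividing the resource by the node degree at each hop."""
    item_deg = {}
    for items in likes.values():
        for it in items:
            item_deg[it] = item_deg.get(it, 0) + 1
    # step 1: each item liked by `user` sends 1 unit, split among its collectors
    user_res = {u: 0.0 for u in likes}
    for it in likes[user]:
        for u, items in likes.items():
            if it in items:
                user_res[u] += 1.0 / item_deg[it]
    # step 2: each user redistributes its resource equally among its own items
    score = {}
    for u, res in user_res.items():
        for it in likes[u]:
            score[it] = score.get(it, 0.0) + res / len(likes[u])
    # recommend only items the target user has not collected yet
    return {it: s for it, s in score.items() if it not in likes[user]}

likes = {"u0": {"a", "b"}, "u1": {"a", "b", "c"}, "u2": {"c", "d"}}
rec = probs_scores(likes, "u0")
```

User u0 shares both items with u1, so u1's third item c receives resource (1/3 of a unit) while d, reachable only through the unrelated u2, receives none.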
Integer-pixel digital speckle correlation method based on an improved CS algorithm
Institute of Scientific and Technical Information of China (English)
周海芳; 杨秋翔; 杨剑
2015-01-01
To overcome defects of the traditional digital speckle correlation method, including easily falling into local optima and slow convergence in the later stages of the search, the cuckoo search algorithm, based on swarm intelligence, is introduced and improved in three respects: the random step length, the nest update strategy, and the best-nest disturbance strategy. Taking simulated specklegrams as the research object, the nest trajectories of the improved cuckoo search algorithm are observed during the correlation search, and its accuracy and convergence speed are compared with those of the basic cuckoo search algorithm and particle swarm optimization. Simulation results verify the feasibility and superiority of applying the algorithm to the digital speckle correlation method.
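The three ingredients the paper refines (step length, nest update, best-nest disturbance) all appear in the basic cuckoo search, sketched below on a toy 2D objective standing in for the correlation surface. The Lévy-flight scale, abandonment rate, and bounds are illustrative assumptions, and the sphere function replaces the actual speckle correlation coefficient.

```python
import math
import numpy as np

def cuckoo_search(f, dim=2, n_nests=25, iters=200, pa=0.25, seed=0):
    """Minimal cuckoo search: Levy-flight steps around the best nest plus
    abandonment of a random fraction pa of nests."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    beta = 1.5                                  # Levy exponent (Mantegna)
    sig = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
           (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        u = rng.normal(0, sig, (n_nests, dim))
        v = rng.normal(0, 1, (n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)      # heavy-tailed Levy steps
        new = np.clip(nests + 0.01 * step * (nests - best), lo, hi)
        new_fit = np.array([f(x) for x in new])
        better = new_fit < fit                  # greedy nest update
        nests[better], fit[better] = new[better], new_fit[better]
        drop = rng.random(n_nests) < pa         # abandon and rebuild nests
        drop[np.argmin(fit)] = False            # never abandon the best nest
        nests[drop] = rng.uniform(lo, hi, (int(drop.sum()), dim))
        fit[drop] = np.array([f(x) for x in nests[drop]])
    i = int(np.argmin(fit))
    return nests[i], float(fit[i])

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)))
```

In the speckle application, each nest would encode an integer-pixel displacement and `f` the negated correlation coefficient between reference and deformed subsets.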
Institute of Scientific and Technical Information of China (English)
黄昌盛; 张文欢; 侯志敏; 陈俊辉; 李明晶; 何南忠; 施保昌
2011-01-01
The lattice Boltzmann method (LBM) has become a powerful tool for modeling and simulating fluid flows because of its full parallelism, easy implementation, and simple code, which also make LBM well suited to large-scale computation of fluid flows on graphics processing units (GPUs). In this paper, we implement the LBM algorithm on a GPU using CUDA and simulate 2D cavity flow, 2D flow around a cylinder, and 3D cavity flow. The role of memory access optimization and other optimization techniques, as well as the performance of the programs, is analyzed in detail. The results show that our algorithm achieves satisfactory acceleration and confirms the good match between the LBM and GPUs for large-scale parallel computation.
Using LBG quantization for particle-based collision detection algorithm
Institute of Scientific and Technical Information of China (English)
SAENGHAENGTHAM Nida; KANONGCHAIYOS Pizzanu
2006-01-01
Most collision detection algorithms, for instance hierarchical methods whose bounding representations must be recalculated every time a deformation occurs, can be used efficiently only with solid, rigid objects. An alternative particle-based algorithm is therefore proposed that can detect collisions among non-rigid, deformable polygonal models. However, the original particle-based collision detection algorithm may be insufficient in some situations because of improper particle dispersion. This research therefore presents an improved algorithm that assigns a particle to detect within each separated area, so that particles always cover the entire object. The surface partitioning can be performed efficiently with LBG quantization, since it can classify object vertices into several groups based on a number of factors as required. A particle is then assigned to move between the vertices of a group, driven by the attractive forces received from particles on neighbouring objects. A collision is detected when the distance between a pair of corresponding particles becomes very small. The proposed algorithm has been implemented to show that collision detection can be conducted in real time.
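The LBG quantization step used to partition the surface can be sketched as follows: starting from the global centroid, each codeword is split by a small perturbation and refined with Lloyd iterations until the desired number of vertex groups is reached. The 2D toy vertex set is an assumption for illustration; in the paper the inputs would be mesh vertices.

```python
def lbg(points, n_codewords=2, eps=0.01, iters=20):
    """LBG vector quantization: split each codeword by +/-eps, then refine
    the doubled codebook with Lloyd (nearest-neighbour / centroid) passes."""
    def centroid(pts):
        return tuple(sum(c) / len(pts) for c in zip(*pts))
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    book = [centroid(points)]
    while len(book) < n_codewords:
        book = [tuple(c + s * eps for c in cw) for cw in book for s in (-1, 1)]
        for _ in range(iters):                  # Lloyd refinement
            cells = [[] for _ in book]
            for p in points:
                k = min(range(len(book)), key=lambda j: d2(p, book[j]))
                cells[k].append(p)
            book = [centroid(cell) if cell else cw
                    for cell, cw in zip(cells, book)]
    return book

# Two well-separated vertex clusters standing in for two surface regions
pts = [(0.1, 0.0), (-0.1, 0.2), (0.0, -0.1),
       (9.9, 10.1), (10.2, 9.8), (10.0, 10.0)]
codebook = lbg(pts, 2)
```

Each resulting codeword marks one vertex group, and the collision algorithm would then assign one roaming particle per group so the whole surface stays covered.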
Gradient Gene Algorithm: a Fast Optimization Method to MST Problem
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
The extension of the Minimum Spanning Tree (MST) problem is an NP-hard problem for which no polynomial-time algorithm exists. In this paper, a fast optimization method for the MST problem, the Gradient Gene Algorithm, is introduced. Compared with other evolutionary algorithms for the MST problem, it is more advanced: it is very simple and easy to implement, it is efficient and accurate, and it generalizes to other combinatorial optimization problems.
Engine fault diagnosis method based on the PSO-RVM algorithm
Institute of Scientific and Technical Information of China (English)
毕晓君; 柳长源; 卢迪
2014-01-01
To address misfire faults in automobile engines, the authors put forward a new intelligent diagnosis method. A mapping is established between the volume fractions of the gases in the automobile exhaust and the causes of the misfire. Machine training is applied to the normalized data, and the trained relevance vector machine (RVM) model is applied to fault classification and diagnosis. Because the penalty factor and the RBF kernel parameters of the algorithm greatly affect the classification accuracy, the particle swarm optimization algorithm is used to optimize these hyper-parameters. The optimized RVM model is compared with the mature genetically optimized neural network and support vector machine methods, and the experimental results show that the new method improves diagnosis accuracy and robustness over the traditional methods.
A Single Pattern Matching Algorithm Based on Character Frequency
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
Based on a study of single-pattern matching, the MBF algorithm is proposed by imitating the way humans search for strings. The algorithm preprocesses the pattern using the idea of the Quick Search algorithm together with information about the already-matched pattern prefix and suffix. In the searching phase, the algorithm makes use of character usage frequency and the continue-skip idea. Experiments show that the MBF algorithm is more efficient than other algorithms.
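Two of the named ingredients can be sketched together: the Quick Search bad-character shift (skip by the character just past the current window) and frequency-guided comparison order (test the rarest pattern characters first, so mismatches are found early). This is an illustrative combination under those two ideas only; the full MBF preprocessing with prefix/suffix information is not reproduced here.

```python
def mbf_style_search(text, pattern, freq=None):
    """Quick-Search-style matching with rarest-character-first comparisons."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Quick Search shift: m - rightmost index of c in the pattern
    shift = {c: m - i for i, c in enumerate(pattern)}
    freq = freq or {}
    # compare pattern positions in order of increasing character frequency
    order = sorted(range(m), key=lambda i: freq.get(pattern[i], 0))
    hits, s = [], 0
    while s <= n - m:
        if all(text[s + i] == pattern[i] for i in order):
            hits.append(s)
        nxt = text[s + m] if s + m < n else None
        s += shift.get(nxt, m + 1)     # character absent: jump the whole window
    return hits
```

With English-letter frequencies supplied, a rare letter such as "z" in the pattern would be compared first, rejecting most windows after a single character test.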
BACKPROPAGATION LEARNING ALGORITHM BASED ON LEVENBERG MARQUARDT ALGORITHM
Directory of Open Access Journals (Sweden)
S.Sapna
2012-10-01
Full Text Available Data mining aims at discovering knowledge from data and presenting it in a form that is easily comprehensible to humans; the term denotes both a process developed to examine the large amounts of data routinely collected and a collection of tools used to perform that process. One useful application in the field of medicine concerns the incurable chronic disease diabetes, where data mining algorithms are used to test the accuracy of predicting diabetic status. Fuzzy systems are being used for solving a wide range of problems in different application domains and allow learning and adaptation capabilities to be introduced, genetic algorithms are used for design, and neural networks are used for efficiently learning membership functions. Diabetes occurs throughout the world, but Type 2 is more common in the most developed countries; the greater increase in prevalence is, however, expected in Asia and Africa, where most patients will likely be found by 2030. This paper is based on the Levenberg-Marquardt algorithm, which is specifically designed to minimize sum-of-squares error functions. The Levenberg-Marquardt algorithm gives the best performance in the prediction of diabetes compared with other backpropagation algorithms.
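The Levenberg-Marquardt update that underlies this training scheme can be sketched on a small curve-fitting problem: solve (J^T J + lambda*I) d = J^T r, shrink the damping after a successful step and grow it otherwise. The exponential model and noise-free data below are illustrative assumptions; in the paper the same update trains the weights of a neural network.

```python
import numpy as np

def lm_fit(x, y, theta0, iters=100):
    """Levenberg-Marquardt for the model y ~ a * exp(b * x)."""
    theta = np.asarray(theta0, float)
    lam = 1e-2
    def resid(t):
        return y - t[0] * np.exp(t[1] * x)
    for _ in range(iters):
        a, b = theta
        r = resid(theta)
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        if np.sum(resid(theta + step) ** 2) < np.sum(r ** 2):
            theta, lam = theta + step, lam * 0.5   # accept: trust the GN step more
        else:
            lam *= 2.0                              # reject: damp toward gradient descent
    return theta

x = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.0 * x)          # noise-free data from a=2, b=-1
a_hat, b_hat = lm_fit(x, y, theta0=(1.0, 0.0))
```

The adaptive damping interpolates between Gauss-Newton (small lambda) and gradient descent (large lambda), which is why LM converges faster than plain backpropagation on sum-of-squares objectives.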
Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods
Directory of Open Access Journals (Sweden)
Saadia Zahid
2015-01-01
Full Text Available Audio segmentation is a basis for multimedia content analysis, one of the most important and widely used applications today. This paper presents an optimized audio classification and segmentation algorithm that segments a superimposed audio stream, on the basis of its content, into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is first classified into speech and non-speech segments using bagged support vector machines; the non-speech segment is further classified into music and environment sound using artificial neural networks; and lastly, the speech segment is classified into silence and pure speech by a rule-based classifier. Minimal data is used for training the classifiers, ensemble methods are used to minimize the misclassification rate, and approximately 98% accurate segments are obtained. The resulting fast and efficient algorithm can be used with real-time multimedia applications.
AN IMPROVED FAST BLIND DECONVOLUTION ALGORITHM BASED ON DECORRELATION AND BLOCK MATRIX
Institute of Scientific and Technical Information of China (English)
Yang Jun'an; He Xuefan; Tan Ying
2008-01-01
To alleviate the shortcomings of most blind deconvolution algorithms, this paper proposes an improved fast algorithm for blind deconvolution based on a decorrelation technique and a broadband block matrix. Although the original algorithm overcomes the shortcomings of current blind deconvolution algorithms, it is constrained to cases where the number of source signals is less than the number of channels. The improved algorithm removes this constraint by using the decorrelation technique. In addition, the improved algorithm raises the separation speed by improving the computation of the output signal matrix. Simulation results demonstrate the validity and fast separation of the improved algorithm.
Image Recovery Algorithm Based on Learned Dictionary
Directory of Open Access Journals (Sweden)
Xinghui Zhu
2014-01-01
Full Text Available We propose a recovery scheme for image deblurring within the framework of sparse representation; it has three main contributions. First, considering the sparsity of natural images, nonlocal overcomplete dictionaries are learned for the image patches. Then, the patches in each nonlocal cluster are coded with the corresponding learned dictionary to recover the whole latent image. In addition, for practical applications, we also propose a method to estimate the blur kernel, which makes the algorithm usable for blind image recovery. The experimental results demonstrate that the proposed scheme is competitive with some current state-of-the-art methods.
A fast quantum mechanics based contour extraction algorithm
Lan, Tian; Sun, Yangguang; Ding, Mingyue
2009-02-01
A fast algorithm was proposed to decrease the computational cost of the contour extraction approach based on quantum mechanics, a novel method recently proposed by us and presented at the same conference in a companion paper titled "A statistical approach to contour extraction based on quantum mechanics". In our approach, contour extraction is modeled as the locus of a moving particle described by quantum mechanics, obtained as the most probable locus of the particle simulated over a large number of iterations. In quantum mechanics, the probability that a particle appears at a point equals the square amplitude of the wave function; furthermore, the expression of the wave function can be derived from digital images, making the probability of the particle's locus available. We employed the Markov Chain Monte Carlo (MCMC) method to estimate the square amplitude of the wave function. Finally, our fast quantum mechanics based contour extraction algorithm (referred to as our fast algorithm hereafter) was evaluated on a number of different images, including synthetic and medical images. It was demonstrated that our fast algorithm achieves significant improvements in accuracy and robustness compared with well-known state-of-the-art contour extraction techniques, and a dramatic reduction in time complexity compared with the statistical approach to contour extraction based on quantum mechanics.
Identification of ARMAX based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
贺尚红; 李旭宇; 钟掘
2002-01-01
On the basis of a genetic algorithm, an intelligent search approach to the determination of the parameters of ARMAX (AutoRegressive Moving Average with eXternal input) processes is proposed. By representing the system with pole and zero pairs and repairing illegal chromosomes, the search space is limited to stable schemes. In the calculation of the objective function, a "shifted data window" is designed so that every input-output pair is used to guide the evolution and "data saturation" is avoided. To prevent premature convergence, an adaptive fitness function is introduced, the conventional crossover and mutation operators are modified, and a "catastrophic mutation" based on the Metropolis mechanism is adopted, improving convergence to the global optimum. The validity and efficiency of the proposed algorithm are illustrated by simulation results.
Algorithmic Methods for Sponsored Search Advertising
Feldman, Jon
2008-01-01
Modern commercial Internet search engines display advertisements along side the search results in response to user queries. Such sponsored search relies on market mechanisms to elicit prices for these advertisements, making use of an auction among advertisers who bid in order to have their ads shown for specific keywords. We present an overview of the current systems for such auctions and also describe the underlying game-theoretic aspects. The game involves three parties--advertisers, the search engine, and search users--and we present example research directions that emphasize the role of each. The algorithms for bidding and pricing in these games use techniques from three mathematical areas: mechanism design, optimization, and statistical estimation. Finally, we present some challenges in sponsored search advertising.
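The pricing side of these auctions is usually a generalized second-price (GSP) rule, which can be sketched as follows: slots go to the highest bidders, and each winner pays (per click) the bid of the advertiser ranked just below. This is a simplified GSP without quality scores or reserve prices, which real search engines layer on top.

```python
def gsp_auction(bids, n_slots):
    """Simplified generalized second-price auction: rank by bid; each
    winner's per-click price is the next-highest bid (0 if none)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = ranked[:n_slots]
    prices = {}
    for i, (adv, _) in enumerate(winners):
        prices[adv] = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
    return prices

# Four advertisers bidding for two ad slots on one keyword
prices = gsp_auction({"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}, n_slots=2)
```

Because a winner's price depends on the next bid rather than its own, truthful bidding is not a dominant strategy in GSP, which is what makes the game-theoretic analysis mentioned above interesting.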
A Hybrid Metaheuristic for Biclustering Based on Scatter Search and Genetic Algorithms
Nepomuceno, Juan A.; Troncoso, Alicia; Aguilar–Ruiz, Jesús S.
In this paper a hybrid metaheuristic for biclustering based on Scatter Search and Genetic Algorithms is presented. A general Scatter Search scheme is used to obtain high-quality biclusters, but the initial population is generated, and solutions are combined, with methods based on Genetic Algorithms. Experimental results from the yeast cell cycle and human B-cell lymphoma datasets are reported. Finally, the performance of the proposed hybrid algorithm is compared with that of a recently published genetic algorithm.
Time-Based Dynamic Trust Model Using Ant Colony Algorithm
Institute of Scientific and Technical Information of China (English)
TANG Zhuo; LU Zhengding; LI Kai
2006-01-01
Trust in a distributed environment is uncertain and varies with various factors. This paper introduces TDTM, a model for time-based dynamic trust. Every entity in the distributed environment is endowed with a trust vector, which records the trust intensity between this entity and the others. Because the trust intensity changes dynamically with time and with the interoperation between two entities, a method is proposed to quantify this change based on the idea of the ant colony algorithm, and an algorithm for the transfer of trust relations is also proposed. Furthermore, this paper analyses how a change in the trust intensity between two entities influences the trust intensity among all entities, and presents an algorithm to resolve the problem. Finally, we show through an instance how trust changes with the lapse of time and with interoperation.
Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks
Directory of Open Access Journals (Sweden)
Ruiyun Yu
2014-01-01
Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so communication is mainly carried out by a "store-carry-forward" strategy. Selfish behaviors, such as rejecting packet relay requests, severely worsen network performance. Incentives are an efficient way to reduce selfish behaviors and hence improve the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI) algorithm, which exploits dynamic repeated gaming to motivate nodes to relay packets for other nodes. The NDI algorithm provides a mechanism for tolerating selfish behaviors of nodes, with reward and punishment methods designed based on the node dependence degree. Simulation results show that the NDI algorithm effectively increases the delivery ratio and decreases the average latency when there are many selfish nodes in the opportunistic network.
Multi-Agent Reinforcement Learning Algorithm Based on Action Prediction
Institute of Scientific and Technical Information of China (English)
TONG Liang; LU Ji-lian
2006-01-01
Multi-agent reinforcement learning algorithms are studied, and a prediction-based multi-agent reinforcement learning algorithm is presented for multi-robot cooperation tasks. A multi-robot cooperation experiment based on a multi-agent inverted pendulum is conducted to test the efficiency of the new algorithm, and the experimental results show that the new algorithm achieves the cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.
Fast wavelet based algorithms for linear evolution equations
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.
Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm
Institute of Scientific and Technical Information of China (English)
Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu
2011-01-01
The photoacoustic tomography (PAT) method based on compressive sensing (CS) theory requires, for CS reconstruction, that the desired image have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise is always present, so the original sparse signal cannot be recovered effectively by general reconstruction algorithms. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves superior performance compared with other state-of-the-art CS reconstruction algorithms.
Image completion algorithm based on texture synthesis
Institute of Scientific and Technical Information of China (English)
Zhang Hongying; Peng Qicong; Wu Yadong
2007-01-01
A new algorithm is proposed for completing, in a visually plausible way, the missing parts caused by the removal of foreground or background elements from an image of natural scenery. The major contributions of the proposed algorithm are: (1) most natural images have a strong orientation of texture or color distribution, so a method is introduced to compute the main direction of the texture and complete the image quickly by limiting the search to that direction; (2) a synthesis ordering exists for image completion, so the search order of the patches is defined to ensure that regions with more known information and structure are completed before other regions are filled in; (3) to improve the visual effect of texture synthesis, an adaptive scheme is presented to determine the size of the template window for capturing features at various scales. A number of examples are given to demonstrate the effectiveness of the proposed algorithm.
Matrix-based, finite-difference algorithms for computational acoustics
Davis, Sanford
1990-01-01
A compact numerical algorithm is introduced for simulating multidimensional acoustic waves. The algorithm is expressed in terms of a set of matrix coefficients on a three-point spatial grid that approximates the acoustic wave equation with a discretization error of O(h exp 5). The method is based on tracking a local phase variable and its implementation suggests a convenient coordinate splitting along with natural intermediate boundary conditions. Results are presented for oblique plane waves and compared with other procedures. Preliminary computations of acoustic diffraction are also considered.
Image Fault Area Detection Algorithm Based on Visual Perception
Directory of Open Access Journals (Sweden)
Peng-Lu
2011-02-01
If natural scenes are decomposed by basic ICA, which simulates visual perception, the spatial arrangement of the resulting basis functions is disordered. This result contradicts the physiological mechanisms of vision. Therefore, a new computational model is proposed to simulate two important mechanisms of vision: the topological construction of visual cortex receptive fields and synchronous oscillation among neuron groups. To solve the problem of train-image fault detection, a novel algorithm based on this computational model is proposed. The experimental results show that the algorithm increases the fault detection rate effectively compared with traditional methods that lack these two mechanisms of vision.
Institute of Scientific and Technical Information of China (English)
汪欢文; 陆海良; 单宇翔
2015-01-01
A customer relationship graph clearly shows the various relationships between an enterprise and its customers, so business decision-makers can take targeted measures to improve those relationships. This paper presents a method based on an improved FP-Growth algorithm to extract a customer relationship graph. All frequent itemsets are found through a minimum support threshold, and the desired association rules are then filtered out with a minimum confidence threshold, which improves the efficiency of the algorithm considerably. The method has been applied to the Zhejiang Tobacco CRM system, and the results show that the improved algorithm is effective.
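The support-then-confidence filtering described above can be sketched in a few lines. This toy version counts pairs directly rather than building an FP-tree, and the transactions are made-up data, not the CRM system's.

```python
from itertools import combinations
from collections import defaultdict

def association_rules(transactions, min_support, min_confidence):
    """Mine rules {a} -> {b}: first keep itemsets passing min_support,
    then keep only rules whose confidence reaches min_confidence."""
    n = len(transactions)
    count = defaultdict(int)
    for t in transactions:
        for item in t:
            count[frozenset([item])] += 1
        for pair in combinations(sorted(t), 2):
            count[frozenset(pair)] += 1
    support = {s: c / n for s, c in count.items() if c / n >= min_support}
    rules = []
    for s in support:
        if len(s) == 2:                      # rules from frequent pairs
            a, b = tuple(s)
            for x, y in ((a, b), (b, a)):
                conf = support[s] / support[frozenset([x])]
                if conf >= min_confidence:
                    rules.append((x, y, support[s], conf))
    return rules

# toy "customer relationship" transactions
T = [{"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B", "C"}, {"A", "B"}]
rules = association_rules(T, min_support=0.4, min_confidence=0.6)
```

Raising either threshold shrinks the rule set, which is exactly the efficiency lever the abstract describes.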
Asian Option Pricing Based on Genetic Algorithms
Institute of Scientific and Technical Information of China (English)
Yunzhong Liu; Huiyu Xuan
2004-01-01
The cross-fertilization between artificial intelligence and computational finance has resulted in some of the most active research areas in financial engineering. One direction is the application of machine learning techniques to pricing financial products, which is certainly one of the most complex issues in finance. In the literature, when the interest rate, the mean rate of return and the volatility of the underlying asset follow general stochastic processes, the exact solution is usually not available. In this paper, we illustrate how genetic algorithms (GAs), as a numerical approach, can be helpful in dealing with pricing. In particular, we test the performance of basic genetic algorithms by applying them to the determination of prices of Asian options, whose exact solutions are known from Black-Scholes option pricing theory. The solutions found by basic genetic algorithms are compared with the exact solutions, and the performance of GAs is evaluated accordingly. Based on these evaluations, some limitations of GAs in option pricing are examined and possible extensions for future work are proposed.
Algorithms and software for total variation image reconstruction via first-order methods
DEFF Research Database (Denmark)
Dahl, Joachim; Hansen, Per Christian; Jensen, Søren Holdt
2010-01-01
This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...
An improved algorithm for the normalized elimination of the small-component method
Zou, Wenli; Filatov, Michael; Cremer, Dieter
2011-01-01
A new algorithm for the iterative solution of the normalized elimination of the small component (NESC) method is presented that is less costly than previous algorithms and that is based on (1) solving the NESC equations for the uncontracted rather than contracted basis ("First-Diagonalize-then-Contr
Navigation Algorithm Using Fuzzy Control Method in Mobile Robotics
Directory of Open Access Journals (Sweden)
Cviklovič Vladimír
2016-03-01
The issue of navigation methods is being continuously developed globally. The aim of this article is to test a fuzzy control algorithm for track finding in mobile robotics. The concept of the autonomous mobile robot EN20 has been designed to test its behaviour, using the odometry navigation method. The benefit of fuzzy control is evident in the mobile robot's behaviour when several physical variables are controlled simultaneously on the basis of several input variables. In our case, there are two input variables, heading angle and distance, and two output variables, the angular velocities of the left and right wheels. The autonomous mobile robot moves with human-like logic.
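A controller of the kind described, two inputs (heading angle, distance) and two outputs (wheel angular velocities), can be sketched in Sugeno style. The membership functions, rule base and speed constants below are illustrative assumptions, not the EN20's actual parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_wheel_speeds(heading_err, distance):
    """Heading error in degrees (-90..90), distance in metres;
    returns (w_left, w_right) wheel angular velocities in rad/s."""
    # fuzzify the two inputs (illustrative membership functions)
    left  = tri(heading_err, -90.0, -45.0, 0.0)
    ahead = tri(heading_err, -45.0, 0.0, 45.0)
    right = tri(heading_err, 0.0, 45.0, 90.0)
    near  = tri(distance, -0.5, 0.0, 1.0)
    far   = tri(distance, 0.0, 2.0, 4.0)
    base_fast, base_slow, turn = 4.0, 1.5, 2.0
    # rule base: (firing strength, left-wheel speed, right-wheel speed)
    rules = [
        (min(ahead, far),  base_fast, base_fast),    # ahead & far  -> drive fast
        (min(ahead, near), base_slow, base_slow),    # ahead & near -> slow down
        (left,  base_slow, base_slow + turn),        # target left  -> speed up right wheel
        (right, base_slow + turn, base_slow),        # target right -> speed up left wheel
    ]
    w = sum(r[0] for r in rules) or 1.0              # weighted-average defuzzification
    wl = sum(r[0] * r[1] for r in rules) / w
    wr = sum(r[0] * r[2] for r in rules) / w
    return wl, wr
```

The two outputs are computed from the same rule firings, which is what lets one rule base control several physical variables at once, as the abstract notes.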
Institute of Scientific and Technical Information of China (English)
郭平; 程代杰
2003-01-01
As the basis of intelligent systems, it is very important to guarantee the consistency and non-redundancy of knowledge in a knowledge database. Given the variety of knowledge sources, it is necessary to handle knowledge with redundancy, inclusion and even contradiction during the integration of knowledge databases. This paper studies an integration method for multiple knowledge databases. First, it finds the inconsistent knowledge sets between the knowledge databases by rough set classification and presents a method for eliminating the inconsistency using test data. Then, it treats the consistent knowledge sets as the initial population of a genetic computation and constructs a genetic fitness function, based on the accuracy, practicability and spreadability of the knowledge representation, to carry out the genetic computation. Lastly, classifying the results of the genetic computation reduces the knowledge redundancy of the knowledge database. The paper also presents a framework for knowledge database integration based on rough set classification and genetic algorithms.
DIGITAL SPECKLE CORRELATION METHOD IMPROVED BY GENETIC ALGORITHM
Institute of Scientific and Technical Information of China (English)
Ma Shaopeng; Jin Guanchang
2003-01-01
The digital speckle correlation method is an important optical metrology technique for surface displacement and strain measurement. With this technique, whole-field deformation information can be obtained by tracking geometric points on the speckle images with a correlation-matching search. However, general search techniques suffer great computational complexity when processing speckle images with large deformation, and large random errors when processing images of poor quality. In this paper, an advanced approach to the correlation-matching search based on genetic algorithms (GA) is developed. Benefiting from the global-optimum and parallel search abilities of GA, the new approach completes the correlation-matching search with less computation and higher accuracy. Two experimental results from simulated speckle images prove the efficiency of the new approach.
Quantitative Methods in Supply Chain Management Models and Algorithms
Christou, Ioannis T
2012-01-01
Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...
Multi-objective community detection based on memetic algorithm.
Directory of Open Access Journals (Sweden)
Peng Wu
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single-objective optimization methods have intrinsic drawbacks in identifying multiple significant community structures, some methods formulate community detection as a multi-objective problem and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability but have difficulty locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining a multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. First, nondominated solutions generated by evolutionary operations and solutions in the dominant population are set as initial individuals for the local search procedure. Then, a new direction vector, the pseudonormal vector, is proposed to integrate the two objective functions into a single fitness function. Finally, a network-specific local search strategy based on the label propagation rule is extended to find local optimal solutions efficiently. Extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. First, experiments on the influence of the local search procedure demonstrate that it speeds up convergence to better partitions and makes the algorithm more stable. Second, comparisons with a set of classic community detection methods illustrate that the proposed method finds single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks, which is beneficial for analyzing networks at multiple resolution levels.
Multicast Routing Problem Using Tree-Based Cuckoo Optimization Algorithm
Directory of Open Access Journals (Sweden)
Mahmood Sardarpour
2016-06-01
The problem of QoS multicast routing is to find a multicast tree with the least cost that meets limitations such as bandwidth, delay and loss rate. This is an NP-complete problem. To solve the multicast routing problem, the entire routes from the source node to every destination node are often computed first and then integrated into a single multicast tree, but such methods are slow and complicated. The present paper introduces a new tree-based optimization method to overcome these weaknesses. The recommended method directly optimizes the multicast tree: a tree-based topology including several spanning trees is created, and the trees are combined two by two. For this purpose, the Cuckoo Algorithm is used, which is proven to converge well and computes quickly. Simulations conducted on different types of network topologies show that it is a practical and effective algorithm.
The positioning algorithm based on feature variance of billet character
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. There are three rows of characters on each steel billet, so we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
Unified HMM-based layout analysis framework and algorithm
Institute of Scientific and Technical Information of China (English)
陈明; 丁晓青; 吴佑寿
2003-01-01
To handle the layout analysis problem for complex or irregular document images, a unified HMM-based layout analysis framework is presented in this paper. Based on the multi-resolution wavelet analysis of the document image, HMMs are used in both an inner-scale image model and a trans-scale context model to classify pixel-region properties such as text, picture or background. In each scale, an HMM direct segmentation method is used to obtain a better inner-scale classification result; another HMM is then used to fuse the inner-scale results across scales and obtain a better final segmentation result. The optimized algorithm uses a stopping rule in the coarse-to-fine multi-scale segmentation process, so the speed is improved remarkably. Experiments prove the efficiency of the proposed algorithm.
METHOD OF CENTERS ALGORITHM FOR MULTI-OBJECTIVE PROGRAMMING PROBLEMS
Institute of Scientific and Technical Information of China (English)
Tarek Emam
2009-01-01
In this paper, we consider a method of centers for solving multi-objective programming problems, where the objective functions involved are concave functions and the set of feasible points is convex. The algorithm is defined so that the sub-problems that must be solved during its execution may be solved by finite-step procedures. Conditions are given under which the algorithm generates sequences of feasible points and constraint multiplier vectors that have accumulation points satisfying the KKT conditions. Finally, we establish convergence of the proposed method of centers algorithm for solving multiobjective programming problems.
Hamid Reza Mohammadi; Ali Akhavan
2015-01-01
In this paper, a new control method for quasi-Z-source cascaded multilevel inverter based grid-connected photovoltaic (PV) system is proposed. The proposed method is capable of boosting the PV array voltage to a higher level and solves the imbalance problem of DC-link voltage in traditional cascaded H-bridge inverters. The proposed control system adjusts the grid injected current in phase with the grid voltage and achieves independent maximum power point tracking (MPPT) for the separate PV ar...
The Books Recommend Service System Based on Improved Algorithm for Mining Association Rules
Institute of Scientific and Technical Information of China (English)
王萍
2009-01-01
The Apriori algorithm is a classical method for mining association rules. Based on an analysis of this theory, the paper provides an improved Apriori algorithm that combines a hash table technique with the reduction of candidate itemsets to enhance the efficiency of resource usage as well as the individualized service of the data library.
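The abstract does not detail the hash table technique; a PCY-style sketch of the idea, hashing pairs into buckets on a first pass so that only pairs falling in frequent buckets become candidates, could look like this (the bucket count and the transactions are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def frequent_pairs_hashed(transactions, min_count, n_buckets=101):
    """Pass 1: count items and hash every pair into a small bucket table.
    Pass 2: count only candidate pairs whose items and bucket are frequent."""
    buckets = [0] * n_buckets
    item_count = defaultdict(int)
    for t in transactions:
        for item in t:
            item_count[item] += 1
        for p in combinations(sorted(t), 2):
            buckets[hash(p) % n_buckets] += 1
    frequent_bucket = [c >= min_count for c in buckets]
    pair_count = defaultdict(int)
    for t in transactions:
        for p in combinations(sorted(t), 2):
            if (item_count[p[0]] >= min_count and item_count[p[1]] >= min_count
                    and frequent_bucket[hash(p) % n_buckets]):
                pair_count[p] += 1          # only surviving candidates are counted
    return {p: c for p, c in pair_count.items() if c >= min_count}

T = [{"milk", "bread"}, {"milk", "bread", "jam"}, {"milk", "jam"}, {"bread", "jam"}]
```

A truly frequent pair can never be lost, because its own bucket count is at least its pair count; the hash filter only prunes candidates, which is the resource saving the abstract aims at.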
Melody Extraction Method from MIDI Based on H-K Algorithm
Institute of Scientific and Technical Information of China (English)
刘勇; 林景栋; 穆伟力
2011-01-01
The main melody track contains the most important melodic information in a piece of music; it is the basis of music feature recognition and a prerequisite for the automatic design of music light shows. The work involves melody representation, melody feature extraction and classification techniques. For multi-track MIDI files, a method for identifying the main melody track is introduced. Feature vectors characterizing the melody are extracted, and a track classifier model is built with the H-K classification algorithm to separate the main melody track from the accompaniment tracks and thereby extract the main melody of the MIDI music. Experimental results show good performance, providing the necessary groundwork for the automatic design of music light shows.
Lagrangian relaxation based algorithm for trigeneration planning with storages
DEFF Research Database (Denmark)
Rong, Aiying; Lahdelma, Risto; Luh, Peter
2008-01-01
Trigeneration is a booming power production technology where three energy commodities are simultaneously produced in a single integrated process. Electric power, heat (e.g. hot water) and cooling (e.g. chilled water) are three typical energy commodities in the trigeneration system. The production...... an effective method for the long-term planning problem based on the proper strategy to form Lagrangian subproblems and solve the Lagrangian dual (LD) problem based on deflected subgradient optimization method. We also develop a heuristic for restoring feasibility from the LD solution. Numerical results based...... on realistic production models show that the algorithm is efficient and near-optimal solutions are obtained....
An Initiative-Learning Algorithm Based on System Uncertainty
Institute of Scientific and Technical Information of China (English)
ZHAO Jun
2005-01-01
Initiative-learning algorithms are characterized by, and hence benefit from, their independence of prior domain knowledge; their induced results can express the potential characteristics and patterns of information systems more objectively. Initiative-learning processes can be effectively driven by system uncertainty, because uncertainty is an intrinsic common feature of, and an essential link between, information systems and their induced results. Obviously, the effectiveness of such an initiative-learning framework depends heavily on the accuracy of the system uncertainty measurement. Here, a more reasonable method for measuring system uncertainty is developed based on rough set theory and the concept of information entropy; a new algorithm is then developed on the basis of the new uncertainty measurement and Skowron's algorithm for mining propositional default decision rules. The proposed algorithm is typically initiative-learning and adapts well to system uncertainty. As shown by simulation experiments, its comprehensive performance is much better than that of congeneric algorithms.
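The entropy-based uncertainty measurement can be illustrated on a toy decision table: the Shannon entropy of the partition induced by a set of condition attributes, a standard rough-set construction, though not necessarily the paper's exact measure.

```python
from collections import Counter
from math import log2

def partition(table, attrs):
    """Sizes of the equivalence classes induced by a set of attributes."""
    return Counter(tuple(row[a] for a in attrs) for row in table)

def partition_entropy(table, attrs):
    """Shannon entropy of the partition: one measure of system uncertainty."""
    n = len(table)
    return -sum((c / n) * log2(c / n) for c in partition(table, attrs).values())

# toy decision table: two condition attributes (a, b) and a decision attribute d
U = [
    {"a": 0, "b": 0, "d": "no"},
    {"a": 0, "b": 1, "d": "no"},
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "yes"},
]
```

Adding attributes refines the partition and raises the entropy (here from 1 bit for {a} to 2 bits for {a, b}), so the measure tracks how finely the system discriminates objects.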
Promoter sequences and algorithmical methods for identifying them.
Vanet, A; Marsan, L; Sagot, M F
1999-01-01
This paper presents a survey of currently available mathematical models and algorithmical methods for trying to identify promoter sequences. The methods concern both searching in a genome for a previously defined consensus and extracting a consensus from a set of sequences. Such methods were often tailored for either eukaryotes or prokaryotes although this does not preclude use of the same method for both types of organisms. The survey therefore covers all methods; however, emphasis is placed on prokaryotic promoter sequence identification. Illustrative applications of the main extracting algorithms are given for three bacteria. PMID:10673015
Fault Diagnosis Method of Rolling Bearing Based on AFD Algorithm
Institute of Scientific and Technical Information of China (English)
梁瑜; 贾利民; 蔡国强; 刘金朝
2013-01-01
The adaptive Fourier decomposition (AFD) algorithm decomposes the vibration signal of a rolling bearing into a series of mono-components, and the kurtosis of each mono-component is calculated. The kurtosis values are arranged in descending order, the inflection point at which the kurtosis becomes stable is adaptively sought out, and the mono-component signals before the inflection point are summed; the envelope of the sum is then taken as the resonance demodulation. From the frequency spectrum obtained by demodulation, whether the rolling bearing has a fault, and the fault location, are determined. Taking an N205EM-type rolling bearing as an example, the experimental results indicate that the proposed method accurately and effectively extracts fault information without presetting a filter frequency band and without producing the physically meaningless "negative frequency"; the spectrum characteristics obtained are better than those of the traditional resonance demodulation method, so the method effectively diagnoses rolling bearing faults.
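The kurtosis-ordering and envelope steps can be sketched as follows. AFD itself (which produces the mono-components) is not implemented, and the "inflection point" is approximated crudely here as the largest drop in the sorted kurtosis sequence; both are illustrative assumptions.

```python
import numpy as np

def kurtosis(x):
    """Fourth standardized moment (3.0 for a Gaussian, ~1.5 for a sinusoid)."""
    x = x - x.mean()
    return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-12)

def envelope(x):
    """Amplitude envelope via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # keep positive frequencies, doubled
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def demodulate(components):
    """Sort mono-components by kurtosis (descending), keep those before the
    largest kurtosis drop, sum them, and return the envelope of the sum."""
    ks = sorted(((kurtosis(c), i) for i, c in enumerate(components)), reverse=True)
    drops = np.diff([k for k, _ in ks])
    cut = int(np.argmin(drops)) + 1 if len(ks) > 1 else 1
    selected = sum(components[i] for _, i in ks[:cut])
    return envelope(selected), cut
```

Impulsive (fault-related) components have much higher kurtosis than smooth ones, so the cut keeps exactly the components that carry impact information.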
Directory of Open Access Journals (Sweden)
D. A. Viattchenin
2009-01-01
A method for constructing a subset of labeled objects, used in a heuristic algorithm of possibilistic clustering with partial supervision, is proposed in the paper. The method is based on data preprocessing by the heuristic possibilistic clustering algorithm using the transitive closure of a fuzzy tolerance. The method's efficiency is demonstrated with an illustrative example.
An algorithm for motif-based network design
Mäki-Marttunen, Tuomo
2016-01-01
A determinant property of the structure of a biological network is the distribution of local connectivity patterns, i.e., network motifs. In this work, a method for creating directed, unweighted networks while promoting a certain combination of motifs is presented. This motif-based network algorithm starts with an empty graph and randomly connects the nodes by advancing or discouraging the formation of chosen motifs. The in- or out-degree distribution of the generated networks can be explicitly chosen. The algorithm is shown to perform well in producing networks with high occurrences of the targeted motifs, both ones consisting of 3 nodes as well as ones consisting of 4 nodes. Moreover, the algorithm can also be tuned to bring about global network characteristics found in many natural networks, such as small-worldness and modularity.
An Algorithm of Sensor Management Based on Dynamic Target Detection
Institute of Scientific and Technical Information of China (English)
LIUXianxing; ZHOULin; JINYong
2005-01-01
The probability density of a stationary target evolves only at measurement updates, whereas the probability density of a dynamic target evolves both at measurement updates and between measurements; this paper therefore studies an algorithm for dynamic target detection. First, it presents the evolution of the probability density at measurement updates by Bayes' rule and its evolution between measurements by the Fokker-Planck differential equation. Second, a method for obtaining the information entropy from the probability density is given, and sensor resources are allocated based on the evolution of the information entropy, i.e., the maximization of information gain. Simulation results show that, compared with a serial-search algorithm, this algorithm is feasible and effective for detecting dynamic targets.
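The entropy-driven allocation idea, update the target density by Bayes' rule and then point the sensor where the expected information gain is largest, can be sketched on a discrete grid. The detection and false-alarm probabilities below are illustrative assumptions, and the continuous Fokker-Planck prediction step is omitted.

```python
from math import log2

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    return -sum(q * log2(q) for q in p if q > 0)

def bayes_update(p, cell, detected, pd=0.9, pf=0.1):
    """Posterior over target cells after observing one cell (detect / no detect)."""
    like = [(pd if i == cell else pf) if detected else
            ((1 - pd) if i == cell else (1 - pf)) for i in range(len(p))]
    post = [l * q for l, q in zip(like, p)]
    z = sum(post)
    return [q / z for q in post]

def best_cell_to_sense(p, pd=0.9, pf=0.1):
    """Pick the cell whose observation maximizes expected entropy reduction."""
    gains = []
    for j in range(len(p)):
        pdet = sum((pd if i == j else pf) * q for i, q in enumerate(p))
        h = (pdet * entropy(bayes_update(p, j, True, pd, pf))
             + (1 - pdet) * entropy(bayes_update(p, j, False, pd, pf)))
        gains.append(entropy(p) - h)         # expected information gain
    return max(range(len(p)), key=gains.__getitem__)
```

With a skewed prior, the expected-gain criterion directs the sensor at the most informative cell rather than scanning serially, which is the advantage the abstract's simulations report.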
Crime Busting Model Based on Dynamic Ranking Algorithms
Directory of Open Access Journals (Sweden)
Yang Cao
2013-01-01
This paper proposes a crime-busting model with two dynamic ranking algorithms to estimate the likelihood that an individual is a suspect and the possibility that a suspect is a leader in a complex social network. Notably, to obtain a priority list of suspects, an advanced network mining approach with a dynamic cumulative nominating algorithm is adopted, which reduces computational expense more rapidly than most other topology-based approaches. Our method also greatly increases the accuracy of the solution through semantic learning filtering. Moreover, another dynamic algorithm based on node contraction is presented to help identify the leader among the conspirators. Test results verify the theoretical results and show good performance on both small and large datasets.
A Multi-Scale Settlement Matching Algorithm Based on ARG
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with matching multi-scale settlement data, an algorithm based on Attributed Relational Graph (ARG) is proposed. The algorithm firstly divides two settlement scenes at different scales into blocks by small-scale road network and constructs local ARGs in each block. Then, ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
Determination of Selection Method in Genetic Algorithm for Land Suitability
Directory of Open Access Journals (Sweden)
Irfianti Asti Dwi
2016-01-01
Genetic Algorithms are one alternative in the fields of modeling optimization, automatic programming and machine learning. The purpose of the study was to compare several selection methods in a Genetic Algorithm for land suitability; its contribution is to apply the best method to develop regions based on horticultural commodities. The test compares three selection methods: Roulette Wheel, Tournament Selection and Stochastic Universal Sampling. The location parameters in the first test scenario include temperature = 27°C, rainfall = 1200 mm, humidity = 30%, fruit cluster = 4, crossover probability (Pc) = 0.6, mutation probability (Pm) = 0.2 and epoch = 10. The second test scenario consists of temperature = 30°C, rainfall = 2000 mm, humidity = 35%, fruit cluster = 5, crossover probability (Pc) = 0.7, mutation probability (Pm) = 0.3 and epoch = 10. The study concludes that Roulette Wheel is the best method because it produces more stable fitness values than the other two methods.
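Of the three selection methods compared, Roulette Wheel is the simplest to sketch: each individual is selected with probability proportional to its fitness. The population and fitness values below are made up for illustration.

```python
import random

def roulette_wheel(population, fitness, rng):
    """Fitness-proportionate selection: spin once, walk the cumulative sum."""
    total = sum(fitness)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return ind
    return population[-1]   # guard against floating-point round-off

pop = ["A", "B", "C", "D"]
fit = [1.0, 2.0, 3.0, 4.0]  # "D" should be picked ~40% of the time
```

Tournament Selection would instead pick the fittest of a small random sample, and Stochastic Universal Sampling spins several equally spaced pointers in one pass; all three trade off selection pressure against diversity.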
Directory of Open Access Journals (Sweden)
Qian-Qian Duan
2014-01-01
A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the GA's weak local search ability: the heuristic returned by the FSM can guide the GA towards good solutions, since promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM guarantees that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than either the GA or the FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method.
A Method for Security Constrained Unit Commitment Based on Benders Algorithm
Institute of Scientific and Technical Information of China (English)
王楠; 张粒子; 袁喆; 张黎明; 李雪
2012-01-01
When security-constrained unit commitment (SCUC) is solved directly by a mixed integer programming algorithm, the calculation efficiency decreases considerably, while solving SCUC by the Benders algorithm suffers from algorithm oscillation and from reduced efficiency as the system scale grows. A new Benders-based method for solving SCUC is therefore proposed. Building on the Benders algorithm, a step is added after each iteration of the Benders master problem to correct newly violated constraints, which controls the search direction of the Benders cuts; a step identifying the active constraints is also added, which reduces the search space of the Benders algorithm and thus improves the solution efficiency of the SCUC optimization. The effectiveness of the proposed method was verified by simulation results on a 6-machine 3-bus system and a 54-machine 118-bus system.
Institute of Scientific and Technical Information of China (English)
施爱春; 李甲; 胡波
2011-01-01
Efficient distributed sound source localization is a hot topic in wireless sensor network research. An energy-based distributed sound localization algorithm is proposed, which applies the Alternating Direction Method of Multipliers (ADMM) to decompose the maximum likelihood (ML) problem onto each sensor node and uses bridging sensor nodes to implement information fusion. Because of the sound energy attenuation model, the optimization objective of the ADMM is non-convex, and the algorithm is prone to being trapped in local optima; a multi-resolution search (MRS) method is proposed to solve this problem. Simulation results show that, compared to existing distributed sound localization algorithms, the proposed algorithm can be implemented in parallel, can be applied to arbitrary network topologies, and effectively avoids local optima.
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
International Nuclear Information System (INIS)
A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang
2010-11-01
A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
Institute of Scientific and Technical Information of China (English)
Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang
2010-01-01
A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.
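The coupled Logistic map driving the chaotic operators described above can be sketched in a few lines. This is a minimal illustration: the symmetric linear coupling with strength `eps` and the parameter values are assumptions for demonstration, not the exact coupling form or parameters used in the paper.

```python
def logistic(x, mu=4.0):
    """One step of the logistic map; mu = 4.0 gives fully chaotic behaviour."""
    return mu * x * (1.0 - x)

def coupled_logistic_sequence(n, x0=0.3, y0=0.6, eps=0.1, mu=4.0):
    """Generate n pairs from two linearly coupled logistic maps.

    x' = (1-eps)*f(x) + eps*f(y),   y' = (1-eps)*f(y) + eps*f(x)

    The resulting sequences stay in [0, 1] and can replace uniform random
    numbers when initialising or mutating a genetic-algorithm population.
    """
    x, y = x0, y0
    seq = []
    for _ in range(n):
        fx, fy = logistic(x, mu), logistic(y, mu)
        x = (1.0 - eps) * fx + eps * fy
        y = (1.0 - eps) * fy + eps * fx
        seq.append((x, y))
    return seq

pairs = coupled_logistic_sequence(1000)
```

Because the chaotic sequence is ergodic yet deterministic, seeding the GA population from it spreads individuals over the search space while keeping runs reproducible.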
Evolutionary Algorithm Based on Immune Strategy
Institute of Scientific and Technical Information of China (English)
WANG Lei; JIAO Licheng
2001-01-01
A novel evolutionary algorithm, evolution-immunity strategies (EIS), is proposed with reference to the immune theory in biology. It constructs an immune operator accomplished in two steps, a vaccination and an immune selection. The aim of introducing immune concepts and methods into evolution strategies is to find ways of obtaining the optimal solution of difficult problems using locally characteristic information. The detailed procedure for realizing EIS, comprising six steps, is presented. EIS is analyzed with Markov-chain theory and proved to be convergent with probability 1. In EIS, an immune operator is an aggregation of specific operations and procedures, and methods for selecting vaccines and constructing an immune operator are given in this paper. A simulated computation on the 442-city TSP shows that EIS can restrain the degeneracy phenomenon during the evolutionary process and improve the searching capability and efficiency, thereby greatly increasing the convergence speed.
Multifeature Fusion Vehicle Detection Algorithm Based on Choquet Integral
Directory of Open Access Journals (Sweden)
Wenhui Li
2014-01-01
Full Text Available Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of the vehicle detection method. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. This algorithm divides the vehicle detection problem into two phases: feature similarity measurement and multifeature fusion. In the feature similarity measurement phase, we first propose a taillight-based vehicle detection method and define a vehicle taillight feature similarity measure. Second, drawing on the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused by the Choquet integral. Evaluated on public test collections and our own test images, the experimental results show that our method achieves effective and robust multivehicle detection in complicated environments. Our method not only improves the detection rate but also reduces the false alarm rate, meeting the engineering requirements of Advanced Driving Assistance Systems (ADAS).
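The fusion step described above rests on the discrete Choquet integral with respect to a fuzzy measure. A minimal sketch follows; the three feature names and the fuzzy-measure values are illustrative assumptions, not the measures learned in the paper (an additive measure is used here so the result can be checked against a plain weighted sum).

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of feature scores w.r.t. fuzzy measure mu.

    scores: dict feature -> similarity value in [0, 1]
    mu:     dict frozenset-of-features -> measure, mu(empty set)=0, mu(all)=1
    Sorts scores ascending and accumulates (s_k - s_{k-1}) * mu(remaining set).
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for feat, s in items:
        total += (s - prev) * mu[frozenset(remaining)]
        prev = s
        remaining.remove(feat)
    return total

# Illustrative additive measure built from per-feature weights.
weights = {"taillight": 0.5, "symmetry": 0.3, "hog": 0.2}
subsets = [set(), {"taillight"}, {"symmetry"}, {"hog"},
           {"taillight", "symmetry"}, {"taillight", "hog"},
           {"symmetry", "hog"}, {"taillight", "symmetry", "hog"}]
mu = {frozenset(s): sum(weights[f] for f in s) for s in subsets}

fused = choquet_integral({"taillight": 0.9, "symmetry": 0.6, "hog": 0.8}, mu)
```

With an additive measure the Choquet integral reduces to the weighted mean (0.5*0.9 + 0.3*0.6 + 0.2*0.8 = 0.79); a non-additive measure would additionally model interaction between the features.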
Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm
Institute of Scientific and Technical Information of China (English)
Haidong Xu; Mingyan Jiang; Kun Xu
2015-01-01
The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and searching precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help the artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.
Directory of Open Access Journals (Sweden)
Wen-Jong Chen
2016-04-01
Full Text Available This article combined the Taguchi method and analysis of variance with culture-based quantum-behaved particle swarm optimization to determine the optimal gating-system models for an aluminium (Al) A356 sand-casting part. First, the Taguchi method and analysis of variance were applied, respectively, to establish an L27(3^8) orthogonal array and to determine the significant process parameters, including riser diameter, pouring temperature, pouring speed, riser position and gating diameter. Subsequently, a response surface methodology was used to construct second-order regression models of filling time, solidification time and oxide ratio. Finally, culture-based quantum-behaved particle swarm optimization was used to determine the multi-objective Pareto optimal solutions and identify the corresponding process conditions. The results showed that the proposed method, compared with the initial casting model, reduced the filling time, solidification time and oxide ratio by 68.14%, 50.56% and 20.20%, respectively. A confirmation experiment verified that the method effectively reduces casting defects and improves casting quality.
Directory of Open Access Journals (Sweden)
Hamid Reza Mohammadi
2015-03-01
Full Text Available In this paper, a new control method for a quasi-Z-source cascaded multilevel inverter-based grid-connected photovoltaic (PV) system is proposed. The proposed method is capable of boosting the PV array voltage to a higher level and solves the DC-link voltage imbalance problem of traditional cascaded H-bridge inverters. The proposed control system adjusts the grid-injected current in phase with the grid voltage and achieves independent maximum power point tracking (MPPT) for the separate PV arrays. To achieve these goals, proportional-integral (PI) controllers are employed for each module. For the best performance, this paper presents an optimal approach to designing the controller parameters using particle swarm optimization (PSO). The primary design goal is to obtain a good response by minimizing the integral absolute error; the transient response is also guaranteed by minimizing the overshoot, settling time and rise time of the system response. The effectiveness of the new control method has been verified through simulation studies based on a seven-level quasi-Z-source cascaded multilevel inverter.
Li, Zhifei; Qin, Dongliang; Yang, Feng
2014-01-01
In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), a huge design space, a literature review of design space exploration was first conducted. Then, for design space exploration of an aerospace system of systems, a bilayer mapping method was put forward based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining techniques of RST (rough set theory) and SOM (self-organized mapping), the alternatives for the aerospace system-of-systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.
Li, Zhifei; Qin, Dongliang; Yang, Feng
2014-01-01
In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), a huge design space, a literature review of design space exploration was first conducted. Then, for design space exploration of an aerospace system of systems, a bilayer mapping method was put forward based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining techniques of RST (rough set theory) and SOM (self-organized mapping), the alternatives for the aerospace system-of-systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572
Directory of Open Access Journals (Sweden)
Zhifei Li
2014-01-01
Full Text Available In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), a huge design space, a literature review of design space exploration was first conducted. Then, for design space exploration of an aerospace system of systems, a bilayer mapping method was put forward based on existing experimental and operating data. Finally, the feasibility of the approach was demonstrated with an illustrative example. With the data mining techniques of RST (rough set theory) and SOM (self-organized mapping), the alternatives for the aerospace system-of-systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.
A Genetic Algorithm-Based Feature Selection
Directory of Open Access Journals (Sweden)
Babatunde Oluleye
2014-07-01
Full Text Available This article details the exploration and application of a Genetic Algorithm (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the classifiers concerned. In this work, one hundred (100) features were extracted from the set of images found in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu's 7 Moments (Hu7M), Texture Properties (TP) and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error) which enabled the GA to obtain a combinatorial set of features giving rise to optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software and in many respects outperformed the WEKA feature selectors in terms of classification accuracy.
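A binary GA with a kNN-error fitness, as described above, can be sketched in plain Python. This toy version (synthetic two-class data, leave-one-out 1-NN, truncation selection, one-point crossover, bit-flip mutation) only illustrates the principle; it is not the MATLAB GA Toolbox pipeline or the Flavia features used in the article.

```python
import random

def knn_error(data, labels, mask):
    """Leave-one-out 1-NN error using only the features where mask == 1."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 1.0                                # empty feature set: worst case
    errors = 0
    for i, (xi, yi) in enumerate(zip(data, labels)):
        best_lab, best_d = None, float("inf")
        for j, (xj, yj) in enumerate(zip(data, labels)):
            if i == j:
                continue
            d = sum((xi[f] - xj[f]) ** 2 for f in feats)
            if d < best_d:
                best_lab, best_d = yj, d
        errors += best_lab != yi
    return errors / len(data)

def ga_select(data, labels, n_feats, pop=12, gens=15, pmut=0.1, seed=1):
    """Binary GA: each chromosome is a 0/1 feature mask; fitness = 1 - kNN error."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    fitness = lambda mask: 1.0 - knn_error(data, labels, mask)
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feats)       # one-point crossover
            child = [g ^ (rng.random() < pmut) for g in a[:cut] + b[cut:]]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Synthetic data: features 0-1 track the class label, features 2-3 are pure noise.
rng = random.Random(0)
data, labels = [], []
for _ in range(30):
    y = rng.randint(0, 1)
    data.append([y + 0.1 * rng.gauss(0, 1), y + 0.1 * rng.gauss(0, 1),
                 rng.gauss(0, 1), rng.gauss(0, 1)])
    labels.append(y)
best_mask = ga_select(data, labels, 4)
```

Because parents are retained each generation, the best fitness is non-decreasing, and the GA should converge to a mask that keeps the informative features and drops the noise ones.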
Image File Security using Base-64 Algorithm
Directory of Open Access Journals (Sweden)
Pooja Guwalani
2014-12-01
Full Text Available Information security is becoming a vital component of any data storage and transmission operation. Since visual representation of data is gaining importance, data in the form of images are used to exchange and convey information between entities. As the use of digital techniques for transmitting and storing images increases, it becomes an important issue to protect the confidentiality, integrity and authenticity of images. Various techniques have been devised to encrypt images to make them more secure. The primary goal of this paper is security management: a mechanism to provide authentication of users and to ensure the integrity, accuracy and safety of images. Moreover, image-based data requires more effort during encryption and decryption. In this paper, we describe how the Base64 algorithm can be used to achieve this purpose.
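The Base64 step itself is straightforward with the standard library; the sketch below converts an image file's raw bytes into printable text and back. Note the hedge: Base64 is an encoding, not encryption, so on its own it provides transport-safe obfuscation only and would be combined with the authentication and integrity mechanisms discussed above; the payload bytes are illustrative.

```python
import base64

def encode_image_bytes(data: bytes) -> bytes:
    """Convert raw image bytes into their printable Base64 text form."""
    return base64.b64encode(data)

def decode_image_bytes(text: bytes) -> bytes:
    """Recover the original image bytes from the Base64 text form."""
    return base64.b64decode(text)

payload = b"\x89PNG\r\n\x1a\n demo image bytes"   # any binary payload works
encoded = encode_image_bytes(payload)
restored = decode_image_bytes(encoded)
```

The round trip is lossless, and the encoded form contains only printable ASCII, which is what makes it safe to embed image data in text-only channels.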
Institute of Scientific and Technical Information of China (English)
WANG Ding; ZHANG Li; WU Ying
2009-01-01
Based on the constrained total least squares (CTLS) passive location algorithm with bearing-only measurements, in this paper the same passive location problem is transformed into a structured total least squares (STLS) problem. The solution of the STLS problem for passive location can be obtained using the inverse iteration method. It is also shown that the STLS algorithm and the CTLS algorithm have the same location mean square error under certain conditions. Finally, the article presents a location and tracking algorithm for a moving target that combines the STLS location algorithm with a Kalman filter (KF). The efficiency and superiority of the proposed algorithms are confirmed by computer simulation results.
Entropy-Based Search Algorithm for Experimental Design
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
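The entropy criterion at the heart of the search can be sketched directly: score each candidate experiment by the Shannon entropy of the outcome distribution predicted by a weighted set of probable models, and pick the maximizer. The models, weights and rounding below are illustrative assumptions; the full nested-entropy-sampling threshold scheme is omitted.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def most_informative(experiments, models, weights, predict):
    """Pick the experiment whose predicted-outcome distribution has maximum entropy."""
    def outcome_entropy(e):
        dist = {}
        for m, w in zip(models, weights):
            o = predict(m, e)                     # outcome predicted by model m
            dist[o] = dist.get(o, 0.0) + w        # accumulate model weight per outcome
        return shannon_entropy(dist.values())
    return max(experiments, key=outcome_entropy)

# Three probable models y = m * x; measuring at x = 1.0 separates them best:
# at x = 0 all models agree (entropy 0), at x = 1.0 they disagree maximally.
models = [1, 2, 3]
weights = [1 / 3, 1 / 3, 1 / 3]
predict = lambda m, x: round(m * x)               # discretised predicted outcome
best_x = most_informative([0.0, 0.1, 1.0], models, weights, predict)
```

The experiment where the probable models disagree most is exactly the one whose result will discriminate between them, which is the intuition behind maximizing predicted-outcome entropy.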
Institute of Scientific and Technical Information of China (English)
麦范金; 李东普; 岳晓光
2011-01-01
The bi-directional matching method is a traditional word segmentation algorithm that can detect ambiguity but cannot resolve it. To find a better solution, this paper proposes a combined method based on the bi-directional matching method and a feature selection algorithm. Using an accumulated corpus, a Chinese word segmentation system based on the two methods was designed and implemented. Experimental results show that the new Chinese word segmentation method performs better than traditional methods.
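Bi-directional maximum matching can be sketched as follows: segment the text greedily from both ends, and flag a disagreement as an ambiguity for a downstream disambiguator (in the paper, a feature-selection-based classifier, which is not reproduced here). The tiny lexicon is an illustrative assumption built around a classic ambiguous string.

```python
def max_match(text, lexicon, max_len=4, reverse=False):
    """Greedy maximum matching, scanning forward or backward.
    Unmatched single characters are emitted as words of length 1."""
    words = []
    while text:
        for size in range(min(max_len, len(text)), 0, -1):
            piece = text[-size:] if reverse else text[:size]
            if size == 1 or piece in lexicon:
                if reverse:
                    words.insert(0, piece)
                    text = text[:-size]
                else:
                    words.append(piece)
                    text = text[size:]
                break
    return words

def bidirectional_match(text, lexicon):
    """Segment both ways; a disagreement signals an ambiguity to resolve."""
    fwd = max_match(text, lexicon)
    bwd = max_match(text, lexicon, reverse=True)
    return fwd, bwd, fwd != bwd

# Classic ambiguous string: 研究生命起源 ("research the origin of life").
lexicon = {"研究", "研究生", "生命", "起源"}
fwd, bwd, ambiguous = bidirectional_match("研究生命起源", lexicon)
```

Forward matching greedily takes 研究生 ("graduate student") and strands 命, while backward matching recovers 研究/生命/起源 ("research / life / origin"); the mismatch is exactly the ambiguity the combined method is designed to resolve.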
A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics
Directory of Open Access Journals (Sweden)
Shan Li
2014-01-01
Full Text Available With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we introduce the applications of hybrid intelligent methods, particularly those based on evolutionary algorithms, in bioinformatics. We focus on their applications to three common problems that arise in bioinformatics: feature selection, parameter estimation, and reconstruction of biological networks.
Synthetic aperture sonar movement compensation algorithm based on time-delay and phase estimation
Institute of Scientific and Technical Information of China (English)
JIANG Nan; SUN Dajun; TIAN Tan
2003-01-01
The effects of movement errors on the imaging results of synthetic aperture sonar and the necessity of movement compensation are discussed. Based on an analysis of the so-called displaced phase center algorithm, an improved algorithm is proposed. In this method, the time delay is estimated first, then the phase is estimated for the residual error, so that the range of movement error to which the algorithm is suited is extended to some extent. Computer simulation results and experimental results in a test tank using the proposed algorithm are given as well.
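The first stage, time-delay estimation, is typically done by peak-picking a cross-correlation between a reference ping and the received signal; a minimal integer-lag sketch follows (the sub-sample residual would then go to the phase-estimation stage; the signal values are illustrative).

```python
def estimate_delay(ref, sig):
    """Integer time-delay estimate: the lag maximising the correlation
    of a reference waveform with the received signal."""
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(sig) - n + 1):
        # Inner product of the reference with the signal window at this lag.
        val = sum(r * sig[lag + k] for k, r in enumerate(ref))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

ref = [0.0, 1.0, 2.0, 1.0, 0.0]
sig = [0.0] * 7 + ref + [0.0] * 5          # echo arrives 7 samples late
lag = estimate_delay(ref, sig)
```

In practice the correlation would be computed via the FFT for speed, but the brute-force form makes the matched-filter idea explicit: the correlation peaks where the reference aligns with its echo.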
A fast flexible docking method using an incremental construction algorithm.
Rarey, M; Kramer, B; Lengauer, T; Klebe, G
1996-08-23
We present an automatic method for docking organic ligands into protein binding sites. The method can be used in the design process of specific protein ligands. It combines an appropriate model of the physico-chemical properties of the docked molecules with efficient methods for sampling the conformational space of the ligand. If the ligand is flexible, it can adopt a large variety of different conformations. Each such minimum in conformational space presents a potential candidate for the conformation of the ligand in the complexed state. Our docking method samples the conformation space of the ligand on the basis of a discrete model and uses a tree-search technique for placing the ligand incrementally into the active site. For placing the first fragment of the ligand into the protein, we use hashing techniques adapted from computer vision. The incremental construction algorithm is based on a greedy strategy combined with efficient methods for overlap detection and for the search of new interactions. We present results on 19 complexes of which the binding geometry has been crystallographically determined. All considered ligands are docked in at most three minutes on a current workstation. The experimentally observed binding mode of the ligand is reproduced with 0.5 to 1.2 A rms deviation. It is almost always found among the highest-ranking conformations computed.
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, dividing context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, an experiment on mobile users with the algorithm shows that users can be classified into Basic service, E-service, Plus service, and Total service classes, and that rules about the mobile users can also be derived. Compared with the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and is simpler. PMID:24688389
Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm
Directory of Open Access Journals (Sweden)
Yong Zeng
2014-01-01
Full Text Available A situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain and whose principles are difficult for traditional algorithms to recognize and describe. To address this, estimating the parameters of very long situation sequences is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each occurred indication and its subsequent effect. It then calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence at its best, does not need data preprocessing, and can track and adapt to variations of the situation sequence continuously.
Computer Crime Forensics Based on Improved Decision Tree Algorithm
Directory of Open Access Journals (Sweden)
Ying Wang
2014-04-01
Full Text Available To find crime-related evidence and association rules among massive data, classic decision tree algorithms such as ID3 have appeared in forensic prototype systems for classification analysis. How to make them more suitable for computer forensics in variable environments has therefore become a hot issue. When selecting classification attributes, ID3 relies on the computation of information entropy, so attributes with more values tend to be selected as classification nodes of the decision tree; such classification is unrealistic in many cases. The ID3 algorithm also involves many logarithms, making it complicated to handle datasets with numerous classification attributes. Therefore, addressing the special demands of computer crime forensics, the ID3 algorithm is improved and a novel classification attribute selection method based on the Maclaurin-Priority Value First method is proposed. It adopts the change-of-base formula and infinitesimal substitution to simplify the logarithms in ID3. For the errors generated in this process, an appropriate constant is introduced and multiplied by the simplified formulas as compensation. The idea of Priority Value First is introduced to solve the problem of value deviation. The performance of the improved method is strictly proved in theory. Finally, experiments verify that our scheme has an advantage in computation time and classification accuracy compared with ID3 and two existing algorithms.
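The entropy bookkeeping that the improvement targets is the standard ID3 information gain, sketched below. The toy weather-style data is an illustrative assumption, and the Maclaurin/change-of-base simplification itself is not reproduced; this shows only the baseline criterion being simplified.

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    total = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attr):
    """ID3 splitting criterion: entropy reduction from splitting on attribute attr."""
    partitions = {}
    for row, y in zip(rows, labels):
        partitions.setdefault(row[attr], []).append(y)   # group labels by value
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return entropy(labels) - remainder

rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
labels = ["no", "no", "yes", "yes"]
gain_outlook = information_gain(rows, labels, 0)   # perfectly separates the classes
gain_temp = information_gain(rows, labels, 1)      # carries no class information
```

ID3 picks the attribute with the largest gain as the next tree node; here attribute 0 (gain 1 bit) wins over attribute 1 (gain 0), and the logarithms inside `entropy` are exactly the terms the paper's simplification approximates.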
Directory of Open Access Journals (Sweden)
D. Haritha
2012-03-01
Full Text Available A robust and efficient face recognition system was developed and evaluated. Each individual face is characterized by 2D-DCT coefficients, which follow a finite mixture of doubly truncated Gaussian distributions. In modelling the feature vector of the face, the number of components in the mixture model is determined by hierarchical clustering. The model parameters are estimated using the EM algorithm. The face recognition algorithm is developed by maximum likelihood under a Bayesian framework. The method was tested on two available face databases, namely JNTUK and Yale. The recognition rates computed for different face recognition methods reveal that the proposed method performs very well compared with the other approaches. It is also observed that the proposed system requires fewer DCT coefficients in each block and serves well with both large and small databases. The hybridization of hierarchical clustering with the model-based approach has significantly improved the recognition rate of the system even with simple features like the DCT. Keywords: face recognition system, doubly truncated Gaussian mixture model, hierarchical clustering algorithm, DCT coefficients.
CUDT: A CUDA Based Decision Tree Algorithm
Directory of Open Access Journals (Sweden)
Win-Tsung Lo
2014-01-01
Full Text Available The decision tree is one of the most famous classification methods in data mining. Many studies have been proposed focusing on improving the performance of decision trees. However, those algorithms were developed for and run on traditional distributed systems; obviously, the latency of processing the huge data generated by ubiquitous sensing nodes could not be improved without the help of new technology. In order to improve data processing latency in huge-scale data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with the traditional CPU version. The results show that CUDT is 5~55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets.
Adaptive algorithm for mobile user positioning based on environment estimation
Directory of Open Access Journals (Sweden)
Grujović Darko
2014-01-01
Full Text Available This paper analyzes the challenges of realizing an infrastructure-independent, low-cost positioning method in cellular networks based on the RSS (Received Signal Strength) parameter, an auxiliary timing parameter, and environment estimation. The proposed algorithm has been evaluated using field measurements collected from a GSM (Global System for Mobile Communications) network, but it is technology independent and can also be applied in UMTS (Universal Mobile Telecommunication System) and LTE (Long-Term Evolution) networks.
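RSS-based positioning of this kind usually rests on the log-distance path-loss model. A minimal ranging sketch follows; the reference power `p0_dbm` and path-loss exponent `n` are illustrative values of the kind the environment-estimation step would supply, not parameters taken from the paper.

```python
def rss_to_distance(rss_dbm, p0_dbm=-40.0, n=3.0, d0=1.0):
    """Invert the log-distance path-loss model
        RSS(d) = P0 - 10 * n * log10(d / d0)
    to turn a received signal strength sample (dBm) into a range estimate (m).
    P0 is the RSS at reference distance d0; n is the environment-dependent
    path-loss exponent that environment estimation would adapt online."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

d_near = rss_to_distance(-40.0)   # at the reference power: 1 m
d_far = rss_to_distance(-70.0)    # 30 dB weaker with n = 3: 10 m
```

Choosing `n` per environment (open area vs. dense urban) is precisely where the environment-estimation component earns its keep: the same 30 dB drop maps to very different distances for n = 2 and n = 4.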
SENSITIVITY ANALYSIS BASED ON LANCZOS ALGORITHM IN STRUCTURAL DYNAMICS
Institute of Scientific and Technical Information of China (English)
李书; 王波; 胡继忠
2003-01-01
Sensitivity calculation formulas in structural dynamics were developed by utilizing a mathematical theorem and new definitions of sensitivities, so the singularity problem of sensitivity with repeated eigenvalues is solved completely. To improve the computational efficiency, a reduced system is obtained based on Lanczos vectors. After incorporating the mathematical theory into the Lanczos algorithm, the approximate sensitivity solution can be obtained. A numerical example is presented to illustrate the performance of the method.
DELAUNAY-BASED SURFACE RECONSTRUCTION ALGORITHM IN REVERSE ENGINEERING
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
Triangulation of scattered points is the first important step in reverse engineering. New concepts of the dynamic circle and the closed point are put forward based on the current basic method. These new concepts narrow the region the triangulation process must search and optimize the triangles as they are produced. Dynamically updating the searching edges controls the progress of the triangulation, and the intersection judgment between a new triangle and the produced triangles is reduced to an intersection judgment between the new triangle and the searching edges. Examples illustrate the superiority of this new algorithm.
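The "dynamic circle" idea rests on the empty-circumcircle property that defines a Delaunay triangulation: no input point may lie inside the circumcircle of any triangle. The core predicate can be sketched as below; this is a minimal floating-point version (robust triangulators use exact arithmetic for the determinant), and the test points are illustrative.

```python
def in_circumcircle(a, b, c, p):
    """True if p lies strictly inside the circumcircle of triangle abc,
    where a, b, c are given in counter-clockwise order.
    Uses the standard 3x3 'in-circle' determinant after translating p
    to the origin; det > 0 means p is inside."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))        # CCW right triangle
inside = in_circumcircle(*tri, (0.5, 0.5))        # the circumcentre itself
outside = in_circumcircle(*tri, (2.0, 2.0))
```

During incremental triangulation, every candidate triangle is checked with this predicate against nearby points; maintaining a shrinking "dynamic circle" of candidates is what lets the algorithm avoid testing the whole point cloud.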
User Equilibrium Exchange Allocation Algorithm Based on Super Network
Directory of Open Access Journals (Sweden)
Peiyi Dong
2013-12-01
Full Text Available Super-network theory is an effective method for analyzing traffic networks that involve multiple kinds of decision-making, and it provides a favorable pricing decision tool because it combines a practical transport network with spatial pricing decisions. The spatial price equilibrium problem has always been an important research direction in transport economics and regional transportation planning. As to how to combine the two, this paper presents a user equilibrium exchange allocation algorithm based on the super network, which casts the classical spatial price equilibrium (SPE) problem into a super-network analysis framework. Through super-network analysis, we add two virtual nodes to the network, corresponding to a virtual super-supply node and a virtual super-demand node, analyze the equivalence between user equilibrium and the SPE equilibrium, and derive the concrete steps of the user exchange allocation algorithm based on super-network equilibrium. Finally, experiments were carried out for verification. The experiments show that the user equilibrium exchange SPE allocation algorithm based on the super network obtains the steady-state equilibrium solution, which demonstrates that the algorithm is reasonable.
Research on Wavelet-Based Algorithm for Image Contrast Enhancement
Institute of Scientific and Technical Information of China (English)
Wu Ying-qian; Du Pei-jun; Shi Peng-fei
2004-01-01
A novel wavelet-based algorithm for image enhancement is proposed in this paper. On the basis of multiscale analysis, the proposed algorithm efficiently solves the problem of noise over-enhancement, which commonly occurs in traditional methods of contrast enhancement. The decomposed coefficients at the same scale are processed by a nonlinear method, and the coefficients at different scales are enhanced to different degrees. During the procedure, the method takes full advantage of the properties of the human visual system so as to achieve better performance. Simulations demonstrate that these characteristics enable the proposed approach to fully enhance the content in images, to efficiently alleviate the enhancement of noise, and to achieve a much better enhancement effect than traditional approaches.
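A one-level Haar sketch conveys the idea: detail coefficients that exceed a noise threshold are amplified, while sub-threshold (noise-dominated) coefficients are left alone. The gain and threshold values are illustrative assumptions, and the paper's multiscale, visually weighted nonlinearity is not reproduced here.

```python
def haar_1d(x):
    """One level of the (unnormalised) Haar transform of an even-length signal:
    pairwise averages (approximation) and pairwise half-differences (detail)."""
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def enhance(signal, gain=1.5, threshold=0.05):
    """Amplify only the detail coefficients that rise above the noise floor,
    so contrast grows without boosting small (noise-dominated) details."""
    approx, detail = haar_1d(signal)
    detail = [d * gain if abs(d) > threshold else d for d in detail]
    return inverse_haar_1d(approx, detail)
```

A flat region passes through untouched, while a genuine edge gets steeper; applying different gains per decomposition level is what extends this to the multiscale scheme the paper describes.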
Faster algorithms for RNA-folding using the Four-Russians method
2014-01-01
Background The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step in which solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n^3/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. Theoretical results We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where, instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n^2/log n). Practical results We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas used to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds, while the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source-code for the
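As a concrete illustration of the cubic baseline the abstract starts from, here is a minimal sketch of Nussinov's O(n^3) dynamic program in plain Python. It omits the minimum-loop-length constraint; the pairing set and the function name are our own assumptions, and the Four-Russians speedup itself is not reproduced.

```python
# Watson-Crick plus wobble pairs; a simplifying assumption for this sketch.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq):
    """Maximum number of non-crossing complementary base pairings in seq."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best for seq[i..j]
    for span in range(1, n):                  # increasing interval length
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]               # leave base i unpaired
            if (seq[i], seq[j]) in PAIRS:     # pair base i with base j
                inner = dp[i + 1][j - 1] if i + 1 <= j - 1 else 0
                best = max(best, inner + 1)
            for k in range(i + 1, j):         # bifurcation: split at k
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

The three nested loops make the O(n^3) cost visible; the Four-Russians idea in the article attacks the innermost bifurcation loop.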
3D face recognition algorithm based on detecting reliable components
Institute of Scientific and Technical Information of China (English)
Huang Wenjun; Zhou Xuebing; Niu Xiamu
2007-01-01
Fisherfaces algorithm is a popular method for face recognition. However, there exist some unstable components that degrade recognition performance. In this paper, we propose a method based on detecting reliable components to overcome the problem and introduce it to 3D face recognition. The reliable components are detected within the binary feature vector, which is generated from the Fisherfaces feature vector based on statistical properties, and is used for 3D face recognition as the final feature vector. Experimental results show that the reliable components feature vector is much more effective than the Fisherfaces feature vector for face recognition.
Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm
Choi, Shinkook; Baek, Jongduk
2015-03-01
In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm that minimizes an energy function, calculating the gradient projection with a step size determined by the Barzilai-Borwein formulation, so it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30-degree cone angle. Although the FDK algorithm produced severe cone beam artifacts at this large cone angle, the two-pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
Institute of Scientific and Technical Information of China (English)
张时锋; 李自良
2011-01-01
The heat transfer coefficient is the main parameter for assessing the cooling capacity of a quenching medium, and it is also the key parameter for establishing thermal boundary conditions. In the inverse method, the heat transfer coefficient is treated as the unknown variable to be solved for, which classifies the task as an inverse heat conduction problem. Such problems are of great significance in practical engineering applications. This article presents a MATLAB program implementing the inverse method for the heat transfer coefficient. The program, based on the finite element method, was verified by Ansys simulations combined with experiments. The results show that the method described in this article is an effective way to calculate the heat transfer coefficient.
An Improved Image Segmentation Based on Mean Shift Algorithm
Institute of Scientific and Technical Information of China (English)
CHEN Hanfeng; QI Feihu
2003-01-01
Gray image segmentation partitions an image into homogeneous regions, with a single gray level defined for each region as the result. These gray levels are called major gray levels. The mean shift algorithm (MSA) has shown its efficiency in image segmentation. An improved gray image segmentation method based on MSA is proposed in this paper, since the usual MSA-based segmentation methods often fail on images with weak edges. The corrupted block and its J-value are defined first in the proposed method. Then, a J-matrix obtained from the corrupted blocks is proposed to measure whether weak edges appear in the image. According to the J-matrix, the major gray levels obtained with the usual MSA-based segmentation methods are augmented, and the corresponding allocation windows are modified to detect weak edges. Experimental results demonstrate the effectiveness of the proposed method in gray image segmentation.
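The mode-seeking idea behind MSA can be illustrated in one dimension on gray values. The sketch below uses a flat kernel with bandwidth and merge-tolerance parameters of our own choosing (it does not reproduce the paper's J-value machinery): each start point climbs to a density mode, and the distinct modes play the role of the major gray levels.

```python
def mean_shift_modes(values, bandwidth=10.0, iters=50, tol=0.5):
    """1-D flat-kernel mean shift: return the distinct density modes."""
    modes = []
    for v in values:
        x = float(v)
        for _ in range(iters):
            # mean shift step: move x to the mean of its neighborhood
            window = [u for u in values if abs(u - x) <= bandwidth]
            x = sum(window) / len(window)
        # keep x only if it is a new mode (not within tol of a known one)
        if not any(abs(x - m) <= tol for m in modes):
            modes.append(x)
    return sorted(modes)
```

On a toy histogram with two clusters of gray values, the two returned modes are the candidate major gray levels.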
DOBD Algorithm for Training Neural Network: Part I. Method
Institute of Scientific and Technical Information of China (English)
吴建昱; 何小荣
2002-01-01
Overfitting is one of the important problems that restrain the application of neural networks. The traditional OBD (Optimal Brain Damage) algorithm can avoid overfitting effectively, but it needs to train the network repeatedly, with low computational efficiency. In this paper, the Marquardt algorithm is incorporated into the OBD algorithm and a new method for pruning networks, Dynamic Optimal Brain Damage (DOBD), is introduced. This algorithm simplifies a network and obtains good generalization by dynamically deleting weight parameters with low sensitivity, defined as the change of the error function value with respect to the change of the weights. A simplified method is also presented through which sensitivities can be calculated during training with little computation. A rule to determine the lower limit of sensitivity for deleting unnecessary weights, and other control methods used during pruning and training, are introduced. The training course is analyzed theoretically, and the reason why the DOBD algorithm obtains a much faster training speed than the OBD algorithm while avoiding overfitting effectively is given.
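The sensitivity-based deletion step can be sketched as follows. The diagonal-Hessian saliency 0.5 * h_ii * w_i^2 is the classical OBD approximation; the function name and the fixed-threshold rule here are illustrative assumptions, not the DOBD procedure itself.

```python
def prune_weights(weights, hessian_diag, threshold):
    """Zero out weights whose OBD-style saliency falls below `threshold`.

    saliency_i = 0.5 * h_ii * w_i^2, assuming a diagonal Hessian estimate.
    """
    pruned = []
    for w, h in zip(weights, hessian_diag):
        saliency = 0.5 * h * w * w
        pruned.append(0.0 if saliency < threshold else w)
    return pruned
```

In DOBD this evaluation happens dynamically during training rather than after repeated retraining, which is where the speedup described above comes from.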
Image Encryption Algorithm Based on Chaotic Economic Model
Askar, S. S.; Karawia, A. A.; Alshamrani, Ahmad
2015-01-01
In the literature, chaotic economic systems have received much attention because of their complex dynamic behaviors, such as bifurcation and chaos. Recently, a few studies on the use of these systems in cryptographic algorithms have been conducted. In this paper, a new image encryption algorithm based on a chaotic economic map is proposed. An implementation of the proposed algorithm on a plain image based on the chaotic map is performed. The obtained results show that the proposed algorithm can su...
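The keystream idea behind such chaos-based ciphers can be sketched as follows. Note that the logistic map below is a stand-in for the paper's chaotic economic map, and the parameter values and function names are arbitrary assumptions for illustration only.

```python
def logistic_keystream(x0, r, n):
    """Generate n keystream bytes by iterating the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)   # quantize the chaotic state to a byte
    return out

def xor_cipher(pixels, x0=0.3141, r=3.99):
    """XOR each pixel byte with the chaotic keystream; applying it twice decrypts."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

Because XOR is an involution, running the same function on the ciphertext with the same (x0, r) key recovers the plain pixels; sensitivity of the map to (x0, r) is what gives the scheme its key space.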
A Trust-region-based Sequential Quadratic Programming Algorithm
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Poulsen, Niels Kjølstad
This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
Institute of Scientific and Technical Information of China (English)
冯林; 李聪; 沈莉
2013-01-01
Facial expression feature selection is one of the hot issues in the field of facial expression recognition. A novel facial expression feature selection method named Feature Selection based on Neighborhood Rough Set Theory and Quantum Genetic Algorithm (FSNRSTQGA) is proposed. First, an evaluation criterion for the optimal expression feature subset is established based on neighborhood rough set theory and used as the fitness function. Then, using the evolutionary strategy of the quantum genetic algorithm, an approach to facial expression feature selection is proposed. The results of simulations on the Cohn-Kanade expression dataset illustrate that the FSNRSTQGA method is effective.
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
Directory of Open Access Journals (Sweden)
Yu Lifang
2010-01-01
Full Text Available We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose an improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos, whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving histogram characteristics and providing high capacity.
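A minimal sketch of chaos-shuffled LSB embedding follows. A logistic-map rank-order permutation with fixed parameters stands in for the paper's GA-selected chaos parameters, and all function names are our own; real JPEG-domain embedding operates on quantized DCT coefficients rather than this flat pixel list.

```python
def chaos_permutation(n, x0=0.7, r=3.97):
    """Permutation of range(n) given by the rank order of logistic-map states."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

def embed_lsb(pixels, bits):
    """Shuffle message bits chaotically, then write them into pixel LSBs."""
    perm = chaos_permutation(len(bits))
    shuffled = [bits[p] for p in perm]
    return [(px & ~1) | b for px, b in zip(pixels, shuffled)] + pixels[len(bits):]

def extract_lsb(stego, nbits):
    """Read the LSBs and invert the chaotic shuffle to recover the message."""
    perm = chaos_permutation(nbits)
    shuffled = [px & 1 for px in stego[:nbits]]
    out = [0] * nbits
    for i, p in enumerate(perm):
        out[p] = shuffled[i]
    return out
```

Both sides regenerate the same permutation from the shared (x0, r) key, so extraction exactly inverts embedding, and each cover pixel changes by at most 1.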
A TDOA location algorithm based on data fusion
Institute of Scientific and Technical Information of China (English)
LIU Jun-min; ZHANG Chen; LIU Shi
2006-01-01
A new positioning method for mobile networks is presented. Based on data fusion technology, it performs multi-layer information fusion on the location estimates obtained by the Chan algorithm, which effectively increases mobile positioning accuracy using only measured time differences of arrival (TDOA) of signals. The method is simple and practical, especially when the location estimates are corrupted by non-line-of-sight (NLOS) error. It not only has high positioning accuracy but also reduces the location failure probability. Results from computer simulation show that the proposed method is effective in various environments.
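The hyperbolic range-difference criterion underlying TDOA positioning can be illustrated with a brute-force grid search. The Chan algorithm referenced above solves this in closed form; the sketch below, with an assumed anchor layout and grid parameters, only shows the residual that any TDOA solver is minimizing.

```python
import math

def tdoa_locate(anchors, range_diffs, step=0.25, lim=10.0):
    """Grid search for the point whose range differences to the anchors
    (relative to anchors[0]) best match the TDOA-derived measurements."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best, best_err = None, float("inf")
    y = -lim
    while y <= lim:
        x = -lim
        while x <= lim:
            err = sum((dist((x, y), a) - dist((x, y), anchors[0]) - rd) ** 2
                      for a, rd in zip(anchors[1:], range_diffs))
            if err < best_err:
                best, best_err = (x, y), err
            x += step
        y += step
    return best

# demo: three anchors, range differences generated from a known source at (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
rdiffs = [math.hypot(truth[0] - a[0], truth[1] - a[1])
          - math.hypot(truth[0] - anchors[0][0], truth[1] - anchors[0][1])
          for a in anchors[1:]]
est = tdoa_locate(anchors, rdiffs)
```

NLOS error enters as a positive bias on the measured range differences, which is what the fusion layer in the abstract is designed to suppress.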
Acoustic Environments: Applying Evolutionary Algorithms for Sound based Morphogenesis
DEFF Research Database (Denmark)
Foged, Isak Worre; Pasold, Anke; Jensen, Mads Brath
2012-01-01
The research investigates the application of evolutionary computation to sound-based morphogenesis. It does so by using the Sabine equation as a performance benchmark in the development of the spatial volume and reflectors, effectively creating the architectural expression as a whole. Additional algorithms are created and used to organise the entire set of 200 reflector components and manufacturing constraints based upon the GA studies. An architectural pavilion is created based upon the studies, illustrating the applicability of both the developed methods and techniques.
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially if a seizure occurs, in which case it is important to localize it. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at different electrodes, is more difficult because it may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow distinguishing between activities involving sources close to each other. Thus, sources of interest may be obscured or not detected, and a known source localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov, which calculates a solution that is the best compromise between two cost functions to minimize: one related to the fitting of the data, and another concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. Relative to the model considered for the head and brain sources, the result obtained allows to
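The regularization trade-off described above can be sketched for the simplest case, x = argmin ||Ax - b||^2 + lam*||x||^2, solved via the normal equations (A^T A + lam I) x = A^T b. This is a generic Tikhonov sketch with our own function name, not the paper's sparsity-weighted formulation.

```python
def tikhonov(A, b, lam):
    """Solve the Tikhonov-regularized least squares problem for x.

    A is a list of rows (m x n), b a length-m vector, lam >= 0 the weight
    balancing data fit against solution size.
    """
    m, n = len(A), len(A[0])
    # Normal-equation system: M = A^T A + lam*I, rhs = A^T b
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

Increasing lam shrinks the solution toward zero (the regularity term wins); lam = 0 recovers ordinary least squares. Sparsity-promoting variants replace the ||x||^2 penalty with weighted or L1-type terms.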
Performance evaluation of sensor allocation algorithm based on covariance control
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
The covariance control capability of sensor allocation algorithms based on the covariance control strategy is an important index for evaluating the performance of these algorithms. Owing to the lack of standard performance metric indices to evaluate covariance control capability, sensor allocation ratio, etc., there are no guides to follow in the design of sensor allocation algorithms in practical applications. To meet these demands, three quantified performance metric indices are presented: average covariance misadjustment quantity (ACMQ), average sensor allocation ratio (ASAR), and matrix metric influence factor (MMIF), which quantify the covariance control capability, the usage of sensor resources, and the robustness of the sensor allocation algorithm, respectively. Meanwhile, a covariance-adaptive sensor allocation algorithm based on a new objective function is proposed to improve the covariance control capability of the algorithm based on information gain. The experimental results show that the proposed algorithm has an advantage over the preceding sensor allocation algorithm in covariance control capability and robustness.
MEDICAL IMAGE SEGMENTATION BASED ON A MODIFIED LEVEL SET ALGORITHM
Institute of Scientific and Technical Information of China (English)
Yang Yong; Lin Pan; Zheng Chongxun; Gu Jianwen
2005-01-01
Objective To present a novel modified level set algorithm for medical image segmentation. Methods The algorithm is developed by substituting the speed function of the level set algorithm with the region and gradient information of the image, instead of the conventional gradient information alone. The new algorithm has been tested on a series of medical images of different modalities. Results We present various examples and also evaluate and compare the performance of our method with the classical level set method on weak boundaries and noisy images. Conclusion Experimental results show the proposed algorithm is effective and robust.
A Method of Network Traffic Identification Based on Improved Clustering Algorithms
Institute of Scientific and Technical Information of China (English)
王宇科; 黎文伟; 苏欣
2011-01-01
The automatic detection of applications associated with network traffic is very important for network security and traffic management. Unfortunately, because some applications, such as P2P and VOIP, use dynamic port numbers, masquerading techniques, and encryption, it is difficult to identify them using simple port-based analysis or classification of packet payloads. Many research works have proposed using clustering algorithms to identify network traffic, but these algorithms have defects in how they choose the cluster centers and the number of clusters. In this paper, we first use the Weighting D2 algorithm to improve the selection of the initial cluster centers and use the value of NMI (Normalized Mutual Information) to ascertain the number of clusters, thereby obtaining an improved clustering algorithm; we then propose an application-level identification method based on this algorithm. The experimental results, especially for P2P applications, show that this method reaches 90% accuracy or more, with lower false positive and false rejection rates.
A Location-Based Business Information Recommendation Algorithm
Directory of Open Access Journals (Sweden)
Shudong Liu
2015-01-01
Full Text Available Recently, much research on location-based information recommendation (e.g., POIs, ads) has been done in both academia and industry. In this paper, we first construct a region-based location graph (RLG), in which each region node connects with user nodes and business information nodes, and then we propose a location-based recommendation algorithm based on the RLG, which can combine user short-range mobility formed by daily activity with long-distance mobility formed by social network ties, and accordingly can recommend both local and long-distance business information to users. Moreover, it can combine user-based collaborative filtering with item-based collaborative filtering, and it can alleviate the cold start problem from which traditional recommender systems often suffer. Empirical studies on large-scale real-world data from Yelp demonstrate that our method outperforms other methods in recommendation accuracy.
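One ingredient the abstract mentions, item-based collaborative filtering with cosine similarity, can be sketched as follows. The rating-matrix layout, the zero-means-unrated convention, and the function names are illustrative assumptions; the RLG construction itself is not reproduced.

```python
import math

def cosine(a, b):
    """Cosine similarity between two rating columns (0 = unrated)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def score_item(ratings, user, item):
    """Predict `user`'s rating for `item` as a similarity-weighted average
    of that user's ratings on other items.

    `ratings` maps item id -> list of ratings indexed by user id.
    """
    num = den = 0.0
    for other, col in ratings.items():
        if other == item or col[user] == 0:   # skip the target and unrated items
            continue
        s = cosine(ratings[item], col)
        num += s * col[user]
        den += abs(s)
    return num / den if den else 0.0
```

In the paper's hybrid setting, such item-based scores would be blended with user-based scores and the RLG's regional connectivity.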
Efficient Algorithms for Global Optimisation Methods in Computer Vision (Dagstuhl Seminar 11471)
Bruhn, Andrés; Pock, Thomas; Tai, Xue-Cheng
2012-01-01
This report documents the program and the results of Dagstuhl Seminar 11471 Efficient Algorithms for Global Optimisation Methods in Computer Vision, taking place November 20-25 in 2011. The focus of the seminar was to discuss the design of efficient computer vision algorithms based on global optimisation methods in the context of the entire design pipeline. Since there is no such conference that deals with all aspects of the design process -- modelling, mathematical analy...
Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization
Chiu, Chung-Cheng; Ting, Chih-Chung
2016-01-01
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. Besides, it also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
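For reference, the baseline histogram equalization that VCEA and CegaHE refine can be sketched on a flat list of 8-bit gray values; the gap-adjustment equation itself is not reproduced here, and the function name is our own.

```python
def equalize(pixels, levels=256):
    """Classic global histogram equalization on a flat list of gray values."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the histogram
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first nonzero CDF value
    def remap(p):
        if n == cdf_min:                       # constant image: nothing to do
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [remap(p) for p in pixels]
```

The remap stretches occupied gray levels across the full range, which is exactly where the over-enhancement the abstract discusses comes from: large histogram bins grab disproportionately wide output gaps.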
An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique
Institute of Scientific and Technical Information of China (English)
SHI Yan; HUANG Cong-ming
2006-01-01
An improved particle swarm optimization (PSO) algorithm based on an ensemble technique is presented. The algorithm combines some previous best positions (pbest) of the particles to get an ensemble position (Epbest), which is used to replace the global best position (gbest). It is compared with the standard PSO algorithm invented by Kennedy and Eberhart and with some improved PSO algorithms on three different benchmark functions. The simulation results show that the improved PSO based on the ensemble technique obtains better solutions than the standard PSO and some other improved algorithms in all test cases.
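A minimal sketch of the ensemble idea follows, assuming Epbest is the mean of all personal bests; the abstract does not specify the combination rule, so that choice, the coefficient values, and the function name are our assumptions.

```python
import random

def pso_ensemble(f, dim, n_particles=20, iters=200, seed=1):
    """Minimize f over R^dim with PSO, steering toward the mean of all
    personal bests (Epbest) instead of a single global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    for _ in range(iters):
        # ensemble position: mean of the personal bests
        epbest = [sum(pb[d] for pb in pbest) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (epbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest_val[i], pbest[i] = v, pos[i][:]
    best = min(range(n_particles), key=lambda i: pbest_val[i])
    return pbest[best], pbest_val[best]

# demo: 2-D sphere function
best_pos, best_val = pso_ensemble(lambda p: sum(c * c for c in p), 2)
```

Replacing gbest with an ensemble position spreads the attraction over several good regions, which is the diversity argument behind the improvement reported above.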
NOVEL QUANTUM-INSPIRED GENETIC ALGORITHM BASED ON IMMUNITY
Institute of Scientific and Technical Information of China (English)
Li Ying; Zhao Rongchun; Zhang Yanning; Jiao Licheng
2005-01-01
A novel algorithm, the Immune Quantum-inspired Genetic Algorithm (IQGA), is proposed by introducing immune concepts and methods into the Quantum-inspired Genetic Algorithm (QGA). While preserving QGA's advantages, IQGA utilizes the characteristics and knowledge of the problem at hand to restrain repeated and ineffective operations during evolution, so as to improve algorithm efficiency. The experimental results on the knapsack problem show that the performance of IQGA is superior to the Conventional Genetic Algorithm (CGA), the Immune Genetic Algorithm (IGA), and QGA.
Development of antibiotic regimens using graph based evolutionary algorithms.
Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M
2013-12-01
This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.
Research on Damage Detection Method Based on Trust Region Algorithm
Institute of Scientific and Technical Information of China (English)
王天辉; 马立元; 郝永峰; 李世龙
2013-01-01
A least-squares objective function based on flexibility is put forward in this paper. The identification problem is transformed into an optimization problem by minimizing the differences between the experimental and analytical modal flexibility. The trust region approach is used to solve the optimization problem, making the process more robust and reliable and solving the problems that arise when the flexibility sensitivity method is applied to complex structures. Finally, the feasibility and effectiveness of the method are verified by numerical simulation of damage to the frame of a missile launch pad.
A research on fast FCM algorithm based on weighted sample
Institute of Scientific and Technical Information of China (English)
KUANG Ping; ZHU Qing-xin; WANG Ming-wen; CHEN Xu-dong; QING Li
2006-01-01
To improve the computational performance of the fuzzy C-means (FCM) algorithm for clustering large datasets, the concepts of equivalent samples and weighted samples, based on the eigenvalue distribution of the samples in the feature space, were introduced, and a novel fast clustering algorithm named weighted fuzzy C-means (WFCM), derived from the traditional FCM algorithm, was put forward. It was proved that the two clustering algorithms, WFCM and FCM, produce equivalent cluster results on a dataset. Furthermore, the WFCM algorithm has better computational performance than the ordinary FCM algorithm. An experiment on gray image segmentation showed that the WFCM algorithm is a fast and effective clustering algorithm.
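The weighted center update at the heart of WFCM can be sketched in one dimension for two clusters. The per-sample weights stand for collapsed equivalent samples (several identical points replaced by one weighted representative); the initialization, fuzzifier, and function name are illustrative assumptions.

```python
def wfcm(xs, ws, m=2.0, iters=50):
    """Two-cluster weighted fuzzy C-means on 1-D data; ws are sample weights."""
    centers = [min(xs), max(xs)]                          # crude initialization
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]     # avoid division by zero
            U.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(2))
                      for i in range(2)])
        # weighted center update: weights multiply the fuzzified memberships
        centers = [sum(w * (u[i] ** m) * x for x, w, u in zip(xs, ws, U))
                   / sum(w * (u[i] ** m) for w, u in zip(ws, U))
                   for i in range(2)]
    return sorted(centers)
```

Because the weights enter the center update exactly like duplicated samples, clustering the weighted representatives reproduces the FCM result on the full dataset at a fraction of the cost, which is the equivalence the abstract claims.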
A Novel Method to Implement the Matrix Pencil Super Resolution Algorithm for Indoor Positioning
Directory of Open Access Journals (Sweden)
Tariq Jamil Saifullah Khanzada
2011-10-01
Full Text Available This article presents estimation results for the algorithms implemented to estimate delays and distances for an indoor positioning system. Data sets for the transmitted and received signals were captured in typical outdoor and indoor areas, and super resolution estimation algorithms were applied. Different state-of-the-art and super resolution techniques are applied to obtain optimal estimates of the delays and distances between the transmitted and received signals, and a novel method for the Matrix Pencil algorithm is devised. The algorithms perform variably in different scenarios of transmitter and receiver positions. Two scenarios are examined: in the single-antenna scenario, super resolution techniques such as ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) and the Matrix Pencil algorithm give optimal performance compared to the conventional techniques. In the two-antenna scenario, Root-MUSIC and the Matrix Pencil algorithm performed better than the other algorithms for distance estimation; however, the accuracy of all the algorithms is worse than in the single-antenna scenario. In all cases, our devised Matrix Pencil algorithm achieved the best estimation results.
Institute of Scientific and Technical Information of China (English)
贾冀婷
2015-01-01
Improving the automation of test case generation in software testing is very important for guaranteeing software quality and reducing development cost. In this paper, we propose an automatic test case generation method based on a hybrid of particle swarm optimization, the artificial bee colony algorithm, and the K-means clustering algorithm, and carry out simulation experiments. The results show that, for automatic test case generation, the improved algorithm's efficiency is higher and its convergence ability stronger than those of other algorithms such as basic particle swarm optimization and the genetic algorithm.
Institute of Scientific and Technical Information of China (English)
郑慧杰; 刘弘; 郑向伟
2012-01-01
Concerning the problems that traditional path planning for group animation needs a long search time and has poor optimization ability, the authors propose a multi-threaded path planning algorithm based on group search optimization. Firstly, to address the tendency of the algorithm to get trapped in local optima, the Metropolis rule from simulated annealing is introduced into the search mode. Secondly, by combining multi-threading with random path stitching techniques, the algorithm is applied to path planning. The simulation results show that the algorithm has good global convergence in both high-dimensional and low-dimensional cases, and that the method meets the requirements of path planning in complex animation environments.
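The Metropolis rule grafted onto a local search can be sketched as follows; the step size, cooling schedule, and function names are assumed values for illustration, not the paper's group-search formulation.

```python
import math, random

def metropolis_accept(delta, temperature, rng):
    """Accept a worse move (delta > 0) with probability exp(-delta/T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def anneal(f, x0, step=0.5, t0=2.0, cooling=0.95, iters=500, seed=7):
    """Minimize a 1-D function with random steps and Metropolis acceptance."""
    rng = random.Random(seed)
    x, t = x0, t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if metropolis_accept(f(cand) - f(x), t, rng):
            x = cand                      # accept (always if better, sometimes if worse)
        t *= cooling                      # geometric cooling
    return x

# demo: minimize a 1-D quadratic starting from a poor initial guess
x_opt = anneal(lambda v: (v - 3.0) ** 2, 0.0)
```

Accepting occasional worse moves while the temperature is high is exactly what lets the search escape local optima, which is the role the abstract assigns to the Metropolis rule inside group search optimization.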
A Method for Path Planning of UAVs Based on Improved Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
鲁艺; 吕跃; 罗燕; 张亮; 赵志强; 唐隆
2012-01-01
An improved genetic algorithm is proposed to solve the problem of UAV path planning in an actual battlefield. First, the planning search space is generated by means of a skeleton algorithm, the information in the search space is extracted, and the kill probability at path points in the search space is calculated. Then, using the information of the planning search space and a special gene coding mode, K candidate paths are obtained with the genetic algorithm, improving path planning efficiency. According to the path selection rules, the optimal path is obtained and smoothed using different step lengths, finally yielding a flyable path that meets the safety and maneuverability requirements of the UAV.
ECONOMIC PRICING BASED JOB SCHEDULING ALGORITHM
Directory of Open Access Journals (Sweden)
V. K. Manavalasundaram
2012-06-01
Full Text Available The incentive approach provides a fair basis for successfully managing the decentralization and heterogeneity present in human economies. Competitive economic models provide algorithms, policies, and tools for resource sharing or allocation in Grid systems. These models can be based on bartering or on prices. In the Demand Pricing Model, all participants need to own resources and trade them by exchange (e.g., storage space for CPU time). In the Best Bid Pricing Model, the resources have a price, set through a negotiation mechanism. Conventional scheduling approaches aim to enhance system throughput and utilization and to complete execution at the earliest possible time, rather than to improve the utility of application processing; they do not take resource access cost (price) into consideration, which means that the value of processing applications at any time is treated the same. The end user does not want to pay the highest price but wants to negotiate a particular price based on demand, value, priority, and the available budget. In an economic approach, the scheduling decisions are made dynamically at runtime, driven and directed by the end users' requirements.
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve this high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowdsourced information provided by a large number of users as they walk through the buildings as the source of location fingerprint data. Through the variation characteristics of the users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions for the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints and acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction while guaranteeing localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem in establishing a location fingerprint database. PMID:27070623
Indexing Algorithm Based on Improved Sparse Local Sensitive Hashing
Directory of Open Access Journals (Sweden)
Yiwei Zhu
2014-01-01
Full Text Available In this article, we propose a new semantic hashing algorithm to address newly emerging problems such as the difficulty of similarity measurement in high-dimensional data. Building on locality-sensitive hashing and spectral hashing, we introduce sparse principal component analysis (SPCA) to reduce the dimensionality of the data set and exclude redundancy in the parameter list, making high-dimensional indexing and retrieval faster and more efficient. Meanwhile, we employ a Boosting algorithm from machine learning to determine the hashing threshold, improving its adaptivity to real data and extending its range of application. According to the experiments, this method not only performs satisfactorily on multimedia data sets such as images and texts, but also outperforms common indexing methods.
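The locality-sensitive-hashing primitive the algorithm builds on can be sketched with random sign projections; the SPCA dimensionality reduction and the boosted threshold selection from the abstract are not reproduced here, and the data points are illustrative.

```python
import numpy as np

# Minimal sketch of random-projection (sign) hashing, the basic
# locality-sensitive-hashing primitive: nearby points agree on most bits.

def hash_codes(X, n_bits, seed=0):
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))  # random hyperplanes
    return (X @ planes >= 0).astype(np.uint8)           # one bit per hyperplane

def hamming_sim(a, b):
    return 1.0 - np.mean(a != b)  # fraction of agreeing bits

X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.2]])
codes = hash_codes(X, n_bits=64)
# near-duplicate points share more bits than near-opposite ones
print(hamming_sim(codes[0], codes[1]), hamming_sim(codes[0], codes[2]))
```

For sign projections, the probability that two points collide on a bit is 1 minus their angle over π, which is why Hamming similarity of the codes approximates angular similarity of the data.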
A RBF Network Learning Scheme Using Immune Algorithm Based on Information Entropy
Institute of Scientific and Technical Information of China (English)
GONG Xin-bao; ZANG Xiao-gang; ZHOU Xi-lang
2005-01-01
A hybrid learning method combining an immune algorithm and the least square method is proposed to design radial basis function (RBF) networks. The immune algorithm, based on information entropy, is used to determine the structure and parameters of the nonlinear RBF hidden layer, and the weights of the linear RBF output layer are computed with the least square method. By introducing diversity control and an immune memory mechanism, the algorithm improves efficiency and overcomes the premature convergence problem of genetic algorithms. Computer simulations demonstrate that RBF networks designed with this method have fast convergence speed and good performance.
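The two-stage structure can be sketched as follows. The hidden-layer centres and widths are simply fixed here, standing in for the immune algorithm's search; only the least-squares solution of the linear output layer corresponds directly to the abstract's second stage.

```python
import numpy as np

# Minimal sketch of a two-stage RBF network design: fixed Gaussian hidden
# units (placeholder for the immune-algorithm stage) plus a least-squares
# output layer, as described in the abstract.

def rbf_design_matrix(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))  # Gaussian activations

def fit_output_weights(X, y, centers, width):
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output layer
    return w

def predict(X, centers, width, w):
    return rbf_design_matrix(X, centers, width) @ w

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
centers = np.linspace(-3, 3, 10)[:, None]  # hand-chosen centres (illustrative)
w = fit_output_weights(X, y, centers, width=0.8)
err = np.max(np.abs(predict(X, centers, width=0.8, w=w) - y))
```

With the hidden layer fixed, the output weights are a linear least-squares problem, which is why this stage is fast and has a closed-form solution.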
Memoryless cooperative graph search based on the simulated annealing algorithm
Institute of Scientific and Technical Information of China (English)
Hou Jian; Yan Gang-Feng; Fan Zhen
2011-01-01
We study the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and in an unknown environment, respectively. It is shown that under both proposed control strategies the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent searching to simultaneously reduce computational complexity and accelerate convergence, building on the single-agent algorithms. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
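The simulated-annealing acceptance rule on a graph can be sketched as below. The graph, the cost values and the cooling schedule are illustrative, not the paper's control strategy.

```python
import math, random

# Minimal sketch of simulated-annealing search over a graph: from the current
# node a random neighbour is proposed and accepted with the Metropolis rule,
# so the walk settles on a low-cost node as the temperature cools.

def anneal_on_graph(adj, cost, start, T0=2.0, alpha=0.95, steps=500, seed=0):
    rnd = random.Random(seed)
    node, T = start, T0
    for _ in range(steps):
        nxt = rnd.choice(adj[node])
        d = cost[nxt] - cost[node]
        if d <= 0 or rnd.random() < math.exp(-d / T):  # Metropolis acceptance
            node = nxt
        T = max(alpha * T, 1e-6)  # geometric cooling schedule
    return node

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path graph
cost = {0: 3.0, 1: 2.0, 2: 1.0, 3: 0.0}        # global optimum at node 3
print(anneal_on_graph(adj, cost, start=0))
```

Early on, the high temperature lets the walk climb out of local minima; as the temperature shrinks, uphill moves are rejected and the agent is trapped near a globally optimal node.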
Fast Non-Local Means Algorithm Based on Krawtchouk Moments
Institute of Scientific and Technical Information of China (English)
吴一全; 戴一冕; 殷骏; 吴健生
2015-01-01
The non-local means (NLM) method is a state-of-the-art denoising algorithm, which replaces each pixel with a weighted average of all the pixels in the image. However, its huge computational complexity makes it impractical for real applications. Thus, a fast non-local means algorithm based on Krawtchouk moments is proposed to improve denoising performance and reduce computing time. Krawtchouk moments of each image patch are calculated and used in the subsequent similarity measure for weighted averaging. Instead of computing the Euclidean distance between two image patches, the similarity measure is obtained from low-order Krawtchouk moments, which greatly reduces the computational complexity. Since Krawtchouk moments can extract local features and have good anti-noise ability, they can separate the useful information from noise and provide an accurate similarity measure. Detailed experiments demonstrate that the proposed method outperforms the original NLM method and other moment-based methods under a comprehensive consideration of subjective visual quality, method noise, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and computing time. Most importantly, the proposed method is around 35 times faster than the original NLM method.
Institute of Scientific and Technical Information of China (English)
郭惠勇; 王磊; 李正良
2011-01-01
In order to solve the structural multi-damage identification problem, a two-stage identification method based on the particle swarm optimization (PSO) algorithm and Bayesian theory is proposed. In this method, structural modal strain energy (MSE) and frequency are considered as two information sources, and methods based on frequency change and the MSE dissipation ratio are used to extract preliminary damage information. Bayesian fusion theory is then utilized to integrate the two information sources and detect the probable structural damage locations. Finally, the PSO algorithm is adopted to identify the structural damage locations and extents more precisely. Considering that a simple PSO algorithm easily falls into local optima, three improved strategies are presented: particle position mutation, elitist micro-search, and a double convergence criterion. The simulation results for a two-dimensional truss structure show that Bayesian fusion can effectively identify the suspected damage elements, that the improved PSO algorithm can then identify the damage locations and extents more precisely, and that the identification accuracy of the PSO algorithm with the three improved strategies is clearly superior to that of other PSO algorithms and the genetic algorithm.
Directory of Open Access Journals (Sweden)
K. Kumaravel
2015-05-01
Full Text Available A Wireless Mesh Network (WMN) uses the latest technology to provide end users a high-quality service referred to as the Internet’s “last mile”. Multicast communication is also one of the most important technologies employed in WMNs. Among the several issues addressed by every WMN technology during data transmission, routing is a significant one. The IEEE 802.11s standard sets out the procedures to be followed to facilitate interconnection and thus devise an appropriate WMN. Several protocols have been introduced, mainly devised on the basis of machine learning and artificial intelligence. Multi-path routing, which transmits data over several paths, has proved a useful strategy for achieving reliability in a WMN, though multi-path routing alone cannot guarantee deterministic transmission, since multiple paths are available for data transmission from the source to the destination node. The algorithms employed in earlier studies did not take into consideration routing metrics, including energy-aware metrics, for path selection during data transfer. The following study proposes a hybrid multipath routing algorithm that takes into consideration routing metrics, including energy and minimal loss, for efficient path selection and data transfer. The proposed algorithm has two phases. In the first phase, Prim’s algorithm is employed for route discovery in the network. In the second, a hybrid firefly algorithm based on harmony search is employed to select the most suitable path through proper analysis of metrics, including energy awareness and minimal loss, for every path that has
Uzawa Type Algorithm Based on Dual Mixed Variational Formulation
Institute of Scientific and Technical Information of China (English)
王光辉; 王烈衡
2002-01-01
Based on the dual mixed variational formulation with three variables (stress, displacement, and displacement on the contact boundary) of the unilateral beam problem and its finite element discretization, an Uzawa-type iterative algorithm is presented. The convergence of this iterative algorithm is proved, and the efficiency of the algorithm is then tested by a numerical example.
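The Uzawa iteration for a generic saddle-point system, the algebraic structure underlying dual mixed formulations, can be sketched as follows; the matrices below are illustrative, not the discretized beam problem.

```python
import numpy as np

# Minimal sketch of a classical Uzawa iteration for the saddle-point system
#   A u + B^T lam = f,   B u = g:
# solve for the primal variable with the multiplier frozen, then move the
# multiplier along the constraint residual.

def uzawa(A, B, f, g, rho=1.0, iters=100):
    lam = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ lam)  # primal step
        lam = lam + rho * (B @ u - g)          # dual (multiplier) update
    return u, lam

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 1.0]])   # constraint u1 + u2 = g
f = np.array([1.0, 0.0])
g = np.array([1.0])
u, lam = uzawa(A, B, f, g, rho=1.0)
print(u, lam)
```

The iteration converges for step sizes 0 < ρ < 2/‖B A⁻¹ Bᵀ‖; for this example the exact saddle point is u = (0.75, 0.25) with multiplier λ = −0.5.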
Speech Segmentation Algorithm Based On Fuzzy Memberships
Luis D. Huerta; Jose Antonio Huesca; Julio C. Contreras
2010-01-01
In this work, a text-independent automatic speech segmentation algorithm was implemented. In the algorithm, the use of fuzzy memberships on each characteristic in different speech sub-bands is proposed, so the segmentation is performed in greater detail. Additionally, we tested various speech signal frequencies and labelings, and observed how they affect the performance of the phoneme segmentation process. The speech segmentation algorithm used is described. During th...
An interactive segmentation method based on superpixel
DEFF Research Database (Denmark)
Yang, Shu; Zhu, Yaping; Wu, Xiaoyu
2015-01-01
This paper proposes an interactive image-segmentation method based on superpixels. To achieve fast segmentation, the method establishes a Graph-cut model using superpixels as nodes, and a new energy function is proposed. Experimental results demonstrate that the authors' method has excellent performance in terms of segmentation accuracy and computation efficiency compared with other segmentation algorithms based on pixels.
A New Function-based Framework for Classification and Evaluation of Mutual Exclusion Algorithms
Directory of Open Access Journals (Sweden)
Leila Omrani
2011-05-01
Full Text Available This paper presents a new function-based framework for mutual exclusion algorithms in distributed systems. In the traditional classification, mutual exclusion algorithms were divided into two groups: token-based and permission-based. Recently, some new algorithms have been proposed in order to increase fault tolerance, minimize message complexity and decrease synchronization delay. Although the studies in this field up to now can compare and evaluate the algorithms, this paper takes a step further and proposes a new function-based framework as a brief introduction to the algorithms in four groups: token-based, permission-based, hybrid and k-mutual exclusion. In addition, because the performance criteria in use are dispersed and obscure, it introduces four parameters which can be used to compare various distributed mutual exclusion algorithms: message complexity, synchronization delay, decision theory and node configuration. We hope the proposed framework provides a suitable context for technical and clear evaluation of existing and future methods.
Solving SAT by Algorithm Transform of Wu's Method
Institute of Scientific and Technical Information of China (English)
贺思敏; 张钹
1999-01-01
Recently, algorithms for solving the propositional satisfiability problem (SAT) have aroused great interest, and more attention has been paid to transformational problem solving. The commonly used transformation is representation transform, but since its intermediate computing procedure is a black box from the viewpoint of the original problem, this approach has many limitations. In this paper, a new approach called algorithm transform is proposed and applied to solving SAT by Wu's method, a general algorithm for solving polynomial equations. By establishing the correspondence between the primitive operation in Wu's method and clause resolution in SAT, it is shown that Wu's method, when used for solving SAT, is essentially a restricted clause resolution procedure. While Wu's method introduces entirely new concepts to the resolution procedure, e.g. the characteristic set of clauses, the complexity results for resolution suggest an exponential lower bound for Wu's method in solving general polynomial equations. Moreover, this algorithm transform can help achieve a more efficient implementation of Wu's method, since it avoids complex manipulation of polynomials and can make the best use of domain-specific knowledge.
Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm
Abbas, Ahmed
2013-01-07
A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
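The Benjamini-Hochberg step-up rule at the core of the approach can be sketched directly. The p-values below are illustrative, and the paper's conversion of peak volumes or intensities into p-values is not reproduced.

```python
# Minimal sketch of the Benjamini-Hochberg step-up procedure: given m p-values
# and a target false discovery rate q, keep the largest k such that the k-th
# smallest p-value satisfies p_(k) <= k * q / m, and select those k hypotheses.

def bh_select(pvals, q=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank          # step-up: remember the largest passing rank
    return sorted(order[:k_max])  # indices of selected hypotheses

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
print(bh_select(pvals, q=0.05))
```

Note the step-up character: a p-value that fails its own threshold can still be selected if a larger rank passes, which is what lets the procedure keep more true positives than a fixed per-test cutoff.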
Institute of Scientific and Technical Information of China (English)
王丹丹; 徐越; 宋怀波; 何东健
2015-01-01
The precise localization of picking points is one of the key problems for fruit-picking robots, and it is the first step in performing the picking task. In view of the good symmetry of apples, and using the shift and rotation invariance of the moment of inertia together with the fact that it reaches extreme values along the symmetry axis, a new method based on the contour symmetry axis is proposed to locate the picking point of an apple target. To solve the problem of low localization accuracy caused by the rough edge of the apple target after segmentation, a contour-smoothing method is also presented. To verify the effectiveness of the algorithm, experiments were carried out on 20 randomly selected images of single, unoccluded apples, with and without contour smoothing. The results show that the average localization error is 20.678° without contour smoothing and 4.542° with it, a reduction of 78.035%; the average running time is 10.2 ms without smoothing and 7.5 ms with it, a reduction of 25.839%. The contour-smoothing algorithm therefore improves both localization accuracy and computational efficiency. The smoothed-contour symmetry axis algorithm can reliably find the symmetry axis of an apple target and locate the picking point, indicating that the method is feasible for symmetry-axis extraction and picking-point localization of apple targets.
The Result Integration Algorithm Based on Matching Strategy
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
This paper provides a new algorithm: a result integration algorithm based on a matching strategy. The algorithm extracts the title and the abstract of Web pages, calculates the relevance between the query string and the Web pages, decides which Web pages are accepted or rejected, and sorts them in the user interface. The experimental results indicate clearly that the new algorithm improves the precision of the meta-search engine. This technique is very useful for meta-search engines.
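Matching-based relevance scoring of this kind can be sketched as below. The term-overlap score and the title/abstract weights are assumptions for illustration, not the paper's formula.

```python
# Minimal sketch of matching-based result integration: score each page by the
# overlap of query terms with its title and abstract, then accept, reject and
# rank pages by a score threshold. Weights and threshold are hypothetical.

def relevance(query, title, abstract, w_title=2.0, w_abs=1.0):
    q = set(query.lower().split())
    t = set(title.lower().split())
    a = set(abstract.lower().split())
    return w_title * len(q & t) / len(q) + w_abs * len(q & a) / len(q)

def integrate(query, pages, threshold=1.0):
    scored = [(relevance(query, title, abstract), title) for title, abstract in pages]
    return [title for s, title in sorted(scored, reverse=True) if s >= threshold]

pages = [("meta search engine design", "integration of search results"),
         ("cooking pasta", "boil water and add salt")]
print(integrate("search engine", pages))
```

Title matches are weighted above abstract matches here, reflecting the common assumption that title terms are stronger relevance signals.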
A new classification algorithm based on RGH-tree search
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
In this paper, we put forward a new classification algorithm based on RGH-tree search and perform classification analysis and a comparative study. This algorithm can save computing resources and increase classification efficiency. The experiment shows that this algorithm achieves better results in dealing with three-dimensional multi-class data. We find that the algorithm has better generalization ability for a small training set and a large testing set.
Wireless Sensor Network Path Optimization Based on Hybrid Algorithm
Zeyu Sun; Li, Zhenping
2013-01-01
One merit of the genetic algorithm is fast global searching, but it usually suffers from low efficiency because of large quantities of redundant code. The advantages of the ant colony algorithm are strong adaptability and good robustness, while its disadvantages are a tendency to stagnate and slow convergence. A wireless sensor network path optimization approach based on an improved ant colony algorithm is put forward, in which the data to be passed are first transmitted along the shortest path,...
An incremental clustering algorithm based on Mahalanobis distance
Aik, Lim Eng; Choon, Tan Wee
2014-12-01
The classical fuzzy c-means clustering algorithm is insufficient for clustering non-spherical or elliptically distributed datasets. This paper replaces the Euclidean distance in classical fuzzy c-means clustering with the Mahalanobis distance and applies the Mahalanobis distance to incremental learning for its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm is not only an effective remedy for the defect in the fuzzy c-means algorithm but also increases training accuracy.
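The distance substitution at the heart of the algorithm can be illustrated directly; the covariance matrix below is illustrative.

```python
import numpy as np

# Minimal sketch of the distance substitution described in the abstract:
# replacing Euclidean distance with the Mahalanobis distance, which accounts
# for elliptical (correlated or stretched) cluster shapes.

def mahalanobis2(x, center, cov):
    d = x - center
    return float(d @ np.linalg.inv(cov) @ d)  # squared Mahalanobis distance

center = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0], [0.0, 1.0]])  # cluster elongated along the x-axis
a = np.array([2.0, 0.0])   # offset along the stretched direction
b = np.array([0.0, 2.0])   # same Euclidean distance, across the short axis
print(mahalanobis2(a, center, cov), mahalanobis2(b, center, cov))
```

Both points are at Euclidean distance 2 from the centre, but the Mahalanobis distance treats the point along the elongated axis as much closer, which is exactly what fuzzy c-means with Euclidean distance fails to capture on elliptical clusters.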
An Incremental Algorithm of Text Clustering Based on Semantic Sequences
Institute of Scientific and Technical Information of China (English)
FENG Zhonghui; SHEN Junyi; BAO Junpeng
2006-01-01
This paper proposes an incremental text clustering algorithm based on semantic sequences. Using the similarity relation of semantic sequences and calculating the cover of similar semantic sequence sets, the candidate cluster with the minimum entropy overlap value is selected as a result cluster at each step of the algorithm. The comparison of experimental results shows that the precision of the algorithm is higher than that of other algorithms under the same conditions, especially on sets of long documents.
Variance-based fingerprint distance adjustment algorithm for indoor localization
Institute of Scientific and Technical Information of China (English)
Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang
2015-01-01
The multipath effect and the movement of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases as the RSSI mean increases, VFDA calculates the RSSI variance from the mean value of the received RSSIs to obtain a correction weight, and then adjusts the fingerprint distances with this variance-based correction weight. In addition, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value were applied in two typical real indoor environments deployed with several Wi-Fi access points: a square lab room, and a long, narrow corridor of a building. Experimental results and performance analysis show that in indoor environments both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, at similar computational cost.
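The weighted k-nearest-neighbor baseline referred to above can be sketched as follows; the fingerprint values are illustrative RSSI vectors, and VFDA's variance-based correction itself is not reproduced.

```python
import numpy as np

# Minimal sketch of weighted k-nearest-neighbour (WKNN) fingerprint
# positioning: find the k reference fingerprints closest in RSSI space and
# average their known positions with inverse-distance weights.

def wknn_locate(fingerprints, positions, rssi, k=2):
    d = np.linalg.norm(fingerprints - rssi, axis=1)  # fingerprint distances
    idx = np.argsort(d)[:k]                          # k nearest fingerprints
    w = 1.0 / (d[idx] + 1e-9)                        # inverse-distance weights
    w /= w.sum()
    return w @ positions[idx]                        # weighted position estimate

fingerprints = np.array([[-40.0, -70.0], [-60.0, -50.0], [-80.0, -40.0]])
positions = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
print(wknn_locate(fingerprints, positions, rssi=np.array([-50.0, -60.0])))
```

VFDA's contribution is to rescale these fingerprint distances with a variance-derived correction weight before the neighbor selection, so that high-variance (unreliable) RSSI readings pull the estimate less.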
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
Directory of Open Access Journals (Sweden)
He Peng Ju
2016-01-01
Full Text Available Nowadays, detecting the fetal ECG from the abdominal signal is a commonly used method, but the fetal ECG signal is affected by the maternal ECG. Current FECG extraction algorithms are mainly aimed at multi-channel signals; they often assume there is only one fetus and do not consider multiple births. This paper proposes a single-channel blind source separation (SCBSS) algorithm, based on source number estimation using multi-algorithm fusion, to process a single abdominal signal. The method decomposes the collected single-channel signal into multiple intrinsic mode functions (IMFs) using Empirical Mode Decomposition (EMD), mapping the single channel into multiple channels. Four multi-channel source number estimation (MCSNE) methods (Bootstrap, Hough, AIC and PCA) are fused by weighting to estimate the source number accurately, and the particle swarm optimization (PSO) algorithm is employed to determine the weighting coefficients. According to the source number and the IMFs, a nonnegative matrix is constructed and nonnegative matrix factorization (NMF) is employed to separate the mixed signals. Experiments used a single-channel signal mixed from four synthetic signals, and a single-channel ECG mixed from two ECGs, to verify the proposed algorithm. The results show that the proposed algorithm can determine the number of independent signals in a single acquired signal, that the FECG can be extracted from a single-channel observed signal, and that the algorithm can be used to separate the MECG and FECG.
Adaptive RED algorithm based on minority game
Wei, Jiaolong; Lei, Ling; Qian, Jingjing
2007-11-01
With more and more applications appearing and technology developing in the Internet, relying on terminal systems alone cannot satisfy the complicated QoS demands of the network. Router mechanisms must participate in protecting responsive flows from non-responsive ones. Routers mainly use active queue management (AQM) mechanisms to avoid congestion. From the viewpoint of interaction between routers, this paper applies the minority game to describe the interaction of users and observes its effect on the average queue length. Since the parameters α and β of ARED are hard to determine, adaptive RED based on the minority game can model the interactions of the participants and tune the ARED parameters α and β toward their best values. Adaptive RED based on the minority game thus optimizes ARED and smooths the average queue length. This paper also extends the network simulator platform NS by adding new elements. Simulations have been implemented, and the results show that the new algorithm can reach the anticipated objectives.
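The RED drop logic that ARED (and the proposed variant) adapts can be sketched as follows. The thresholds and queue weight below are illustrative defaults, not values from the paper; the minority-game parameter tuning is not reproduced.

```python
# Minimal sketch of the RED (Random Early Detection) AQM logic: an exponential
# moving average of the queue length drives a drop probability that ramps
# linearly between two thresholds. Parameter values are illustrative.

def red_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    """Early-drop probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0  # force drop above the upper threshold
    return max_p * (avg - min_th) / (max_th - min_th)  # linear ramp

def update_avg(avg, qlen, wq=0.002):
    """EWMA of the instantaneous queue length (the quantity ARED smooths)."""
    return (1.0 - wq) * avg + wq * qlen

print(red_drop_prob(10.0), update_avg(0.0, 100.0))
```

ARED's adaptation amounts to adjusting max_p (and related parameters) online so that the averaged queue length settles between the two thresholds; the abstract's contribution is choosing those adjustments via a minority-game model of router interaction.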
Web Based Genetic Algorithm Using Data Mining
Directory of Open Access Journals (Sweden)
Ashiqur Rahman
2016-09-01
Full Text Available This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in a web-based education system. A combination of multiple classifiers leads to a significant improvement in classification performance. By weighting the feature vectors using a genetic algorithm, we can optimize the prediction accuracy and obtain a marked improvement over raw classification. It further shows that when the number of features is small, feature weighting works better than feature selection alone. Many leading educational institutions are working to establish an online teaching and learning presence. Several systems with different capabilities and approaches have been developed to deliver online education in an academic setting. In particular, Michigan State University (MSU) has pioneered some of these systems to provide an infrastructure for online instruction. The research presented here was performed on a part of the latest online educational system developed at MSU, the Learning Online Network with Computer-Assisted Personalized Approach (LON-CAPA).
Clonal Selection Based Memetic Algorithm for Job Shop Scheduling Problems
Institute of Scientific and Technical Information of China (English)
Jin-hui Yang; Liang Sun; Heow Pueh Lee; Yun Qian; Yan-chun Liang
2008-01-01
A clonal selection based memetic algorithm is proposed for solving job shop scheduling problems in this paper. In the proposed algorithm, the clonal selection and the local search mechanism are designed to enhance exploration and exploitation, respectively. In the clonal selection mechanism, clonal selection, hypermutation and receptor editing theories are used to construct an evolutionary search mechanism for exploration. In the local search mechanism, a simulated annealing local search algorithm based on Nowicki and Smutnicki's neighborhood is presented to exploit local optima. The proposed algorithm is examined using some well-known benchmark problems. Numerical results validate the effectiveness of the proposed algorithm.
A Developed Algorithm of Apriori Based on Association Analysis
Institute of Scientific and Technical Information of China (English)
LI Pingxiang; CHEN Jiangping; BIAN Fuling
2004-01-01
A method for mining frequent itemsets by evaluating their probability of support based on association analysis is presented. The paper obtains the probability of every 1-itemset by scanning the database, then evaluates the probability of every 2-itemset, 3-itemset, ..., k-itemset from the frequent 1-itemsets, and thus gains all the candidate frequent itemsets. The database is then scanned to verify the support of the candidate frequent itemsets, and finally the frequent itemsets are mined. The method greatly reduces the time spent scanning the database and shortens the computation time of the algorithm.
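The Apriori-style candidate generation and support verification described above can be sketched as follows. The paper's probability-based candidate evaluation is replaced here by the plain support count it is meant to approximate; the transactions are illustrative.

```python
from itertools import combinations

# Minimal sketch of frequent-itemset mining in the Apriori style: generate
# candidate k-itemsets by joining frequent (k-1)-itemsets, then scan the
# transactions to verify each candidate's support.

def frequent_itemsets(transactions, min_support):
    items = sorted({i for t in transactions for i in t})
    result, k = {}, 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        # support verification: count transactions containing each candidate
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # join step: form k-itemsets from pairs of frequent (k-1)-itemsets
        k += 1
        candidates = list({a | b for a, b in combinations(list(frequent), 2)
                           if len(a | b) == k})
    return result

tx = [frozenset(t) for t in ("ab", "abc", "ac", "bc", "abc")]
freq = frequent_itemsets(tx, min_support=3)
print({"".join(sorted(c)): n for c, n in freq.items()})
```

The pruning relies on the anti-monotone property of support (a superset can never be more frequent than its subsets), which is what the paper's probability estimates exploit to skip some of the scan work.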
A Multiple Model Approach to Modeling Based on LPF Algorithm
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple-model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, some data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which realize exact modeling of the global system. Compared with other methods, the simulation results show good performance: the estimation is simple, effective and reliable.
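The local polynomial fitting step can be sketched for the scalar case as follows. The tricube kernel and the neighborhood size are assumptions for illustration, not necessarily the paper's choices.

```python
import numpy as np

# Minimal sketch of model-on-demand local polynomial fitting: for a query
# (working) point, take the nearest historical samples, weight them by a
# kernel, and fit a local linear model by weighted least squares.

def lpf_predict(xq, X, y, k=20, degree=1):
    d = np.abs(X - xq)
    idx = np.argsort(d)[:k]                  # k samples nearest the working point
    h = d[idx].max() + 1e-12                 # local bandwidth
    w = (1.0 - (d[idx] / h) ** 3) ** 3       # tricube kernel weights
    V = np.vander(X[idx] - xq, degree + 1)   # local polynomial basis at xq
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(W @ V, W @ y[idx], rcond=None)
    return beta[-1]                          # local model value at xq

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, 400)           # "historical" input data
y = np.sin(X)                                # corresponding outputs
print(lpf_predict(np.pi / 2, X, y))
```

Because the polynomial basis is centered at the query point, the fitted intercept is directly the local model's prediction there; building such a model fresh at each working point is the model-on-demand tactic the abstract describes.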