WorldWideScience

Sample records for algorithm based method

  1. AN SVAD ALGORITHM BASED ON FNNKD METHOD

    Institute of Scientific and Technical Information of China (English)

    Chen Dong; Zhang Yan; Kuang Jingming

    2002-01-01

    The capacity of a mobile communication system can be improved by using Voice Activity Detection (VAD) technology. In this letter, a novel VAD algorithm, the SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with the traditional algorithm recommended in ITU G.729B under different conditions. The simulation results show that the SVAD algorithm performs better.

  2. Kernel method-based fuzzy clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping

    2005-01-01

    The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on data with diverse structures, such as non-hyperspherical data, data with noise, data with mixtures of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived by combining the FCM algorithm with the kernel method. Experiments with synthetic and real data show that, in contrast to FCM, the FKCM clustering algorithm is more widely applicable and can effectively perform unsupervised analysis of datasets with varied structures. Kernel-based clustering is therefore an important research direction in fuzzy clustering analysis.
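
    As an editorial illustration (not taken from the record above), the following is a minimal NumPy sketch of a Gaussian-kernel fuzzy C-means iteration of the kind the abstract describes; the update rules follow the commonly cited KFCM formulation with prototypes kept in the input space, and the kernel width sigma, fuzzifier m and the toy data are assumptions.

      import numpy as np

      def kfcm(X, c, m=2.0, sigma=1.0, n_iter=100, seed=0):
          """Kernel fuzzy C-means with a Gaussian kernel (prototypes stay in input space)."""
          rng = np.random.default_rng(seed)
          V = X[rng.choice(len(X), size=c, replace=False)]       # initial prototypes
          for _ in range(n_iter):
              d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
              K = np.exp(-d2 / (2.0 * sigma ** 2))               # kernel between samples and prototypes
              dist = np.clip(1.0 - K, 1e-12, None)               # feature-space distance (up to a factor)
              inv = dist ** (-1.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)           # membership update
              W = (U ** m) * K
              V = (W.T @ X) / W.sum(axis=0)[:, None]             # kernel-weighted prototype update
          return U, V

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
      U, V = kfcm(X, c=2)
      print("prototypes:\n", np.round(V, 2))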

  3. ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    SONG Kaichen; NIE Xili

    2006-01-01

    Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm, in which the relationship between the weight coefficients and the measurement noise is established, is proposed with attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced under the assumption that the measurement noise is uncorrelated. In addition, an algorithm that adjusts the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements is presented. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of multi-sensor systems based on other algorithms.
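
    The uncorrelated-noise case reduces to inverse-variance weighting, which the brief sketch below illustrates; the sensor readings and noise variances are invented, and in practice the variances would be estimated from the measurements as the record describes.

      import numpy as np

      def inverse_variance_fusion(measurements, noise_vars):
          """Weighted least-squares fusion of sensors observing one quantity (uncorrelated noise)."""
          inv_var = 1.0 / np.asarray(noise_vars, dtype=float)
          weights = inv_var / inv_var.sum()            # weight coefficients sum to one
          fused = float(np.dot(weights, measurements))
          fused_var = 1.0 / inv_var.sum()              # variance of the fused estimate
          return fused, fused_var, weights

      z = np.array([10.2, 9.8, 10.5])                  # three sensor readings of one quantity
      var = np.array([0.04, 0.01, 0.25])               # their (estimated) noise variances
      print(inverse_variance_fusion(z, var))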

  4. New Iris Localization Method Based on Chaos Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    Jia Dongli; Muhammad Khurram Khan; Zhang Jiashu

    2005-01-01

    This paper presents a new method based on the Chaos Genetic Algorithm (CGA) to localize the human iris in a given image. First, the iris image is preprocessed to estimate the range of the iris localization, and then CGA is used to extract the boundary of the iris. Simulation results show that the proposed algorithm is efficient and robust and can achieve sub-pixel precision. Because Genetic Algorithms (GAs) can search a large space, the algorithm does not need an accurate estimate of the iris center for subsequent localization, and hence lowers the requirements on the original iris image processing. In this respect, the presented localization algorithm is superior to Daugman's algorithm.

  5. A CT Image Segmentation Algorithm Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    QU Jing-yi; SHI Hao-shan

    2006-01-01

    Level set methods are robust and efficient numerical tools for resolving curve evolution in image segmentation. This paper proposes a new image segmentation algorithm based on the Mumford-Shah model. The method is applied to CT images, and the experimental results demonstrate its efficiency and accuracy.

  6. Competition assignment problem algorithm based on Hungarian method

    Institute of Scientific and Technical Information of China (English)

    KONG Chao; REN Yongtai; GE Huiling; DENG Hualing

    2007-01-01

    The traditional Hungarian method can only solve standard assignment problems, not competition assignment problems. This article focuses on the difference between standard assignment problems and competition assignment problems. Competition assignment algorithms based on the Hungarian method, and their solutions, are studied.
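
    For reference, the standard assignment step that the Hungarian method solves can be reproduced with SciPy's linear_sum_assignment, as sketched below; the competition-assignment extensions discussed in the record are not shown, and the cost matrix is illustrative.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # cost[i, j] is the cost of assigning task j to worker i (illustrative values)
      cost = np.array([[4, 1, 3],
                       [2, 0, 5],
                       [3, 2, 2]])

      rows, cols = linear_sum_assignment(cost)         # Hungarian-type optimal assignment
      print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())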

  7. A dynamic fuzzy clustering method based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yan; ZHOU Chunguang; LIANG Yanchun; GUO Dongwei

    2003-01-01

    A dynamic fuzzy clustering method based on the genetic algorithm is presented. By calculating the fuzzy dissimilarity between samples, the essential associations among samples are modeled faithfully. The fuzzy dissimilarity between two samples is mapped into their Euclidean distance; that is, the high-dimensional samples are mapped onto a two-dimensional plane. The mapping is optimized globally by the genetic algorithm, which adjusts the coordinates of each sample, and thus the Euclidean distances, so that they gradually approximate the fuzzy dissimilarities between samples. A key advantage of the proposed method is that the clustering is independent of the spatial distribution of the input samples, which improves flexibility and visualization. The method converges faster and clusters more accurately than some typical clustering algorithms. Simulated experiments show the feasibility and effectiveness of the proposed method.

  8. Method of stereo matching based on genetic algorithm

    Science.gov (United States)

    Lu, Chaohui; An, Ping; Zhang, Zhaoyang

    2003-09-01

    A new stereo matching scheme based on image edges and a genetic algorithm (GA) is presented to improve on conventional stereo matching methods. In order to extract robust edge features for stereo matching, an infinite symmetric exponential filter (ISEF) is first applied to remove image noise, and a nonlinear Laplace operator together with the local variance of intensity is then used to detect edges. Apart from the detected edges, the polarity of the edge pixels is also obtained. As an efficient search method, the genetic algorithm is applied to find the best matching pairs. For this purpose, some new ideas are developed for applying genetic algorithms to stereo matching. Experimental results show that the proposed methods are effective and obtain good results.

  9. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    Science.gov (United States)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are hard to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Application to simulated data verifies that the proposed method gives better fusion results at low SNR.
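
    A minimal NumPy/SciPy sketch of classical matrix pencil pole estimation on a synthetic two-pole signal; the subband fusion, ICP prediction and iterative refinement described in the record are not reproduced, and the pencil parameter and test signal are assumptions.

      import numpy as np
      from scipy.linalg import hankel, pinv

      def matrix_pencil_poles(y, n_poles, L=None):
          """Estimate signal poles z_k from samples y[n] = sum_k a_k * z_k**n."""
          N = len(y)
          if L is None:
              L = N // 3                               # pencil parameter
          Y = hankel(y[:N - L], y[N - L - 1:])         # (N-L) x (L+1) Hankel matrix
          Y0, Y1 = Y[:, :-1], Y[:, 1:]
          z = np.linalg.eigvals(pinv(Y0) @ Y1)         # eigenvalues of the pencil (Y1, Y0)
          return z[np.argsort(-np.abs(z))][:n_poles]   # dominant eigenvalues are the signal poles

      n = np.arange(64)
      true_poles = [0.95 * np.exp(1j * 0.3), 0.90 * np.exp(1j * 1.1)]
      y = sum(p ** n for p in true_poles)              # noiseless two-pole test signal
      print(np.round(matrix_pencil_poles(y, 2), 4))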

  10. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is crucial during the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data of the assembly line balancing problem are determined, and the precedence relations diagram is described. A double-objective optimization model based on takt time and the smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Simulation experiments on examples prove the feasibility and validity of the assembly line balancing method based on the PSO algorithm.

  11. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it gives a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  12. Research on palmprint identification method based on quantum algorithms.

    Science.gov (United States)

    Li, Hui; Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it gives a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  13. An Improved Image Segmentation Algorithm Based on MET Method

    Directory of Open Access Journals (Sweden)

    Z. A. Abo-Eleneen

    2012-09-01

    Image segmentation is a basic component of many computer vision and pattern recognition systems. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, Kittler and Illingworth's minimum error thresholding (MET), noticeably improves the segmentation result and is simple and easy to implement. However, it fails in the presence of skewed and heavy-tailed class-conditional distributions, or when the histogram is unimodal or close to unimodal. The Fisher information (FI) measure is an important concept in statistical estimation theory and information theory. Employing the FI measure, an improved threshold image segmentation algorithm, an FI-based extension of MET, is developed. Compared with the MET method, the improved method in general achieves more robust performance when the data of either class are skewed and heavy-tailed.
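
    A small NumPy sketch of Kittler and Illingworth's minimum error thresholding criterion on a grey-level histogram; the Fisher-information extension proposed in the record is not included, and the bimodal test histogram is synthetic.

      import numpy as np

      def minimum_error_threshold(hist):
          """Return the grey level minimizing the Kittler-Illingworth criterion J(t)."""
          p = hist.astype(float) / hist.sum()
          g = np.arange(len(p))
          best_t, best_j = None, np.inf
          for t in range(1, len(p) - 1):
              p1, p2 = p[:t].sum(), p[t:].sum()
              if p1 < 1e-8 or p2 < 1e-8:
                  continue
              m1, m2 = (g[:t] * p[:t]).sum() / p1, (g[t:] * p[t:]).sum() / p2
              v1 = ((g[:t] - m1) ** 2 * p[:t]).sum() / p1
              v2 = ((g[t:] - m2) ** 2 * p[t:]).sum() / p2
              if v1 < 1e-8 or v2 < 1e-8:
                  continue
              # J(t) = 1 + 2*[P1*ln(s1) + P2*ln(s2)] - 2*[P1*ln(P1) + P2*ln(P2)]
              j = 1 + p1 * np.log(v1) + p2 * np.log(v2) - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
              if j < best_j:
                  best_t, best_j = t, j
          return best_t

      rng = np.random.default_rng(0)
      pixels = np.concatenate([rng.normal(60, 10, 4000), rng.normal(170, 20, 6000)])
      hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
      print("estimated threshold:", minimum_error_threshold(hist))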

  14. Generating Decision Trees Method Based on Improved ID3 Algorithm

    Institute of Scientific and Technical Information of China (English)

    Yang Ming; Guo Shuxu; Wang Jun

    2011-01-01

    The ID3 algorithm is a classical decision tree learning algorithm in data mining. The algorithm tends to choose attributes with more values, which affects the efficiency of classification and prediction when building a decision tree. This article proposes a new approach based on an improved ID3 algorithm. The new algorithm introduces an importance factor λ when calculating the information entropy; it strengthens the weight of important attributes of a tree and reduces the weight of non-important attributes. The algorithm overcomes the flaw of the traditional ID3 algorithm, which tends to choose attributes with more values, and also improves the efficiency and flexibility of the process of generating decision trees.
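
    The abstract does not give the exact definition of the importance factor λ, so the sketch below simply scales the classical information gain by a user-supplied per-attribute weight as an assumption; the toy data illustrate only the attribute-selection step of ID3.

      import math
      from collections import Counter

      def entropy(labels):
          n = len(labels)
          return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

      def weighted_gain(rows, labels, attr, importance=1.0):
          """Classical ID3 information gain scaled by an importance factor for the attribute."""
          cond = 0.0
          for value in set(r[attr] for r in rows):
              subset = [lab for r, lab in zip(rows, labels) if r[attr] == value]
              cond += len(subset) / len(labels) * entropy(subset)
          return importance * (entropy(labels) - cond)

      rows = [("sunny", "high"), ("sunny", "low"), ("rain", "high"), ("rain", "low")]
      labels = ["no", "yes", "yes", "yes"]
      lam = {0: 1.0, 1: 0.6}                           # illustrative per-attribute importance factors
      best = max(range(2), key=lambda i: weighted_gain(rows, labels, i, lam[i]))
      print("attribute chosen for the split:", best)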

  15. A Progressive Image Compression Method Based on EZW Algorithm

    Science.gov (United States)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  16. Color Image Segmentation Method Based on Improved Spectral Clustering Algorithm

    OpenAIRE

    Dong Qin

    2014-01-01

    Addressing the high sparsity of image data and the problem of determining the number of clusters, we put forward a color image segmentation algorithm that combines semi-supervised machine learning with spectral graph theory. Based on the related theories and methods of spectral clustering algorithms, we introduce the concept of information entropy to design a method which can automatically optimize the scale parameter value. So it avoids the unstab...

  17. Switching Equalization Algorithm Based on a New SNR Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    It is well-known that turbo equalization with the max-log-map (MLM) rather than the log-map (LM) algorithm is insensitive to signal to noise ratio (SNR) mismatch. As our first contribution, an improved MLM algorithm called scaled max-log-map (SMLM) algorithm is presented. Simulation results show that the SMLM scheme can dramatically outperform the MLM without sacrificing the robustness against SNR mismatch. Unfortunately, its performance is still inferior to that of the LM algorithm with exact SNR knowledge over the class of high-loss channels. As our second contribution, a switching turbo equalization scheme, which switches between the SMLM and LM schemes, is proposed to practically close the performance gap. It is based on a novel way to estimate the SNR from the reliability values of the extrinsic information of the SMLM algorithm.

  18. Distortion Parameters Analysis Method Based on Improved Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    ZHANG Shutuan

    2013-10-01

    In order to realize accurate distortion parameter testing of aircraft power supply systems and satisfy the requirements of the corresponding airborne equipment, a novel power parameter test system based on an improved filtering algorithm is introduced in this paper. The hardware of the test system is portable and supports high-speed data acquisition and processing, while the software uses LabWindows/CVI as the development environment and adopts a pre-processing technique together with the added filtering algorithm. Compared with the traditional filtering algorithm, the improved filtering algorithm helps to increase the test accuracy. The application shows that the test system with the improved filtering algorithm achieves accurate test results and meets the design requirements.

  19. PCNN document segmentation method based on bacterial foraging optimization algorithm

    Science.gov (United States)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    The Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but properly defining its parameters is a difficult task in applications of PCNN; so far, determining the parameters of the model has required a lot of experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts bacterial foraging optimization to search for the optimal parameters, and eliminates the trouble of setting the experimental parameters manually. Experimental results show that the proposed algorithm can effectively perform document segmentation, and the segmentation results are better than those of the comparison algorithms.

  20. Applied RCM2 Algorithms Based on Statistical Methods

    Institute of Scientific and Technical Information of China (English)

    Fausto Pedro García Márquez; Diego J. Pedregal

    2007-01-01

    The main purpose of this paper is to implement a system capable of detecting faults in railway point mechanisms. This is achieved by developing an algorithm that takes advantage of three empirical criteria simultaneously capable of detecting faults from records of measurements of force against time. The system is dynamic in several respects: the base reference data is computed using all the curves free from faults as they are encountered in the experimental data; the algorithm that uses the three criteria simultaneously may be applied in on-line situations as each new data point becomes available; and recursive algorithms are applied to filter noise from the raw data in an automatic way. Encouraging results are found in practice when the system is applied to a number of experiments carried out by an industrial sponsor.

  1. Multiobjective Optimization Method Based on Adaptive Parameter Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    P. Sabarinath

    2015-01-01

    The present trend in industry is to improve the techniques currently used in the design and manufacture of products in order to meet the challenges of the competitive market. The crucial task nowadays is to find the optimal design and machining parameters so as to minimize production costs. Design optimization involves many design variables with multiple, conflicting objectives, subject to complex nonlinear constraints. The complexity of the optimal design of machine elements creates the requirement for increasingly effective algorithms, and solving a nonlinear multiobjective optimization problem requires significant computing effort. From the literature it is evident that metaheuristic algorithms perform well in multiobjective optimization. In this paper, we extend the recently developed parameter adaptive harmony search algorithm to solve multiobjective design optimization problems using the weighted sum approach. To determine the best weight set for this analysis, a performance index based on least average error is used to rank each weight set. The proposed approach is applied to a biobjective design optimization of a disc brake and a newly formulated biobjective design optimization of a helical spring. The results reveal that the proposed approach performs better than the other algorithms.

  2. Visual tracking method based on cuckoo search algorithm

    Science.gov (United States)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely, the particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
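
    A compact cuckoo search sketch with Mantegna-style Lévy flights on a toy quadratic objective; the appearance-model fitness used for tracking in the record is not reproduced, and the step size, abandonment fraction and search bounds are illustrative.

      import math
      import numpy as np

      def levy_step(shape, beta=1.5, rng=None):
          """Levy-distributed steps via Mantegna's algorithm."""
          rng = np.random.default_rng() if rng is None else rng
          num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
          den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
          sigma_u = (num / den) ** (1 / beta)
          return rng.normal(0, sigma_u, shape) / np.abs(rng.normal(0, 1, shape)) ** (1 / beta)

      def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          nests = rng.uniform(-5, 5, (n_nests, dim))
          fit = np.apply_along_axis(f, 1, nests)
          for _ in range(iters):
              best = nests[np.argmin(fit)]
              # new solutions by Levy flights around the current best nest
              new = nests + 0.01 * levy_step((n_nests, dim), rng=rng) * (nests - best)
              new_fit = np.apply_along_axis(f, 1, new)
              better = new_fit < fit
              nests[better], fit[better] = new[better], new_fit[better]
              # abandon a fraction pa of nests and rebuild them at random positions
              abandon = rng.random(n_nests) < pa
              if abandon.any():
                  nests[abandon] = rng.uniform(-5, 5, (int(abandon.sum()), dim))
                  fit[abandon] = np.apply_along_axis(f, 1, nests[abandon])
          return nests[np.argmin(fit)], float(fit.min())

      print(cuckoo_search(lambda x: float(np.sum((x - 1.5) ** 2)), dim=2))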

  3. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data-intensive, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain after optimization of the data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate works around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  4. A New Genetic Algorithm Based on Niche Technique and Local Search Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The genetic algorithm has been widely used in many fields as an easy, robust global search and optimization method. In this paper, a new genetic algorithm based on the niche technique and a local search method is presented in view of the inadequacies of the simple genetic algorithm. In order to prove the adaptability and validity of the improved genetic algorithm, optimization problems of multimodal functions with equal peaks, unequal peaks and complicated peak distributions are discussed. The simulation results show that, compared to other niching methods, this improved genetic algorithm has obvious potential in many respects, such as convergence speed, solution accuracy, and ability of global optimization.

  5. A clustering method of Chinese medicine prescriptions based on modified firefly algorithm.

    Science.gov (United States)

    Yuan, Feng; Liu, Hong; Chen, Shou-Qiang; Xu, Liang

    2016-12-01

    This paper studies a clustering method for Chinese medicine (CM) medical cases. The traditional K-means clustering algorithm has shortcomings such as dependence of the results on the selection of initial values and trapping in local optima when processing prescriptions from CM medical cases. Therefore, a new clustering method based on the collaboration of the firefly algorithm and the simulated annealing algorithm was proposed. This algorithm dynamically determines the iterations of the firefly algorithm and the sampling of the annealing algorithm according to fitness changes, and increases the diversity of the swarm by expanding the scope of sudden jumps, thereby effectively avoiding premature convergence. The results of confirmatory experiments on CM medical cases suggest that, compared with the traditional K-means clustering algorithm, this method greatly improves individual diversity and the obtained clustering results, and the computed results have a certain reference value for cluster analysis of CM prescriptions.

  6. Joint Interference Detection Method for DSSS Communications Based on the OMP Algorithm and CA-CFAR

    Directory of Open Access Journals (Sweden)

    Zhang Yongshun

    2016-01-01

    Existing interference detection algorithms for direct sequence spread spectrum (DSSS) communications are constrained by high sampling rates. In order to solve this problem, an interference detection algorithm for DSSS communications was designed based on compressive sensing (CS). First of all, the orthogonal matching pursuit (OMP) algorithm was applied to interference detection in DSSS communications, and the advantages and weaknesses of the algorithm were analyzed. Secondly, to address the weaknesses of the OMP algorithm, a joint interference detection method based on the OMP algorithm and the cell-averaging constant false alarm rate (CA-CFAR) detector was proposed. Theoretical analysis and computer simulation both prove the effectiveness of the new algorithm. The simulation results show that the new method not only achieves interference detection, but also estimates the amount of interference effectively.
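
    An illustrative orthogonal matching pursuit routine for sparse recovery; the DSSS signal model and the CA-CFAR decision stage from the record are not reproduced, and the random dictionary and sparsity level are assumptions.

      import numpy as np

      def omp(A, y, k):
          """Recover a k-sparse coefficient vector x with y ~= A @ x."""
          residual = y.copy()
          support = []
          x = np.zeros(A.shape[1])
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))       # atom most correlated with residual
              support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef              # least-squares residual update
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.normal(size=(64, 256))
      A /= np.linalg.norm(A, axis=0)                           # unit-norm dictionary atoms
      x_true = np.zeros(256)
      x_true[[10, 70, 200]] = [1.5, -2.0, 0.7]
      x_hat = omp(A, A @ x_true, k=3)
      print(np.nonzero(x_hat)[0], np.round(x_hat[np.nonzero(x_hat)], 3))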

  7. A fast marching method based back projection algorithm for photoacoustic tomography in heterogeneous media

    CERN Document Server

    Wang, Tianren

    2015-01-01

    This paper presents a numerical study on a fast marching method based back projection reconstruction algorithm for photoacoustic tomography in heterogeneous media. Transcranial imaging is used here as a case study. To correct for the phase aberration from the heterogeneity (i.e., skull), the fast marching method is adopted to compute the phase delay based on the known speed of sound distribution, and the phase delay is taken into account by the back projection algorithm for more accurate reconstructions. It is shown that the proposed algorithm is more accurate than the conventional back projection algorithm, but slightly less accurate than the time reversal algorithm particularly in the area close to the skull. However, the image reconstruction time for the proposed algorithm can be as little as 124 ms when implemented by a GPU (512 sensors, 21323 pixels reconstructed), which is two orders of magnitude faster than the time reversal reconstruction. The proposed algorithm, therefore, not only corrects for the p...

  8. On the importance of graph search algorithms for DRGEP-based mechanism reduction methods

    CERN Document Server

    Niemeyer, Kyle E

    2016-01-01

    The importance of graph search algorithm choice to the directed relation graph with error propagation (DRGEP) method is studied by comparing basic and modified depth-first search, basic and R-value-based breadth-first search (RBFS), and Dijkstra's algorithm. By using each algorithm with DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only Dijkstra's algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal mechanisms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and equivalence ratio. Dijkstra's algorithm combined with a coefficient scaling approach is demonstrated to produce the most compact skeletal mechanism with a similar performance compared to larger skeletal mechanisms resulting from the other algorithms. The computational efficiency of each algorithm is also compared by applying the DRG...
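
    As an editorial sketch of why Dijkstra's algorithm fits this setting: DRGEP-style overall interaction coefficients are maximum path products of direct interaction coefficients, which a max-product variant of Dijkstra's algorithm computes exactly. The tiny species graph below is invented.

      import heapq

      def drgep_coefficients(graph, target):
          """graph: {species: {neighbour: direct interaction coefficient in (0, 1]}}."""
          R = {target: 1.0}
          heap = [(-1.0, target)]                      # max-heap via negated coefficients
          while heap:
              neg_r, s = heapq.heappop(heap)
              r = -neg_r
              if r < R.get(s, 0.0):                    # stale heap entry
                  continue
              for nbr, w in graph.get(s, {}).items():
                  cand = r * w                         # a path coefficient is a product of edges
                  if cand > R.get(nbr, 0.0):
                      R[nbr] = cand
                      heapq.heappush(heap, (-cand, nbr))
          return R

      graph = {
          "fuel": {"A": 0.9, "B": 0.2},
          "A":    {"B": 0.8, "C": 0.05},
          "B":    {"C": 0.6},
      }
      print(drgep_coefficients(graph, "fuel"))         # R(fuel, C) = 0.9 * 0.8 * 0.6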

  9. A Steganographic Method Based on Integer Wavelet Transform & Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Preeti Arora

    2014-05-01

    The proposed system presents a novel approach to building a secure data hiding technique for steganography using the integer wavelet transform along with a genetic algorithm. The main focus of the proposed work is to develop an RS-analysis-proof design with the highest imperceptibility. An optimal pixel adjustment process is also adopted to minimize the difference error between the input cover image and the embedded image and to maximize the hiding capacity with low distortion. The analysis covers the mapping function, PSNR, the image histogram, and the parameters of RS analysis. The simulation results highlight that the proposed security measure gives better and near-optimal results in comparison with prior research work using wavelets and genetic algorithms.

  10. Interleaver Design Method for Turbo Codes Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    Tan Ying; Sun Hong; Zhou Huai-bei

    2004-01-01

    This paper describes a new interleaver construction technique for turbo codes. The technique searches for as many pseudo-random interleaving patterns as possible under a given condition using genetic algorithms (GAs). The new interleavers retain the advantages of S-random interleavers, and this construction technique reduces the time taken to generate pseudo-random interleaving patterns under a given condition. The results obtained indicate that the new interleavers yield performance equal to or better than that of S-random interleavers. Compared to the S-random interleaver, this design requires a lower level of computational complexity.

  11. A novel method to design S-box based on chaotic map and genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yong, E-mail: wangyong_cqupt@163.com [State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China); Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Wong, Kwok-Wo [Department of Electronic Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong (Hong Kong); Li, Changbing [Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Li, Yang [Department of Automatic Control and Systems Engineering, The University of Sheffield, Mapping Street, S1 3DJ (United Kingdom)

    2012-01-30

    The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing S-box is transformed to a Traveling Salesman Problem and a method for designing S-box based on chaos and genetic algorithm is proposed. Since the proposed method makes full use of the traits of chaotic map and evolution process, stronger S-box is obtained. The results of performance test show that the presented S-box has good cryptographic properties, which justify that the proposed algorithm is effective in generating strong S-boxes. -- Highlights: ► The problem of constructing S-box is transformed to a Traveling Salesman Problem. ► We present a new method for designing S-box based on chaos and genetic algorithm. ► The proposed algorithm is effective in generating strong S-boxes.

  12. An Improved Singularity Computing Algorithm Based on Wavelet Transform Modulus Maxima Method

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jian; XIE Duan; FAN Xun-li

    2006-01-01

    In order to reduce the hidden danger of noise, which can be characterized by the singularity spectrum, a new algorithm based on the wavelet transform modulus maxima method is proposed. Singularity analysis is one of the most promising new approaches for extracting hidden information from noisy time series. Because the singularity strength is hard to calculate accurately, the wavelet transform modulus maxima method is used to obtain the singularity spectrum. The singularity spectra of white noise and of aluminium interconnect electromigration noise were calculated and analyzed. The experimental results show that the new algorithm is more accurate than the traditional estimation algorithm. The proposed method is feasible and efficient.

  13. Asymptotically Optimal Algorithm for Short-Term Trading Based on the Method of Calibration

    CERN Document Server

    V'yugin, Vladimir

    2012-01-01

    A trading strategy based on a natural learning process, which asymptotically outperforms any trading strategy from an RKHS (Reproducing Kernel Hilbert Space), is presented. In this process, the trader rationally chooses his gambles using predictions made by a randomized well-calibrated algorithm. Our strategy is based on Dawid's notion of calibration with more general changing checking rules and on a modification of Kakade and Foster's randomized algorithm for computing calibrated forecasts. We also use Vovk's method of defensive forecasting in RKHS.

  14. A novel method to design S-box based on chaotic map and genetic algorithm

    Science.gov (United States)

    Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang

    2012-01-01

    The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing S-box is transformed to a Traveling Salesman Problem and a method for designing S-box based on chaos and genetic algorithm is proposed. Since the proposed method makes full use of the traits of chaotic map and evolution process, stronger S-box is obtained. The results of performance test show that the presented S-box has good cryptographic properties, which justify that the proposed algorithm is effective in generating strong S-boxes.

  15. Genetic Algorithm Based Combinatorial Auction Method for Multi-Robot Task Allocation

    Institute of Scientific and Technical Information of China (English)

    GONG Jian-wei; HUANG Wan-ning; XIONG Guang-ming; MAN Yi-ming

    2007-01-01

    An improved genetic algorithm is proposed to solve the problems of poor real-time performance and the inability to obtain a globally optimal or near-optimal solution when applying the single-item auction (SIA) method or the combinatorial auction method to multi-robot task allocation. The genetic algorithm based combinatorial auction (GACA) method, which combines the basic genetic algorithm with a new concept of a ringed chromosome, is used to solve the winner determination problem (WDP) of the combinatorial auction. Simulation experiments are conducted in OpenSim, a multi-robot simulator. The results show that GACA can obtain a satisfying solution in a reasonably short time; compared with the SIA or parthenogenesis algorithm combinatorial auction (PGACA) methods, it is the simplest and has higher search efficiency. GACA can also obtain a globally better or optimal solution and satisfy the strict real-time requirements of multi-robot task allocation.

  16. An evaluation method based on absolute difference to validate the performance of SBNUC algorithms

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2016-09-01

    Scene-based non-uniformity correction (SBNUC) algorithms are an important part of infrared image processing; however, SBNUC algorithms usually cause two defects: (1) ghosting artifacts and (2) over-correction. In this paper, we use the absolute difference based on a guided image filter (AD-GF) method to validate the performance of SBNUC algorithms. We obtain a self-separation source by processing the input image with an improved guided image filter, and use the self-separation source to obtain the spatial high-frequency parts of the input image and of the corrected image. Finally, we use the absolute difference between the two high-frequency parts as the evaluation result. Experimental results show that the AD-GF method is robust and can validate the performance of SBNUC algorithms even when ghosting artifacts or over-correction occur. The AD-GF method can also measure how SBNUC algorithms perform in the time domain, making it an effective evaluation method for SBNUC algorithms.

  17. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase with the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and the phase to reconstruct the missing areas.
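
    A minimal sketch of the error-reduction iteration itself: alternate between imposing an estimated Fourier magnitude and re-inserting the known pixels. Here the 'estimated' magnitude is simply taken from the complete patch for demonstration, standing in for the similar-patch estimation scheme described in the record.

      import numpy as np

      def er_reconstruct(patch, mask, magnitude, n_iter=300):
          """mask is True on known pixels; magnitude is the target |FFT| estimate."""
          x = patch * mask                               # unknown pixels start at zero
          for _ in range(n_iter):
              F = np.fft.fft2(x)
              F = magnitude * np.exp(1j * np.angle(F))   # impose the magnitude constraint
              x = np.real(np.fft.ifft2(F))
              x[mask] = patch[mask]                      # impose the known-pixel constraint
          return x

      rng = np.random.default_rng(0)
      patch = np.cos(np.arange(32)[:, None] * 0.4) + 0.1 * rng.normal(size=(32, 32))
      mask = np.ones((32, 32), bool)
      mask[12:20, 12:20] = False                         # missing block
      rec = er_reconstruct(patch, mask, np.abs(np.fft.fft2(patch)))
      print("RMSE on the missing area:", np.sqrt(np.mean((rec[~mask] - patch[~mask]) ** 2)))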

  18. Welding sequences optimization of box structure based on genetic algorithm method

    Institute of Scientific and Technical Information of China (English)

    CUI Xiao-fang; MA Jun; MENG Kai; ZHAO Wen-zhong; ZHAO Hai-yan

    2006-01-01

    In this article, a genetic algorithm method is proposed: a nonlinear three-dimensional optimization numerical model of the box structure is established based on a thermo-mechanical coupling algorithm, and an objective function for welding distortion is used to determine an optimum welding sequence by optimization simulation. The validity of the genetic algorithm method combined with the thermo-mechanical nonlinear finite element model is verified by comparison with the available experimental data. By choosing an appropriate objective function for the considered case, an optimum welding sequence is determined by the genetic algorithm. This study indicates that the new method presented in this article will have important practical applications for designing welding process parameters in the future.

  19. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms in students' programming submissions coded in Java. The method is based on the concepts of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  20. A Novel Method for Optimum Global Positioning System Satellite Selection Based on a Modified Genetic Algorithm.

    Science.gov (United States)

    Song, Jiancai; Xue, Guixiang; Kang, Yanan

    2016-01-01

    In this paper, a novel method for selecting a navigation satellite subset for a global positioning system (GPS) based on a genetic algorithm is presented. This approach is based on minimizing the factors in the geometric dilution of precision (GDOP) using a modified genetic algorithm (MGA) with an elite conservation strategy, adaptive selection, adaptive mutation, and a hybrid genetic algorithm that can select a subset of the satellites represented by specific numbers in the interval (4 ∼ n) while maintaining position accuracy. A comprehensive simulation demonstrates that the MGA-based satellite selection method effectively selects the correct number of optimal satellite subsets using receiver autonomous integrity monitoring (RAIM) or fault detection and exclusion (FDE). This method is more adaptable and flexible for GPS receivers, particularly for those used in handset equipment and mobile phones.
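
    For reference, a short sketch of the GDOP objective that the modified GA minimizes; brute-force search over all 4-satellite subsets stands in for the GA itself, and the random constellation geometry with the receiver at the origin is purely illustrative.

      import itertools
      import numpy as np

      def gdop(sat_positions, receiver):
          """GDOP = sqrt(trace((H^T H)^-1)) with H built from unit line-of-sight vectors."""
          los = sat_positions - receiver
          unit = los / np.linalg.norm(los, axis=1, keepdims=True)
          H = np.hstack([unit, np.ones((len(sat_positions), 1))])   # geometry matrix
          return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

      rng = np.random.default_rng(2)
      receiver = np.zeros(3)
      sats = rng.normal(size=(8, 3))
      sats = 2.0e7 * sats / np.linalg.norm(sats, axis=1, keepdims=True)  # toy constellation

      best = min(itertools.combinations(range(8), 4),
                 key=lambda idx: gdop(sats[list(idx)], receiver))
      print("best 4-satellite subset:", best, "GDOP:", round(gdop(sats[list(best)], receiver), 2))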

  1. Fast Matrix Computation Algorithms Based on Rough Attribute Vector Tree Method in RDSS

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The concepts of a Rough Decision Support System (RDSS) and an equivalence matrix are introduced in this paper. Based on a rough attribute vector tree (RAVT) method, two kinds of matrix computation algorithms - Recursive Matrix Computation (RMC) and Parallel Matrix Computation (PMC) - are proposed, in which rule extraction, attribute reduction and data cleaning are finished synchronously. The algorithms emphasize the practicability and efficiency of rule generation. A case study of PMC is analyzed, and a comparison experiment with the RMC algorithm shows that it is feasible and efficient for data mining and knowledge discovery in RDSS.

  2. A Nonlinear Lagrange Algorithm for Stochastic Minimax Problems Based on Sample Average Approximation Method

    Directory of Open Access Journals (Sweden)

    Suxiang He

    2014-01-01

    An implementable nonlinear Lagrange algorithm for stochastic minimax problems based on the sample average approximation method is presented in this paper, in which the second step minimizes a nonlinear Lagrange function built from sample average approximations of the original functions, and the sample average approximation of the Lagrange multiplier is adopted. Under a set of mild assumptions, it is proven that the sequences of solutions and multipliers obtained by the proposed algorithm converge to the Kuhn-Tucker pair of the original problem with probability one as the sample size increases. Finally, numerical experiments on five test examples are performed, and the numerical results indicate that the algorithm is promising.

  3. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    Science.gov (United States)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI data-sets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.

  4. NONLINEAR FILTER METHOD OF GPS DYNAMIC POSITIONING BASED ON BANCROFT ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qin; TAO Ben-zao; ZHAO Chao-ying; WANG Li

    2005-01-01

    Because of the terms neglected after linearization, the extended Kalman filter (EKF) becomes a form of suboptimal gradient descent algorithm. A divergent tendency exists in the GPS solution when the filter equations are ill-posed, and the deviation in the estimation cannot be avoided. Furthermore, the true solution may be lost in pseudorange positioning because the linearized pseudorange equations are partial solutions. To solve these problems in GPS dynamic positioning with the EKF, a closed-form Kalman filter method called the two-stage algorithm is presented for the nonlinear algebraic solution of GPS dynamic positioning, based on the global nonlinear least squares closed-form algorithm, the Bancroft numerical algorithm. The method separates the spatial parts from the temporal parts when processing the GPS filter problem and solves the nonlinear GPS dynamic positioning, thus giving stable and reliable dynamic positioning solutions.

  5. Improved Algorithm for Weak GPS Signal Acquisition Based on Delay-accumulation Method

    Directory of Open Access Journals (Sweden)

    LI Yuanming

    2016-01-01

    A new improved algorithm is proposed to solve the problem that traditional algorithms cannot acquire weak GPS signals in a weak-signal environment. The algorithm is based on an analysis of the double block zero padding (DBZP) algorithm and adopts a delay-accumulation method to temporarily retain the operation results that are discarded in the DBZP algorithm. After a delay of 1 ms, the corresponding correlation results are obtained; they are then superimposed with the temporarily retained results, and the coherent accumulation results are compared with the threshold value. The improved algorithm increases the usable measurement data by improving the utilization of the correlation results, at the cost of only a small increase in computation. Simulation results show that the improved algorithm raises the processing gain of the acquisition algorithm and is able to capture signals whose carrier-to-noise ratio (C/N0) is 17 dB-Hz, with a detection probability of 91%.

  6. A Load Balancing Algorithm Based on Maximum Entropy Methods in Homogeneous Clusters

    Directory of Open Access Journals (Sweden)

    Long Chen

    2014-10-01

    In order to solve the problems of ill-balanced task allocation, long response times, low throughput and poor performance when a cluster system assigns tasks, we introduce the thermodynamic concept of entropy into load balancing algorithms. This paper proposes a new load balancing algorithm for homogeneous clusters based on the Maximum Entropy Method (MEM). By calculating the entropy of the system and using the maximum entropy principle to ensure that each scheduling and migration step follows the increasing tendency of the entropy, the system can reach the load-balanced state as soon as possible, shorten task execution times and achieve high performance. Simulation experiments show that this algorithm reaches load balance faster and to a greater extent than traditional algorithms for homogeneous cluster systems. It also provides new ideas for solving the load balancing problem of homogeneous cluster systems.

  7. Prediction method of rock burst proneness based on rough set and genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    YU Huai-chang; LIU Hai-ning; LU Xue-song; LIU Han-dong

    2009-01-01

    A new method based on rough set theory and genetic algorithm was proposed to predict the rock burst proneness. Nine influencing factors were first selected, and then, the decision table was set up. Attributes were reduced by genetic algorithm. Rough set was used to extract the simplified decision rules of rock burst proneness. Taking the practical engineering for example, the rock burst proneness was evaluated and predicted by decision rules. Comparing the prediction results with the actual results, it shows that the proposed method is feasible and effective.

  8. Cognitive Radio Spectrum Sensing Algorithms based on Eigenvalue and Covariance methods

    Directory of Open Access Journals (Sweden)

    K.SESHU KUMAR

    2013-04-01

    Spectrum sensing is fundamental in cognitive radio systems: the basic problem of cognitive radio is to identify whether primary users are present in the licensed spectrum. This paper deals with a new sensing scheme based on the eigenvalues of the covariance matrix of the signals received by the secondary users. Two sensing algorithms are suggested: one based on the ratio of the maximum to the minimum eigenvalue, the other on the ratio of the average to the minimum eigenvalue. Both are derived using recent results from random matrix theory (RMT), which produce accurate expressions. The distributions of the ratios are used to derive the probability of detection (Pd) and the probability of false alarm (Pfa) for the proposed algorithms, and to find the threshold for a given Pfa. This approach mitigates the noise uncertainty problem and improves performance over energy detection when a highly correlated signal is present. The paper also deals with covariance-based methods. The first is the statistical covariance method: the covariance of the noise differs from that of the received signal, and this difference is used to decide whether a primary user is present where there would otherwise be only noise. These algorithms are implemented using a small number of received signal samples, which are processed to calculate the sample covariance matrix; from the sample covariance matrix, two test statistics are extracted and compared to decide on signal presence. These methods are useful in many signal detection applications and need no information about the signal, the channel or the noise power. The simulations are based on two kinds of data: randomly generated signals and captured DTV broadcast signals from the ATSC. These methods confirm and verify the efficiency of the proposed
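
    A compact sketch of the maximum-to-minimum eigenvalue detector mentioned above: stack delayed copies of the received samples, form the sample covariance matrix and compare the eigenvalue ratio with a threshold. The smoothing factor, the threshold and the toy signal are assumptions; in practice the threshold comes from the RMT-based false-alarm analysis.

      import numpy as np

      def mme_statistic(x, L=8):
          """lambda_max / lambda_min of the sample covariance of L stacked delayed copies."""
          N = len(x) - L + 1
          X = np.stack([x[i:i + N] for i in range(L)])     # L x N data matrix
          R = (X @ X.conj().T) / N                         # sample covariance matrix
          eig = np.linalg.eigvalsh(R)                      # eigenvalues in ascending order
          return float(eig[-1] / eig[0])

      rng = np.random.default_rng(0)
      t = np.arange(4000)
      noise_only = rng.normal(size=t.size)
      with_signal = np.cos(2 * np.pi * 0.01 * t) + rng.normal(size=t.size)

      threshold = 2.0          # illustrative; set from the target Pfa in practice
      for name, x in [("noise only", noise_only), ("primary user present", with_signal)]:
          ratio = mme_statistic(x)
          print(f"{name}: ratio = {ratio:.2f}, occupied = {ratio > threshold}")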

  9. 2D-3D Face Recognition Method Based on a Modified CCA-PCA Algorithm

    Directory of Open Access Journals (Sweden)

    Patrik Kamencay

    2014-03-01

    This paper presents a proposed methodology for face recognition based on an information theory approach to coding and decoding face images. We propose a 2D-3D face-matching method based on a principal component analysis (PCA) algorithm that uses canonical correlation analysis (CCA) to learn the mapping between a 2D face image and 3D face data. This method makes it possible to match a 2D face image with enrolled 3D face data. Our proposed fusion algorithm is based on the PCA method, which is applied to extract base features; PCA feature-level fusion requires the extraction of different features from the source data before the features are merged together. Experimental results on the TEXAS face image database show that the classification and recognition results of the modified CCA-PCA method are superior to those of the CCA method. In the 2D-3D face-matching test, the CCA method achieved a rather poor recognition rate of 55%, while the modified CCA method based on PCA-level fusion achieved a very good recognition score of 85%.
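
    A small scikit-learn sketch of the PCA-plus-CCA idea: reduce 2D and 3D feature vectors with PCA and learn a CCA mapping between the two modalities, in whose shared space probe and gallery samples can be compared. The random feature matrices stand in for real 2D images and 3D face data, scikit-learn is an assumed dependency, and all dimensionalities are illustrative.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      n_subjects = 100
      latent = rng.normal(size=(n_subjects, 10))               # shared identity factors
      feat_2d = latent @ rng.normal(size=(10, 200)) + 0.1 * rng.normal(size=(n_subjects, 200))
      feat_3d = latent @ rng.normal(size=(10, 300)) + 0.1 * rng.normal(size=(n_subjects, 300))

      pca_2d = PCA(n_components=20).fit(feat_2d)               # base features per modality
      pca_3d = PCA(n_components=20).fit(feat_3d)
      cca = CCA(n_components=5).fit(pca_2d.transform(feat_2d), pca_3d.transform(feat_3d))

      # Project both modalities into the shared CCA space and check the learned mapping.
      U, V = cca.transform(pca_2d.transform(feat_2d), pca_3d.transform(feat_3d))
      print("first canonical correlation:", round(float(np.corrcoef(U[:, 0], V[:, 0])[0, 1]), 3))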

  10. Numerical study of variational data assimilation algorithms based on decomposition methods in atmospheric chemistry models

    Science.gov (United States)

    Penenko, Alexey; Antokhin, Pavel

    2016-11-01

    The performance of a variational data assimilation algorithm for a transport and transformation model of atmospheric chemical composition is studied numerically in the case where the emission inventories are missing while there are additional in situ indirect concentration measurements. The algorithm is based on decomposition and splitting methods with a direct solution of the data assimilation problems at the splitting stages. This design allows avoiding iterative processes and working in real-time. In numerical experiments we study the sensitivity of data assimilation to measurement data quantity and quality.

  11. Performance Analysis of Transfer function Based Active Noise Cancellation Method Using Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Prof. Vikas Gupta

    2014-01-01

    Due to the exponential increase in noise pollution, the demand for noise control systems is also increasing. Basically, two types of techniques are used for noise cancellation: active and passive. Passive techniques are ineffective for low-frequency noise, hence there is an increasing demand for research and development work on active noise cancellation techniques. In this paper we introduce a new method for active noise cancellation: a transfer-function-based method that uses genetic and particle swarm optimization (PSO) algorithms. This method is very simple and efficient for low-frequency noise cancellation. We analyze the performance of this method in the presence of white Gaussian noise and compare the results of particle swarm optimization (PSO) and the genetic algorithm; since the two algorithms suit different environments, we observe their performance in different settings. This paper presents a comparative study of the genetic and PSO algorithms with appropriate results, explains what the transfer function method is and how it works, and discusses its advantages over neural-network-based methods.

  12. A fast beam hardening correction method incorporated in a filtered back-projection based MAP algorithm

    Science.gov (United States)

    Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning

    2017-03-01

    The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp–Davis–Kress method based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artefacts.

  13. An effective trust-based recommendation method using a novel graph clustering algorithm

    Science.gov (United States)

    Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin

    2015-10-01

    Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues such as data sparsity and cold start problems, caused by fewer ratings against the unknowns that need to be predicted. Incorporating trust information into the collaborative filtering systems is an attractive approach to resolve these problems. In this paper, we present a model-based collaborative filtering method by applying a novel graph clustering algorithm and also considering trust statements. In the proposed method first of all, the problem space is represented as a graph and then a sparsest subgraph finding algorithm is applied on the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate users/items clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.

  14. Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases

    Institute of Scientific and Technical Information of China (English)

    郑建军; 刘玉树; 陈立潮

    2003-01-01

    The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others, so learning BN structures with general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. The method uses a self-organizing mechanism to develop a genetic algorithm that extends the number of crossover operators from one to two, introduces mutual competition between them, and even adjusts the number of parents in the recombination (crossover/recomposition) scheme. Combined with the K2 algorithm, the method also optimizes the genetic operators and makes full use of domain knowledge. As a result, the method is able to find a global optimum of the BN topology while avoiding premature convergence to local extrema. Experimental results demonstrate the effectiveness of the method, and the convergence of the SGA is discussed.

  15. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    Science.gov (United States)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting, on OCTOPUS, the ESO end-to-end simulation tool.
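
    For the tomography step, the Kaczmarz algorithm named above is a classical row-action iteration for a linear system. The sketch below shows only that generic iteration on a toy system; the atmospheric forward operator and the telescope data are not modeled, so this illustrates the building block rather than the paper's reconstructor.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200, x0=None):
    """Cyclic Kaczmarz iteration for A x = b (project onto one row equation at a time)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms = np.einsum('ij,ij->i', A, A)          # squared norm of each row
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] > 0:
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny illustration on a random consistent system (not telescope data).
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
x_hat = kaczmarz(A, A @ x_true)
print("max error:", np.max(np.abs(x_hat - x_true)))
```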

  16. Method and application of wavelet shrinkage denoising based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Genetic algorithm (GA) based wavelet transform threshold shrinkage (WTS) and translation-invariant threshold shrinkage (TIS) are introduced for noise reduction, where the parameters used in WTS and TIS, such as the wavelet function, the number of decomposition levels, hard or soft thresholding, and the threshold value, can be selected automatically. The paper ends by comparing the two noise reduction methods on the basis of their denoising performance, computation time, etc. The effectiveness of the methods introduced in this paper is validated by the results of analysis of simulated and real signals.
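
    As a rough sketch of the shrinkage step that such a GA would tune, the code below denoises a signal for one candidate parameter set (wavelet name, decomposition level, threshold value, hard/soft mode) using PyWavelets, and exposes a fitness function a GA could minimize. Using the MSE against a known clean signal as the fitness is an assumption made only so the example is self-contained.

```python
import numpy as np
import pywt

def denoise(signal, wavelet='db4', level=4, thr=0.2, mode='soft'):
    """Wavelet shrinkage for one candidate parameter set (one GA individual)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Shrink the detail coefficients only; keep the approximation untouched.
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

def fitness(candidate, noisy, clean):
    """A GA would minimize this value for each individual (assumed MSE criterion)."""
    est = denoise(noisy, *candidate)
    return float(np.mean((est - clean) ** 2))

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(t.size)
print(fitness(('db4', 4, 0.2, 'soft'), noisy, clean))
```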

  17. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  18. An adaptive turbo-shaft engine modeling method based on PS and MRR-LSSVR algorithms

    Institute of Scientific and Technical Information of China (English)

    Wang Jiankang; Zhang Haibo; Yan Changkai; Duan Shujing; Huang Xianghua

    2013-01-01

    In order to establish an adaptive turbo-shaft engine model with high accuracy, a new modeling method based on parameter selection (PS) algorithm and multi-input multi-output recursive reduced least square support vector regression (MRR-LSSVR) machine is proposed. Firstly, the PS algorithm is designed to choose the most reasonable inputs of the adaptive module. During this process, a wrapper criterion based on least square support vector regression (LSSVR) machine is adopted, which can not only reduce computational complexity but also enhance generalization performance. Secondly, with the input variables determined by the PS algorithm, a mapping model of engine parameter estimation is trained off-line using MRR-LSSVR, which has a satisfying accuracy within 5‰. Finally, based on a numerical simulation platform of an integrated helicopter/turbo-shaft engine system, an adaptive turbo-shaft engine model is developed and tested in a certain flight envelope. Under the condition of single or multiple engine components being degraded, many simulation experiments are carried out, and the simulation results show the effectiveness and validity of the proposed adaptive modeling method.

  19. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    Science.gov (United States)

    Wang, Xiaojia; Mao, Qirong; Zhan, Yongzhao

    2008-11-01

    There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist, the recognition result is unsatisfactory, and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected using the contribution analysis algorithm of the NN from the 95 extracted features. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.

  20. State Generation Method for Humanoid Motion Planning Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xuyang Wang

    2008-11-01

    Full Text Available A new approach to generate the original motion data for humanoid motion planning is presented in this paper, and a state generator is developed based on the genetic algorithm, which enables users to generate various motion states without using any reference motion data. By specifying various types of constraints, such as configuration constraints and contact constraints, the state generator can generate stable states that satisfy the constraint conditions for humanoid robots. To deal with the multiple constraints and inverse kinematics, the state generation is finally simplified into an optimization and search problem. In our method, we introduce a convenient mathematical representation for the constraints involved in the state generator, and solve the optimization problem with the genetic algorithm to acquire a desired state. To demonstrate the effectiveness and advantage of the method, a number of motion states are generated according to the requirements of the motion.

  1. A method for aircraft magnetic interference compensation based on small signal model and LMS algorithm

    Institute of Scientific and Technical Information of China (English)

    Zhou Jianjun; Lin Chunsheng; Chen Hao

    2014-01-01

    Aeromagnetic interference cannot be compensated effectively if the precision of the parameters solved from the aircraft magnetic field model is low. In order to improve the compensation effect under this condition, a method based on a small signal model and the least mean square (LMS) algorithm is proposed. In this method, the initial values of the adaptive filter's weight vector are first calculated from the solved model parameters through the small signal model; then the small variations of the direction cosines and their derivatives are set as the filter input, and the small variation of the interference is set as the filter's expected signal. After that, the aircraft magnetic interference is compensated by the LMS algorithm. Finally, the method is verified by simulation and experiment. The results show that the compensation effect can be improved obviously by the LMS algorithm when the originally solved parameters have low precision, and that the method can further improve the compensation effect even if the solved parameters have high precision.
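
    The compensation stage described above is essentially an LMS adaptive filter with model-derived initial weights. The sketch below shows that generic LMS recursion only; the tap count, step size and signal names are placeholders, and the construction of the small-signal direction-cosine inputs from flight data is not reproduced.

```python
import numpy as np

def lms_compensate(x, d, n_taps=16, mu=0.01, w0=None):
    """Generic LMS adaptive filter.
    x : input sequence (here it would be the small-signal direction-cosine terms)
    d : desired sequence (here the measured interference)
    Returns the residual after compensation and the final weights."""
    w = np.zeros(n_taps) if w0 is None else w0.copy()   # w0: initial weights from the model
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]        # most recent samples first
        y = w @ u                        # estimated interference
        e[n] = d[n] - y                  # residual (compensated signal)
        w += mu * e[n] * u               # LMS weight update
    return e, w
```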

  2. An efficient method of key-frame extraction based on a cluster algorithm.

    Science.gov (United States)

    Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng

    2013-12-18

    This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data.
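
    The extractor above relies on an improved ISODATA clustering whose details are not reproduced here; as a simplified stand-in, the sketch below clusters pose vectors with plain k-means (scikit-learn) and returns, for each cluster, the frame closest to its centre as a key-frame. The feature representation and cluster count are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_keyframes(frames, n_clusters=8):
    """frames: (n_frames, n_features) pose vectors from motion capture.
    Returns key-frame indices: the frame nearest each cluster centre
    (k-means stands in here for the paper's improved ISODATA)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(frames)
    keys = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        dists = np.linalg.norm(frames[members] - km.cluster_centers_[c], axis=1)
        keys.append(int(members[dists.argmin()]))
    return sorted(keys)
```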

  3. Zoning Modulus Inversion Method for Concrete Dams Based on Chaos Genetic Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hao Gu

    2015-01-01

    Full Text Available For aged dams and rock foundations, the actual mechanical parameters sometimes differ from the design and experimental values. Therefore, it is necessary to carry out inversion analysis of the main physical and mechanical parameters of dams and rock foundations. However, only an integrated deformation modulus can be inverted with the conventional inversion method, which does not match the actual situation. Therefore, a new method is developed in this paper to invert the actual initial zoning deformation moduli and to determine the inversion objective function for the actual zoning deformation moduli, based on measured dam displacement data and finite element calculation results. Furthermore, based on the chaos genetic optimization algorithm, an inversion method for the zoning deformation moduli of the dam, dam foundation and reservoir basin is proposed. Combined with a project case, the feasibility and validity of the proposed method are verified.

  4. Free Energy-Based Conformational Search Algorithm Using the Movable Type Sampling Method.

    Science.gov (United States)

    Pan, Li-Li; Zheng, Zheng; Wang, Ting; Merz, Kenneth M

    2015-12-08

    In this article, we extend the movable type (MT) sampling method to molecular conformational searches (MT-CS) on the free energy surface of the molecule in question. Differing from traditional systematic and stochastic searching algorithms, this method uses Boltzmann energy information to facilitate the selection of the best conformations. The generated ensembles provided good coverage of the available conformational space including available crystal structures. Furthermore, our approach directly provides the solvation free energies and the relative gas and aqueous phase free energies for all generated conformers. The method is validated by a thorough analysis of thrombin ligands as well as against structures extracted from both the Protein Data Bank (PDB) and the Cambridge Structural Database (CSD). An in-depth comparison between OMEGA and MT-CS is presented to illustrate the differences between the two conformational searching strategies, i.e., energy-based versus free energy-based searching. These studies demonstrate that our MT-based ligand conformational search algorithm is a powerful approach to delineate the conformational ensembles of molecular species on free energy surfaces.

  5. Operational Modal Analysis Based on Subspace Algorithm with an Improved Stabilization Diagram Method

    Directory of Open Access Journals (Sweden)

    Shiqiang Qin

    2016-01-01

    Full Text Available Subspace-based algorithms for operational modal analysis have been extensively studied in the past decades. In the postprocessing of subspace-based algorithms, the stabilization diagram is often used to determine modal parameters. In this paper, an improved stabilization diagram is proposed for stochastic subspace identification. Specifically, a model order selection method based on singular entropy theory is first proposed. The singular entropy increment is calculated from the nonzero singular values of the output covariance matrix, and the corresponding model order can be selected when the variation of the singular entropy increment approaches zero. Then, a stabilization diagram with confidence intervals, established using the uncertainty of the modal parameters, is presented. Finally, a simulation example of a four-story structure and a full-scale cable-stayed footbridge application are employed to illustrate the improved stabilization diagram method. The study demonstrates that the model order can be reasonably determined by the proposed method and that the stabilization diagram with confidence intervals can effectively remove spurious modes.
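
    A minimal sketch of the order-selection idea described above, under the assumption that the singular entropy increment of the i-th singular value is defined as -(s_i / sum s) * log(s_i / sum s); the paper's exact definition and stopping tolerance are not given here, so both are placeholders.

```python
import numpy as np

def singular_entropy_increments(H):
    """Singular entropy increments of a covariance-type matrix H.
    Assumed definition: dE_i = -(s_i / sum s) * log(s_i / sum s)."""
    s = np.linalg.svd(H, compute_uv=False)
    s = s[s > 1e-12]                      # keep nonzero singular values only
    p = s / s.sum()
    return -p * np.log(p)

def select_model_order(H, tol=1e-3):
    """Pick the order after which the increment essentially stops changing."""
    dE = singular_entropy_increments(H)
    diffs = np.abs(np.diff(dE))
    below = np.flatnonzero(diffs < tol)
    return int(below[0]) + 1 if below.size else len(dE)
```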

  6. A Method for Streamlining and Assessing Sound Velocity Profiles Based on Improved D-P Algorithm

    Science.gov (United States)

    Zhao, D.; WU, Z. Y.; Zhou, J.

    2015-12-01

    A multi-beam system transmits sound waves and receives the round-trip time of their reflection or scattering, so the depth and coordinates of the detected targets can be determined from the sound velocity profile (SVP) based on Snell's Law. The SVP is measured by a dedicated instrument. Because of the high sampling rate of modern devices, the operational time of ray tracing and beam footprint reduction increases, lowering the overall efficiency. To improve the timeliness of multi-beam surveys and data processing, redundant points in the original SVP must be screened out, and at the same time the errors introduced by streamlining the SVP must be evaluated and controlled. We present a new streamlining and evaluation method based on the Maximum Offset of sound Velocity (MOV) algorithm. Based on the measured SVP data, this method selects sound velocity data points by calculating the maximum offset in the sound-velocity dimension with an improved Douglas-Peucker algorithm to streamline the SVP (Fig. 1). To evaluate whether the streamlined SVP meets the desired accuracy requirements, the method is divided into two parts: SVP streamlining, and an accuracy analysis of the multi-beam sounding data processed using the streamlined SVP. The method therefore consists of two modules: the streamlining module and the evaluation module (Fig. 2). The streamlining module streamlines the SVP; its core is the MOV algorithm. To assess the accuracy of the streamlined SVP, we use ray tracing and a percentage error analysis to evaluate the accuracy of the sounding data both before and after streamlining the SVP (Fig. 3). By automatically optimizing the threshold, the reduction rate of the sound velocity profile data can reach over 90% and the standard deviation of the percentage error of the sounding data can be controlled to within 0.1% (Fig. 4). The streamlined sound velocity profile data improve the operational efficiency of the multi-beam survey and data post-processing.
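
    As an illustration of the Douglas-Peucker-style thinning described above, the sketch below keeps a profile point whenever its offset in the sound-velocity dimension from the straight segment between the retained endpoints exceeds a tolerance. Treating the offset purely in the velocity dimension and the tolerance value are assumptions made in the spirit of the MOV criterion, not the paper's exact formulation.

```python
import numpy as np

def streamline_svp(depth, velocity, tol=0.05):
    """Recursive Douglas-Peucker-style thinning of a sound velocity profile.
    Assumes strictly increasing depths; offset is measured in the velocity dimension."""
    keep = np.zeros(len(depth), dtype=bool)
    keep[0] = keep[-1] = True

    def recurse(i, j):
        if j <= i + 1:
            return
        # Velocity predicted by the straight segment between retained points i and j.
        frac = (depth[i + 1:j] - depth[i]) / (depth[j] - depth[i])
        v_line = velocity[i] + frac * (velocity[j] - velocity[i])
        offsets = np.abs(velocity[i + 1:j] - v_line)
        k = int(offsets.argmax()) + i + 1
        if offsets.max() > tol:
            keep[k] = True
            recurse(i, k)
            recurse(k, j)

    recurse(0, len(depth) - 1)
    return depth[keep], velocity[keep]
```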

  7. Predicting Modeling Method of Ship Radiated Noise Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guohui Li

    2016-01-01

    Full Text Available Because the forming mechanism of underwater acoustic signal is complex, it is difficult to establish the accurate predicting model. In this paper, we propose a nonlinear predicting modeling method of ship radiated noise based on genetic algorithm. Three types of ship radiated noise are taken as real underwater acoustic signal. First of all, a basic model framework is chosen. Secondly, each possible model is done with genetic coding. Thirdly, model evaluation standard is established. Fourthly, the operation of genetic algorithm such as crossover, reproduction, and mutation is designed. Finally, a prediction model of real underwater acoustic signal is established by genetic algorithm. By calculating the root mean square error and signal error ratio of underwater acoustic signal predicting model, the satisfactory results are obtained. The results show that the proposed method can establish the accurate predicting model with high prediction accuracy and may play an important role in the further processing of underwater acoustic signal such as noise reduction and feature extraction and classification.

  8. Multi-frequency synthesis algorithm based on Generalized Maximum Entropy Method. Application to 0954+658

    CERN Document Server

    Bajkova, Anisa T

    2011-01-01

    We propose the multi-frequency synthesis (MFS) algorithm with spectral correction of frequency-dependent source brightness distribution based on maximum entropy method. In order to take into account the spectral terms of n-th order in the Taylor expansion for the frequency-dependent brightness distribution, we use a generalized form of the maximum entropy method suitable for reconstruction of not only positive-definite functions, but also sign-variable ones. The proposed algorithm is aimed at producing both improved total intensity image and two-dimensional spectral index distribution over the source. We consider also the problem of frequency-dependent variation of the radio core positions of self-absorbed active galactic nuclei, which should be taken into account in a correct multi-frequency synthesis. First, the proposed MFS algorithm has been tested on simulated data and then applied to four-frequency synthesis imaging of the radio source 0954+658 from VLBA observational data obtained quasi-simultaneously ...

  9. An Improved Algorithm of Grounding Grids Corrosion Diagnosis Based on Total Least Square Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ying-jiao; NIU Tao; WANG Sen

    2011-01-01

    A new model considering corrosion properties is proposed for grounding grid diagnosis, which provides reference solutions for ambiguous branches. The constrained total least square method based on singular value decomposition is adopted to improve the effectiveness of the grounding grid diagnosis algorithm. The improvement can weaken the influence of model error, which results from differences between the design drawings and the actual grid. The influence of the interior resistance of the conductors on the touch and step voltages is taken into account. Simulation results show the validity of this approach.
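
    For reference, the unconstrained core of a total-least-squares solve is a single SVD of the augmented matrix [A | b]; the sketch below shows only that textbook step, not the constrained variant the record describes.

```python
import numpy as np

def total_least_squares(A, b):
    """Basic total least squares for A x ~ b via SVD of the augmented matrix.
    (The paper uses a *constrained* TLS; this shows only the SVD core idea.)"""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]          # TLS estimate of x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(30)   # noisy right-hand side
print(total_least_squares(A, b))
```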

  10. Research and Simulation of FECG Signal Blind Separation Algorithm Based on Gradient Method

    Directory of Open Access Journals (Sweden)

    Yu Chen

    2012-08-01

    Full Text Available Independent Component Analysis (ICA) is a signal separation and digital analysis technique developed in recent years. ICA has been widely used because it does not require prior information about the signals, and it has become a hot spot in the signal processing field. In this study, we first introduce the principle and meaning of blind source separation and the gradient-based blind source separation algorithms. Using the traditional natural gradient algorithm and the Equivariant Adaptive Source Separation via Independence (EASI) blind separation algorithm, mixed ECG signals with noise are effectively separated into the Maternal Electrocardiograph (MECG) signal, the Fetal Electrocardiograph (FECG) signal and the noise signal. The separation tests show that the EASI algorithm can better separate the fetal ECG signal, and because the gradient algorithm is an online algorithm, it can be used for real-time detection of the clinical fetal ECG signal, which has important practical value and research significance.
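
    A compact sketch of the kind of serial update EASI uses, assuming the Cardoso-Laheld form of the rule W <- W - mu (y y^T - I + g(y) y^T - y g(y)^T) W with y = W x; the nonlinearity, step size and data layout are placeholders, and no ECG preprocessing is included.

```python
import numpy as np

def easi_separate(X, mu=0.002, n_passes=20, g=np.tanh):
    """Serial EASI-style updates; X is (n_sources, n_samples) of mixed signals.
    Returns the estimated sources and the separating matrix W."""
    n = X.shape[0]
    W = np.eye(n)
    I = np.eye(n)
    for _ in range(n_passes):
        for x in X.T:                       # one sample at a time (online update)
            y = W @ x
            gy = g(y)
            W -= mu * (np.outer(y, y) - I + np.outer(gy, y) - np.outer(y, gy)) @ W
    return W @ X, W
```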

  11. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    Science.gov (United States)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.

  12. A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm

    Science.gov (United States)

    de Brito, Daniel M.; Maracaja-Coutinho, Vinicius; de Farias, Savio T.; Batista, Leonardo V.; do Rêgo, Thaís G.

    2016-01-01

    Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Nevertheless, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic islands prediction have been proposed. However, none of them are capable of predicting precisely the complete repertory of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distribution in different species. In this paper, we present a novel method to predict GIs that is built upon mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP—Mean Shift Genomic Island Predictor. Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also different novel unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions for this new methodology are available at http://msgip.integrativebioinformatics.me. PMID:26731657
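
    As a small illustration of the clustering core named above, the sketch below applies scikit-learn's mean shift with an automatically estimated bandwidth to per-window composition features. How those feature vectors are built from a genome (e.g. GC content or k-mer signatures) is an assumption for the example and is not taken from the MSGIP pipeline.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_windows(features):
    """features: (n_windows, n_features) composition vectors for genome windows
    (assumed to be e.g. GC content and k-mer frequencies). The bandwidth is
    estimated from the data, echoing the automatic setting described above."""
    bw = estimate_bandwidth(features, quantile=0.2)
    ms = MeanShift(bandwidth=bw).fit(features)
    return ms.labels_, ms.cluster_centers_

# Toy example: two composition regimes, standing in for host and transferred regions.
rng = np.random.default_rng(5)
feats = np.vstack([rng.normal(0.4, 0.02, (200, 3)), rng.normal(0.6, 0.02, (40, 3))])
labels, centers = cluster_windows(feats)
print("clusters found:", len(centers))
```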

  13. A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.

    Directory of Open Access Journals (Sweden)

    Daniel M de Brito

    Full Text Available Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Nevertheless, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic islands prediction have been proposed. However, none of them are capable of predicting precisely the complete repertory of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distribution in different species. In this paper, we present a novel method to predict GIs that is built upon mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP--Mean Shift Genomic Island Predictor. Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also different novel unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions for this new methodology are available at http://msgip.integrativebioinformatics.me.

  14. Nonlinear Electrical Circuit Oscillator Control Based on Backstepping Method: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Mahsa Khoeiniha

    2012-01-01

    Full Text Available This paper investigates the dynamics of a nonlinear electrical circuit by means of modern nonlinear techniques and the control of a class of chaotic systems using a backstepping method based on a Lyapunov function. The behavior of such nonlinear systems under the influence of external sinusoidal disturbances with unknown amplitudes is considered. The objective is to analyze the performance of the system at different disturbance amplitudes. We illustrate the proposed approach on the Duffing oscillator problem, stabilizing the system at its equilibrium point. A genetic algorithm (GA) is also used to compute the controller parameters; the GA can be successfully applied to achieve a better controller. Simulation results show the effectiveness of the proposed method.

  15. Flow Based Algorithm

    Directory of Open Access Journals (Sweden)

    T. Karpagam

    2012-01-01

    Full Text Available Problem statement: Network topology design problems find application in several real-life scenarios. Approach: Most designs in the past optimize for a single criterion, such as shortest path, cost minimization or maximum flow. Results: This study discusses solving a multi-objective network topology design problem for a realistic traffic model, specifically in pipeline transportation. The flow-based algorithm presented here aims to transport liquid goods at maximum capacity over the shortest distance, and is developed in the spirit of the basic PERT and critical path methods. Conclusion/Recommendations: The flow-based algorithm helps to give an optimal result for transporting maximum capacity at minimum cost. It could be used in juice factories and the milk industry, and is a good alternative for the vehicle routing problem.

  16. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound directly collected in an industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho were discontinuous, many modified functions with continuous first and second order derivatives were presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced into the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.

  17. A new method for image segmentation based on Fuzzy C-means algorithm on pixonal images formed by bilateral filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara

    2013-01-01

    In this paper, a new pixon-based method is presented for image segmentation. In the proposed algorithm, bilateral filtering is used as a kernel function to form a pixonal image. Using this filter reduces the noise and smoothes the image slightly. By using this pixon-based method, image over-segmentation is reduced, and the pixonal image is then segmented by the hierarchical clustering method (Fuzzy C-means algorithm). The experimental results show that the proposed pixon-based approach has a reduced computational load and a better accuracy compared to the other existing pixon-based image segmentation techniques.

  18. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    Science.gov (United States)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  19. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs.

    Science.gov (United States)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  20. Double images encryption method with resistance against the specific attack based on an asymmetric algorithm.

    Science.gov (United States)

    Wang, Xiaogang; Zhao, Daomu

    2012-05-21

    A double-image encryption technique based on an asymmetric algorithm is proposed. In this method, the encryption process is different from the decryption process, and the encryption keys are different from the decryption keys. In the nonlinear encryption process, the images are encoded into an amplitude ciphertext, and two phase-only masks (POMs) generated by phase truncation are kept as keys for decryption. By using the classical double random phase encoding (DRPE) system, the primary images can be collected by an intensity detector located at the output plane. Three random POMs applied in the asymmetric encryption can safely serve as public keys. Simulation results are presented to demonstrate the validity and security of the proposed protocol.
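
    The classical DRPE step mentioned in the record is easy to sketch with two FFTs; the code below shows only that linear encoding/decoding core (the asymmetric phase-truncation part of the proposed scheme is not reproduced), with random phase masks generated on the spot for the example.

```python
import numpy as np

def drpe_encrypt(img, rpm1, rpm2):
    """Classical double random phase encoding: spatial mask, Fourier mask, inverse FFT."""
    return np.fft.ifft2(np.fft.fft2(img * rpm1) * rpm2)

def drpe_decrypt(cipher, rpm2):
    """Undo the Fourier-plane mask; |.| removes the remaining unit-modulus spatial mask."""
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(rpm2)))

rng = np.random.default_rng(3)
img = rng.random((64, 64))                               # stand-in primary image
rpm1 = np.exp(2j * np.pi * rng.random(img.shape))        # random phase-only masks
rpm2 = np.exp(2j * np.pi * rng.random(img.shape))
cipher = drpe_encrypt(img, rpm1, rpm2)
print("recovered:", np.allclose(drpe_decrypt(cipher, rpm2), img))
```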

  1. Optimizing Properties of Aluminum-Based Nanocomposites by Genetic Algorithm Method

    Directory of Open Access Journals (Sweden)

    M.R. Dashtbayazi

    2015-07-01

    Full Text Available Based on molecular dynamics simulation results, a model was developed for determining the elastic properties of aluminum nanocomposites reinforced with silicon carbide particles. Two models for predicting the density and price of the nanocomposites were also suggested. The optimal volume fraction of reinforcement was then obtained by the genetic algorithm method for the lowest density and price and the highest elastic properties. Based on the optimization results, the optimum volume fraction of reinforcement was 0.44, for which the Young's modulus, shear modulus, price and density of the nanocomposite were 165.89 GPa, 111.37 GPa, 8.75 $/lb and 2.92 g/cm3, respectively.

  2. Cyndi: a multi-objective evolution algorithm based method for bioactive molecular conformational generation

    Directory of Open Access Journals (Sweden)

    Li Honglin

    2009-03-01

    Full Text Available Abstract Background Conformation generation is a ubiquitous problem in molecule modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and systematic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolution algorithm. Results The conformational perturbation is subjected to evolutionary operation on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto optimal conformers to be energy-favoured as well as evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations, which have been observed to impact the molecular bioactivity (J Comput-Aided Mol Des 2002, 16: 105–112). Testing the performance of Cyndi against a test set consisting of 329 small molecules reveals an average minimum RMSD of 0.864 Å to the corresponding bioactive conformations, indicating Cyndi is highly competitive against other conformation generation methods. Meanwhile, the high-speed performance (0.49 ± 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A copy of the precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. Conclusion On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms

  3. Genetic Algorithm Based Simulated Annealing Method for Solving Unit Commitment Problem in Utility System

    Science.gov (United States)

    Rajan, C. Christober Asir

    2010-10-01

    The objective of this paper is to find the generation schedule such that the total operating cost can be minimized when subjected to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. Genetic algorithms (GAs) are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. Here, the unit commitment schedule is coded as a string of symbols and an initial population of parent solutions is generated at random, with each schedule formed by committing all the units according to their initial status ("flat start"). The parents are obtained from a pre-defined set of solutions, i.e. each solution is adjusted to meet the requirements. Then, a random recommitment is carried out with respect to the units' minimum down times, and simulated annealing (SA) improves the schedule. A 66-bus utility power system with twelve generating units in India demonstrates the effectiveness of the proposed approach. Numerical results are shown comparing the cost solutions and computation time obtained using the genetic algorithm method and other conventional methods.

  4. Three-Dimensional Path Planning Method for Autonomous Underwater Vehicle Based on Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2015-01-01

    Full Text Available Path planning is a classic optimization problem which can be solved by many optimization algorithms. The complexity of three-dimensional (3D) path planning for autonomous underwater vehicles (AUVs) requires the optimization algorithm to have a quick convergence speed. This work provides a new 3D path planning method for AUVs using a modified firefly algorithm. In order to solve the problem of slow convergence of the basic firefly algorithm, an improved method is proposed in which the parameters of the algorithm and the random movement steps can be adjusted during the operating process. At the same time, an autonomous flight strategy is introduced to avoid invalid flights. An excluding operator is used to improve obstacle avoidance, and a contracting operator is used to enhance the convergence speed and the smoothness of the path. The performance of the modified firefly algorithm and the effectiveness of the 3D path planning method were proved through a varied set of experiments.
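
    For orientation, the basic firefly movement rule that the modified algorithm builds on is sketched below for candidate paths encoded as flat waypoint vectors. The cost function, parameter values and encoding are assumptions for illustration, and none of the paper's adaptive parameters, flight strategy or excluding/contracting operators are included.

```python
import numpy as np

def firefly_step(X, cost, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One iteration of the basic firefly movement rule.
    X    : (n, dim) array, each row a candidate path as a flattened waypoint vector
    cost : callable returning the path cost (lower cost = brighter firefly)."""
    rng = rng or np.random.default_rng()
    n = len(X)
    brightness = -np.array([cost(x) for x in X])
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)          # attractiveness decays with distance
                X[i] = X[i] + beta * (X[j] - X[i]) + alpha * (rng.random(X.shape[1]) - 0.5)
    return X

# Toy usage: minimize total path length for 10 candidate 3-waypoint paths in 3D.
rng = np.random.default_rng(6)
paths = rng.uniform(0, 10, (10, 9))
length = lambda p: float(np.sum(np.linalg.norm(np.diff(p.reshape(3, 3), axis=0), axis=1)))
for _ in range(50):
    paths = firefly_step(paths, length, rng=rng)
print("best path length:", min(length(p) for p in paths))
```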

  5. A User Differential Range Error Calculating Algorithm Based on Analytic Method

    Institute of Scientific and Technical Information of China (English)

    SHAO Bo; LIU Jiansheng; ZHAO Ruibin; HUANG Zhigang; LI Rui

    2011-01-01

    To enhance integrity, an analytic method (AM) with less execution time is proposed to calculate the user differential range error (UDRE) used by the user to detect potential risk. An ephemeris and clock correction calculation method is introduced first. It shows that the key step in computing the UDRE is to find the worst user location (WUL) in the service volume. Then, a UDRE algorithm using the AM is described to solve this problem. By using the covariance matrix of the error vector, the search for the WUL is converted into an analytic geometry problem, and the location of the WUL can be obtained directly by mathematical derivation. Experiments are conducted to compare the performance of the proposed AM algorithm with the exhaustive grid search (EGS) method used in the master station. The results show that the correctness of the AM algorithm can be verified by the EGS method and that the AM algorithm can reduce the calculation time by more than 90%. The computational complexity of the proposed algorithm is better than that of EGS, so this algorithm is more suitable for computing the UDRE at the master station.

  6. A Multiuser Manufacturing Resource Service Composition Method Based on the Bees Algorithm.

    Science.gov (United States)

    Xie, Yongquan; Zhou, Zude; Pham, Duc Truong; Xu, Wenjun; Ji, Chunqian

    2015-01-01

    In order to realize an optimal resource service allocation in current open and service-oriented manufacturing model, multiuser resource service composition (RSC) is modeled as a combinational and constrained multiobjective problem. The model takes into account both subjective and objective quality of service (QoS) properties as representatives to evaluate a solution. The QoS properties aggregation and evaluation techniques are based on existing researches. The basic Bees Algorithm is tailored for finding a near optimal solution to the model, since the basic version is only proposed to find a desired solution in continuous domain and thus not suitable for solving the problem modeled in our study. Particular rules are designed for handling the constraints and finding Pareto optimality. In addition, the established model introduces a trusted service set to each user so that the algorithm could start by searching in the neighbor of more reliable service chains (known as seeds) than those randomly generated. The advantages of these techniques are validated by experiments in terms of success rate, searching speed, ability of avoiding ingenuity, and so forth. The results demonstrate the effectiveness of the proposed method in handling multiuser RSC problems.

  7. A Multiuser Manufacturing Resource Service Composition Method Based on the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Yongquan Xie

    2015-01-01

    Full Text Available In order to realize an optimal resource service allocation in current open and service-oriented manufacturing model, multiuser resource service composition (RSC) is modeled as a combinational and constrained multiobjective problem. The model takes into account both subjective and objective quality of service (QoS) properties as representatives to evaluate a solution. The QoS properties aggregation and evaluation techniques are based on existing researches. The basic Bees Algorithm is tailored for finding a near optimal solution to the model, since the basic version is only proposed to find a desired solution in continuous domain and thus not suitable for solving the problem modeled in our study. Particular rules are designed for handling the constraints and finding Pareto optimality. In addition, the established model introduces a trusted service set to each user so that the algorithm could start by searching in the neighbor of more reliable service chains (known as seeds) than those randomly generated. The advantages of these techniques are validated by experiments in terms of success rate, searching speed, ability of avoiding ingenuity, and so forth. The results demonstrate the effectiveness of the proposed method in handling multiuser RSC problems.

  8. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is a population based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on the log-linear model which can enhance the decision making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weight is presented, which can be dynamically adjusted according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  9. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on the log-linear model which can enhance the decision making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weight is presented, which can be dynamically adjusted according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  10. A Nonlinear Blind Source Separation Method Based On Radial Basis Function and Quantum Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Pidong

    2016-01-01

    Full Text Available Blind source separation is a hot topic in signal processing. Most existing works focus on dealing with linearly combined signals, while in practice we often encounter nonlinearly mixed signals. To address the problem of nonlinear source separation, in this paper we propose a novel algorithm using a radial basis function neural network optimized by a multi-universe parallel quantum genetic algorithm. Experiments show the efficiency of the proposed method.

  11. A new compensation current real-time computing method for power active filter based on double linear construction algorithm

    Institute of Scientific and Technical Information of China (English)

    LI; Zicheng; SUN; Yukun

    2006-01-01

    Considering the detection principle that "when the load current is periodic, the integral over one cycle of the absolute value of the load current minus the fundamental active current is minimal", harmonic current real-time detection methods for power active filters are proposed based on direct computation, a simple iterative algorithm and an optimal iterative algorithm. With the direct computation method, the amplitude of the fundamental active current can be accurately calculated when the load current is in a stable state. The simple iterative algorithm and the optimal iterative algorithm provide an idea for judging the state of the load current. On the basis of the direct computation method, the simple iterative algorithm, the optimal iterative algorithm and precise definitions of the basic concepts, such as the true amplitude of the fundamental active current when the load current is in a varying state, the double linear construction idea is proposed, in which the amplitude of the fundamental active current at the sampling moment is accurately calculated using the first linear construction, and the condition for processing the next sample is created using the second linear construction. On this basis, a harmonic current real-time detection method for power active filters based on the double linear construction algorithm is proposed. The method has the advantages of a small computational load, good real-time performance and accurate calculation of the fundamental active current amplitude.

  12. A Semi-Supervised WLAN Indoor Localization Method Based on ℓ1-Graph Algorithm

    Institute of Scientific and Technical Information of China (English)

    Liye Zhang; Lin Ma; Yubin Xu

    2015-01-01

    For indoor location estimation based on received signal strength (RSS) in wireless local area networks (WLAN), a large number of RSS samples should be collected in the offline phase in order to reduce the influence of noise on the positioning accuracy. Collecting training data with positioning information is therefore time consuming, which becomes the bottleneck of WLAN indoor localization. In this paper, the traditional semi-supervised learning methods based on k-NN and ε-NN graphs for reducing the collection workload of the offline phase are analyzed, and the results show that the k-NN and ε-NN graphs are sensitive to data noise, which limits the performance of semi-supervised learning WLAN indoor localization systems. To address this problem, an ℓ1-graph-algorithm-based semi-supervised learning (LG-SSL) indoor localization method is proposed, in which the graph is built by the ℓ1-norm algorithm. In our system, the unlabeled data are first labeled using LG-SSL and the labeled data to build the Radio Map in the offline training phase, and LG-SSL is then used to estimate the user's location in the online phase. Extensive experimental results show that, benefiting from its robustness to noise and the sparsity of the ℓ1-graph, LG-SSL exhibits superior performance by effectively reducing the collection workload in the offline phase and improving the localization accuracy in the online phase.

  13. Method of Inequalities-based Multiobjective Genetic Algorithm for Optimizing a Cart-double-pendulum System

    Institute of Scientific and Technical Information of China (English)

    Tung-Kuan Liu; Chiu-Hung Chen; Zu-Shu Li; Jyh-Horng Chou

    2009-01-01

    This article presents a multiobjective approach to the design of the controller for the swing-up and handstand control of a general cart-double-pendulum system (CDPS). The designed controller, which is based on the human-simulated intelligent control (HSIC) method, builds up different control modes to monitor and control the CDPS during four kinetic phases consisting of an initial oscillation phase, a swing-up phase, a posture adjustment phase, and a balance control phase. For the approach, the original method of inequalities-based (MoI) multiobjective genetic algorithm (MMGA) is extended and applied to the case study, which uses a set of performance indices that includes the cart displacement over the rail boundary, the number of swings, the settling time, the overshoot of the total energy, and the control effort. The simulation results show good responses of the CDPS with the controllers obtained by the proposed approach.

  14. A New Tool Wear Monitoring Method Based on Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Qianjian Guo

    2013-06-01

    Full Text Available Tool wear is a major contributor to the dimensional errors of a workpiece in precision machining, and its prediction plays an important role in industry for higher productivity and product quality. Tool wear monitoring is an effective way to predict the tool wear loss in the milling process. In this paper, a new bionic prediction model is presented based on the generation mechanism of tool wear loss. Different milling conditions are taken as the input variables and the tool wear loss as the output variable; a neural network is used to establish the mapping relation, and an ant colony algorithm is used to train the weights of the BP neural network during tool wear modeling. Finally, a real-time tool wear loss estimator is developed based on the ant colony algorithm, and experiments measuring tool wear with the estimator have been conducted on a milling machine. The experimental and estimated results are found to be in satisfactory agreement, with an average error lower than 6%.

  15. Unsupervised classification algorithm based on EM method for polarimetric SAR images

    Science.gov (United States)

    Fernández-Michelli, J. I.; Hurtado, M.; Areta, J. A.; Muravchik, C. H.

    2016-07-01

    In this work we develop an iterative classification algorithm using complex Gaussian mixture models for polarimetric complex SAR data. It is an unsupervised algorithm that does not require training data or an initial set of classes. Additionally, it determines the model order from the data, which allows representing the data structure with minimum complexity. The algorithm consists of four steps: initialization, model selection, refinement and smoothing. After a simple initialization stage, the EM algorithm is iteratively applied in the model selection step to compute the model order and an initial classification for the refinement step. The refinement step uses Classification EM (CEM) to reach the final classification, and the smoothing stage improves the results by means of non-linear filtering. The algorithm is applied to both simulated and real Single Look Complex data from the EMISAR mission and compared with the Wishart classification method. We use the confusion matrix and kappa statistic to make the comparison for simulated data whose ground truth is known, and apply the Davies-Bouldin index to compare both classifications for real data. The results obtained for both types of data validate our algorithm and show that its performance is comparable to Wishart's in terms of classification quality.
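
    A toy version of the mixture-model classification core, written with scikit-learn's real-valued Gaussian mixture as a stand-in for the complex Gaussian mixture the record describes, and with BIC as an assumed stand-in for the paper's data-driven model-order selection.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_pixels(features, max_order=8):
    """features: (n_pixels, n_features) real-valued descriptors derived from the
    polarimetric data (this real-valued GMM is a simplification of the complex
    Gaussian mixture used above). Model order is chosen here by BIC."""
    best, best_bic = None, np.inf
    for k in range(2, max_order + 1):
        gm = GaussianMixture(n_components=k, covariance_type='full',
                             random_state=0).fit(features)
        bic = gm.bic(features)
        if bic < best_bic:
            best, best_bic = gm, bic
    return best.predict(features), best

# Toy usage with synthetic two-class data standing in for SAR features.
rng = np.random.default_rng(7)
feats = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(4, 1, (300, 4))])
labels, model = classify_pixels(feats)
print("selected model order:", model.n_components)
```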

  16. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    Science.gov (United States)

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-03-09

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because they are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern and then traces the next contour pixel using the previous pixel's type. It can therefore classify contour pixels as straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the contour-pixel data using representative points and inner-outer corner points, and it can accurately restore the contour image from these data. To compare the performance of the proposed algorithm with that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance than the others. Furthermore, it can provide the compressed contour-pixel data and restore them accurately, including the inner-outer corners, which cannot be restored using conventional algorithms.
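
    To give a feel for the "classify a contour pixel by its local pattern" step, here is a toy sketch that labels boundary pixels of a binary image from their 4-neighbourhood. The categories and rules are simplified stand-ins, not the paper's pixel-following scheme, and the test image is arbitrary.

```python
# Toy illustration of classifying contour pixels by their local pattern;
# simplified rules, not the paper's exact contour-tracing algorithm.
import numpy as np

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:7] = 1                      # a small filled rectangle as the object

def classify_contour_pixels(binary):
    """Return {(row, col): label} for every contour pixel of a binary image."""
    padded = np.pad(binary, 1)
    labels = {}
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            if not binary[r, c]:
                continue
            # 4-neighbourhood in padded coordinates (up, down, left, right)
            up, down = padded[r, c + 1], padded[r + 2, c + 1]
            left, right = padded[r + 1, c], padded[r + 1, c + 2]
            n_fg = int(up) + int(down) + int(left) + int(right)
            if n_fg == 4:
                continue                       # interior pixel, not on the contour
            if n_fg == 3:
                labels[(r, c)] = "straight"    # one background side only
            elif n_fg == 2 and (up == down or left == right):
                labels[(r, c)] = "straight"    # opposite neighbours: thin line
            elif n_fg == 2:
                labels[(r, c)] = "outer corner"
            else:
                labels[(r, c)] = "isolated/end"
    return labels

labels = classify_contour_pixels(img)
print(sum(v == "outer corner" for v in labels.values()), "corner pixels found")
```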

  17. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors

    Directory of Open Access Journals (Sweden)

    Jonghoon Seo

    2016-03-01

    Full Text Available Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because they are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern and then traces the next contour pixel using the previous pixel's type. It can therefore classify contour pixels as straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the contour-pixel data using representative points and inner-outer corner points, and it can accurately restore the contour image from these data. To compare the performance of the proposed algorithm with that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance than the others. Furthermore, it can provide the compressed contour-pixel data and restore them accurately, including the inner-outer corners, which cannot be restored using conventional algorithms.

  18. A Thrust Allocation Method for Efficient Dynamic Positioning of a Semisubmersible Drilling Rig Based on the Hybrid Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Luman Zhao

    2015-01-01

    Full Text Available A thrust allocation method based on a hybrid optimization algorithm was proposed to efficiently and dynamically position a semisubmersible drilling rig. That is, the thrust allocation was optimized to produce the required generalized forces and moment while minimizing the total power consumption and taking forbidden zones into account. An optimization problem was mathematically formulated to provide the optimal thrust allocation by introducing the corresponding design variables, objective function, and constraints. A hybrid optimization algorithm consisting of a genetic algorithm and a sequential quadratic programming (SQP) algorithm was selected and used to solve this problem. The proposed method was evaluated by applying it to a thrust allocation problem for a semisubmersible drilling rig. The results indicate that the proposed method can be used as part of a cost-effective strategy for thrust allocation of the rig.
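
    A hedged sketch of the two-stage idea on a toy three-thruster model follows: a coarse evolutionary/random stage picks a starting point, and an SQP solver refines it subject to the required force and moment. The thruster layout, power model and required loads are made up, and forbidden-zone constraints are omitted for brevity; this is not the rig model of the paper.

```python
# Hybrid "evolutionary + SQP" thrust allocation sketch using scipy's SLSQP.
import numpy as np
from scipy.optimize import minimize

pos = np.array([[-30.0, -10.0], [-30.0, 10.0], [35.0, 0.0]])   # thruster (x, y) [m]
Fx_req, Fy_req, Mz_req = 200.0, 150.0, 1.0e3                    # required loads

def power(u):                       # u = [Tx1, Ty1, Tx2, Ty2, Tx3, Ty3]
    t = u.reshape(-1, 2)
    return np.sum(np.hypot(t[:, 0], t[:, 1]) ** 1.5)           # ~ |T|^(3/2)

def residuals(u):                   # force/moment balance, must be zero
    t = u.reshape(-1, 2)
    fx, fy = t[:, 0].sum(), t[:, 1].sum()
    mz = np.sum(pos[:, 0] * t[:, 1] - pos[:, 1] * t[:, 0])
    return np.array([fx - Fx_req, fy - Fy_req, mz - Mz_req])

# Stage 1: crude evolutionary/random search with a penalty on the residuals.
rng = np.random.default_rng(1)
pop = rng.uniform(-300, 300, size=(500, 6))
fitness = np.array([power(u) + 1e3 * np.sum(residuals(u) ** 2) for u in pop])
u0 = pop[np.argmin(fitness)]

# Stage 2: SQP refinement with the balance enforced as equality constraints.
res = minimize(power, u0, method="SLSQP",
               constraints=[{"type": "eq", "fun": residuals}],
               options={"maxiter": 200})
print("total power index:", round(power(res.x), 1))
print("residual force/moment:", np.round(residuals(res.x), 3))
```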

  19. Improved optimization algorithm for proximal point-based dictionary updating methods

    Science.gov (United States)

    Zhao, Changchen; Hwang, Wen-Liang; Lin, Chun-Liang; Chen, Weihai

    2016-09-01

    Proximal K-singular value decomposition (PK-SVD) is a dictionary updating algorithm that incorporates the proximal point method into K-SVD. Combining the proximal method with K-SVD has achieved promising results in areas such as sparse approximation, image denoising, and image compression. However, the optimization procedure of PK-SVD is complicated, which limits the algorithm in both theoretical analysis and practical use. This article proposes a simple but effective optimization approach to the formulation of PK-SVD. We cast this formulation as a fitting problem and relax the constraint on the direction of the k'th row in the sparse coefficient matrix. This relaxation strengthens the regularization effect of the proximal point. The proposed algorithm needs fewer steps to implement and further boosts the performance of PK-SVD while maintaining the same computational complexity. Experimental results demonstrate that the proposed algorithm outperforms conventional algorithms in reconstruction error, recovery rate, and convergence speed for sparse approximation, and achieves better results in image denoising.

  20. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    Energy Technology Data Exchange (ETDEWEB)

    Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear

    2015-07-01

    The sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and map uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion of the method. An uncertainty of 75 pcm on the reactor k_eff was estimated by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
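
    The general mechanics of sampling-based propagation and replicate-based convergence checks can be sketched as follows. A cheap analytic toy model replaces the MCNPX transport calculation, and all numbers are illustrative rather than the study's values.

```python
# Minimal sketch of sampling-based uncertainty propagation: draw the uncertain
# input, run the model for each sample, and watch how the spread of the
# propagated result behaves across replicates as the sample size grows.
import numpy as np

rng = np.random.default_rng(42)

def toy_model(radius):
    """Stand-in for the expensive transport code (hypothetical response)."""
    return 1.0 + 0.05 * (radius - 1.0) + 0.01 * (radius - 1.0) ** 2

def propagated_std(n_samples, n_replicates=10):
    """Mean propagated std over replicates, plus the replicate-to-replicate
    scatter used as a simple convergence indicator."""
    stds = []
    for _ in range(n_replicates):
        radius = rng.normal(loc=1.0, scale=0.1, size=n_samples)  # 1-sigma input
        stds.append(np.std(toy_model(radius), ddof=1))
    return np.mean(stds), np.std(stds, ddof=1)

for n in (25, 50, 100, 200):
    mean_std, scatter = propagated_std(n)
    print(f"n={n:4d}  propagated std={mean_std:.5f}  replicate scatter={scatter:.5f}")
```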

  1. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    Science.gov (United States)

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  2. A Time-Domain Structural Damage Detection Method Based on Improved Multiparticle Swarm Coevolution Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shao-Fei Jiang

    2014-01-01

    Full Text Available Optimization techniques have been applied to structural health monitoring and damage detection of civil infrastructures for two decades. The standard particle swarm optimization (PSO) easily falls into local optima, and the same deficiency exists in multiparticle swarm coevolution optimization (MPSCO). This paper first presents an improved MPSCO algorithm (IMPSCO) and then integrates it with Newmark's algorithm to localize and quantify structural damage by using the proposed damage threshold. To validate the proposed method, a numerical simulation and an experimental study of a seven-story steel frame were employed, and a comparison was made with the genetic algorithm (GA). The results are threefold: (1) the proposed method is not only capable of localizing and quantifying damage, but also has good noise tolerance; (2) the damage location can be accurately detected using the damage threshold proposed in this paper; and (3) compared with the GA, the IMPSCO algorithm is more efficient and accurate for damage detection problems in general. This implies that the proposed method is applicable and effective in the community of damage detection and structural health monitoring.

  3. Direct Localization Algorithm of White-light Interferogram Center Based on the Weighted Integral Method

    Science.gov (United States)

    Sato, Seichi; Kurihara, Toru; Ando, Shigeru

    This paper proposes an exact direct method to determine all parameters of the white-light interferogram, including the envelope peak. A novel mathematical technique, the weighted integral method (WIM), is applied: it starts from the characteristic differential equation of the target signal (here, the interferogram) to obtain an algebraic relation between the finite-interval weighted integrals (observations) of the signal and the waveform parameters (unknowns). We implemented this method using the FFT and examined it through various numerical simulations. The results show that the method can localize the envelope peak very accurately even when it is not included in the observed interval. The performance comparisons reveal the superiority of the proposed algorithm over conventional algorithms in terms of accuracy, efficiency, and estimation range.

  4. NUMERICAL METHOD BASED ON HAMILTON SYSTEM AND SYMPLECTIC ALGORITHM TO DIFFERENTIAL GAMES

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The solution of differential games often involves the difficult two-point boundary-value problem (TPBV), so the linear quadratic differential game is recast as a Hamiltonian system. For Hamiltonian systems, symplectic algorithms have the merit of reproducing the dynamic structure of the system and preserving the measure of the phase plane. From the viewpoint of Hamiltonian systems, the symplectic characteristics of the linear quadratic differential game were investigated, and a symplectic Runge-Kutta algorithm was presented for the solution of the infinite-horizon linear quadratic differential game. A numerical example was given, and the result illustrates the feasibility of this method; at the same time, it exhibits the fine energy-conservation characteristics of the symplectic algorithm.

  5. A genetic algorithm approach for assessing soil liquefaction potential based on reliability method

    Indian Academy of Sciences (India)

    M H Bagheripour; I Shooshpasha; M Afzalirad

    2012-02-01

    Deterministic approaches are unable to account for variations in the soil's strength properties and earthquake loads, as well as sources of error in evaluations of liquefaction potential in sandy soils, which makes them questionable compared with reliability-based concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction and the factor of safety (FS). Therefore, the use of probabilistic approaches, and especially reliability analysis, is considered, since a complementary solution is needed to reach better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, associated with a genetic algorithm (GA) and its corresponding optimization techniques, has been used to calculate the reliability index and the probability of liquefaction. The use of the GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed here, by which the liquefaction potential can be directly calculated based on the estimated probability of liquefaction, the cyclic stress ratio (CSR) and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% relative to the observational data. The validity of the proposed concept is examined by comparing the results obtained by the new relation with those predicted by other investigators. A further advantage of the proposed relation is that it relates the probability of liquefaction and FS, and hence it makes decision making possible based on liquefaction risk together with the use of deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for the evaluation of liquefaction. As an application, the city of Babolsar, which is located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests in which the results of SPT are analysed.

  6. An Endmember Extraction Method Based on Artificial Bee Colony Algorithms for Hyperspectral Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Xu Sun

    2015-12-01

    Full Text Available Mixed pixels are common in hyperspectral remote sensing images, and endmember extraction is a key step in spectral unmixing. The linear spectral mixture model (LSMM) constitutes a geometric approach that is commonly used for this purpose. This paper introduces the use of artificial bee colony (ABC) algorithms for spectral unmixing. First, the objective function of the external minimum volume model is improved to enhance the robustness of the results, and then the ABC-based endmember extraction process is presented. Depending on the characteristics of the objective function, two algorithms, Artificial Bee Colony Endmember Extraction-RMSE (ABCEE-R) and ABCEE-Volume (ABCEE-V), are proposed. Finally, two sets of experiments using synthetic data and one set of experiments using a real hyperspectral image are reported. Comparative experiments reveal that ABCEE-R and ABCEE-V can achieve better endmember extraction results than other algorithms when processing data with a low signal-to-noise ratio (SNR). ABCEE-R does not require an accurate estimate of the number of endmembers, and it can always obtain the result with the best root mean square error (RMSE); when the number of extracted endmembers does not match the true number of endmembers, the RMSE of the ABCEE-V results is usually not as good as that of ABCEE-R, but the endmembers extracted by the former algorithm are closer to the true endmembers.

  7. COMPARISON AND ANALYSIS OF NONLINEAR LEAST SQUARES METHODS FOR VISION BASED NAVIGATION (VBN) ALGORITHMS

    OpenAIRE

    Sheta, B.; M. Elhabiby; Sheimy, N.

    2012-01-01

    A robust scale- and rotation-invariant image matching algorithm is vital for the Vision Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and the real-time captured images are used to georeference (i.e. determine six transformation parameters - three rotation and three translation) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter a...

  8. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle, but this method is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive way of measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
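
    A hedged sketch of the Monte Carlo step follows: random points are drawn inside a bounding box and a point is counted only if its projection falls inside every silhouette. A unit sphere and ideal circular silhouettes replace the segmented camera images, so what is estimated here is the visual hull of the toy object rather than a real food product.

```python
# Monte Carlo volume estimation from silhouettes (toy example).
import numpy as np

rng = np.random.default_rng(7)
R = 1.0                                           # sphere radius
views = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1]], dtype=float)
views /= np.linalg.norm(views, axis=1, keepdims=True)   # five view directions

def inside_all_silhouettes(points):
    """Keep a point only if, for every view, its projection onto the plane
    orthogonal to the view direction lies inside the silhouette (a disc)."""
    keep = np.ones(len(points), dtype=bool)
    for d in views:
        along = points @ d                        # component along the view axis
        radial2 = np.sum(points ** 2, axis=1) - along ** 2
        keep &= radial2 <= R ** 2
    return keep

n = 200_000
box = rng.uniform(-R, R, size=(n, 3))             # bounding box of the object
frac = np.mean(inside_all_silhouettes(box))
estimate = frac * (2 * R) ** 3                    # fraction of the box volume
print(f"estimated volume: {estimate:.3f}  (sphere volume = {4/3*np.pi*R**3:.3f})")
```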

  9. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei

    2016-02-06

    Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer localization time, and more energy consumption. Currently, most localization algorithms in this field do not take sufficient account of the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and its location can then be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; nevertheless, since the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes is succinct, and this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field.

  10. ANFIS Based Time Series Prediction Method of Bank Cash Flow Optimized by Adaptive Population Activity PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-06-01

    Full Text Available In order to improve the accuracy and timeliness of information in the cash business, and to solve the problem of the low accuracy and stability of the data linkage between cash inventory forecasting and cash management information in commercial banks, a hybrid learning algorithm is proposed based on an adaptive population activity particle swarm optimization (APAPSO) algorithm combined with the least squares method (LMS) to optimize the parameters of an adaptive network-based fuzzy inference system (ANFIS) model. Through the introduction of a metric function of population diversity to ensure the diversity of the population, together with adaptive changes in the inertia weight and learning factors, the optimization ability of the particle swarm optimization (PSO) algorithm is improved, which avoids the premature convergence problem of the PSO algorithm. Simulation comparison experiments are carried out against the BP-LMS algorithm and standard PSO-LMS using real commercial banks' cash flow data to verify the effectiveness of the proposed time series prediction of bank cash flow based on the improved PSO-ANFIS optimization method. Simulation results show that the optimization speed is faster and the prediction accuracy is higher.

  11. An enhanced text categorization method based on improved text frequency approach and mutual information algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Text categorization plays an important role in data mining. Feature selection is the most important process in text categorization. Focusing on feature selection, we present an improved text frequency method for filtering low-frequency features in data preprocessing, propose an improved mutual information algorithm for feature selection, and develop an improved tf.idf method for evaluating feature weights. The proposed method is applied to the benchmark test set Reuters-21578 Top10 to examine its effectiveness. Numerical results show that the precision, recall and F1 value of the proposed method are all superior to those of existing conventional methods.
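
    The overall pipeline can be sketched with standard building blocks: a document-frequency cut-off for low-frequency terms, mutual information to rank the remaining features, and tf-idf weights for the selected ones. The paper's improved TF and MI formulas are not reproduced here, and the toy corpus and labels are made up.

```python
# Feature selection for text categorization: DF filter + mutual information
# ranking + tf-idf weighting (illustrative sketch with scikit-learn).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_selection import mutual_info_classif

docs = ["wheat corn grain export", "corn harvest grain price",
        "stock market trade profit", "market profit share trade"]
labels = [0, 0, 1, 1]                              # two toy categories

counts = CountVectorizer(min_df=1).fit(docs)       # min_df filters rare terms
X = counts.transform(docs)

mi = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
top = np.argsort(mi)[::-1][:4]                     # keep the 4 best features
terms = np.array(counts.get_feature_names_out())[top]

X_top = X.toarray()[:, top]
X_tfidf = TfidfTransformer().fit_transform(X_top)  # weights fed to a classifier
print("selected terms:", list(terms))
```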

  12. A New Method to Improve Round Robin Scheduling Algorithm with Quantum Time Based on Harmonic-Arithmetic Mean (HARM)

    Directory of Open Access Journals (Sweden)

    Ashkan Emami Ale Agha

    2013-06-01

    Full Text Available One of the most important concepts in multiprogramming operating systems is scheduling, which helps in choosing the processes for execution. Round robin is one of the most important scheduling algorithms. It is popular because of its fairness and starvation-free behaviour towards processes, which is achieved by using a proper quantum time. The main challenge in this algorithm is the selection of the quantum time, which affects the average Waiting Time and average Turnaround Time of the execution queue. A static quantum causes few context switches when it is large and many context switches when it is small; a growing number of context switches leads to high average waiting and turnaround times, which is an overhead that degrades system performance. With respect to these points, the algorithm should calculate a proper value for the quantum time. The algorithms proposed to calculate the quantum time fall into two main classes, static and dynamic. In static methods the quantum time is fixed during scheduling, whereas dynamic algorithms change the value of the quantum time in each cycle; for example, in one method the quantum time in each cycle equals the median of the burst times of the processes in the ready queue, and in another method it equals the arithmetic mean of the burst times of the ready processes. In this paper we propose a new method for obtaining the quantum time in each cycle based on the harmonic-arithmetic mean (HARM). The harmonic mean is calculated by dividing the number of observations by the sum of the reciprocals of the numbers in the series. With examples we show that in some cases this can provide better scheduling criteria and improve the average Turnaround Time and average Waiting Time.
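
    A minimal simulation of the idea is sketched below: in each round-robin cycle the quantum is recomputed from the remaining burst times, here using the plain harmonic mean as a simple stand-in for the paper's harmonic-arithmetic (HARM) combination, and the usual average waiting and turnaround times are reported. The burst times are arbitrary example values and all processes are assumed to arrive at time zero.

```python
# Round-robin scheduling with a per-cycle dynamic quantum (toy simulation).
from statistics import harmonic_mean

def round_robin_dynamic(bursts):
    remaining = bursts[:]                 # remaining burst time per process
    completion = [0] * len(bursts)
    t = 0
    while any(r > 0 for r in remaining):
        active = [r for r in remaining if r > 0]
        quantum = max(1, round(harmonic_mean(active)))   # per-cycle quantum
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            run = min(quantum, r)
            t += run
            remaining[i] -= run
            if remaining[i] == 0:
                completion[i] = t
    turnaround = completion                              # all arrive at t = 0
    waiting = [ta - b for ta, b in zip(turnaround, bursts)]
    return sum(waiting) / len(bursts), sum(turnaround) / len(bursts)

avg_wait, avg_tat = round_robin_dynamic([24, 3, 3, 12])
print(f"average waiting time: {avg_wait:.2f}, average turnaround: {avg_tat:.2f}")
```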

  13. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

    Directory of Open Access Journals (Sweden)

    Yukai Yao

    2015-01-01

    Full Text Available We propose an optimized Support Vector Machine classifier, named PMSVM, in which System Normalization, PCA, and Multilevel Grid Search methods are comprehensively considered for data preprocessing and parameter optimization, respectively. The main goal of this study is to improve the classification efficiency and accuracy of SVM. Sensitivity, Specificity, Precision, the ROC curve, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
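
    A hedged sketch of a PMSVM-style pipeline using scikit-learn is shown below: normalisation, PCA, and a coarse-to-fine ("multilevel") grid search over the SVM hyper-parameters. The grid ranges and the demo data set are illustrative, not those of the original study.

```python
# Normalisation + PCA + two-level (coarse, then fine) grid search for an SVM.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)
pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA(n_components=10)),
                 ("svm", SVC(kernel="rbf"))])

# Level 1: coarse grid over wide ranges of C and gamma.
coarse = {"svm__C": 10.0 ** np.arange(-2, 4),
          "svm__gamma": 10.0 ** np.arange(-4, 1)}
search = GridSearchCV(pipe, coarse, cv=5).fit(X, y)
C0, g0 = search.best_params_["svm__C"], search.best_params_["svm__gamma"]

# Level 2: finer grid centred on the coarse optimum.
fine = {"svm__C": C0 * 2.0 ** np.arange(-2, 3),
        "svm__gamma": g0 * 2.0 ** np.arange(-2, 3)}
search = GridSearchCV(pipe, fine, cv=5).fit(X, y)
print("best parameters:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_)
```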

  14. A RBF classification method of remote sensing image based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Remote sensing image classification has stimulated considerable interest as an effective method for retrieving information from the rapidly growing volume of complex, distributed, large-scale and multi-temporal satellite remote sensing data, driven by the increase in image quantities and resolutions. In this paper, genetic algorithms were employed to determine the weights of radial basis function networks in order to improve the precision of remote sensing image classification. The remote sensing image classification was also introduced into GIS spatial analysis and spatial online analytical processing (OLAP), and the resulting effectiveness was demonstrated in an analysis of land-use change in Daqing city.

  15. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

    CERN Document Server

    Trobec, Roman

    2015-01-01

    This book concentrates on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers and other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes

  16. Creeping Ray Tracing Algorithm for Arbitrary NURBS Surfaces Based on Adaptive Variable Step Euler Method

    Directory of Open Access Journals (Sweden)

    Song Fu

    2015-01-01

    Full Text Available Although the uniform theory of diffraction (UTD) can in principle be applied to arbitrarily shaped convex objects modeled by nonuniform rational B-splines (NURBS), one of the great challenges in the calculation of UTD surface diffracted fields is the difficulty of determining the geodesic paths along which the creeping waves propagate on arbitrarily shaped NURBS surfaces. In differential geometry, geodesic paths satisfy the geodesic differential equation (GDE). Hence, in this paper, a general and efficient adaptive variable step Euler method is introduced for solving the GDE on arbitrarily shaped NURBS surfaces. In contrast with the conventional Euler method, the proposed method employs a shape factor (SF) ξ to efficiently enhance the accuracy of the tracing and extends the application of the UTD to practical engineering. The validity and usefulness of the algorithm are verified by the numerical results.
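
    For readers unfamiliar with adaptive variable-step Euler integration, here is a generic sketch that controls the step size by step doubling. The ODE is a simple scalar test equation, not the geodesic differential equation on a NURBS surface, and the paper's shape-factor refinement is not reproduced.

```python
# Adaptive variable-step Euler integrator with a step-doubling error estimate.
def adaptive_euler(f, t0, y0, t_end, h0=0.1, tol=1e-4):
    t, y, h = t0, y0, h0
    ts, ys = [t], [y]
    while t < t_end:
        h = min(h, t_end - t)
        full = y + h * f(t, y)                         # one step of size h
        half = y + 0.5 * h * f(t, y)                   # two steps of size h/2
        half = half + 0.5 * h * f(t + 0.5 * h, half)
        err = abs(full - half)                         # local error estimate
        if err <= tol or h < 1e-8:
            t, y = t + h, half                         # accept the better value
            ts.append(t)
            ys.append(y)
            if err < tol / 4:
                h *= 2.0                               # error small: grow the step
        else:
            h *= 0.5                                   # error large: retry smaller
    return ts, ys

# y' = -2y, y(0) = 1  ->  y(1) = exp(-2) ~ 0.1353
ts, ys = adaptive_euler(lambda t, y: -2.0 * y, 0.0, 1.0, 1.0)
print(f"steps taken: {len(ts) - 1}, y(1) ~ {ys[-1]:.4f}")
```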

  17. Improved Power Flow Algorithm for VSC-HVDC System Based on High-Order Newton-Type Method

    Directory of Open Access Journals (Sweden)

    Yanfang Wei

    2013-01-01

    Full Text Available The voltage source converter (VSC) based high-voltage direct-current (HVDC) system is a new transmission technique with highly promising applications in power systems and power electronics. Considering the importance of power flow analysis of the VSC-HVDC system for its utilization and exploitation, improved power flow algorithms for the VSC-HVDC system based on third-order and sixth-order Newton-type methods are presented. The steady-state power model of the VSC-HVDC system is introduced first. Then the multivariable matrix formulations of the third-order and sixth-order Newton-type power flow methods for the VSC-HVDC system are derived; these formulations exhibit third-order and sixth-order convergence based on the Newton method. Further, based on automatic differentiation technology and the third-order Newton method, a new improved algorithm is given, which helps to improve program development, computational efficiency, maintainability, and the flexibility of the power flow. Simulations of AC/DC power systems with two-terminal, multi-terminal, and multi-infeed DC configurations with VSC-HVDC are carried out on modified IEEE bus systems, showing the effectiveness and practicality of the presented algorithms for the VSC-HVDC system.
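
    The benefit of a higher-order Newton-type iteration can be illustrated on a scalar equation: the sketch below compares plain Newton with Halley's method, a classical third-order Newton-type scheme. This only illustrates the "higher order of convergence per iteration" idea and is not the multivariable VSC-HVDC power flow formulation of the paper.

```python
# Newton vs. a third-order Newton-type iteration (Halley) on a scalar equation.
f   = lambda x: x**3 - 2.0 * x - 5.0          # classic test equation
df  = lambda x: 3.0 * x**2 - 2.0
d2f = lambda x: 6.0 * x

def newton(x, steps):
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

def halley(x, steps):                          # third-order Newton-type method
    for _ in range(steps):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
    return x

root = 2.0945514815423265                      # reference root of f
for k in (1, 2, 3):
    print(f"after {k} iteration(s): "
          f"Newton error={abs(newton(3.0, k) - root):.2e}, "
          f"Halley error={abs(halley(3.0, k) - root):.2e}")
```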

  18. A new PQ disturbances identification method based on combining neural network with least square weighted fusion algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new method for power quality (PQ) disturbance identification is put forward based on combining a neural network with a least squares (LS) weighted fusion algorithm. The characteristic components of PQ disturbances are first extracted through an improved phase-locked loop (PLL) system, and then five child BP ANNs with different structures are trained and used to identify the PQ disturbances. The combining neural network fuses the identification results of these child ANNs with the LS weighted fusion algorithm and identifies the PQ disturbances from the fused result. Compared with a single neural network, the combining network with the LS weighted fusion algorithm can identify the PQ disturbances correctly when noise is strong, whereas a single neural network may fail in this case. Furthermore, the combining neural network is more reliable than a single neural network. The simulation results support these conclusions.
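
    The least-squares fusion step itself is simple to sketch: given the outputs of several child classifiers on labelled data, the fusion weights are the least-squares solution that maps the stacked outputs onto the true labels, and new samples are classified with the fused score. In the sketch below the child "networks" are simulated as noisy predictors rather than trained BP ANNs, and all numbers are illustrative.

```python
# Least-squares weighted fusion of several child-classifier outputs.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_children = 200, 5
truth = rng.integers(0, 2, size=n_samples).astype(float)      # target labels

# Simulated child-classifier outputs: the truth plus child-specific noise.
noise = rng.normal(0.0, np.array([0.2, 0.3, 0.4, 0.5, 0.8]),
                   size=(n_samples, n_children))
child_outputs = truth[:, None] + noise

# Least-squares fusion weights (one weight per child classifier).
weights, *_ = np.linalg.lstsq(child_outputs, truth, rcond=None)
fused = child_outputs @ weights
accuracy = np.mean((fused > 0.5) == truth)

print("fusion weights:", np.round(weights, 3))   # noisier children get less weight
print("fused accuracy: %.3f" % accuracy)
```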

  19. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

    Full Text Available In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case with RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.

  20. Multi-criteria optimisation problems for chemical engineering systems and algorithms for their solution based on fuzzy mathematical methods.

    Science.gov (United States)

    Orazbayev, B B; Orazbayeva, K N; Kurmangaziyeva, L T; Makhatova, V E

    2015-01-01

    Mathematical formulations for the multi-criteria optimisation of chemical engineering systems, for example for optimising the working regimes of industrial installations for benzene production, have been developed, and algorithms for their solution based on fuzzy mathematical methods have been devised. Since the chemical engineering system under investigation is characterised by multiple criteria and often functions under conditions of uncertainty, the problem is formulated as a multi-criteria fuzzy mathematical programming problem. New mathematical formulations of the problems being solved in a fuzzy environment and heuristic algorithms for their solution have been developed by modifying various optimisation principles based on fuzzy mathematical methods.

  1. Multi-criteria optimisation problems for chemical engineering systems and algorithms for their solution based on fuzzy mathematical methods

    Science.gov (United States)

    Orazbayev, B. B.; Orazbayeva, K. N.; Kurmangaziyeva, L. T.; Makhatova, V.E.

    2015-01-01

    Mathematical formulations for the multi-criteria optimisation of chemical engineering systems, for example for optimising the working regimes of industrial installations for benzene production, have been developed, and algorithms for their solution based on fuzzy mathematical methods have been devised. Since the chemical engineering system under investigation is characterised by multiple criteria and often functions under conditions of uncertainty, the problem is formulated as a multi-criteria fuzzy mathematical programming problem. New mathematical formulations of the problems being solved in a fuzzy environment and heuristic algorithms for their solution have been developed by modifying various optimisation principles based on fuzzy mathematical methods.

  2. Numerical methods and inversion algorithms in reservoir simulation based on front tracking

    Energy Technology Data Exchange (ETDEWEB)

    Haugse, Vidar

    1999-04-01

    This thesis uses front tracking to analyse laboratory experiments on multiphase flow in porous media. New methods for parameter estimation in two- and three-phase relative permeability experiments have been developed. Upscaling of heterogeneous and stochastic porous media is analysed. Numerical methods based on front tracking are developed and analysed; such methods are efficient for problems involving steep changes in the physical quantities. Multi-dimensional problems are solved by combining front tracking with dimensional splitting. A method for adaptive grid refinement is also developed.

  3. Optimization of flapping-wing micro aircrafts based on the kinematic parameters using genetic algorithm method

    Directory of Open Access Journals (Sweden)

    Ebrahim BARATI

    2013-03-01

    Full Text Available In this paper the optimization of kinematics, which has a great influence on the performance of flapping-foil propulsion, is investigated. The purpose of the optimization is to design a flapping-wing micro aircraft with appropriate kinematic and aerodynamic features, making the micro aircraft suitable for transportation over large distances with minimum energy consumption. For the optimal design, the pitch amplitude, wing reduced frequency and phase difference between plunging and pitching are considered as design parameters, and the consumed energy, the thrust generated by the wings and the lost power are computed using a 2D quasi-steady aerodynamic model and a multi-objective genetic algorithm. Based on the thrust optimization, an increase in pitch amplitude reduces the power consumption; in this case the lost power increases and the maximum thrust coefficient is computed to be 2.43. Based on the power optimization, the results show that an increase in pitch amplitude leads to an increase in power consumption. Additionally, the minimum lost power obtained in this case is 23% at a pitch amplitude of 25°, a wing reduced frequency of 0.42 and a phase angle difference between plunging and pitching of 77°. Furthermore, the wing reduced frequency can be estimated by regression with respect to the pitch amplitude, because the variation of reduced frequency with pitch amplitude is approximately linear.

  4. Multivariable PID Decoupling Control Method of Electroslag Remelting Process Based on Improved Particle Swarm Optimization (PSO) Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2014-02-01

    Full Text Available A mathematical model of the electroslag remelting (ESR) process is established based on its technical features and dynamic characteristics. A new multivariable self-tuning proportional-integral-derivative (PID) controller, tuned optimally by an improved particle swarm optimization (IPSO) algorithm, is proposed to control the two-input/two-output (TITO) ESR process. An adaptive chaotic migration mutation operator is applied to particles trapped in clusters in order to enhance the diversity of the particles in the population, prevent premature convergence and improve the search efficiency of the PSO algorithm. The simulation results show the feasibility and effectiveness of the proposed control method. The new method can cope with dynamic working conditions and the coupling features of the system over a wide range, and it has strong robustness and adaptability.
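
    A hedged sketch of the basic idea of PSO-based PID tuning is shown below for a single-loop discrete first-order plant: each particle is a (Kp, Ki, Kd) triple and its fitness is the tracking error of a step response. The plant model, PSO settings and cost are illustrative; the adaptive chaotic mutation and the TITO decoupling of the paper are not reproduced.

```python
# Plain PSO tuning the gains of a PID loop on a toy first-order plant.
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps = 0.05, 200

def step_response_cost(gains):
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(n_steps):
        err = 1.0 - y                              # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv     # PID control law
        prev_err = err
        y += dt * (-y + u)                         # first-order plant: y' = -y + u
        if abs(y) > 1e6:                           # unstable gains: penalise
            return 1e6
        cost += abs(err) * dt                      # IAE criterion
    return cost

n_particles, n_iter = 20, 40
x = rng.uniform(0.0, 10.0, size=(n_particles, 3))  # particles = (Kp, Ki, Kd)
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([step_response_cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 10.0)
    cost = np.array([step_response_cost(p) for p in x])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("tuned (Kp, Ki, Kd):", np.round(gbest, 2), " IAE:", round(pbest_cost.min(), 4))
```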

  5. An Aircraft Navigation System Fault Diagnosis Method Based on Optimized Neural Network Algorithm

    Institute of Scientific and Technical Information of China (English)

    Jean-dedieu Weyepe

    2014-01-01

    The air data and inertial reference system (ADIRS) is one of the complex sub-systems in the aircraft navigation system, and it plays an important role in the flight safety of the aircraft. This paper proposes an optimized neural network algorithm, a combination of a neural network and the ant colony algorithm, to improve the efficiency of the maintenance engineer's job tasks.

  6. General moving objects recognition method based on graph embedding dimension reduction algorithm

    Institute of Scientific and Technical Information of China (English)

    Yi ZHANG; Jie YANG; Kun LIU

    2009-01-01

    Effective and robust recognition and tracking of objects are the key problems in visual surveillance systems. Most existing object recognition methods were designed with particular objects in mind. This study presents a general moving object recognition method using global features of targets. Targets are extracted with an adaptive Gaussian mixture model and their silhouette images are captured and unified. A new object silhouette database is built to provide abundant samples for training the subspace feature; this database is more convincing than previous ones. A more effective dimension reduction method based on graph embedding is used to obtain the projection eigenvector. In our experiments, we show the effective performance of our method in addressing the moving object recognition problem and its superiority compared with previous methods.

  7. A comparison of three Deformable Image Registration Algorithms in 4DCT using conventional contour based methods and voxel-by-voxel comparison methods.

    Directory of Open Access Journals (Sweden)

    Mirek Fatyga

    2015-02-01

    Full Text Available Background: Commonly used methods of assessing the accuracy of Deformable Image Registration (DIR) rely on image segmentation or landmark selection. These methods are very labor intensive and are thus limited to a relatively small number of image pairs. The direct voxel-by-voxel comparison can be automated to examine fluctuations in DIR quality on a long series of image pairs. Methods: A voxel-by-voxel comparison of three DIR algorithms applied to lung patients is presented. Registrations are compared using volume histograms formed both with individual DIR maps and with a voxel-by-voxel subtraction of the two maps. When two DIR maps agree, one concludes that both maps are interchangeable in treatment planning applications, though one cannot conclude that either one agrees with the ground truth. If two DIR maps significantly disagree, one concludes that at least one of the maps deviates from the ground truth. We use the method to compare three DIR algorithms applied to peak inhale-peak exhale registrations of 4DFBCT data obtained from thirteen patients. Results: All three algorithms appear to be nearly equivalent when compared using DICE similarity coefficients. A comparison based on Jacobian Volume Histograms shows that all three algorithms measure changes in the total volume of the lungs with reasonable accuracy but show large differences in the variance of the Jacobian distribution on all contoured structures. Analysis of the voxel-by-voxel subtraction of DIR maps shows that the three algorithms differ to a degree that is sufficient to create a potential for dosimetric discrepancy during dose accumulation. Conclusions: DIR algorithms can perform well in some clinical applications, while potentially failing in others. These algorithms are best treated as potentially useful approximations of tissue deformation that need to be separately validated for every intended clinical application.
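
    The voxel-by-voxel quantity behind a Jacobian Volume Histogram is the determinant of the deformation gradient, det(I + grad u), which measures local volume change. The small numpy sketch below computes it with finite differences on a synthetic 3-D displacement field rather than on a clinical DIR map; the field and grid are made up.

```python
# Per-voxel Jacobian determinant of a displacement field and its histogram.
import numpy as np

shape = (40, 40, 40)
zi, yi, xi = np.indices(shape, dtype=float)               # voxel coordinates
# Synthetic smooth displacement field (in voxel units), mildly non-uniform.
u = np.stack([0.02 * xi, 0.02 * yi,
              0.02 * zi * (1 + 0.5 * np.sin(xi / 8.0))], axis=0)

# grad[i, j] = d u_i / d x_j (finite differences along each grid axis).
grad = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)], axis=0)
F = grad + np.eye(3)[:, :, None, None, None]              # deformation gradient
jac = np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))     # det per voxel

hist, edges = np.histogram(jac, bins=20)                  # "JVH"-style histogram
print("mean Jacobian: %.3f (1.0 = no volume change)" % jac.mean())
print("fraction of voxels with local expansion:", np.mean(jac > 1.0))
print("JVH bin counts:", hist)
```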

  8. A recognition method of vibration parameter image based on improved immune negative selection algorithm for rotating machinery

    Institute of Scientific and Technical Information of China (English)

    DOU Wei; LIU Zhan-sheng

    2009-01-01

    To overcome the limitations of traditional monitoring methods, this paper presents an online abnormality monitoring method for rotating machinery, based on vibration parameter images and the negative selection mechanism of the biological immune system. The method uses biological cloning and learning mechanisms to improve the negative selection algorithm so that it generates detectors with different monitoring radii, covers the abnormality space effectively, and avoids problems such as the low efficiency of detector generation. The result of an example applying the presented monitoring method shows that it can alleviate the difficulty of obtaining fault samples and extract the turbine state characteristics effectively; it can also detect abnormalities caused by various turbine faults and obtain the degree of abnormality accurately. The monitoring precision achieved indicates that this method is feasible and has good online quality, accuracy and robustness.

  9. Load balancing prediction method of cloud storage based on analytic hierarchy process and hybrid hierarchical genetic algorithm.

    Science.gov (United States)

    Zhou, Xiuze; Lin, Fan; Yang, Lvqing; Nie, Jing; Tan, Qian; Zeng, Wenhua; Zhang, Nian

    2016-01-01

    With the continuous expansion of cloud computing platforms and the rapid growth of users and applications, how to use system resources efficiently to improve the overall performance of cloud computing has become a crucial issue. To address this issue, this paper proposes a method that uses an analytic hierarchy process group decision (AHPGD) to evaluate the load state of server nodes. Training was carried out by using a hybrid hierarchical genetic algorithm (HHGA) to optimize a radial basis function neural network (RBFNN). The AHPGD produces an aggregated load indicator for the virtual machines in the cloud, which becomes the input parameter of the predictive RBFNN. This paper also proposes a new dynamic load balancing scheduling algorithm combined with a weighted round-robin algorithm, which uses the periodically predicted load values of the nodes based on AHPGD and the RBFNN optimized by the HHGA, then calculates the corresponding weight values of the nodes and updates them constantly. Meanwhile, it keeps the advantages and avoids the shortcomings of the static weighted round-robin algorithm.
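
    The dynamic weighted round-robin part of the scheme can be sketched in a few lines: each node's weight is derived from its predicted load (the lower the predicted load, the more requests it receives) and the weights are refreshed every scheduling period. The predicted loads below are placeholders for the AHPGD/RBFNN output, and the weighting rule is illustrative.

```python
# Weighted round-robin dispatch driven by predicted per-node load.
from itertools import cycle

def weights_from_predicted_load(predicted_loads, total_slots=20):
    """Turn predicted load values (0..1, higher = busier) into integer weights."""
    capacity = [max(0.05, 1.0 - p) for p in predicted_loads]   # spare capacity
    scale = total_slots / sum(capacity)
    return [max(1, round(c * scale)) for c in capacity]

def build_schedule(weights):
    """Expand weights into a round-robin dispatch sequence of node indices."""
    return [node for node, w in enumerate(weights) for _ in range(w)]

predicted = [0.85, 0.40, 0.10, 0.55]               # hypothetical per-node predictions
weights = weights_from_predicted_load(predicted)
schedule = cycle(build_schedule(weights))

assignments = [next(schedule) for _ in range(12)]  # dispatch 12 incoming requests
print("weights:", weights)
print("request -> node:", assignments)
```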

  10. A genetic-algorithm-based method to find the unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    OpenAIRE

    Bang, Jeongho; Yoo, Seokwon

    2014-01-01

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the "genetic parameter vector" of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the ...
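
    A hedged toy version of the idea follows: a "genetic parameter vector" of three Euler angles parameterises a single-qubit unitary, and a simple genetic algorithm evolves the population until the unitary matches a target gate (here the Hadamard, up to a global phase). This is an illustration of the approach, not the authors' implementation or their oracle-decision application.

```python
# Genetic algorithm evolving the parameters of a single-qubit unitary.
import numpy as np

rng = np.random.default_rng(11)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)             # target gate

def unitary(p):
    th, ph, la = p
    return np.array([[np.cos(th / 2), -np.exp(1j * la) * np.sin(th / 2)],
                     [np.exp(1j * ph) * np.sin(th / 2),
                      np.exp(1j * (ph + la)) * np.cos(th / 2)]])

def fitness(p):                                           # 1.0 = perfect match
    return abs(np.trace(unitary(p).conj().T @ H)) / 2.0

pop = rng.uniform(0, 2 * np.pi, size=(40, 3))             # genetic parameter vectors
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]          # selection: top half
    children = []
    while len(children) < 20:
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(3) < 0.5                        # uniform crossover
        child = np.where(mask, a, b)
        child = child + rng.normal(0, 0.1, size=3)        # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best fitness: %.4f" % fitness(best))
print(np.round(unitary(best), 3))
```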

  11. Numerical algorithms based on Galerkin methods for the modeling of reactive interfaces in photoelectrochemical (PEC) solar cells

    Science.gov (United States)

    Harmon, Michael; Gamba, Irene M.; Ren, Kui

    2016-12-01

    This work concerns the numerical solution of a coupled system of self-consistent reaction-drift-diffusion-Poisson equations that describes the macroscopic dynamics of charge transport in photoelectrochemical (PEC) solar cells with reactive semiconductor and electrolyte interfaces. We present three numerical algorithms, mainly based on a mixed finite element and a local discontinuous Galerkin method for spatial discretization, with carefully chosen numerical fluxes, and implicit-explicit time stepping techniques, for solving the time-dependent nonlinear systems of partial differential equations. We perform computational simulations under various model parameters to demonstrate the performance of the proposed numerical algorithms as well as the impact of these parameters on the solution to the model.

  12. 3D Continuum Radiative Transfer. An adaptive grid construction algorithm based on the Monte Carlo method

    Science.gov (United States)

    Niccolini, G.; Alcolea, J.

    Solving the radiative transfer problem is a common issue in many fields of astrophysics. With the increasing angular resolution of space- or ground-based telescopes (VLTI, HST) and of next-decade instruments (NGST, ALMA, ...), astrophysical objects reveal, and will certainly continue to reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools that are able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program using a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of this tool, one can solve the continuum radiative transfer problem (e.g. a dusty medium), compute the temperature structure of the considered medium and obtain the flux of the object (SED and images).

  13. Design of Satellite Attitude Control Algorithm Based on the SDRE Method Using Gas Jets and Reaction Wheels

    Directory of Open Access Journals (Sweden)

    Luiz C. G. de Souza

    2013-01-01

    Full Text Available An experimental attitude control algorithm design using prototypes can minimize space mission costs by reducing the number of errors transmitted to the next phase of the project. The Space Mechanics and Control Division (DMC of INPE is constructing a 3D simulator to supply the conditions for implementing and testing satellite control hardware and software. Satellite large angle maneuver makes the plant highly nonlinear and if the parameters of the system are not well determined, the plant can also present some level of uncertainty. As a result, controller designed by a linear control technique can have its performance and robustness degraded. In this paper the standard LQR linear controller and the SDRE controller associated with an SDRE filter are applied to design a controller for a nonlinear plant. The plant is similar to the DMC 3D satellite simulator where the unstructured uncertainties of the system are represented by process and measurements noise. In the sequel the State-Dependent Riccati Equation (SDRE method is used to design and test an attitude control algorithm based on gas jets and reaction wheel torques to perform large angle maneuver in three axes. The SDRE controller design takes into account the effects of the plant nonlinearities and system noise which represents uncertainty. The SDRE controller performance and robustness are tested during the transition phase from angular velocity reductions to normal mode of operation with stringent pointing accuracy using a switching control algorithm based on minimum system energy. This work serves to validate the numerical simulator model and to verify the functionality of the control algorithm designed by the SDRE method.

  14. Ensemble Methods Foundations and Algorithms

    CERN Document Server

    Zhou, Zhi-Hua

    2012-01-01

    An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a

  15. Nonlinear approximation method in Lagrangian relaxation-based algorithms for hydrothermal scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Guan, X. [Pacific Gas and Electric, San Francisco, CA (United States); Luh, P.B.; Zhang, L. [Univ. of Connecticut, Storrs, CT (United States). Dept. of Electrical and Systems Engineering

    1995-05-01

    When the Lagrangian relaxation technique is used to solve hydrothermal scheduling problems, many subproblems have linear stage-wise cost functions. A well recognized difficulty is that the solutions to these subproblems may oscillate between maximum and minimum generations with slight changes of the multipliers. Furthermore, the subproblem solutions may become singular, i.e., they are un-determined when the linear coefficients become zero. This may result in large differences between subproblem solutions and the optimal primal schedule. In this paper, a nonlinear approximation method is presented which utilizes nonlinear functions, quadratic in this case, to approximate relevant linear cost functions. The analysis shows that the difficulty associated with solution oscillation is reduced, and singularity is avoided. Extensive testing based on Northeast Utilities data indicates that the method consistently generates better schedules than the standard Lagrangian relaxation method.

  16. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    Science.gov (United States)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an innovative and effective target detection algorithm based on OpenCV is proposed in this paper, exploiting the frame-to-frame correlation of the moving target and the irrelevance of noise in sequential images. Firstly, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, we can detect the infrared moving target more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking once the OpenCV algorithms are transplanted to a DSP platform. Afterwards, we use an optimal thresholding algorithm to segment the image, transforming the gray images into binary images in order to provide a better condition for detection in the image sequences. Finally, using the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, reduce spurious areas, and smooth region boundaries. Experimental results prove that our algorithm precisely achieves the purpose of rapid detection of small infrared targets.
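
    A hedged sketch of combining frame differencing with adaptive background subtraction in OpenCV is given below, followed by thresholding and morphological clean-up. The parameter values and the video file name are placeholders, and the paper's optimal-threshold selection is replaced here by Otsu thresholding.

```python
# Frame difference + adaptive background subtraction for moving-target detection.
import cv2

cap = cv2.VideoCapture("infrared_sequence.avi")          # hypothetical input file
back_sub = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    fg_mask = back_sub.apply(gray)                        # adaptive background model
    if prev_gray is None:
        prev_gray = gray
        continue
    diff = cv2.absdiff(gray, prev_gray)                   # frame-to-frame difference
    _, diff_mask = cv2.threshold(diff, 0, 255,
                                 cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    prev_gray = gray

    motion = cv2.bitwise_and(fg_mask, diff_mask)          # combine the two cues
    motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 10:                       # drop tiny noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 1)

    cv2.imshow("detection", frame)
    if cv2.waitKey(30) & 0xFF == 27:                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```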

  17. A method for classification of network traffic based on C5.0 Machine Learning Algorithm

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup

    2012-01-01

    and classification, an algorithm for recognizing flow direction and the C5.0 itself. Classified applications include Skype, FTP, torrent, web browser traffic, web radio, interactive gaming and SSH. We performed subsequent tries using different sets of parameters and both training and classification options...

  18. Algorithm composition scheme for problems in composite domains based on the difference potential method

    Science.gov (United States)

    Ryaben'kii, V. S.; Turchaninov, V. I.; Epshteyn, Ye. Yu.

    2006-10-01

    An algorithm composition scheme for the numerical solution of boundary value problems in composite domains is proposed and illustrated using an example. The scheme requires neither difference approximations of the boundary conditions nor matching conditions on the boundary between the subdomains. The scheme is suited for multiprocessor computers.

  19. An intelligent modeling method based on genetic algorithm for partner selection in virtual organizations

    Directory of Open Access Journals (Sweden)

    Pacuraru Raluca

    2011-04-01

    Full Text Available The goal of a Virtual Organization is to find the most appropriate partners in terms of expertise, cost, quick response, and environment. In this study we propose a model and a solution approach to a partner selection problem considering three main evaluation criteria: cost, time and risk. This multiobjective problem is solved by an improved genetic algorithm (GA) that includes meiosis-specific characteristics and step-size adaptation for the mutation operator. The algorithm performs strong exploration initially and exploitation in later generations. It has high global search ability and a fast convergence rate, and it also avoids premature convergence. On the basis of the numerical investigations, the benefit of incorporating the proposed enhancements has been successfully demonstrated.

  20. Research of CBR, DM and smart algorithms based design methods for high-rise building structure form-selection

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shi-hai; LIU Shu-jun; LIU Xiao-yan; OU Jin-ping

    2006-01-01

    First, the high-rise building structure design process is divided into three relevant steps: scheme generation and creation, performance evaluation, and scheme optimization. Then, using a relational database, a case database of high-rise structures is constructed, structure form-selection design methods based on smart algorithms such as CBR, DM, FINS, NN and GA are presented, and the original form system of this approach and its general structure are given. CBR and DM are used to generate scheme candidates, FINS and NN to evaluate and optimize scheme performance, and GA to create new structure forms. Finally, application cases are presented whose results fit in with real projects. By combining expert intelligence, algorithmic intelligence and machine intelligence, this method makes good use not only of engineering project knowledge and expertise but also of the much deeper knowledge contained in various engineering cases. In other words, because the form selection has the strong backing of a vast number of real cases, its results prove more reliable and more acceptable. The introduction of this method therefore provides an effective approach to improving the quality, efficiency, automation and intelligence of high-rise structure form-selection design.

  1. Safety part design optimisation based on the finite elements method and a genetic algorithm

    OpenAIRE

    Gildemyn, Eric; Dal Santo, Philippe; Robert, Camille; POTIRON, Alain; SAÏDANE, Delphine

    2010-01-01

    This paper deals with a numerical approach for improving the mechanical properties of a safety belt anchor by optimizing its shape and the manufacturing process by using a multi-objective genetic algorithm (NSGA-2). This kind of automotive component is typically manufactured in three stages: blanking, rounding of the edges by punching and finally bending (90°). This study focuses only on the rounding and bending processes. The numerical model is linked to the genetic a...

  2. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Jeongho [Seoul National University, Seoul (Korea, Republic of); Hanyang University, Seoul (Korea, Republic of); Yoo, Seokwon [Hanyang University, Seoul (Korea, Republic of)

    2014-12-15

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the 'genetic parameter vector' of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem, the one-bit oracle decision problem, or the often-called Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem by using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of the original Deutsch's algorithm.

  3. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    Science.gov (United States)

    Bang, Jeongho; Yoo, Seokwon

    2014-12-01

We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the "genetic parameter vector" of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem: the one-bit oracle decision problem, often called the Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize variant models of the original Deutsch algorithm.
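
    The abstract does not spell out the encoding, so the following is only a minimal sketch of the general idea: a "genetic parameter vector" of rotation angles parameterizes a single-qubit unitary, and a plain genetic loop (truncation selection, mutation, elitism) evolves the population toward a target gate, here the Hadamard gate chosen purely for illustration; the fidelity-based fitness and all hyper-parameters are assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def unitary(p):
    """Build a 2x2 unitary U = Rz(a) Ry(b) Rz(c) from a parameter vector p."""
    a, b, c = p
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return rz(a) @ ry(b) @ rz(c)

target = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard as an example target

def fitness(p):
    """Gate fidelity |Tr(U_target^dagger U)| / 2; 1.0 means a perfect match (up to phase)."""
    return abs(np.trace(target.conj().T @ unitary(p))) / 2

pop = rng.uniform(-np.pi, np.pi, size=(40, 3))       # population of parameter vectors
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]           # truncation selection: keep the best half
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.1, (40, 3))  # mutation
    children[:20] = parents                           # elitism: carry parents over unchanged
    pop = children

best = pop[np.argmax([fitness(p) for p in pop])]
print("best fidelity:", fitness(best))
```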

  4. Diversity-Based Boosting Algorithm

    Directory of Open Access Journals (Sweden)

    Jafar A. Alzubi

    2016-05-01

Full Text Available Boosting is a well-known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of the training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to the Boosting algorithm called DivBoosting is proposed and studied. Experiments on several data sets are conducted with both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe that it has many advantages over the traditional boosting method because its mechanism is based not solely on selecting the most accurate base classifiers but also on selecting the most diverse set of classifiers.

  5. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    Science.gov (United States)

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to address the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance optimization performance; in addition, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rössler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision in the presence of a certain level of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.

  6. Transmission Power Level Selection Method Based On Binary Search Algorithm for HiLOW

    Directory of Open Access Journals (Sweden)

    Lingeswari V Chandra

    2011-05-01

Full Text Available Recently, sensor communication research has introduced IP-based communication, known as 6LoWPAN, to sensor networks. 6LoWPAN gives a new perspective to sensor networks by enabling IPv6 to be applied to wireless as well as wired sensors. Dedicated routing protocols based on 6LoWPAN were soon introduced, and the Hierarchical Routing Protocol for 6LoWPAN (HiLOW) is one of them. HiLOW clearly defines the routing tree setup process, the address allocation technique and the data routing process, but it has shortcomings in terms of transmission power selection. HiLOW does not specify how a suitable transmission power is selected for sensor communication, which leads to the assumption that at all times and in all scenarios the sensors use maximum transmission power. If the sensors use maximum transmission power for communication even when it is not necessary, power depletion of the sensors is amplified and the network lifetime is significantly reduced. In this paper we present a brief introduction to 6LoWPAN, a concise review of HiLOW, a highlight of the issues in each process of HiLOW, and a new transmission power selection method for HiLOW, as sketched below.
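
    The record does not reproduce HiLOW's exact selection rule, so the following is only a sketch of the underlying idea under the assumption that link quality is monotone in transmit power: binary search over a sorted list of discrete power levels finds the lowest level for which a hypothetical link probe `link_ok` still succeeds.

```python
def select_tx_power(levels, link_ok):
    """Return the lowest power level in the sorted list `levels` for which
    link_ok(level) is True, probing O(log n) levels via binary search.
    Assumes link success is monotone in transmit power."""
    lo, hi = 0, len(levels) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if link_ok(levels[mid]):      # e.g. an ACK was received at this power
            best = levels[mid]
            hi = mid - 1              # try to go lower
        else:
            lo = mid + 1              # need more power
    return best                       # None if even the maximum power fails

# usage sketch with a fake link model: anything >= -7 dBm succeeds
print(select_tx_power([-17, -12, -7, -3, 0, 4], lambda p: p >= -7))
```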

  7. Genetic Algorithm and Graph Theory Based Matrix Factorization Method for Online Friend Recommendation

    Directory of Open Access Journals (Sweden)

    Qu Li

    2014-01-01

Full Text Available Online friend recommendation is a fast developing topic in web mining. In this paper, we use SVD-style matrix factorization to model user and item feature vectors and stochastic gradient descent to update the parameters and improve accuracy (see the sketch below). To tackle the cold start problem and data sparsity, we use a KNN model to influence the user feature vectors. At the same time, we use graph theory to partition communities with fairly low time and space complexity. Moreover, matrix factorization can combine online and offline recommendation. Experiments show that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
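
    As a rough sketch of the matrix-factorization core described above (the KNN cold-start correction and the graph-based community partition are not shown), the snippet below learns user and item factors from observed ratings with plain stochastic gradient descent; the learning rate, regularization and toy data are assumptions for illustration.

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=30, seed=0):
    """Factorize sparse (user, item, rating) triples into P (users x k) and Q (items x k)."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            pu = P[u].copy()
            err = r - pu @ Q[i]                        # prediction error on one observation
            P[u] += lr * (err * Q[i] - reg * pu)       # gradient step with L2 regularization
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# toy usage: 3 users, 4 items
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0), (2, 3, 2.0)]
P, Q = mf_sgd(data, n_users=3, n_items=4)
print("predicted rating of user 0 for item 2:", P[0] @ Q[2])
```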

  8. A mission-oriented orbit design method of remote sensing satellite for region monitoring mission based on evolutionary algorithm

    Science.gov (United States)

    Shen, Xin; Zhang, Jing; Yao, Huang

    2015-12-01

Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Taking advantage of nearly identical illumination conditions over the same location and of global coverage, most of these satellites are operated on sun-synchronous orbits. However, this inevitably brings some problems, the most significant being that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of specific region monitoring missions. To overcome these disadvantages, two approaches are exploited: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. An effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm is presented in this paper. The orbit design problem is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is utilized to solve it. First, the mission demand is transformed into multiple objective functions, and the six orbital elements of the satellite are taken as genes in the design space; then a simulated evolution process is performed. An optimal solution can be obtained after a specified number of generations via the evolution operations (selection, crossover and mutation). To examine the validity of the proposed method, a case study is introduced: orbit design of an optical satellite for regional disaster monitoring, where minimizing the average revisit time interval is one of the two mission objectives. The simulation results show that the solution obtained by our method meets the users' demand. We can conclude that the method presented in this paper is efficient for remote sensing orbit design.

  9. Control parameter optimal tuning method based on annealing-genetic algorithm for complex electromechanical system

    Institute of Scientific and Technical Information of China (English)

    贺建军; 喻寿益; 钟掘

    2003-01-01

A new search algorithm named the annealing-genetic algorithm (AGA) was proposed by skillfully merging GA with SAA. It draws on the merits of both GA and SAA and offsets their shortcomings. The difference from GA is that AGA takes the objective function directly as the fitness function, so it avoids the unnecessary time expense of the floating-point calculations needed for function conversion. The difference from SAA is that AGA does not need to execute a very long Markov chain iteration at each temperature point, so it speeds up the convergence of the solution; it also makes no assumption about the search space, so it is simple and easy to implement and can be applied to a wide class of problems. The optimizing principle and the implementation steps of AGA are expounded, and a sketch of the idea is given below. The example of parameter optimization of a typical complex electromechanical system, a temper mill, shows that AGA is effective and superior to the conventional GA and SAA. The control system of the temper mill optimized by AGA has optimal performance within the adjustable ranges of its parameters.
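
    A minimal sketch of the idea described above, not the authors' exact operators: the raw objective value serves directly as fitness, and each crossed-over and mutated child replaces its parent according to a Metropolis acceptance test with a geometrically cooled temperature, so no long Markov chain is run at any temperature level; the test function and all parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):                                    # objective to minimize (sphere as a stand-in)
    return float(np.sum(x ** 2))

def aga(dim=4, pop_size=30, gens=200, t0=5.0, cooling=0.97):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    temp = t0
    for _ in range(gens):
        new_pop = []
        for x in pop:
            mate = pop[rng.integers(pop_size)]
            child = 0.5 * (x + mate) + rng.normal(0, 0.3, dim)   # crossover + mutation
            # Metropolis acceptance borrowed from simulated annealing:
            # a worse child still replaces its parent with probability exp(-dE / T)
            d_e = f(child) - f(x)
            if d_e < 0 or rng.random() < np.exp(-d_e / temp):
                new_pop.append(child)
            else:
                new_pop.append(x)
        pop = np.array(new_pop)
        temp *= cooling                        # cooling schedule, no inner Markov chain
    return min(pop, key=f)

best = aga()
print("best objective value:", f(best))
```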

  10. Adaptive de-noising method based on wavelet and adaptive learning algorithm in on-line PD monitoring

    Institute of Scientific and Technical Information of China (English)

    王立欣; 诸定秋; 蔡惟铮

    2002-01-01

Extracting PD pulses from various background noises is an important step in the online monitoring of partial discharge (PD). An adaptive de-noising method is introduced for adaptive noise reduction during the detection of PD pulses. This method is based on the Wavelet Transform (WT), and in the wavelet domain the noise components decomposed at each level are reduced by independent thresholds. Instead of the standard hard thresholding function, a new type of hard thresholding function with a continuous derivative is employed. For the selection of thresholds, an unsupervised learning algorithm based on the gradient of the mean square error (MSE) is presented to search for the optimal threshold for noise reduction; the optimal threshold is the one that yields the minimum MSE. Simulated signals and on-site experimental data processed by this method show that background noises such as narrowband noise can be reduced efficiently. Furthermore, compared with the conventional wavelet de-noising method, the adaptive de-noising method performs better at preserving the pulses and is more adaptive when suppressing the background noise of PD signals. A simplified sketch of the wavelet thresholding step is given below.
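
    The paper's new thresholding function and its gradient-based threshold search are not reproduced here; the sketch below only illustrates the wavelet-domain thresholding step with PyWavelets, using a standard soft threshold and a brute-force MSE-based threshold pick against a known clean reference (available in the simulation setting described above).

```python
import numpy as np
import pywt

def wavelet_denoise(x, threshold, wavelet="db4", level=4):
    """Decompose, threshold the detail coefficients level by level, reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def pick_threshold(noisy, clean, candidates):
    """Select the threshold minimizing MSE against a known clean reference
    (a stand-in for the paper's gradient-based unsupervised search)."""
    mse = lambda t: np.mean((wavelet_denoise(noisy, t) - clean) ** 2)
    return min(candidates, key=mse)

# toy usage: a damped PD-like pulse buried in noise
t = np.linspace(0, 1, 1024)
clean = np.exp(-40 * t) * np.sin(2 * np.pi * 80 * t)
noisy = clean + 0.2 * np.random.default_rng(0).standard_normal(t.size)
thr = pick_threshold(noisy, clean, np.linspace(0.05, 0.6, 12))
print("selected threshold:", round(float(thr), 3))
```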

  11. Privacy Preserving Multiview Point Based BAT Clustering Algorithm and Graph Kernel Method for Data Disambiguation on Horizontally Partitioned Data

    Directory of Open Access Journals (Sweden)

    J. Anitha

    2015-06-01

Full Text Available Data mining has been a popular research area for more than a decade due to its vast spectrum of applications. However, the popularity and wide availability of data mining tools have also raised concerns about the privacy of individuals. The burden of data privacy protection thus falls on the data holder; moreover, when a data disambiguation problem occurs in the data matrix, the anonymized data become less secure. All existing privacy-preserving clustering methods perform clustering from a single point of view, the origin, whereas a multi-viewpoint approach utilizes many different viewpoints, namely objects assumed not to be in the same cluster as the two objects being measured. To address these problems, this study presents a multi-viewpoint-based clustering method for anonymized data. First, the data disambiguation problem is solved using the Ramon-Gartner Subtree Graph Kernel (RGSGK), where weight values are assigned and the kernel value is determined for the disambiguated data. Privacy is obtained by anonymization, where the data are encrypted with a secure key obtained by Ring-Based Fully Homomorphic Encryption (RBFHE). In order to group the anonymized data, a BAT clustering method based on multi-viewpoint similarity measurement is proposed, called MVBAT. A distance matrix is first calculated, from which the similarity and dissimilarity matrices are formed. The experimental results of the proposed MVBAT clustering algorithm are compared with conventional methods in terms of F-measure, running time, privacy loss and utility loss. The RBFHE encryption results are also compared with existing methods in terms of communication cost on UCI machine learning datasets such as the Adult and Housing datasets.

  12. Method for Walking Gait Identification in a Lower Extremity Exoskeleton based on C4.5 Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Qing Guo

    2015-04-01

    Full Text Available A gait identification method for a lower extremity exoskeleton is presented in order to identify the gait sub-phases in human-machine coordinated motion. First, a sensor layout for the exoskeleton is introduced. Taking the difference between human lower limb motion and human-machine coordinated motion into account, the walking gait is divided into five sub-phases, which are ‘double standing’, ‘right leg swing and left leg stance’, ‘double stance with right leg front and left leg back’, ‘right leg stance and left leg swing’, and ‘double stance with left leg front and right leg back’. The sensors include shoe pressure sensors, knee encoders, and thigh and calf gyroscopes, and are used to measure the contact force of the foot, and the knee joint angle and its angular velocity. Then, five sub-phases of walking gait are identified by a C4.5 decision tree algorithm according to the data fusion of the sensors’ information. Based on the simulation results for the gait division, identification accuracy can be guaranteed by the proposed algorithm. Through the exoskeleton control experiment, a division of five sub-phases for the human-machine coordinated walk is proposed. The experimental results verify this gait division and identification method. They can make hydraulic cylinders retract ahead of time and improve the maximal walking velocity when the exoskeleton follows the person’s motion.
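
    As a hedged illustration of the classification step, the snippet below trains scikit-learn's decision tree with the entropy criterion as a stand-in for C4.5 (scikit-learn implements CART, not C4.5) on synthetic stand-ins for the fused sensor features; the feature layout and labels are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns are hypothetical fused sensor features: left/right foot pressure,
# left/right knee angle, left/right knee angular velocity.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = rng.integers(0, 5, 500)        # the 5 gait sub-phases as class labels 0..4

# CART with the entropy criterion as a stand-in for C4.5's information-gain-based split
clf = DecisionTreeClassifier(criterion="entropy", max_depth=6, random_state=0)
clf.fit(X[:400], y[:400])
print("held-out accuracy on toy data:", clf.score(X[400:], y[400:]))
```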

  13. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    CERN Document Server

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

Positron emission tomographs (PET) do not measure an image directly. Instead, at the boundary of the field-of-view (FOV) of the PET tomograph they measure a sinogram that consists of measurements of the sums of all the counts along the lines connecting pairs of detectors. As a multitude of detectors is built into a typical PET tomograph, there are many possible detector pairs that contribute to the measurement. The problem is how to turn this measurement into an image (this is called image reconstruction). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques, a stage reached already twenty years ago with the advent of powerful computing processors. However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines of response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
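
    The canonical MLEM update that such reconstructions are built on can be sketched in a few lines; the tiny system matrix below is a toy stand-in, not the scanner geometry described above.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Canonical MLEM update: lambda <- lambda / (A^T 1) * A^T (y / (A lambda)).
    A is the (n_lor x n_voxel) system matrix, y the measured LOR counts."""
    lam = np.ones(A.shape[1])                 # non-negative initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        expected = A @ lam                    # forward projection
        lam *= (A.T @ (y / np.maximum(expected, 1e-12))) / np.maximum(sens, 1e-12)
    return lam

# toy 2-voxel / 3-LOR example
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
true_img = np.array([4.0, 1.0])
y = A @ true_img
print(mlem(A, y))                              # should approach [4, 1]
```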

  14. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    Science.gov (United States)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the

  15. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    Science.gov (United States)

    Holmes, Tim; Zanker, Johannes M

    2013-01-01

Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of color and shape, which has been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  16. Investigating preferences for colour-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    Directory of Open Access Journals (Sweden)

    Tim eHolmes

    2013-12-01

Full Text Available Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioural measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been used as a tool to identify aesthetic preferences (Holmes & Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of colour and shape, which has been promoted in the Bauhaus arts school. We used the same 3 shapes (square, circle, triangle) used by Kandinsky (1923), with the 3 colour palette from the original experiment (A), an extended 7 colour palette (B), and 8 different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with 8 stimuli of different shapes, colours and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested 6 participants extensively on the different conditions and found consistent preferences for individuals, but little evidence at the group level for preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of colour and shapes, but also that these associations are robust within a single individual. These individual differences go some way towards challenging the claims of the universal preference for colour/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics

  17. An estimation method for direct maintenance cost of aircraft components based on particle swarm optimization with immunity algorithm

    Institute of Scientific and Technical Information of China (English)

    WU Jing-min; ZUO Hong-fu; CHEN Yong

    2005-01-01

A particle swarm optimization (PSO) algorithm improved by an immunity algorithm (IA) was presented. The memory and self-regulation mechanisms of IA were used to prevent PSO from plunging into local optima. Vaccination and immune selection mechanisms were used to prevent oscillation during the evolutionary process. The algorithm was introduced through an application to the direct maintenance cost (DMC) estimation of aircraft components. Experimental results show that the algorithm is simple to compute and runs quickly. It resolves the combinatorial optimization problem of component DMC estimation with simple and readily available parameters, and it has higher accuracy than individual methods, such as PLS, BP and v-SVM, as well as better performance than other combined methods, such as basic PSO and the BP neural network.

  18. Land Surface Temperature Retrieval from Landsat 8 TIRS—Comparison between Radiative Transfer Equation-Based Method, Split Window Algorithm and Single Channel Method

    Directory of Open Access Journals (Sweden)

    Xiaolei Yu

    2014-10-01

Full Text Available Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic for global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from space. The Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor for the Landsat project, providing two adjacent thermal bands, which is of great benefit for LST inversion. In this paper, we compared three different approaches for LST inversion from TIRS: the radiative transfer equation-based method, the split-window (SW) algorithm and the single channel (SC) method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combined with the MODIS 8-day emissivity product. For the investigated sites and scenes, the results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy, with an RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy.

  19. Predicting students’ grades using fuzzy non-parametric regression method and ReliefF-based algorithm

    Directory of Open Access Journals (Sweden)

    Javad Ghasemian

    Full Text Available In this paper we introduce two new approaches to predict the grades that university students will acquire in the final exam of a course and improve the obtained result on some features extracted from logged data in an educational web-based system. First w ...

  20. HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM

    Directory of Open Access Journals (Sweden)

    Narendran Rajagopalan

    2012-01-01

Full Text Available The performance of a wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The Media Access Control layer is responsible for key functions such as access control and flow control. During contention, a backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying the different variations of backoff algorithms that have been proposed, a new variant called the History based Probabilistic Backoff algorithm is introduced. Mathematical analysis and simulation results using NS-2 show that the proposed History based Probabilistic Backoff algorithm performs better than the Binary Exponential Backoff algorithm.
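
    The record does not give the history-based update rule itself, so the snippet below only sketches the binary exponential backoff baseline against which the proposed algorithm is compared; the slot time and window limits are illustrative values, not taken from the paper.

```python
import random

def beb_backoff(collisions, cw_min=16, cw_max=1024, slot_time_us=20):
    """Binary exponential backoff: after c collisions the contention window
    doubles (capped at cw_max), and a slot count is drawn uniformly from [0, CW-1]."""
    cw = min(cw_min * (2 ** collisions), cw_max)
    return random.randrange(cw) * slot_time_us   # waiting time in microseconds

for c in range(5):
    print(f"collisions={c}: wait {beb_backoff(c)} us")
```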

  1. Genetic algorithms as global random search methods

    Science.gov (United States)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.

  2. A method for extracting fetal ECG based on EMD-NMF single channel blind source separation algorithm.

    Science.gov (United States)

    He, Pengju; Chen, Xiaomeng

    2015-01-01

Nowadays, detecting the fetal ECG (FECG) from abdominal signals is a commonly used approach, but the fetal ECG is contaminated by the maternal ECG (MECG). Current FECG extraction algorithms are mainly aimed at multi-channel signals; they often assume there is only one fetus and do not consider multiple births. This paper proposes a single-channel blind source separation algorithm to process a single abdominal signal. The algorithm decomposes the single abdominal signal into multiple intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). The correlation matrix of the IMFs is calculated and the number of independent ECG signals is estimated using an eigenvalue method. A nonnegative matrix is constructed according to the determined number and the decomposed IMFs, and the separation of MECG and FECG is achieved using nonnegative matrix factorization (NMF). Experiments on a four-channel synthetic signal and a two-channel ECG verify the correctness and feasibility of the proposed algorithm. The results show that the proposed algorithm can determine the number of independent signals in a single acquired signal, and that the FECG can be extracted from a single-channel observed signal; the algorithm can thus be used to separate MECG and FECG.

  3. Fast clustering algorithm for large ECG data sets based on CS theory in combination with PCA and K-NN methods.

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2014-01-01

Long-term recording of electrocardiogram (ECG) signals plays an important role in health care systems for the diagnosis and treatment of heart diseases. Clustering and classification of the collected data are essential for detecting concealed information in the P-QRS-T waves of long-term ECG recordings. Currently used algorithms have their share of drawbacks: 1) clustering and classification cannot be done in real time; 2) they suffer from huge energy consumption and sampling load. These drawbacks motivated us to develop a novel optimized clustering algorithm which can easily scan large ECG datasets for establishing low-power long-term ECG recording. In this paper, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory as a random sampling procedure. Then, two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC), followed by sorting the data using the K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers, are applied to the proposed algorithm. We show that our algorithm based on PCA features in combination with the K-NN classifier performs better than the other methods. The proposed algorithm outperforms existing algorithms by increasing classification accuracy by 11%. In addition, the proposed algorithm achieves classification accuracy for the K-NN and PNN classifiers and a Receiver Operating Characteristic (ROC) area of 99.98%, 99.83%, and 99.75%, respectively.
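
    A rough sketch of the processing chain described above, with assumptions made explicit: a random Gaussian projection stands in for the compressed-sensing random sampling stage, synthetic data replace real ECG beats, and only the PCA + K-means + K-NN branch is shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 180))            # stand-in for windowed ECG beats

# 1) compressed-sensing stage approximated here by a random projection (random sampling)
phi = rng.standard_normal((180, 60)) / np.sqrt(60)
X_cs = X @ phi

# 2) dimensionality reduction with PCA, 3) K-means clustering to label the beats
X_pca = PCA(n_components=10).fit_transform(X_cs)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_pca)

# 4) K-NN classifier trained on the clustered data
Xtr, Xte, ytr, yte = train_test_split(X_pca, labels, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("K-NN accuracy on cluster labels:", knn.score(Xte, yte))
```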

  4. Edge Crossing Minimization Algorithm for Hierarchical Graphs Based on Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

We present an edge crossing minimization algorithm for hierarchical graphs based on genetic algorithms and compare it with some heuristic algorithms. The proposed algorithm is more efficient and has the following advantages: the framework of the algorithm is unified, the method is simple, and its implementation and revision are easy.

  5. A New Method for a Piezoelectric Energy Harvesting System Using a Backtracking Search Algorithm-Based PI Voltage Controller

    Directory of Open Access Journals (Sweden)

    Mahidur R. Sarker

    2016-09-01

Full Text Available This paper presents a new method for a vibration-based piezoelectric energy harvesting system using a backtracking search algorithm (BSA)-based proportional-integral (PI) voltage controller. This technique eliminates the exhaustive conventional trial-and-error procedure for obtaining optimized values of the proportional gain (Kp) and integral gain (Ki) of the PI voltage controller. The estimated values of Kp and Ki are used in the PI voltage controller developed through the BSA optimization technique. In this study, the mean absolute error (MAE) is used as the objective function to minimize the output error of the piezoelectric energy harvesting system (PEHS). The model of the PEHS is designed and analyzed using the BSA optimization technique. The BSA-based PI voltage controller of the PEHS yields a significant improvement in minimizing the output error of the converter and a robust, regulated pulse-width modulation (PWM) signal to drive the MOSFET switch, with the best response in terms of rise time and settling time under various load conditions.

  6. Genetic algorithm and particle swarm optimization combined with Powell method

    Science.gov (United States)

    Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui

    2013-10-01

In recent years, population-based algorithms have become increasingly robust and easy to use. Based on Darwin's theory of evolution, they search for the best solution with a population that progresses over several generations. This paper presents variants of a hybrid genetic algorithm (Genetic Algorithm) and a bio-inspired hybrid algorithm (Particle Swarm Optimization), both combined with a local method, the Powell method, as sketched below. The developed methods were tested with twelve test functions from the unconstrained optimization context.
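
    A minimal sketch of the hybrid idea, with a substitution made explicit: SciPy's differential evolution is used here as a generic population-based global stage (the paper uses GA and PSO variants), followed by local refinement with SciPy's Powell method on an illustrative test function.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

bounds = [(-5, 5)] * 4

# global, population-based stage (differential evolution as a GA/PSO-style stand-in)
coarse = differential_evolution(rosenbrock, bounds, maxiter=50, seed=0, tol=1e-3)

# local refinement with Powell's derivative-free method, started from the global result
fine = minimize(rosenbrock, coarse.x, method="Powell")
print("coarse objective:", coarse.fun, "refined objective:", fine.fun)
```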

  7. An infrared polarization image fusion method based on NSCT and fuzzy C-means clustering segmentation algorithms

    Science.gov (United States)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Qian, Weixian; Xu, Mengxi

    2014-11-01

    The integration between polarization and intensity images possessing complementary and discriminative information has emerged as a new and important research area. On the basis of the consideration that the resulting image has different clarity and layering requirement for the target and background, we propose a novel fusion method based on non-subsampled Contourlet transform (NSCT) and fuzzy C-means (FCM) segmentation for IR polarization and light intensity images. First, the polarization characteristic image is derived from fusion of the degree of polarization (DOP) and the angle of polarization (AOP) images using local standard variation and abrupt change degree (ACD) combined criteria. Then, the polarization characteristic image is segmented with FCM algorithm. Meanwhile, the two source images are respectively decomposed by NSCT. The regional energy-weighted and similarity measure are adopted to combine the low-frequency sub-band coefficients of the object. The high-frequency sub-band coefficients of the object boundaries are integrated through the maximum selection rule. In addition, the high-frequency sub-band coefficients of internal objects are integrated by utilizing local variation, matching measure and region feature weighting. The weighted average and maximum rules are employed independently in fusing the low-frequency and high-frequency components of the background. Finally, an inverse NSCT operation is accomplished and the final fused image is obtained. The experimental results illustrate that the proposed IR polarization image fusion algorithm can yield an improved performance in terms of the contrast between artificial target and cluttered background and a more detailed representation of the depicted scene.

  8. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis for classifying various pathologies. However, the constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class membership based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non-class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with

  9. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    Science.gov (United States)

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and a reliable probability estimate, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this is invalid when information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed into a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimated source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. The comparison results for the simulation and experiment cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method. The confidence intervals from the linear Tikhonov-PSO method are more reasonable than those from the nonlinear method. The estimation results from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval at given probability levels can additionally be provided by the Tikhonov-PSO method. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
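
    Only the linear Tikhonov step for the source strength, with a known source-receptor relationship, is sketched below; the dispersion kernel is a random stand-in for the linearized dispersion model, and the PSO search over the location and the L-curve choice of the regularization parameter are not shown.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Zero-order Tikhonov solution q = argmin ||A q - y||^2 + lam^2 ||q||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ y)

# A[i, j] = dispersion factor linking candidate source j to sensor i (made-up kernel here);
# in the paper this comes from the linearized atmospheric dispersion model.
rng = np.random.default_rng(0)
A = rng.random((20, 3)) * 1e-3
q_true = np.array([50.0, 0.0, 0.0])            # only source 1 is actually emitting
y = A @ q_true + rng.normal(0, 1e-3, 20)       # noisy sensor concentrations

for lam in (1e-4, 1e-3, 1e-2):                 # L-curve / PSO would pick lam automatically
    print(lam, np.round(tikhonov(A, y, lam), 1))
```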

  10. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    Science.gov (United States)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

In this paper, video degraded by blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm, we first estimate the clean data and the blur function using a Newton optimization method, and then the estimates are improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and the blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve the estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimate by denoising) is iterated until the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so it is not suitable for online applications. However, MATLAB can run functions written in C; the files which hold the source for these functions are called MEX-files, and MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high volume of image data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops, which use 60% of the total execution time of the entire program, and so the runtime should be

  11. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    Science.gov (United States)

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  12. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    Science.gov (United States)

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  13. Immune Based Intrusion Detector Generating Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Xiao-mei; YU Ge; XIANG Guang

    2005-01-01

Immune-based intrusion detection approaches are studied. The methods of constructing the self set and generating mature detectors are researched and improved. A binary-encoding-based self set construction method is applied. First, the traditional mature detector generating algorithm is improved to generate mature detectors and detect intrusions faster. Then, a novel mature detector generating algorithm is proposed based on the negative selection mechanism. According to this algorithm, fewer mature detectors are needed to detect the abnormal activities in the network. Therefore, the speed of generating mature detectors and of intrusion detection is improved. Compared with those based on existing algorithms, the intrusion detection system based on the proposed algorithm has higher speed and accuracy.
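
    A minimal sketch of negative-selection-based mature detector generation, assuming binary-encoded activity strings and the common r-contiguous-bits matching rule; the string length, r and the toy "self" set are illustrative choices, not the paper's parameters.

```python
import random

def matches(detector, pattern, r=8):
    """r-contiguous-bits matching rule often used with negative selection."""
    return any(detector[i:i + r] == pattern[i:i + r] for i in range(len(detector) - r + 1))

def generate_detectors(self_set, n_detectors=50, length=16, r=8, seed=1):
    """Negative selection: keep only random candidates that match no 'self' string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = format(rng.getrandbits(length), f"0{length}b")
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)            # candidate survives and becomes a mature detector
    return detectors

# 'self' = binary-encoded signatures of normal network activity (toy data)
rng = random.Random(0)
self_set = {format(rng.getrandbits(16), "016b") for _ in range(20)}
mature = generate_detectors(self_set)
suspect = "1010101010101010"
print("matched by a mature detector:", any(matches(d, suspect) for d in mature))
```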

  14. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment compares the convergence domains of the modified algorithm and the classical algorithm.
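
    The construction described above translates almost directly into code; the sketch below (NumPy, symmetric Hessian assumed) replaces negative eigenvalues by their absolute values, adds a small floor to guard against zeros, and returns the resulting direction, which is always a descent direction.

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Replace negative Hessian eigenvalues by their absolute values,
    rebuild the Hessian and solve for the search direction."""
    w, v = np.linalg.eigh(hess)                 # hess is assumed symmetric
    w_mod = np.maximum(np.abs(w), 1e-8)         # absolute values (floor guards against 0)
    hess_mod = v @ np.diag(w_mod) @ v.T
    return -np.linalg.solve(hess_mod, grad)     # descent direction for the modified model

# example: an indefinite Hessian where plain Newton would ascend along one axis
grad = np.array([1.0, 1.0])
hess = np.array([[2.0, 0.0], [0.0, -1.0]])
d = modified_newton_direction(grad, hess)
print(d, "descent?", float(grad @ d) < 0)
```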

  15. An Experimental Method for the Active Learning of Greedy Algorithms

    Science.gov (United States)

    Velazquez-Iturbide, J. Angel

    2013-01-01

Greedy algorithms constitute an apparently simple algorithm design technique, but its learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…

  16. Track initiation algorithm based on a layered clustering method

    Institute of Scientific and Technical Information of China (English)

    卢春燕; 金骁; 邹焕新

    2013-01-01

A new track initiation algorithm based on a layered clustering method is proposed to solve the multi-target detection problem with passive reconnaissance data, especially when the passive reconnaissance is scanned aperiodically, the captured plots are fragmentary, and prior information on the number of targets and their motion characteristics is insufficient. The algorithm effectively utilizes attribute characteristics to solve the track initiation problem. First, the observation set is coarsely clustered according to the radar operating modes indicated by the carrier frequency (PF), pulse repetition frequency (PRF) and pulse width (PW) electromagnetic parameters. Second, for each subset with the same radar operating mode, an exact classification is obtained by clustering the three signal parameters with the K-Means algorithm. Third, all possible point pairs are formed within each subset and the velocity in each dimension is computed, so that spurious observations can be eliminated using spatio-temporal constraints. Finally, the selected observations are re-sorted by capture time, and an extended search approach is used to find the initiated tracks. Experiments on both simulated and real-world data show the effectiveness and practicability of the proposed algorithm.

  17. Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition

    CERN Document Server

    Orecchia, Lorenzo

    2010-01-01

In this paper, we consider the following graph partitioning problem: The input is an undirected graph $G=(V,E)$, a balance parameter $b \in (0,1/2]$ and a target conductance value $\gamma \in (0,1)$. The output is a cut which, if non-empty, is of conductance at most $O(f)$, for some function $f(G, \gamma)$, and which is either balanced or well correlated with all cuts of conductance at most $\gamma$. Spielman and Teng gave an $\tilde{O}(|E|/\gamma^{2})$-time algorithm for $f= \sqrt{\gamma \log^{3}|V|}$ and used it to decompose graphs into a collection of near-expanders. We present a new spectral algorithm for this problem which runs in time $\tilde{O}(|E|/\gamma)$ for $f=\sqrt{\gamma}$. Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming (SDP) approach. We first conside...

  18. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was based on the amplitude of the signal. The other algorithm was based on information of the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: the amplitude-based algorithm reliably detected seizures in 2 of the patients, while...

  19. The design of Helmholtz resonator based acoustic lenses by using the symmetric Multi-Level Wave Based Method and genetic algorithms

    Science.gov (United States)

    Atak, Onur; Huybrechs, Daan; Pluymers, Bert; Desmet, Wim

    2014-07-01

    Sonic crystals can be used as acoustic lenses in certain frequencies and the design of such systems by creating vacancies and using genetic algorithms has been proven to be an effective method. So far, rigid cylinders have been used to create such acoustic lens designs. On the other hand, it has been proven that Helmholtz resonators can be used to construct acoustic lenses with higher refraction index as compared to rigid cylinders, especially in low frequencies by utilizing their local resonances. In this paper, these two concepts are combined to design acoustic lenses that are based on Helmholtz resonators. The Multi-Level Wave Based Method is used as the prediction method. The benefits of the method in the context of design procedure are demonstrated. In addition, symmetric boundary conditions are derived for more efficient calculations. The acoustic lens designs that use Helmholtz resonators are compared with the acoustic lens designs that use rigid cylinders. It is shown that using Helmholtz resonator based sonic crystals leads to better acoustic lens designs, especially at the low frequencies where the local resonances are pronounced.

  20. Evolutionary algorithm based configuration interaction approach

    CERN Document Server

    Chakraborty, Rahul

    2016-01-01

A stochastic configuration interaction method based on an evolutionary algorithm is designed as an affordable approximation to full configuration interaction (FCI). The algorithm comprises initiation, propagation and termination steps, where the propagation step is performed with cloning, mutation and cross-over, taking inspiration from genetic algorithms. We have tested its accuracy on the 1D Hubbard problem and a molecular system (symmetric bond breaking of the water molecule). We have tested two different fitness functions, based on the energy of the determinants and on the CI coefficients of the determinants. We find that the absolute value of the CI coefficients is a more suitable fitness function when combined with a fixed selection scheme.

  1. Research of the Kernel Operator Library Based on Cryptographic Algorithm

    Institute of Scientific and Technical Information of China (English)

    王以刚; 钱力; 黄素梅

    2001-01-01

The conventionally used encryption mechanisms and algorithms have some limitations. A kernel operator library based on cryptographic algorithms is put forward. Owing to the impenetrability of the algorithms, a data transfer system built on the cryptographic algorithm library has many remarkable advantages over traditional approaches in algorithm rebuilding and optimization, in easily adding and deleting algorithms, and in improving security strength. Because the cryptographic algorithm library is extensible, the user can choose any of its algorithms to counter any attack.

  2. Variables Bounding Based Retiming Algorithm

    Institute of Scientific and Technical Information of China (English)

    宫宗伟; 林争辉; 陈后鹏

    2002-01-01

Retiming is a technique for optimizing sequential circuits. In this paper, we discuss this problem and propose an improved retiming algorithm based on variable bounding. Through the computation of lower and upper bounds on the variables, the algorithm can significantly reduce the number of constraints and speed up the execution of retiming. Furthermore, the elements of the matrices D and W are computed in a demand-driven way, which reduces memory usage. Experimental results on the ISCAS89 benchmarks show that our algorithm is very effective for large-scale sequential circuits.

  3. XML document clustering method based on quantum genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    蒋勇; 谭怀亮; 李光文

    2011-01-01

This paper mainly targets XML clustering by combining kernel methods for pattern analysis with the quantum genetic algorithm, and proposes a new method based on the quantum genetic algorithm and a kernel clustering algorithm. The XML documents are first reduced, the kernel matrix of the vector space kernel is generated from frequent tag sequences, and the initial clusters and cluster centers are obtained with Gaussian kernel functions; the initial cluster centers are then used to construct the initial population of the quantum genetic algorithm. A globally optimal clustering is obtained by combining the quantum genetic algorithm with the kernel clustering algorithm. The experimental results show that the proposed algorithm is superior to the improved kernel clustering algorithm and K-means in convergence, stability and global optimality.

  4. A Practical Propositional Knowledge Base Revision Algorithm

    Institute of Scientific and Technical Information of China (English)

    陶雪红; 孙伟; 等

    1997-01-01

This paper gives an outline of knowledge base revision and some recently presented complexity results about propositional knowledge base revision. Different methods for revising propositional knowledge bases have been proposed recently by several researchers, but all of them are intractable in the general case. For practical application, this paper presents a revision method for a special case and gives its corresponding polynomial algorithm.

  5. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)

    2015-10-11

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr{sub 3}) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas of the two detectors, so that the detection spectrum of the LaBr{sub 3} detector could be corrected toward the accuracy of the HPGe detector spectrum. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R{sup 2}=0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.
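
    As a rough illustration of the ratio-processing idea described above, the following sketch computes correction coefficients as ratios of net peak areas between a reference detector and the detector being corrected, then fits a linear relation against peak energy. The peak energies, areas and function names are hypothetical placeholders, not values from the NH-UAV study.

```python
import numpy as np

def correction_coefficients(net_area_ref, net_area_corr):
    """Ratio of net peak areas (reference detector / detector to be corrected)
    for each common gamma peak. Values here are purely illustrative."""
    return np.asarray(net_area_ref, float) / np.asarray(net_area_corr, float)

# Hypothetical net peak areas at a few gamma energies (keV)
energies  = np.array([352.0, 662.0, 1173.0, 1332.0])
area_hpge = np.array([1.05e4, 8.30e3, 5.10e3, 4.70e3])   # reference detector
area_labr = np.array([1.21e4, 9.10e3, 5.90e3, 5.50e3])   # detector being corrected

coeff = correction_coefficients(area_hpge, area_labr)

# Linear fit of coefficient vs. energy, mirroring the reported linear relation
slope, intercept = np.polyfit(energies, coeff, 1)
r2 = np.corrcoef(energies, coeff)[0, 1] ** 2
print("coefficients:", coeff)
print(f"linear fit: c(E) = {slope:.3e}*E + {intercept:.3f}, R^2 = {r2:.4f}")

def correct_counts(counts, energy_kev):
    """Apply the fitted coefficient to counts at a given energy (illustrative)."""
    return counts * (slope * energy_kev + intercept)
```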

  6. A contact detection algorithm for deformable tetrahedral geometries based on a novel approach for general simplices used in the discrete element method

    Science.gov (United States)

    Stühler, Sven; Fleissner, Florian; Eberhard, Peter

    2016-11-01

    We present an extended particle model for the discrete element method that is tetrahedral in shape and capable of describing deformations. The deformations of the tetrahedral particles require a framework to interrelate the particle strains and resulting stresses. Hence, adaptations from the finite element method were used. This allows the two methods to be linked and material and simulation parameters to be described adequately and separately in each scope. Due to the complexity arising from the non-spherical tetrahedral geometry, all possible contact combinations of vertices, edges, and surfaces must be considered by the contact detection algorithm. The deformations of the particles make the contact evaluation even more challenging. Therefore, a robust contact detection algorithm based on an optimization approach that exploits temporal coherence is presented. This algorithm is suitable for general simplices in R^n. An evaluation of the robustness of this algorithm is performed using a numerical example. In order to create complex geometries, bonds between these deformable particles are introduced. This coupling via the tetrahedra faces allows the simulation of bonded deformable bodies composed of several particles. Numerical examples are presented and validated against results obtained with the same simulation setup modeled with the finite element method. The intention of using these bonds is to be able to model fracture and material failure. Therefore, the bonds between the particles are not lasting and feature a release mechanism based on a predefined criterion.

  7. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of this paper is to test the effect of eliminating the unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is applied to the diagnostics of liver diseases and diabetes, which are commonly observed and reduce the quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.

  8. A flexibility-based method via the iterated improved reduction system and the cuckoo optimization algorithm for damage quantification with limited sensors

    Science.gov (United States)

    Zare Hosseinzadeh, Ali; Bagheri, Abdollah; Ghodrati Amiri, Gholamreza; Koo, Ki-Young

    2014-04-01

    In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the existence of some limitations in the number of attached sensors on structures. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors.
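
    To make the flexibility-based formulation above concrete, here is a minimal sketch of assembling a modal flexibility matrix from a few identified modes and computing static displacements under a single load case. The mode shapes, frequencies and load vector are made-up numbers, and the reduction and optimization steps (IIRS, cuckoo optimization) are not included.

```python
import numpy as np

def modal_flexibility(mode_shapes, omegas):
    """Approximate flexibility matrix F ~ sum_i phi_i phi_i^T / omega_i^2
    from mass-normalized mode shapes (columns) and circular frequencies."""
    n = mode_shapes.shape[0]
    F = np.zeros((n, n))
    for phi, w in zip(mode_shapes.T, omegas):
        F += np.outer(phi, phi) / w**2
    return F

# Hypothetical data: 4 DOFs, 2 identified modes (incomplete modal data)
Phi    = np.array([[0.12, 0.35],
                   [0.25, 0.41],
                   [0.36, 0.10],
                   [0.44, -0.38]])
omegas = np.array([12.5, 40.2])          # rad/s

F = modal_flexibility(Phi, omegas)

# Static displacements of the (reduced) model under a unique static load
q = np.array([0.0, 0.0, 0.0, 1.0])       # unit load at the top DOF
u = F @ q
print("static displacements:", u)

# A damage indicator could then compare u for the damaged vs. intact model;
# in the paper this comparison is driven by an optimization (cuckoo) search.
```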

  9. Evolutionary algorithm based index assignment algorithm for noisy channel

    Institute of Scientific and Technical Information of China (English)

    李天昊; 余松煜

    2004-01-01

    A globally optimal solution to vector quantization (VQ) index assignment on a noisy channel, the evolutionary algorithm based index assignment algorithm (EAIAA), is presented. The algorithm yields a significant reduction in average distortion due to channel errors over conventional arbitrary index assignment, as confirmed by experimental results over the memoryless binary symmetric channel (BSC) for any bit error rate.

  10. Optimal Combination of Classification Algorithms and Feature Ranking Methods for Object-Based Classification of Submeter Resolution Z/I-Imaging DMC Imagery

    OpenAIRE

    Fulgencio Cánovas-García; Francisco Alonso-Sarría

    2015-01-01

    Object-based image analysis allows several different features to be calculated for the resulting objects. However, a large number of features means longer computing times and might even result in a loss of classification accuracy. In this study, we use four feature ranking methods (maximum correlation, average correlation, Jeffries–Matusita distance and mean decrease in the Gini index) and five classification algorithms (linear discriminant analysis, naive Bayes, weighted k-nearest neighbors,...

  11. An omni-directional optical antenna and its beam control method based on the EC-KPA algorithm for mobile FSO.

    Science.gov (United States)

    Shang, Tao; Yang, Yintang; Li, Weixu; Wang, Xin; Jia, Jijun

    2013-01-28

    In order to ensure communication link stability in a mobile FSO system, a new omni-directional optical antenna is designed. Aiming at discontinuous tracking, a novel beam control method based on the error correction Kalman prediction algorithm (EC-KPA) is proposed. A comparison of EC-KPA and the conventional Kalman prediction algorithm (KPA) is given. Numerical simulations of the beam control method are carried out. The results show that the prediction accuracy of EC-KPA is improved by about 77% over that of KPA in a Gaussian noise situation, and that the improvement reaches 12.92 times in a strong noise situation. Therefore, the beam control method is feasible, and this optical antenna can meet the demands of fast mobile FSO.

  12. Regression toward the mean – a detection method for unknown population mean based on Mee and Chua's algorithm

    Directory of Open Access Journals (Sweden)

    Lüdtke Rainer

    2008-08-01

    Full Text Available Abstract Background Regression to the mean (RTM) occurs in situations of repeated measurements when extreme values are followed by measurements in the same subjects that are closer to the mean of the basic population. In uncontrolled studies such changes are likely to be interpreted as a real treatment effect. Methods Several statistical approaches have been developed to analyse such situations, including the algorithm of Mee and Chua, which assumes a known population mean μ. We extend this approach to a situation where μ is unknown and suggest varying it systematically over a range of reasonable values. Using differential calculus we provide formulas to estimate the range of μ where treatment effects are likely to occur when RTM is present. Results We successfully applied our method to three real world examples denoting situations when (a) no treatment effect can be confirmed regardless of which μ is true, (b) a treatment effect must be assumed independent of the true μ, and (c) in the appraisal of results of uncontrolled studies. Conclusion Our method can be used to separate the wheat from the chaff in situations when one has to interpret the results of uncontrolled studies. In meta-analysis, health-technology reports or systematic reviews this approach may be helpful to clarify the evidence given from uncontrolled observational studies.
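
    The size of the RTM effect under a cut-off selection can be written down explicitly for a bivariate normal model, and scanning it over candidate population means mirrors the idea of varying an unknown μ. The sketch below uses the standard formula RTM = (1 − ρ)·σ·φ(z)/(1 − Φ(z)) with z = (c − μ)/σ; the cut-off, variance and correlation values are invented for illustration, and the full Mee-Chua test statistic is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def rtm_effect(mu, sigma, rho, cutoff):
    """Expected regression-to-the-mean effect E[X1 - X2 | X1 >= cutoff]
    for a bivariate normal model (mean mu, sd sigma, correlation rho)."""
    z = (cutoff - mu) / sigma
    hazard = norm.pdf(z) / norm.sf(z)          # Mills-ratio term
    return (1.0 - rho) * sigma * hazard

# Hypothetical study values
sigma, rho, cutoff = 10.0, 0.7, 140.0
observed_change = 6.0                          # mean drop seen in the selected group

# Vary the unknown population mean over a range of plausible values
for mu in np.linspace(110, 135, 6):
    rtm = rtm_effect(mu, sigma, rho, cutoff)
    verdict = "change exceeds RTM" if observed_change > rtm else "explainable by RTM"
    print(f"mu = {mu:5.1f}  RTM = {rtm:5.2f}  -> {verdict}")
```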

  13. Function Optimization Based on Quantum Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2014-01-01

    Full Text Available Optimization methods are important in engineering design and application. The quantum genetic algorithm has the characteristics of good population diversity, rapid convergence and good global search capability, among others. It combines a quantum algorithm with a genetic algorithm. A novel quantum genetic algorithm is proposed, called the Variable-boundary-coded Quantum Genetic Algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded chromosomes. Therefore much shorter chromosome strings can be obtained. The method of encoding and decoding chromosomes is first described before a new adaptive selection scheme for the angle parameters used in the rotation gate is put forward based on the core ideas and principles of quantum computation. Eight typical functions are selected for optimization to evaluate the effectiveness and performance of vbQGA against the standard Genetic Algorithm (sGA) and the Genetic Quantum Algorithm (GQA). The simulation results show that vbQGA is significantly superior to sGA in all aspects and outperforms GQA in robustness and solving velocity, especially for multidimensional and complicated functions.

  14. Application of detecting algorithm based on network

    Institute of Scientific and Technical Information of China (English)

    张凤斌; 杨永田; 江子扬; 孙冰心

    2004-01-01

    Because current intrusion detection systems cannot detect undefined intrusion behavior effectively, and given the robustness and adaptability of genetic algorithms, this paper integrates genetic algorithms into an intrusion detection system, and a detection algorithm based on network traffic is proposed. This algorithm is a real-time, self-learning algorithm and can detect undefined intrusion behaviors effectively.

  15. A Hybrid Algorithm for Satellite Data Transmission Schedule Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Yun-feng; WU Xiao-yue

    2008-01-01

    A hybrid scheduling algorithm based on a genetic algorithm is proposed in this paper for reconnaissance satellite data transmission. First, based on the description of satellite data transmission requests, a satellite data transmission task model and a satellite data transmission scheduling problem model are established. Secondly, the conflicts in scheduling are discussed. According to the meaning of possible conflicts, a method to divide the possible-conflict task set is given. Thirdly, a hybrid algorithm which consists of a genetic algorithm and heuristic information is presented. The heuristic information comes from two concepts, conflict degree and conflict number. Finally, an example shows the algorithm's feasibility and that its performance is better than other traditional algorithms.

  16. High-Speed Rail Train Timetabling Problem: A Time-Space Network Based Method with an Improved Branch-and-Price Algorithm

    Directory of Open Access Journals (Sweden)

    Bisheng He

    2014-01-01

    Full Text Available A time-space network based optimization method is designed for the high-speed rail train timetabling problem to improve the service level of the high-speed rail. A general time-space path cost is presented which considers both the train travel time and the high-speed rail operation requirements: (1) service frequency requirement; (2) stopping plan adjustment; and (3) priority of train types. The train timetabling problem based on time-space paths aims to minimize the total general time-space path cost of all trains. An improved branch-and-price algorithm is applied to solve the large-scale integer programming problem. In the algorithm, rapid branching and node selection for the branch-and-price tree and a heuristic train time-space path generation for column generation are adopted to speed up the computation. The computational results of a set of experiments on China's high-speed rail system are presented with discussions about the model validation, the effectiveness of the general time-space path cost, and the improved branch-and-price algorithm.

  17. Study on Pattern Recognition Method of Osteoma Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    余鹏; 吴朝霞; 马林; 王波; 程敬之

    2001-01-01

    In order to classify the pathological characteristics of osteoma more accurately and correctly by using the combined fractal parameters, a genetic algorithm based on continuous variables was applied to the pattern classification of osteoma, and the relevant crossover and mutation operators were employed. To address the oscillation and non-convergence observed with the initial algorithm in experiments, it was improved with an adaptive method. By comparing the precision and speed of the two algorithms, it is proved that the adaptive genetic algorithm based on continuous variables is more robust and faster. Using this algorithm, osteomas can be effectively classified according to the fractal parameter pattern set, as expected.
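
    One common way to realize the adaptive adjustment mentioned above is to scale the crossover and mutation probabilities with the fitness of the individuals involved (in the spirit of Srinivas and Patnaik's adaptive GA). The sketch below shows only that probability schedule; the constants and fitness values are placeholders, and it is not claimed to be the exact rule used in the paper.

```python
import numpy as np

def adaptive_probs(f, f_max, f_avg,
                   pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
    """Adaptive crossover/mutation probabilities for an individual with
    fitness f, given the population's maximum and average fitness.

    Fitter-than-average individuals get smaller pc/pm (they are protected),
    below-average individuals keep the high values (they are disrupted more).
    """
    if f >= f_avg and f_max > f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        pc = pc_lo + (pc_hi - pc_lo) * scale
        pm = pm_lo + (pm_hi - pm_lo) * scale
    else:
        pc, pm = pc_hi, pm_hi
    return pc, pm

# Placeholder population fitness values
fitness = np.array([0.42, 0.55, 0.61, 0.78, 0.80])
f_max, f_avg = fitness.max(), fitness.mean()
for f in fitness:
    pc, pm = adaptive_probs(f, f_max, f_avg)
    print(f"fitness {f:.2f} -> pc {pc:.2f}, pm {pm:.3f}")
```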

  18. Solution and simulation algorithm of microseismic events location to three-dimensional model by comprehensive location method based on Matlab

    Institute of Scientific and Technical Information of China (English)

    XIA Yuan-yuan; SHAO He-song; LI Shi-xiong; LU Jing-yu

    2012-01-01

    The essential requirement for microseismic monitoring is fast and accurate calculation of the seismic wave source location. In most traditional microseismic monitoring processes in mines, which use the TDOA location method in two-dimensional space to position microseismic events, the positioning precision may be reduced by the two-dimensional model and the simple method, and ill-conditioned equations produced by the TDOA location method will increase the positioning error. Based on inversion theory, this article studies the mathematical models of the TDOA location method, the polarization analysis location method, and a comprehensive location method that adds an angle factor to the traditional TDOA location method. The feasibility of the three methods is verified by numerical simulation and analysis of their positioning errors. The results show that the comprehensive location method with the added angle difference has strong positioning stability and high positioning accuracy, and it can effectively reduce the impact of ill-conditioned equations on the positioning results. The comprehensive location method applied to actual measurement data may yield better positioning results.
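
    As a simple illustration of the TDOA part of the problem (without the angle and polarization extensions), the sketch below recovers a source position in 3-D from arrival-time differences by nonlinear least squares. The sensor coordinates, wave speed and true source are synthetic, and the solver choice (scipy's least_squares) is just one convenient option, not the paper's scheme.

```python
import numpy as np
from scipy.optimize import least_squares

v = 3000.0                                     # assumed wave speed, m/s
sensors = np.array([[0, 0, 0], [400, 0, 0],    # synthetic sensor coordinates (m)
                    [0, 400, 0], [0, 0, 300],
                    [400, 400, 200]], float)
true_src = np.array([150.0, 220.0, 80.0])

# Synthetic TDOA data: time differences relative to sensor 0
t = np.linalg.norm(sensors - true_src, axis=1) / v
tdoa = t[1:] - t[0]

def residuals(x):
    d = np.linalg.norm(sensors - x, axis=1) / v
    return (d[1:] - d[0]) - tdoa               # model TDOA minus observed TDOA

sol = least_squares(residuals, x0=np.array([50.0, 50.0, 50.0]))
print("estimated source:", sol.x)              # should be close to true_src
```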

  19. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The false matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibrating amplitudes of the video become increasingly large.

  20. A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra

    Science.gov (United States)

    Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui

    2016-10-01

    A novel particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. Firstly, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the current iteration number of the particles and their fitness values, the algorithm changes the weight coefficient and adjusts the speed at which particles search the space, so the local optimization ability is enhanced. Secondly, a logical self-mapping chaotic search is carried out by using chaos optimization in the particle swarm optimization algorithm, which helps the particle swarm optimization algorithm jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg Marquardt algorithm, the particle swarm optimization-Levenberg Marquardt algorithm and the particle swarm optimization algorithm by changing the linewidth, the signal-to-noise ratio and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to the feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflection, and can effectively improve the accuracy of Brillouin frequency shift extraction.
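
    One possible reading of the adaptive-inertia-plus-chaos scheme is sketched below on a stand-in objective: the inertia weight shrinks with the iteration count and with how well a particle is doing, and the worst particles are occasionally re-seeded through a chaotic (logistic) map. The constants, the choice of chaotic map and the objective are assumptions for illustration only, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # stand-in objective; the paper fits Brillouin spectra instead
    return np.sum(x**2, axis=-1)

def pso_adaptive_chaos(f, dim=5, n_particles=30, iters=200,
                       w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, bound=5.0):
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), f(x)
    g = pbest[np.argmin(pbest_val)].copy()

    for it in range(iters):
        # inertia weight decreases with iteration and with how good a particle is
        fit = f(x)
        rank = (fit - fit.min()) / (np.ptp(fit) + 1e-12)         # 0 = best, 1 = worst
        w = w_min + (w_max - w_min) * (1 - it / iters) * (0.5 + 0.5 * rank)
        v = (w[:, None] * v
             + c1 * rng.random(x.shape) * (pbest - x)
             + c2 * rng.random(x.shape) * (g - x))
        x = np.clip(x + v, -bound, bound)

        fit = f(x)
        better = fit < pbest_val
        pbest[better], pbest_val[better] = x[better], fit[better]
        g = pbest[np.argmin(pbest_val)].copy()

        # chaotic (logistic-map) re-seeding of the worst particles to escape local optima
        worst = np.argsort(fit)[-3:]
        z = rng.random((len(worst), dim))
        for _ in range(20):
            z = 4.0 * z * (1.0 - z)                              # logistic map
        x[worst] = -bound + 2 * bound * z

    return g, f(g[None, :])[0]

best, val = pso_adaptive_chaos(sphere)
print("best value:", val)
```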

  1. The Gravisphere Method Algorithm Programming

    OpenAIRE

    Rosaev, A. E.

    2008-01-01

    A program implementing the action (gravi)sphere method has been written. The initial conditions are set at the pericenter of the planetocentric orbit. When the action sphere radius is reached, the heliocentric orbit is calculated and the data are redirected to a numerical integration program. The method is useful for the investigation of capture and collision problems. Very preliminary numerical results were obtained and discussed. A manifold in orbital element space that leads to temporary capture of about 50 years (4 Jupiter revolutions) was found.

  2. Optimization of Process Parameters for Cracking Prevention of UHSS in Hot Stamping Based on Hammersley Sequence Sampling and Back Propagation Neural Network-Genetic Algorithm Mixed Methods

    Institute of Scientific and Technical Information of China (English)

    Menghan Wang; Zongmin Yue; Lie Meng

    2016-01-01

    In order to prevent cracking of the workpiece during the hot stamping operation, this paper proposes a hybrid optimization method based on Hammersley sequence sampling (HSS), finite element analysis, a back-propagation (BP) neural network and a genetic algorithm (GA). The mechanical properties of high strength boron steel are characterized on the basis of uniaxial tensile tests at elevated temperatures. The samples of process parameters are chosen via the HSS, which encourages exploration throughout the design space and hence achieves better discovery of possible global optima in the solution space. Meanwhile, numerical simulation is carried out to predict the forming quality for the optimized design. A BP neural network model is developed to obtain the mathematical relationship between the optimization goal and the design variables, and the genetic algorithm is used to optimize the process parameters. Finally, the results of the numerical simulation are compared with those of a production experiment to demonstrate that the optimization strategy proposed in the paper is feasible.
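
    Hammersley sequence sampling is the piece of the above pipeline that is easiest to show in isolation: points whose first coordinate is i/N and whose remaining coordinates are radical-inverse values in successive prime bases. The sketch below generates such a design and scales it to hypothetical process-parameter ranges; it does not include the FE simulation, the BP surrogate or the GA search.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n_points, dim):
    """Hammersley design in the unit hypercube [0, 1]^dim."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][: dim - 1]
    pts = np.empty((n_points, dim))
    for i in range(n_points):
        pts[i, 0] = i / n_points
        pts[i, 1:] = [radical_inverse(i, p) for p in primes]
    return pts

# Hypothetical hot-stamping process-parameter ranges (names purely illustrative):
# tool temperature (deg C), stamping speed (mm/s), holding time (s)
lo = np.array([650.0, 10.0, 5.0])
hi = np.array([850.0, 100.0, 20.0])
design = lo + hammersley(20, 3) * (hi - lo)
print(design[:5])
```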

  3. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  4. Graphical model construction based on evolutionary algorithms

    Institute of Scientific and Technical Information of China (English)

    Youlong YANG; Yan WU; Sanyang LIU

    2006-01-01

    Using Bayesian networks to model promising solutions from the current population of evolutionary algorithms can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem, and it also consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by studying the local metric relationship of networks matching the dataset. This paper presents an algorithm to construct a tree model from a set of potential solutions using the above approach. This method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.

  5. Method of identifying overlapping communities based on GN algorithm

    Institute of Scientific and Technical Information of China (English)

    高庆一; 李牧

    2015-01-01

    The overlapping community detection problem in complex networks was studied. The notion of degree of membership was first presented to express how strongly a node belongs to a community, and then the definition of modularity was extended to undirected graphs with overlapping communities. An overlapping community detection algorithm was obtained by extending the classical algorithm presented by Girvan and Newman (GN) for identifying disjoint communities, called the GN algorithm. In order to improve the running speed, a parallel algorithm based on MapReduce was given. The experimental results demonstrate the effectiveness of the proposed algorithms on DBLP (digital bibliography and library project) data and show that they outperform other methods in efficiency.
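
    For readers unfamiliar with the underlying GN procedure, the snippet below runs the standard (non-overlapping) Girvan-Newman edge-betweenness algorithm on a toy graph using networkx and then attaches a naive membership score to each node. The membership formula here (fraction of a node's edges that stay inside the community) is only an illustrative stand-in for the paper's definition of degree of membership.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Toy graph: two loosely connected cliques
G = nx.barbell_graph(5, 1)

# First split produced by the classical GN (edge-betweenness) algorithm
communities = [set(c) for c in next(girvan_newman(G))]
print("communities:", communities)

# Illustrative membership degree: share of a node's edges inside each community
def membership(node, community):
    neighbors = set(G.neighbors(node))
    if not neighbors:
        return 0.0
    return len(neighbors & community) / len(neighbors)

for node in G.nodes:
    degrees = [round(membership(node, c), 2) for c in communities]
    print(node, degrees)
```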

  6. Optimal Combination of Classification Algorithms and Feature Ranking Methods for Object-Based Classification of Submeter Resolution Z/I-Imaging DMC Imagery

    Directory of Open Access Journals (Sweden)

    Fulgencio Cánovas-García

    2015-04-01

    Full Text Available Object-based image analysis allows several different features to be calculated for the resulting objects. However, a large number of features means longer computing times and might even result in a loss of classification accuracy. In this study, we use four feature ranking methods (maximum correlation, average correlation, Jeffries-Matusita distance and mean decrease in the Gini index) and five classification algorithms (linear discriminant analysis, naive Bayes, weighted k-nearest neighbors, support vector machines and random forest). The objective is to discover the optimal algorithm and feature subset to maximize accuracy when classifying a set of 1,076,937 objects, produced by the prior segmentation of a 0.45-m resolution multispectral image, with 356 features calculated on each object. The study area is both large (9070 ha) and diverse, which increases the possibility of generalizing the results. The mean decrease in the Gini index was found to be the feature ranking method that provided the highest accuracy for all of the classification algorithms. In addition, support vector machines and random forest obtained the highest accuracy in the classification, both using their default parameters. This is a useful result that could be taken into account in the processing of high-resolution images in large and diverse areas to obtain a land cover classification.
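
    A minimal scikit-learn version of the "rank by mean decrease in the Gini index, then classify" workflow is sketched below on synthetic data; the dataset, the number of retained features and the classifiers' default settings are all placeholders, not the study's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the object features (the study used 356 features per object)
X, y = make_classification(n_samples=2000, n_features=60,
                           n_informative=12, random_state=0)

# Feature ranking via mean decrease in the Gini index (random forest impurity importance)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]

# Keep the top-k features and evaluate an SVM on the reduced set
k = 15
X_top = X[:, ranking[:k]]
acc_full = cross_val_score(SVC(), X, y, cv=5).mean()
acc_top  = cross_val_score(SVC(), X_top, y, cv=5).mean()
print(f"SVM accuracy: all features {acc_full:.3f}, top-{k} Gini-ranked {acc_top:.3f}")
```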

  7. Algorithmic Differentiation for Calculus-based Optimization

    Science.gov (United States)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role in optimizing the considered system, and quite often also in its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often additional structure exploitation is indispensable for successfully coupling these derivatives with state-of-the-art optimization algorithms. The talk discusses two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano-optics illustrate these advanced optimization approaches.
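
    Forward-mode AD can be demonstrated in a few lines with dual numbers, which is one standard way to obtain derivatives to working precision; this toy class is of course not the tooling discussed in the talk, just an illustration of the principle.

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0; the 'dot' part carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return x * x * sin(x) + 3.0 * x       # example function f(x) = x^2 sin(x) + 3x

x = Dual(1.5, 1.0)                         # seed derivative dx/dx = 1
y = f(x)
print("f(1.5)  =", y.val)
print("f'(1.5) =", y.dot)                  # equals 2x sin(x) + x^2 cos(x) + 3 at x = 1.5
```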

  8. Risk Assessment Framework and Algorithm of Power Systems Based on the Partitioned Multi-objective Risk Method

    Institute of Scientific and Technical Information of China (English)

    XIE Shaoyu; WANG Xiuli; WANG Xifan

    2011-01-01

    The average risk indices, such as the loss of load expectation (LOLE) and expected demand not supplied (EDNS), have been widely used in risk assessment of power systems. However, the average indices cannot distinguish between events of low probability but high damage and events of high probability but low damage. In order to overcome these shortcomings, this paper proposes an extended risk analysis framework for the power system based on the partitioned multi-objective risk method (PMRM).

  9. An Intelligent Examination Paper Generation Method Based on an Improved Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    许永达

    2013-01-01

    The traditional examination paper generation method has the disadvantages of large time and space consumption and a low success rate, while the simple genetic algorithm has the disadvantages of slow convergence and poor stability. Therefore, an intelligent examination paper generation method based on an improved genetic algorithm is proposed. The method adaptively selects individuals and adjusts the crossover and mutation probabilities based on individual fitness, which accelerates the approach to the optimal solution and improves the efficiency and success rate of paper generation. This paper introduces the paper generation strategy, the mathematical model, and the detailed design of each module.

  10. Color image retrieval method based on artificial fish algorithm

    Institute of Scientific and Technical Information of China (English)

    薛亚娣; 阮文惠

    2016-01-01

    The traditional color image retrieval method has a complex calculation process and a low similar-image matching degree, and it is unable to achieve the optimal combination in the retrieval process. A color image retrieval method based on the artificial fish swarm algorithm is thus proposed. After introducing the artificial fish swarm algorithm, the color feature extraction of the color image is completed, similarity matching with the fish swarm algorithm is conducted, image retrieval with the artificial fish swarm algorithm is realized, and the weights of the retrieved images are optimized. The simulation results show that the proposed artificial fish swarm algorithm is effective and has retrieval advantages in a practical environment.

  11. Path integral molecular dynamics method based on a pair density matrix approximation: An algorithm for distinguishable and identical particle systems

    Science.gov (United States)

    Miura, Shinichi; Okazaki, Susumu

    2001-09-01

    In this paper, the path integral molecular dynamics (PIMD) method has been extended to employ an efficient approximation of the path action referred to as the pair density matrix approximation. Configurations of the isomorphic classical systems were dynamically sampled by introducing fictitious momenta as in the PIMD based on the standard primitive approximation. The indistinguishability of the particles was handled by a pseudopotential of particle permutation that is an extension of our previous one [J. Chem. Phys. 112, 10 116 (2000)]. As a test of our methodology for Boltzmann statistics, calculations have been performed for liquid helium-4 at 4 K. We found that the PIMD with the pair density matrix approximation dramatically reduced the computational cost to obtain the structural as well as dynamical (using the centroid molecular dynamics approximation) properties at the same level of accuracy as that with the primitive approximation. With respect to the identical particles, we performed the calculation of a bosonic triatomic cluster. Unlike the primitive approximation, the pseudopotential scheme based on the pair density matrix approximation described well the bosonic correlation among the interacting atoms. Convergence with a small number of discretization of the path achieved by this approximation enables us to construct a method of avoiding the problem of the vanishing pseudopotential encountered in the calculations by the primitive approximation.

  12. Function Optimization Based on Quantum Genetic Algorithm

    OpenAIRE

    Ying Sun; Hegen Xiong

    2014-01-01

    Optimization method is important in engineering design and application. Quantum genetic algorithm has the characteristics of good population diversity, rapid convergence and good global search capability and so on. It combines quantum algorithm with genetic algorithm. A novel quantum genetic algorithm is proposed, which is called Variable-boundary-coded Quantum Genetic Algorithm (vbQGA) in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded c...

  13. Otsu image threshold segmentation method based on new genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    王宏文; 梁彦彦; 王志华

    2014-01-01

    The maximum between-class variance (Otsu) image segmentation method is a common image threshold segmentation method based on statistical theory, but it has some disadvantages, such as being time-consuming, having low segmentation accuracy and producing false segmentations. Combining the principles of the monkey-king genetic algorithm with the Otsu algorithm, the gray level that maximizes the between-class variance, i.e. the optimal threshold, is found. The results show that the combined method not only improves the quality of image segmentation but also reduces the computation time, and it is very suitable for real-time image processing.
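
    The quantity being maximized above is the standard Otsu between-class variance; the sketch below evaluates it from an image histogram and finds the best threshold by exhaustive search. Any optimizer, such as the genetic algorithm in the paper, could replace the search loop; the random test image is synthetic.

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu between-class variance for threshold t, given a 256-bin histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    levels = np.arange(256)
    mu0 = (levels[:t] * p[:t]).sum() / w0
    mu1 = (levels[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

# Synthetic bimodal image: dark background plus bright blobs
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 6000), rng.normal(180, 15, 4000)])
img = np.clip(img, 0, 255).astype(np.uint8)
hist = np.bincount(img, minlength=256)

# Exhaustive search over thresholds; a GA would maximize the same objective
best_t = max(range(1, 256), key=lambda t: between_class_variance(hist, t))
print("optimal threshold:", best_t)
```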

  14. Improved PSO Algorithm Based Network Community Detection Method

    Institute of Scientific and Technical Information of China (English)

    张钰莎; 蒋盛益; 谢柏林; 唐凯

    2013-01-01

    Network community detection is a focus of the complex network research field. Existing complex network community detection methods have high time complexity and their accuracy depends too heavily on prior knowledge; therefore many of them are unsuitable for practical network community structure analysis. This paper improves the PSO algorithm so that the improved algorithm's parameter configuration is simpler. Based on the improved PSO algorithm, a complex network community detection method is proposed whose time complexity is low and which requires no prior knowledge of the network's community number or community node numbers. Experimental results illustrate that the method shows good performance.

  15. Using second-order calibration method based on trilinear decomposition algorithms coupled with high performance liquid chromatography with diode array detector for determination of quinolones in honey samples.

    Science.gov (United States)

    Yu, Yong-Jie; Wu, Hai-Long; Shao, Sheng-Zhi; Kang, Chao; Zhao, Juan; Wang, Yu; Zhu, Shao-Hua; Yu, Ru-Qin

    2011-09-15

    A novel strategy that combines a second-order calibration method based on trilinear decomposition algorithms with high performance liquid chromatography with diode array detection (HPLC-DAD) was developed to mathematically separate the overlapped peaks and to quantify quinolones in honey samples. The HPLC-DAD data were obtained within a short time in isocratic mode. The developed method could be applied to determine 12 quinolones at the same time, even in the presence of uncalibrated interfering components in a complex background. To assess the performance of the proposed strategy for the determination of quinolones in honey samples, the figures of merit were employed. The limits of quantitation for all analytes were within the range 1.2-56.7 μg kg(-1). The work presented in this paper illustrates the suitability and interesting potential of combining a second-order calibration method with a second-order analytical instrument for multi-residue analysis in honey samples.

  16. A Survey of Grid Based Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    MR ILANGO

    2010-08-01

    Full Text Available Cluster analysis, an automatic process to find similar objects in a database, is a fundamental operation in data mining. A cluster is a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters. Clustering techniques have been discussed extensively in similarity search, segmentation, statistics, machine learning, trend analysis, pattern recognition and classification [1]. Clustering methods can be classified into (i) partitioning methods, (ii) hierarchical methods, (iii) density-based methods, (iv) grid-based methods and (v) model-based methods. Grid-based methods quantize the object space into a finite number of cells (hyper-rectangles) and then perform the required operations on the quantized space. The main advantage of grid-based methods is their fast processing time, which depends on the number of cells in each dimension of the quantized space. In this research paper, we present some of the grid-based methods, such as CLIQUE (CLustering In QUEst) [2], STING (STatistical INformation Grid) [3], MAFIA (Merging of Adaptive Intervals Approach to Spatial Data Mining) [4], WaveCluster [5] and O-CLUSTER (Orthogonal partitioning CLUSTERing) [6], as a survey, and also compare their effectiveness in clustering data objects. We also present some of the latest developments in grid-based methods, such as the Axis Shifted Grid Clustering Algorithm [7] and Adaptive Mesh Refinement [Wei-Keng Liao et al.] [8], which improve the processing time for objects.

  17. DNA Coding Based Knowledge Discovery Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang

    2002-01-01

    A novel DNA coding based knowledge discovery algorithm was proposed, an example which verified its validity was given. It is proved that this algorithm can discover new simplified rules from the original rule set efficiently.

  18. [Hyperspectral estimation method for soil alkali hydrolysable nitrogen content based on discrete wavelet transform and genetic algorithm in combination with partial least squares (DWT-GA-PLS)].

    Science.gov (United States)

    Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling

    2013-11-01

    Taking Qihe County in Shandong Province of East China as the study area, soil samples were collected from the field. Based on hyperspectral reflectance measurements of the soil samples transformed with the first derivative, the spectra were denoised and compressed by the discrete wavelet transform (DWT), the variables for the quantitative estimation models of soil alkali hydrolysable nitrogen were selected by a genetic algorithm (GA), and the estimation models for the soil alkali hydrolysable nitrogen content were built using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS) could not only compress the spectral variables and reduce the number of model variables, but also improve the quantitative estimation accuracy of the soil alkali hydrolysable nitrogen content. Based on the level 1-2 low-frequency coefficients of the discrete wavelet transform, and under the condition of a large reduction in spectral variables, the calibration models could achieve a prediction accuracy equal to or higher than that of the full soil spectra. The model based on the second-level low-frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg x kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali hydrolysable nitrogen content.
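
    The DWT-compression plus PLS-regression backbone of this approach can be sketched with pywt and scikit-learn as below; synthetic spectra replace the real soil reflectance data, and the GA variable-selection step is omitted.

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "spectra": 80 samples x 512 bands, with a target loosely tied to two bands
spectra = rng.normal(size=(80, 512)).cumsum(axis=1)
y = 0.8 * spectra[:, 120] - 0.5 * spectra[:, 300] + rng.normal(scale=0.5, size=80)

def dwt_compress(X, wavelet="db4", level=2):
    """Keep only the level-`level` approximation coefficients of each spectrum."""
    return np.array([pywt.wavedec(row, wavelet, level=level)[0] for row in X])

X_dwt = dwt_compress(spectra)
print("bands:", spectra.shape[1], "-> DWT coefficients:", X_dwt.shape[1])

# PLS regression on the compressed spectra (a GA could further select coefficients)
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X_dwt, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 on DWT-compressed spectra: {r2:.2f}")
```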

  19. A New Page Ranking Algorithm Based On WPRVOL Algorithm

    Directory of Open Access Journals (Sweden)

    Roja Javadian Kootenae

    2013-03-01

    Full Text Available The amount of information on the web is always growing, thus powerful search tools are needed to search such a large collection. Search engines help users find their desired information among the massive volume of information in an easier way. But what is important in search engines, and causes a distinction between them, is the page ranking algorithm they use. In this paper a new page ranking algorithm based on the Weighted Page Ranking based on Visits of Links (WPRVOL) algorithm for search engines is proposed, which is called WPR'VOL for short. The proposed algorithm considers the number of visits of first- and second-level in-links. The original WPRVOL algorithm takes into account the number of visits of the first-level in-links of a page and distributes rank scores based on the popularity of the pages, whereas the proposed algorithm considers both the in-links of that page (first-level in-links) and the in-links of the pages that point to it (second-level in-links) when calculating the rank of the page, hence more related pages are displayed at the top of the search result list. In summary, the proposed algorithm assigns a higher rank to pages where both the page itself and the pages that point to it are important.
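
    A visit-weighted PageRank iteration, which is the building block that WPRVOL-style algorithms modify, can be sketched as below; the tiny link graph and visit counts are invented, and the exact WPR'VOL weighting of second-level in-links is not reproduced.

```python
import numpy as np

# Hypothetical link graph: edges (src -> dst) with a visit count per link
edges = {("A", "B"): 40, ("A", "C"): 10, ("B", "C"): 30,
         ("C", "A"): 20, ("D", "C"): 5,  ("B", "D"): 15}
pages = sorted({p for e in edges for p in e})
idx = {p: i for i, p in enumerate(pages)}

# Transition matrix whose columns are normalized by each page's total out-link visits
W = np.zeros((len(pages), len(pages)))
for (src, dst), visits in edges.items():
    W[idx[dst], idx[src]] = visits
col_sums = W.sum(axis=0)
W = np.divide(W, col_sums, out=np.zeros_like(W), where=col_sums > 0)

def weighted_pagerank(W, d=0.85, iters=100):
    n = W.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (W @ r)
    return r / r.sum()

rank = weighted_pagerank(W)
for p in sorted(pages, key=lambda p: -rank[idx[p]]):
    print(p, round(rank[idx[p]], 3))
```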

  20. Generalized Rule Induction Based on Immune Algorithm

    Institute of Scientific and Technical Information of China (English)

    郑建国; 刘芳; 焦李成

    2002-01-01

    A generalized rule induction mechanism for knowledge bases, based on an immune algorithm, builds an inheritance hierarchy of classes from the content of their knowledge objects. This hierarchy facilitates group-related processing tasks such as answering set queries, discriminating between objects, finding similarities among objects, etc. Building this hierarchy is a difficult task for knowledge engineers. Conceptual induction may be used to automate or assist engineers in the creation of such a classification structure. This paper introduces a new conceptual rule induction method, which addresses the problem of clustering large amounts of structured objects. The conditions under which the method is applicable are discussed.

  1. Distance Concentration-Based Artificial Immune Algorithm

    Institute of Scientific and Technical Information of China (English)

    LIU Tao; WANG Yao-cai; WANG Zhi-jie; MENG Jiang

    2005-01-01

    The diversity, adaptation and memory of the biological immune system attract much attention from researchers. Several optimization algorithms based on the immune system have also been proposed up to now. The distance concentration-based artificial immune algorithm (DCAIA) is proposed in this paper to overcome defects of the classical artificial immune algorithm (CAIA). Compared with the genetic algorithm (GA) and CAIA, DCAIA is good at avoiding premature convergence, maintaining the diversity of antibodies, and enhancing the convergence rate.

  2. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    OpenAIRE

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient comparing to other iterative methods, such as the Jacobi method. However, standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptio...

  3. A generalized GPU-based connected component labeling algorithm

    CERN Document Server

    Komura, Yukihiro

    2016-01-01

    We propose a generalized GPU-based connected component labeling (CCL) algorithm that can be applied to both various lattices and to non-lattice environments in a uniform fashion. We extend our recent GPU-based CCL algorithm without the use of conventional iteration to the generalized method. As an application of this algorithm, we deal with the bond percolation problem. We investigate bond percolation on the honeycomb and triangle lattices to confirm the correctness of this algorithm. Moreover, we deal with bond percolation on the Bethe lattice as a substitute for a network structure, and demonstrate the performance of this algorithm on those lattices.

  4. Fixed-point blind source separation algorithm based on ICA

    Institute of Scientific and Technical Information of China (English)

    Hongyan LI; Jianfen MA; Deng'ao LI; Huakui WANG

    2008-01-01

    This paper introduces a fixed-point learning algorithm based on independent component analysis (ICA); the model and process of this algorithm and simulation results are presented. Kurtosis was adopted as the estimation criterion of independence. The results of the experiment show that, compared with the traditional ICA algorithm based on random gradients, this algorithm has advantages such as fast convergence and no need for any dynamic parameter, etc. The algorithm is a highly efficient and reliable method for blind signal separation.
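
    A kurtosis-driven fixed-point ICA update (in the style of FastICA with the cubic nonlinearity) can be sketched as follows for a two-source mixture; the mixing matrix and signals are synthetic, and this is a generic textbook update rather than the exact algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic sources and a random mixture
n = 5000
s = np.vstack([np.sign(np.sin(np.linspace(0, 60, n))),      # square-ish wave
               rng.laplace(size=n)])                          # super-Gaussian noise
A = rng.normal(size=(2, 2))
x = A @ s

# Center and whiten the observations
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E / np.sqrt(d)) @ E.T @ x                               # whitened data

def fastica_one_unit(z, iters=100):
    """Kurtosis-based fixed-point update: w <- E[z (w^T z)^3] - 3 w, then normalize."""
    w = rng.normal(size=z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wz = w @ z
        w_new = (z * wz**3).mean(axis=1) - 3.0 * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-9:                  # converged up to sign
            return w_new
        w = w_new
    return w

w = fastica_one_unit(z)
recovered = w @ z                                             # one estimated source
print("excess kurtosis of recovered component:", round((recovered**4).mean() - 3, 2))
```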

  5. A direct phasing method based on the origin-free modulus sum function and the FFT algorithm. XII.

    Science.gov (United States)

    Rius, Jordi; Crespi, Anna; Torrelles, Xavier

    2007-03-01

    An alternative way of refining phases with the origin-free modulus sum function S is shown that, instead of applying the tangent formula in sequential mode [Rius (1993). Acta Cryst. A49, 406-409], applies it in parallel mode with the help of the fast Fourier transform (FFT) algorithm. The test calculations performed on intensity data of small crystal structures at atomic resolution prove the convergence and hence the viability of the procedure. This new procedure called S-FFT is valid for all space groups and especially competitive for low-symmetry ones. It works well when the charge-density peaks in the crystal structure have the same sign, i.e. either positive or negative.

  6. Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods

    CERN Document Server

    Bhatnagar, S; Prashanth, L A

    2013-01-01

    Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for reader from sim...
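
    The simultaneous perturbation gradient estimate at the heart of SPSA is easy to show in isolation; the sketch below minimizes a toy quadratic, and the gain-sequence constants are standard textbook choices rather than anything specific to the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):                          # toy objective; could be a noisy simulation
    return np.sum((theta - np.array([1.0, -2.0, 0.5]))**2)

def spsa(loss, theta0, iters=500, a=0.1, c=0.1, A=50, alpha=0.602, gamma=0.101):
    theta = np.array(theta0, float)
    for k in range(1, iters + 1):
        ak = a / (k + A)**alpha
        ck = c / k**gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Bernoulli +/-1 perturbation
        # Two loss evaluations give an estimate of the whole gradient at once
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

print("SPSA estimate:", spsa(loss, np.zeros(3)).round(3))
```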

  7. Star identification methods, techniques and algorithms

    CERN Document Server

    Zhang, Guangjun

    2017-01-01

    This book summarizes the research advances in star identification that the author's team has made over the past 10 years, systematically introducing the principles of star identification, general methods, key techniques and practicable algorithms. It also offers examples of hardware implementation and performance evaluation for the star identification algorithms. Star identification is the key step for celestial navigation and greatly improves the performance of star sensors, and as such the book includes the fundamentals of star sensors and celestial navigation, the processing of the star catalog and star images, star identification using modified triangle algorithms, star identification using star patterns and using neural networks, rapid star tracking using star matching between adjacent frames, as well as hardware implementation and performance tests for star identification. It is not only valuable as a reference book for star sensor designers and researchers working in pattern recognition and othe...

  8. Influence of crossover methods used by genetic algorithm-based heuristic to solve the selective harmonic equations (SHE) in multi-level voltage source inverter

    Indian Academy of Sciences (India)

    Sangeetha S; S Jeevananthan

    2015-12-01

    Genetic Algorithms (GA) have always done justice to the art of optimization. One such endeavor has been made in employing the roots of GA in a most proficient way to determine the switching moments of a cascaded H-bridge seven-level inverter with equal DC sources. Evolutionary techniques have proved efficient in solving such a problem. GA is one of the methods to achieve the objective through biological mimicking. The extraordinary property of crossover is exploited using Random 3-Point Neighbourhood Crossover (RPNC) and Multi Midpoint Selective Bit Neighbourhood Crossover (MMSBNC). This paper deals with solving the selective harmonic equations (SHE) using a binary-coded GA with a knowledge-based neighbourhood multipoint crossover technique. This is directly related to the switching moments of the multilevel inverter under consideration. Although previous root-finding techniques such as N-R or resultant-based methods attempt the same, the proposed approach offers faster convergence, better program reliability and a wider range of solutions. With an algorithm developed in Turbo C, the switching moments are calculated offline. The simulation results closely agree with the hardware results.

  9. Parallel algorithms for the spectral transform method

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.T. [Argonne National Lab., IL (United States); Worley, P.H. [Oak Ridge National Lab., TN (United States)

    1994-04-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional FFTs and other parallel transforms.

  10. A novel tree structure based watermarking algorithm

    Science.gov (United States)

    Lin, Qiwei; Feng, Gui

    2008-03-01

    In this paper, we propose a new blind watermarking algorithm for images which is based on a tree structure. The algorithm embeds the watermark in the wavelet transform domain, and the embedding positions are determined by the significant coefficient wavelet tree (SCWT) structure, following the same idea as the embedded zero-tree wavelet (EZW) compression technique. According to EZW concepts, we obtain coefficients that are related to each other by a tree structure. This relationship among the wavelet coefficients allows our technique to embed more watermark data. If the watermarked image is attacked such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronization. The algorithm also uses a visually adaptive scheme to insert the watermark so as to minimize watermark perceptibility. In addition to the watermark, a template is inserted into the watermarked image at the same time. The template contains synchronization information, allowing the detector to determine the type of geometric transformation applied to the watermarked image. Experimental results show that the proposed watermarking algorithm is robust against most signal processing attacks, such as JPEG compression, median filtering, sharpening and rotation. It is also an adaptive method which shows good performance in finding the best areas to insert a stronger watermark.

  11. ALGORITHM FOR GENERATING DEM BASED ON CONE

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The digital elevation model (DEM) has a variety of applications in GIS and CAD. It is the basic model for generating three-dimensional terrain features. Generally speaking, there are two methods for building a DEM. One is based upon a digital terrain model of discrete points and is characterized by fast speed and low precision. The other is based upon a triangular digital terrain model, and slow speed and high precision are the features of that method. Combining the advantages of the two methods, an algorithm for generating a DEM from discrete points is presented in this paper. When interpolating an elevation, this method creates a triangle which contains the interpolation point, and the elevation of the interpolation point can then be obtained from the triangle. The method has the advantages of fast speed, high precision and low memory consumption.
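
    The interpolation step described above (obtaining the elevation of a point from the triangle that encloses it) is typically a barycentric interpolation; a minimal sketch, with made-up triangle vertices and elevations, is given below.

```python
import numpy as np

def interpolate_elevation(p, tri_xy, tri_z):
    """Elevation at point p by barycentric interpolation inside triangle tri_xy.

    tri_xy: (3, 2) array of vertex coordinates, tri_z: (3,) vertex elevations.
    Returns None if p lies outside the triangle.
    """
    a, b, c = tri_xy
    T = np.column_stack([b - a, c - a])           # 2x2 edge matrix
    l1, l2 = np.linalg.solve(T, np.asarray(p) - a)
    l0 = 1.0 - l1 - l2
    w = np.array([l0, l1, l2])
    if np.any(w < -1e-12):                        # point outside the triangle
        return None
    return float(w @ tri_z)

# Hypothetical triangle built from a set of discrete survey points
tri_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
tri_z  = np.array([102.0, 98.5, 110.2])
print(interpolate_elevation([3.0, 2.0], tri_xy, tri_z))
```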

  12. A Novel Method for the Detection of Microcalcifications Based on Multi-scale Morphological Gradient Watershed Segmentation Algorithm

    Directory of Open Access Journals (Sweden)

    S. Vijaya Kumar

    2010-07-01

    Full Text Available This paper presents an automated system for detecting masses in mammogram images. Breast cancer is one of the leading causes of female mortality in the world. Since the causes are unknown, breast cancer cannot be prevented. It is difficult for radiologists to provide both accurate and uniform evaluation over the enormous number of mammograms generated in widespread screening. Microcalcifications (calcium deposits) and masses are the earliest signs of breast carcinomas, and their detection is one of the key issues for breast cancer control. Computer-aided detection of microcalcifications and masses is therefore an important and challenging task. This paper presents a novel approach for detecting microcalcification clusters. First, a digitized mammogram is taken from the Mammography Image Analysis Society (MIAS) database. The mammogram is preprocessed with an adaptive median filter. Next, the microcalcification clusters are identified by using marker extraction on the gradient images obtained by multiscale morphological reconstruction, which avoids the over-segmentation typical of the watershed algorithm. Experimental results show that microcalcifications can be accurately and efficiently detected using the proposed approach.

  13. Moving target detection method based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    陈雷; 张立毅; 郭艳菊; 刘婷; 李锵

    2012-01-01

    A moving target detection method based on the artificial bee colony algorithm is proposed. The moving target detection problem is transformed into an independent component analysis problem. Kurtosis is selected as the criterion for independent component analysis, and the artificial bee colony algorithm is used to optimize the kurtosis-based objective function. The separated signal component is removed from the image sequence by a decorrelation method, and the moving target trajectory can then be extracted successfully. Computer experiments on both simulated and real moving targets show that the proposed method clearly recovers the trajectory of a moving object from an image sequence.

  14. Neural Network-Based Hyperspectral Algorithms

    Science.gov (United States)

    2016-06-07

    Neural Network-Based Hyperspectral Algorithms. Walter F. Smith, Jr. and Juanita Sandidge, Naval Research Laboratory, Code 7340, Bldg 1105, Stennis Space... our effort is development of robust numerical inversion algorithms, which will retrieve inherent optical properties of the water column as well as... validate the resulting inversion algorithms with in-situ data and provide estimates of the error bounds associated with the inversion algorithm.

  15. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted by the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence with an algorithm that minimizes the distance between descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
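
    The descriptor-distance matching step can be illustrated with a short sketch: each descriptor from one frame is paired with its nearest descriptor in the next frame, and Lowe's ratio test rejects ambiguous pairs. The ratio value and the use of plain Euclidean distance are assumptions; the paper's exact matching rule may differ.

        import numpy as np

        def match_descriptors(desc_a, desc_b, ratio=0.8):
            """Match SIFT-like descriptors between two frames by minimum
            Euclidean distance, with a ratio test to reject ambiguous pairs.

            desc_a: (n, d) descriptors from frame t
            desc_b: (m, d) descriptors from frame t+1
            Returns a list of (index_in_a, index_in_b) matches.
            """
            matches = []
            for i, d in enumerate(desc_a):
                dists = np.linalg.norm(desc_b - d, axis=1)
                order = np.argsort(dists)
                best, second = order[0], order[1]
                if dists[best] < ratio * dists[second]:
                    matches.append((i, int(best)))
            return matches

        # Toy example with random 128-D descriptors
        rng = np.random.default_rng(0)
        a = rng.normal(size=(5, 128))
        b = np.vstack([a + 0.01 * rng.normal(size=a.shape), rng.normal(size=(5, 128))])
        print(match_descriptors(a, b))    # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]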

  16. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi.

    Science.gov (United States)

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-09-15

    The Gauss-Seidel (GS) method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, the standard implementation of the GS method restricts its utilization in parallel computing because it requires updated neighboring values (i.e., from the current iteration) as soon as they are available. Here, we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of processes or computing units. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures.
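
    For readers unfamiliar with the data dependency the paper addresses, a plain serial Gauss-Seidel sweep is sketched below: each unknown is updated using the newest values of its neighbours, which is exactly what makes naive parallelisation hard. This is a generic textbook sketch, not the DelPhi implementation.

        import numpy as np

        def gauss_seidel(a, b, iters=100):
            """Plain Gauss-Seidel iteration for A x = b.  Each unknown is updated
            in place using the newest available values of the others, which is
            the data dependency that complicates parallelisation."""
            n = len(b)
            x = np.zeros(n)
            for _ in range(iters):
                for i in range(n):
                    s = a[i, :i] @ x[:i] + a[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - s) / a[i, i]
            return x

        # Small diagonally dominant system
        a = np.array([[4.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 4.0]])
        b = np.array([15.0, 10.0, 10.0])
        print(gauss_seidel(a, b))          # close to np.linalg.solve(a, b)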

  17. Genetic Algorithms, Neural Networks, and Time Effectiveness Algorithm Based Air Combat Intelligence Simulation System

    Institute of Scientific and Technical Information of China (English)

    曾宪钊; 成冀; 安欣; 方礼明

    2002-01-01

    This paper introduces a new Air Combat Intelligence Simulation System (ACISS) for a 32-versus-32 air combat and describes three methods: Genetic Algorithms (GA) for the multi-targeting decision and for learning the Evading Missile Rule Base, Neural Networks (NN) for the maneuvering decision, and a Time Effectiveness Algorithm (TEA) for adjudicating an air combat and evaluating missile-evasion effectiveness.

  18. A Novel Hardware Trojan Detection Method Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    刘燕江; 何春华; 王力纬; 恩云飞; 谢少锋; 谢云

    2016-01-01

    A hardware Trojan, a malicious circuit inserted into the golden circuit, poses a serious risk to the security of integrated circuits and the trustworthiness of critical systems, so hardware Trojan detection is of great significance. A novel hardware Trojan detection method based on a clustering analysis algorithm is presented in this work. It employs the global optimization ability of the genetic algorithm and the local searching capability of the K-means algorithm to perform the detection automatically and online, extracting the small feature differences in side-channel information. Side-channel signals, such as the global clock signal, the reset signal and a ring-oscillator signal of a chip, are used as inputs of the clustering algorithm, since they may be altered by the Trojan circuit. Experimental results implemented with FPGAs on 160 sample chips demonstrate that the tested chips can be clustered into two categories and that the Trojan chips can be distinguished from the golden chips accurately. The detection resolution reaches about 10^-4, which indicates that the proposed detection method is feasible and effective.

  19. Sensor Optimization Method Based on Improved Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    耿飞; 龙海辉; 赵健康; 丁鹏

    2014-01-01

    In this paper, an optimization method based on system observability is proposed for the optimized configuration of satellite solar-panel sensors. A block analytical form of the observability Gramian is used to avoid solving a high-order Lyapunov matrix equation. A sensor optimal-allocation principle is proposed based on an analysis of the particular structure of the observability. In order to quickly search for the optimal locations and to overcome the limitations of the traditional genetic algorithm, an improved adaptive genetic algorithm is presented. Adaptively adjusting the crossover and mutation probabilities and protecting excellent individuals overcome the premature convergence and divergence of the traditional genetic algorithm. Experimental results show that the improved genetic algorithm is effective for sensor placement optimization.

  20. Structural, spectroscopic aspects, and electronic properties of (TiO2)n clusters: a study based on the use of natural algorithms in association with quantum chemical methods.

    Science.gov (United States)

    Ganguly Neogi, Soumya; Chaudhury, Pinaki

    2014-01-05

    In this article, we propose a stochastic search-based method, namely genetic algorithm (GA) and simulated annealing (SA) in conjunction with density functional theory (DFT) to evaluate global and local minimum structures of (TiO2)n clusters with n = 1-12. Once the structures are established, we evaluate the infrared spectroscopic modes, cluster formation energy, vertical excitation energy, vertical ionization potential, vertical electron affinity, highest occupied molecular orbital (HOMO)-lowest unoccupied molecular orbital (LUMO) gaps, and so forth. We show that an initial determination of structure using stochastic techniques (GA/SA), also popularly known as natural algorithms as their working principle mimics certain natural processes, and following it up with density functional calculations lead to high-quality structures for these systems. We have shown that the clusters tend to form three-dimensional networks. We compare our results with the available experimental and theoretical results. The results obtained from SA/GA-DFT technique agree well with available theoretical and experimental data of literature.

  1. A Method of Facial Expression Recognition Based on FKT Algorithm

    Institute of Scientific and Technical Information of China (English)

    邱伟

    2011-01-01

    Facial expression recognition has been one of the important research themes in the fields of pattern recognition and computer vision. In this paper, a face detection system based on a dynamic cascade learning algorithm is implemented in C++. With the obtained face detector, face samples are extracted to form the data sets for expression recognition. Finally, the FKT (Fukunaga-Koontz Transform) algorithm is applied to solve the expression recognition problem. The experimental results demonstrate the effectiveness of the proposed method.

  2. DYNAMIC LABELING BASED FPGA DELAY OPTIMIZATION ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    吕宗伟; 林争辉; 张镭

    2001-01-01

    DAG-MAP is an FPGA technology mapping algorithm for delay optimization and the labeling phase is the algorithm's kernel. This paper studied the labeling phase and presented an improved labeling method. It is shown through the experimental results on MCNC benchmarks that the improved method is more effective than the original method while the computation time is almost the same.

  3. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

    The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control.   The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from a theory; (3) discuss their advantages and drawbacks, areas of applicability, give recommendations and examples.

  4. Algorithmic and analytical methods in network biology.

    Science.gov (United States)

    Koyutürk, Mehmet

    2010-01-01

    During the genomic revolution, algorithmic and analytical methods for organizing, integrating, analyzing, and querying biological sequence data proved invaluable. Today, increasing availability of high-throughput data pertaining to functional states of biomolecules, as well as their interactions, enables genome-scale studies of the cell from a systems perspective. The past decade witnessed significant efforts on the development of computational infrastructure for large-scale modeling and analysis of biological systems, commonly using network models. Such efforts lead to novel insights into the complexity of living systems, through development of sophisticated abstractions, algorithms, and analytical techniques that address a broad range of problems, including the following: (1) inference and reconstruction of complex cellular networks; (2) identification of common and coherent patterns in cellular networks, with a view to understanding the organizing principles and building blocks of cellular signaling, regulation, and metabolism; and (3) characterization of cellular mechanisms that underlie the differences between living systems, in terms of evolutionary diversity, development and differentiation, and complex phenotypes, including human disease. These problems pose significant algorithmic and analytical challenges because of the inherent complexity of the systems being studied; limitations of data in terms of availability, scope, and scale; intractability of resulting computational problems; and limitations of reference models for reliable statistical inference. This article provides a broad overview of existing algorithmic and analytical approaches to these problems, highlights key biological insights provided by these approaches, and outlines emerging opportunities and challenges in computational systems biology.

  5. A Multi-Scale Gradient Algorithm Based on Morphological Operators

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Watershed transformation is a powerful morphological tool for image segmentation. However, the performance of the image segmentation methods based on watershed transformation depends largely on the algorithm for computing the gradient of the image to be segmented. In this paper, we present a multi-scale gradient algorithm based on morphological operators for watershed-based image segmentation, with effective handling of both step and blurred edges. We also present an algorithm to eliminate the local minima produced by noise and quantization errors. Experimental results indicate that watershed transformation with the algorithms proposed in this paper produces meaningful segmentations, even without a region-merging step.

  6. A solution quality assessment method for swarm intelligence optimization algorithms.

    Science.gov (United States)

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems; this greatly limits its application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered according to distance to divide the solution samples into several parts; the solution space and the "good enough" set are then decomposed based on the clustering results, and finally the evaluation result is obtained using statistical knowledge. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  7. Continuous Attributes Discretization Algorithm based on FPGA

    Directory of Open Access Journals (Sweden)

    Guoqiang Sun

    2013-07-01

    Full Text Available This paper addresses the problem of discretization of continuous attributes in rough set theory. Discretization of continuous attributes is an important part of rough set theory because most of the data we usually obtain are continuous. In order to improve the processing speed of discretization, we propose an FPGA-based discretization algorithm for continuous attributes that exploits the speed advantage of FPGAs. Combined with the attribute dependency degree of rough set theory, the discretization system is divided into eight modules using block design. This method can save much pretreatment time in rough set analysis and improve operational efficiency. Extensive experiments on the fault diagnosis of a certain fighter aircraft validate the effectiveness of the algorithm.

  8. Maximum Entropy Method of Image Segmentation Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    王文渊; 王芳梅

    2011-01-01

    The traditional entropy-based threshold has theoretical shortcomings and high computational complexity, resulting in time-consuming image segmentation and low efficiency. In order to improve the efficiency and accuracy of image segmentation, an image segmentation method is proposed that combines an improved genetic algorithm with the maximum entropy algorithm. First, a two-dimensional histogram based on the image gray values is used to extract features; then the three genetic operations of selection, crossover and mutation are used to search for the optimal threshold for image segmentation. Simulation results show that, compared with the traditional maximum-entropy image segmentation algorithm, the improved algorithm increases segmentation efficiency and greatly improves segmentation accuracy and speed.
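
    The maximum-entropy criterion itself can be shown with a small sketch that exhaustively scans a 1-D grey-level histogram for the threshold maximising the summed background and object entropies; the paper replaces this exhaustive scan with a genetic algorithm search over a 2-D histogram, which is not reproduced here.

        import numpy as np

        def max_entropy_threshold(gray):
            """Exhaustive 1-D maximum-entropy threshold selection: pick the grey
            level that maximises the summed entropy of the background and object
            histograms."""
            hist = np.bincount(gray.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 255):
                w0, w1 = p[:t].sum(), p[t:].sum()
                if w0 <= 0 or w1 <= 0:
                    continue
                p0, p1 = p[:t] / w0, p[t:] / w1
                h = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) \
                    - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
                if h > best_h:
                    best_t, best_h = t, h
            return best_t

        # Bimodal toy image: a dark block plus a bright block
        img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
        print(max_entropy_threshold(img))   # a value between the two modes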

  9. Global optimal path planning for mobile robot based on improved Dijkstra algorithm and ant system algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A novel method of global optimal path planning for mobile robot was proposed based on the improved Dijkstra algorithm and ant system algorithm. This method includes three steps: the first step is adopting the MAKLINK graph theory to establish the free space model of the mobile robot, the second step is adopting the improved Dijkstra algorithm to find out a sub-optimal collision-free path, and the third step is using the ant system algorithm to adjust and optimize the location of the sub-optimal path so as to generate the global optimal path for the mobile robot. The computer simulation experiment was carried out and the results show that this method is correct and effective. The comparison of the results confirms that the proposed method is better than the hybrid genetic algorithm in the global optimal path planning.
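
    The second step above, finding a sub-optimal collision-free path on the MAKLINK graph, is a standard shortest-path computation. A generic Dijkstra sketch over an adjacency-list graph is shown below; the graph contents are illustrative and the ant-system refinement stage is not included.

        import heapq

        def dijkstra(graph, source, target):
            """Shortest path on a weighted graph given as {node: [(neighbour, cost), ...]}.
            Returns (total_cost, path)."""
            dist = {source: 0.0}
            prev = {}
            heap = [(0.0, source)]
            seen = set()
            while heap:
                d, u = heapq.heappop(heap)
                if u in seen:
                    continue
                seen.add(u)
                if u == target:
                    break
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        prev[v] = u
                        heapq.heappush(heap, (nd, v))
            path, node = [], target
            while node != source:
                path.append(node)
                node = prev[node]
            path.append(source)
            return dist[target], path[::-1]

        free_space = {"S": [("A", 2.0), ("B", 4.0)],
                      "A": [("B", 1.0), ("T", 7.0)],
                      "B": [("T", 3.0)]}
        print(dijkstra(free_space, "S", "T"))   # (6.0, ['S', 'A', 'B', 'T'])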

  10. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    Science.gov (United States)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chains, and numerous studies have recently been done on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for the multi-product, single-supplier situation of chemical raw materials in textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy is used; indirect grouping is suggested to outperform direct grouping when the major cost is high. The algorithm by Kaspi and Rosenblatt (1991) called RAND is used for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each item, so the replenishment cycle time of each product is T×ki. First, based on the data, a comparison between the currently prevailing (individual) process and RAND using the actual demands shows a 49% improvement in the total cost of replenishment. Second, discrepancies in demand are corrected using Holt's method; however, demand can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, applying RAND with the corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
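
    Holt's method used for the demand-correction step is a double exponential smoothing with a level and a trend component. The sketch below is a generic implementation; the smoothing constants and the demand figures are illustrative, not taken from the study.

        def holt_forecast(series, alpha=0.3, beta=0.1, horizon=2):
            """Holt's linear (double) exponential smoothing.  Returns forecasts
            for the next `horizon` periods; such corrected demands would feed the
            RAND replenishment calculation."""
            level, trend = series[0], series[1] - series[0]
            for y in series[1:]:
                prev_level = level
                level = alpha * y + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return [level + (k + 1) * trend for k in range(horizon)]

        monthly_demand = [120, 135, 128, 150, 165, 158, 172]   # illustrative figures
        print(holt_forecast(monthly_demand))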

  11. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of K-shortest-path search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-path search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which indicates the superiority of the algorithm over conventional ones. Not only does it offer better parallelism, the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the algorithm.

  12. A Novel Denoising Method for an Acoustic-Based System through Empirical Mode Decomposition and an Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2017-02-01

    Full Text Available Generally, the sound signal produced by a transmission unit or cutting unit contains abundant information about the working state of a machine. An acoustic-based diagnosis system presents distinct advantages in severe conditions, particularly due to its non-contact measurement and unrestricted use at the installation site. However, the original acoustic signal collected from the manufacturing process is always polluted by various background noises. In order to eliminate noise components from machinery sound effectively, an empirical mode decomposition (EMD) threshold denoising method optimized by an improved fruit fly optimization algorithm (IFOA) is proposed in this paper. The acoustic signal is first decomposed by adaptive EMD to obtain a series of intrinsic mode functions (IMFs). Then, a soft threshold function is applied to shrink the IMF coefficients. Whereas the threshold of each IMF is determined by statistical estimation and empirical values in traditional EMD denoising, the denoising effect is often unsatisfactory and the procedure time-consuming. To overcome these disadvantages, the fruit fly optimization algorithm (FOA) is introduced to search for the globally optimal threshold of each IMF. Moreover, to enhance group diversity during production of the next generation of fruit flies and to balance local and global searching ability, a variation coefficient and a disturbance coefficient are introduced into the basic FOA. A simulated acoustic signal produced by a train is then used to validate the proposed EMD and IFOA threshold denoising (EMD-IFOA). The simulation results, which show a 35.40% decrease in mean squared error (MSE), an 18.92% decrease in percent root mean square difference (PRD) and a 40.36% increase in signal-to-noise ratio improvement (SNRimp) compared with the basic EMD denoising scheme at SNR = 5 dB, illustrate the effectiveness and superiority of the proposed approach. Finally, the proposed EMD-IFOA was conducted on an actual
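
    The soft-thresholding step that the IFOA tunes can be sketched independently of the optimizer: each intrinsic mode function is shrunk towards zero by its own threshold and the shrunk IMFs are summed. The thresholds in the example are placeholders for the values the improved fruit fly optimisation would search for, and the "IMFs" are hand-made rather than produced by an EMD library.

        import numpy as np

        def soft_threshold(imf, thr):
            """Soft-threshold one intrinsic mode function: shrink coefficients
            towards zero by `thr`, zeroing anything smaller in magnitude."""
            return np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)

        def emd_threshold_denoise(imfs, thresholds):
            """Denoise by soft-thresholding each IMF with its own threshold and
            summing the results; here the per-IMF thresholds are simply passed in."""
            return sum(soft_threshold(imf, t) for imf, t in zip(imfs, thresholds))

        # Toy demonstration with two hand-made "IMFs"
        t = np.linspace(0, 1, 200)
        imfs = [0.2 * np.random.default_rng(1).normal(size=t.size),   # noise-like IMF
                np.sin(2 * np.pi * 5 * t)]                            # signal-like IMF
        clean = emd_threshold_denoise(imfs, thresholds=[0.3, 0.05])
        print(clean.shape)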

  13. Lazy learner text categorization algorithm based on embedded feature selection

    Institute of Scientific and Technical Information of China (English)

    Yan Peng; Zheng Xuefeng; Zhu Jianyong; Xiao Yunhong

    2009-01-01

    To avoid the curse of dimensionality, text categorization (TC) algorithms based on machine learning (ML) have to use a feature selection (FS) method to reduce the dimensionality of the feature space. Although widely used, the FS process generally causes information loss and thus has side effects on the overall performance of TC algorithms. Based on the sparsity characteristic of text vectors, a new TC algorithm based on lazy feature selection (LFS) is presented. As a new type of embedded feature selection approach, the LFS method can greatly reduce the dimension of the features without any information loss, which can greatly improve both the efficiency and the performance of the algorithm. The experiments show that the new algorithm simultaneously achieves much higher performance and efficiency than several classical TC algorithms.

  14. Algorithms for the nonclassical method of symmetry reductions

    CERN Document Server

    Clarkson, P A; Peter A Clarkson; Elizabeth L Mansfield

    1994-01-01

    In this article we first present an algorithm for calculating the determining equations associated with the so-called "nonclassical method" of symmetry reductions (a la Bluman and Cole) for systems of partial differential equations. This algorithm requires significantly less computation time than the one standardly used and avoids many of the difficulties commonly encountered. The proof of correctness of the algorithm is a simple application of the theory of Gröbner bases. In the second part we demonstrate some algorithms which may be used to analyse, and often to solve, the resulting systems of overdetermined nonlinear PDEs. We take as our principal example a generalised Boussinesq equation, which arises in shallow water theory. Although the equation appears to be non-integrable, we obtain an exact "two-soliton" solution from a nonclassical reduction.

  15. A new optimization algorithm based on chaos

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this article, methods are proposed for enhancing the convergence speed of the chaos optimization algorithm (COA) by using the carrier wave twice: the first carrier wave searches globally for the region of the optimal point, so that the refined search performed by the second carrier wave is faster and more accurate. In addition, the concept of using the carrier wave three times is proposed and put into practice to tackle multi-variable optimization problems, where the search for the optimal values of the last several variables is frequently worse than that for the first several ones.
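
    A minimal sketch of the two-carrier-wave idea, assuming a logistic map as the chaotic generator and a one-dimensional objective: the first carrier wave scans the whole interval, the second scans a shrunken interval around the best point found. The map, the shrink factor and the sample counts are assumptions, not the paper's settings.

        import numpy as np

        def chaos_search(f, lo, hi, n_coarse=2000, n_fine=2000, shrink=0.02):
            """Two-stage chaos optimisation sketch for a 1-D objective f on [lo, hi]."""
            def logistic_sequence(x0, n):
                xs, x = [], x0
                for _ in range(n):
                    x = 4.0 * x * (1.0 - x)      # fully chaotic logistic map
                    xs.append(x)
                return np.array(xs)

            # First carrier wave: global search over the whole interval
            cand = lo + (hi - lo) * logistic_sequence(0.345, n_coarse)
            best = cand[np.argmin([f(x) for x in cand])]

            # Second carrier wave: refined search around the current best point
            r = shrink * (hi - lo)
            cand = np.clip(best + r * (2.0 * logistic_sequence(0.567, n_fine) - 1.0), lo, hi)
            best2 = cand[np.argmin([f(x) for x in cand])]
            return min(best, best2, key=f)

        print(chaos_search(lambda x: (x - 1.7) ** 2 + 0.5, lo=-5.0, hi=5.0))  # near 1.7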

  16. Interharmonic parameter intelligent estimation algorithm based on propagator method

    Institute of Scientific and Technical Information of China (English)

    陈国志; 陈隆道; 蔡忠法

    2011-01-01

    An interharmonic frequency estimation algorithm based on the propagator method (PM) is proposed in order to reduce the computational complexity of multiple signal classification (MUSIC). The propagator can be used to construct the noise subspace without estimating the covariance matrix, performing an eigenvalue decomposition, or requiring prior knowledge of the number of interharmonics, and its frequency estimation performance is almost the same as that of MUSIC. A complex-domain Adaline neural network is employed to obtain the amplitudes and phases of harmonics and interharmonics. The proposed complex Adaline structure is trained with the Levenberg-Marquardt (LM) rule and reduces the input vectors and weights to half the number used by a real-domain Adaline, which increases the convergence speed. The simulation results show that the algorithm can quickly and accurately estimate the frequencies, amplitudes and phases of interharmonics without synchronous sampling.

  17. Cosmic Web Reconstruction through Density Ridges: Method and Algorithm

    CERN Document Server

    Chen, Yen-Chi; Freeman, Peter E; Genovese, Christopher R; Wasserman, Larry

    2015-01-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the Subspace Constrained Mean Shift (SCMS) algorithm (Ozertem and Erdogmus (2011); Genovese et al. (2012)) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS method to datasets sampled from the P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data fro...

  18. TOA estimation algorithm based on multi-search

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new time of arrival (TOA) estimation algorithm is proposed. The algorithm computes the optimal sub-correlation length based on SNR theory, so the robustness of the TOA acquisition is well guaranteed. Then, according to the actual transmission environment and network system, a multi-search method is given. The simulation results show that the algorithm has high application value for the realization of wireless location systems (WLS).

  19. Variable Neighborhood Search Based Algorithm for University Course Timetabling Problem

    OpenAIRE

    Kralev, Velin; Kraleva, Radoslava

    2016-01-01

    In this paper a variable neighborhood search approach is presented as a method for solving combinatorial optimization problems. A variable neighborhood search based algorithm for the university course timetabling problem has been developed. This algorithm is used to solve a real instance of the university course timetabling problem and is compared with other algorithms tested on the same sets of input data. The object and the methodology of study are p...

  20. Enterprise Human Resources Information Mining Based on Improved Apriori Algorithm

    Directory of Open Access Journals (Sweden)

    Lei He

    2013-05-01

    Full Text Available With the continuing development of information technology, enterprises' demand for human resources information mining is growing. Based on the state of enterprise human resources information mining, this paper puts forward an improved Apriori algorithm based model for enterprise human resources information mining. The model introduces data mining technology and the traditional Apriori algorithm and improves on it, dividing the association rule mining task of the original algorithm into the two subtasks of producing frequent itemsets and producing rules, using SQL technology to generate frequent itemsets directly, and using charts to extract the information of interest to users. The experimental results show that the improved Apriori based model for enterprise human resources information mining is more efficient than the original algorithm, and practical application tests show that the improved algorithm is practical and effective.
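
    The first subtask, generating frequent itemsets, is the classical Apriori join-and-prune loop. The sketch below is a plain in-memory version for illustration; the paper's contribution of generating frequent itemsets directly with SQL is not reproduced here, and the transactions are toy data.

        from itertools import combinations

        def apriori_frequent_itemsets(transactions, min_support):
            """Classic Apriori frequent-itemset generation.  `transactions` is a
            list of sets and `min_support` an absolute count."""
            def support(itemset):
                return sum(1 for t in transactions if itemset <= t)

            items = sorted({i for t in transactions for i in t})
            current = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
            frequent = list(current)
            k = 2
            while current:
                # Join step: candidates of size k from frequent sets of size k-1
                candidates = {a | b for a in current for b in current if len(a | b) == k}
                # Prune step: keep candidates whose every (k-1)-subset is frequent
                candidates = [c for c in candidates
                              if all(frozenset(s) in set(current) for s in combinations(c, k - 1))]
                current = [c for c in candidates if support(c) >= min_support]
                frequent.extend(current)
                k += 1
            return frequent

        demo = [{"resume", "degree", "python"}, {"degree", "python"},
                {"resume", "python"}, {"degree", "python", "sql"}]
        print(apriori_frequent_itemsets(demo, min_support=3))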

  1. Video segmentation using multiple features based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    张风超; 杨杰; 刘尔琦

    2004-01-01

    Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We consider video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain the maximum-likelihood estimates of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum-likelihood framework to obtain accurate segmentation results. We also use the temporal consistency among video frames to improve the speed of the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method achieves attractive accuracy and robustness.
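
    A rough stand-in for the EM-based segmentation can be written with a Gaussian mixture fitted to per-pixel colour and motion features, as sketched below using scikit-learn's EM implementation. The feature layout, the number of segments and the synthetic frame are assumptions for illustration only.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def em_segment(color, motion, n_segments=2, seed=0):
            """Cluster pixels with an EM-fitted Gaussian mixture over combined
            colour and motion features.

            color:  (h, w, 3) array, motion: (h, w) array of motion magnitudes.
            Returns an (h, w) label map."""
            h, w, _ = color.shape
            feats = np.column_stack([color.reshape(-1, 3), motion.reshape(-1, 1)])
            gmm = GaussianMixture(n_components=n_segments, covariance_type="diag",
                                  random_state=seed)
            labels = gmm.fit_predict(feats)
            return labels.reshape(h, w)

        # Synthetic frame: a moving bright square on a static dark background
        rng = np.random.default_rng(0)
        color = rng.normal(30, 5, size=(40, 40, 3))
        motion = np.zeros((40, 40))
        color[10:20, 10:20] += 150
        motion[10:20, 10:20] = 5.0
        print(np.unique(em_segment(color, motion), return_counts=True))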

  2. Multicast Routing Based on Hybrid Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    CAO Yuan-da; CAI Gui

    2005-01-01

    A new multicast routing algorithm based on a hybrid genetic algorithm (HGA) is proposed. A coding pattern based on the number of routing paths is used, and a fitness function that is easy to compute and makes the algorithm converge quickly is proposed, together with a new approach for setting the HGA's parameters. The simulation shows that the approach can largely increase the convergence ratio, and that the suitable parameter values for this algorithm differ from those of the original algorithms: the optimal mutation probability of the HGA in the experiment is 0.50, whereas in the SGA it is 0.07. It is concluded that the population size has a significant influence on the HGA's convergence ratio when its mutation probability is larger, and that the algorithm with a small population size has a high average convergence rate. The population size has little influence on the HGA with the lower mutation probability.

  3. Robust adaptive beamforming algorithm based on Bayesian approach

    Institute of Scientific and Technical Information of China (English)

    Xin SONG; Jinkuan WANG; Yinghua HAN; Han WANG

    2008-01-01

    The performance of adaptive array beamforming algorithms substantially degrades in practice because of a slight mismatch between actual and presumed array responses to the desired signal. A novel robust adaptive beamforming algorithm based on a Bayesian approach is therefore proposed. The algorithm responds to the current environment by estimating the direction of arrival (DOA) of the actual signal from observations. The computational complexity of the proposed algorithm can thus be reduced compared with other algorithms, since a recursive method is used to obtain the inverse matrix. In addition, it has strong robustness to the uncertainty of the actual signal DOA and makes the mean output array signal-to-interference-plus-noise ratio (SINR) consistently approach the optimum. Simulation results show that the proposed algorithm performs better than conventional adaptive beamforming algorithms.

  4. Clonal Selection Algorithm Based Iterative Learning Control with Random Disturbance

    Directory of Open Access Journals (Sweden)

    Yuanyuan Ju

    2013-01-01

    Full Text Available The clonal selection algorithm is improved and proposed as a method for solving optimization problems in iterative learning control, and a clonal selection algorithm based optimal iterative learning control algorithm with random disturbance is proposed. In the algorithm, the size of the search space is decreased while the convergence speed is increased. In addition, a model-modifying device is used in the algorithm to cope with uncertainty in the plant model. Simulations show that the convergence speed is satisfactory regardless of whether or not the model of the nonlinear plant is precise. The simulation tests verify that the controlled system with random disturbance can be brought to stability by the improved iterative learning control law but not by the traditional control law.

  5. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    Directory of Open Access Journals (Sweden)

    Qidong Yang

    Full Text Available Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there exist uncertainties in land surface parameters, which can cause large offsets between the results simulated by land-surface process models and the observed soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain from observations were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau; simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. The simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulations of soil moisture and latent heat flux in all tests, and the differences between simulated results and observational data were clearly reduced, but the tests adopting optimized parameters could not simultaneously improve the simulation of net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, but the variation range of the vegetation parameters is large.
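
    The calibration loop pairs a standard PSO with repeated SHAW runs; since the model itself cannot be reproduced here, the sketch below shows a generic PSO over box-bounded parameters with a toy objective standing in for the model-observation misfit. Swarm size, inertia and acceleration coefficients are assumptions.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Plain particle swarm optimisation over box bounds."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(bounds)
            x = rng.uniform(lo, hi, size=(n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([f(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, f(gbest)

        # Toy objective standing in for the model-observation misfit
        print(pso_minimize(lambda p: (p[0] - 0.25) ** 2 + (p[1] - 3.0) ** 2,
                           bounds=[(0.0, 1.0), (0.0, 10.0)]))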

  6. Speech Enhancement based on Compressive Sensing Algorithm

    Science.gov (United States)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various methods for speech enhancement have been proposed over the years, with accurate methods mainly focusing on quality and intelligibility. This paper proposes a novel speech enhancement method using compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage. CS can reduce the number of degrees of freedom of a sparse/compressible signal by permitting only certain configurations of the large and zero/small coefficients and structured sparsity models. Therefore, CS provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear and non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using an informal listening test and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well on a wide range of speech tests and gives good performance for speech enhancement with better noise suppression ability than conventional approaches, without obvious degradation of speech quality.

  7. PDE Based Algorithms for Smooth Watersheds.

    Science.gov (United States)

    Hodneland, Erlend; Tai, Xue-Cheng; Kalisch, Henrik

    2016-04-01

    Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks.

  8. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of Logic Based Control systems. Boolean Algebra, Karnaugh Map, Quine-McCluskey's algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL and PLC- and Soft_PLC implementation. PLC-design met...

  9. 3D-QSAR studies of triazolopyrimidine derivatives of Plasmodium falciparum dihydroorotate dehydrogenase inhibitors using a combination of molecular dynamics, docking, and genetic algorithm-based methods.

    Science.gov (United States)

    Shah, Priyanka; Kumar, Sumit; Tiwari, Sunita; Siddiqi, Mohammad Imran

    2012-07-01

    A series of 35 triazolopyrimidine analogues reported as Plasmodium falciparum dihydroorotate dehydrogenase (PfDHODH) inhibitors were optimized using quantum mechanics methods, and their binding conformations were studied by docking and 3D quantitative structure-activity relationship studies. Genetic algorithm-based criteria were adopted for the selection of training and test sets while maintaining their structural diversity, which is also very crucial for model development and validation. Both the comparative molecular field analyses ([Formula: see text], [Formula: see text]) and the comparative molecular similarity indices analyses ([Formula: see text], [Formula: see text]) show excellent correlation and high predictive power. Furthermore, molecular dynamics simulations were performed to explore the binding modes of two of the most active compounds of the series, 10 and 14. The agreement between the two simulation results validates the analysis and therefore the applicability of docking parameters based on the crystallographic conformation of compound 14 bound to the receptor molecule. This work provides useful information about the inhibition mechanism of this class of molecules and will assist in the design of more potent inhibitors of PfDHODH.

  10. Recursive algorithm for the two-stage EFOP estimation method

    Institute of Scientific and Technical Information of China (English)

    LUO GuiMing; HUANG Jian

    2008-01-01

    A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method is proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance and can precisely identify models from fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method and takes too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Some simulation examples are included to demonstrate the validity of the new method.

  11. An intersection algorithm based on transformation

    Institute of Scientific and Technical Information of China (English)

    CHEN Xiao-xia; YONG Jun-hai; CHEN Yu-jian

    2006-01-01

    How to obtain the intersection of curves and surfaces is a fundamental problem in many areas such as computer graphics, CAD/CAM, computer animation, and robotics. In particular, how to deal with singular cases, such as tangency or superposition, is a key problem in obtaining intersection results. A method for solving the intersection problem based on coordinate transformation is presented. With the Lagrange multiplier method, the minimum distance between the center of a circle and a quadric surface is given as well. Experience shows that the coordinate transformation can significantly simplify the intersection calculation under the tangency condition and improve the stability of the intersection of given curves and surfaces in singular cases. The new algorithm is applied in a three-dimensional CAD system (GEMS) produced by Tsinghua University.

  12. New Iterative Learning Control Algorithms Based on Vector Plots Analysis

    Institute of Scientific and Technical Information of China (English)

    XIE Sheng-Li; TIAN Sen-Ping; XIE Zhen-Dong

    2004-01-01

    Based on vector plot analysis, this paper studies the geometric framework of the iterative learning control method. A new structure of iterative learning algorithms is obtained by analyzing the vector plots of some general algorithms. The structure of the new algorithm is different from those of the present algorithms, and it offers faster convergence speed and higher accuracy. Simulations presented here illustrate the effectiveness and advantage of the new algorithm.

  13. QRS Detection Based on an Advanced Multilevel Algorithm

    OpenAIRE

    Wissam Jenkal; Rachid Latif; Ahmed Toumanari; Azzedine Dliou; Oussama El B’charri; Fadel Mrabih Rabou Maoulainine

    2016-01-01

    This paper presents an advanced multilevel algorithm used for the QRS complex detection. This method is based on three levels. The first permits the extraction of higher peaks using an adaptive thresholding technique. The second allows the QRS region detection. The last level permits the detection of Q, R and S waves. The proposed algorithm shows interesting results compared to recently published methods. The perspective of this work is the implementation of this method on an embedded system ...

  14. Method of deinterleaving radar signal based on PRI transform algorithm

    Institute of Scientific and Technical Information of China (English)

    王海滨; 马琦

    2013-01-01

    With the signal environment of information warfare becoming increasingly complicated, radar signal sorting technology, as one of the development directions of modern radar, is of great importance to radar reconnaissance. Several radar signal deinterleaving methods have been proposed based on the PRI parameter. The traditional PRI transform can overcome the subharmonic problem produced in histogram-based statistical methods, but has poor performance against jitter. The paper begins with a discussion of the improved PRI transform, which effectively overcomes the disadvantages of the traditional PRI method, followed by a description of the algorithm simulation. Finally, a method for sorting staggered-PRI pulse trains is discussed.

  15. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov-Galerkin method.

    Science.gov (United States)

    Doha, E H; Abd-Elhameed, W M; Youssri, Y H

    2015-09-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov-Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The linear systems resulting from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient.

  16. Efficient mining of association rules based on gravitational search algorithm

    Directory of Open Access Journals (Sweden)

    Fariba Khademolghorani

    2011-07-01

    Full Text Available Association rule mining is one of the most widely used tools to discover relationships among attributes in a database. Many algorithms have been introduced for discovering these rules, and they have to mine association rules in two separate stages. Most of them mine occurrence rules which are easily predictable by the users. Therefore, this paper discusses the application of the gravitational search algorithm for discovering interesting association rules. This evolutionary algorithm is based on Newtonian gravity and the laws of motion. Furthermore, contrary to previous methods, the method proposed in this study is able to mine the best association rules without generating frequent itemsets and is independent of the minimum support and confidence values. The results of applying this method, in comparison with mining association rules based upon particle swarm optimization, show that our method is successful.

  17. A novel iris segmentation algorithm based on small eigenvalue analysis

    Science.gov (United States)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information and uses a kernel metric as the distance measure. In the second step, a small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz. CASIA-IrisV4 and UBIRIS.v2), and we compare our proposed method with existing iris segmentation methods. Our proposed method has the lowest time complexity of O(n(i+p)). The results of the experiments emphasize that the proposed algorithm outperforms the existing iris segmentation methods.

  18. Mercer Kernel Based Fuzzy Clustering Self-Adaptive Algorithm

    Institute of Scientific and Technical Information of China (English)

    李侃; 刘玉树

    2004-01-01

    A novel Mercer kernel based fuzzy clustering self-adaptive algorithm is presented. The Mercer kernel method is introduced into fuzzy c-means clustering; it implicitly maps the input data into a high-dimensional feature space through a nonlinear transformation. In fuzzy c-means and its variants, the number of clusters must be determined first. Here, a self-adaptive algorithm is proposed in which the number of clusters, which is not given in advance, is obtained automatically by a validity measure function. Finally, experiments are given to show the better performance of the kernel based fuzzy c-means self-adaptive algorithm.

  19. Agent-based Algorithm for Spatial Distribution of Objects

    KAUST Repository

    Collier, Nathan

    2012-06-02

    In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.

  20. Research on Algorithms for Mining Distance-Based Outliers

    Institute of Scientific and Technical Information of China (English)

    WANG Lizhen; ZOU Likun

    2005-01-01

    Outlier detection is an important and valuable research topic in KDD (knowledge discovery in databases). The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even weather forecasting. Among existing methods for finding outliers, the notion of DB (distance-based) outliers is not computationally restricted to small values of the number of dimensions k and goes beyond the data space. Here, we study algorithms for mining DB outliers and focus on developing algorithms not limited by k. First, we present a partition-based algorithm (PBA), whose key idea is to gain efficiency by divide-and-conquer. Second, we present an optimized algorithm called the object-class-based algorithm (OCBA), whose computation is independent of k and whose efficiency is as good as that of the cell-based algorithm. We provide experimental results showing that the two new algorithms have better execution efficiency.
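
    The DB(pi, D)-outlier definition itself is easy to state in code: a point is an outlier if fewer than a fraction pi of the points lie within distance D of it. The quadratic nested-loop sketch below is the naive baseline that PBA and OCBA are designed to beat; the parameter values and data are illustrative.

        import numpy as np

        def db_outliers(points, radius, pi):
            """Naive DB(pi, radius)-outlier detection: a point is an outlier if
            fewer than a fraction `pi` of all points lie within `radius` of it."""
            points = np.asarray(points, dtype=float)
            n = len(points)
            outliers = []
            for i, p in enumerate(points):
                within = np.sum(np.linalg.norm(points - p, axis=1) <= radius)
                if within - 1 < pi * n:          # exclude the point itself
                    outliers.append(i)
            return outliers

        data = np.vstack([np.random.default_rng(0).normal(0, 1, size=(100, 2)),
                          [[8.0, 8.0]]])          # one obvious outlier
        print(db_outliers(data, radius=3.0, pi=0.05))    # -> [100]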

  1. Method of Building a Group of NURBS Models Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    李媛媛; 刘弘

    2011-01-01

    Producing a crowd animation needs a great number of individual models. To address how to quickly build multiple models from a single existing one while ensuring that they are not all the same, this paper studies NURBS modeling methods and presents a method of building a group of NURBS models based on a genetic algorithm. The method first extracts the structure lines of a given NURBS model, initializes a population according to them, and regards the given model as the best individual. It then adjusts each structure line of each model using selection, crossover and mutation operations, and evolves the population to build a group of models with different appearances for use in crowd animation.

  2. The Three-Dimensional Fast Segmentation Algorithm Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    孙海鹏; 余伟巍; 席平

    2011-01-01

    Because of the large volume of medical image data and the impact of noise, blurred boundaries and other factors, the three-dimensional segmentation process is time-consuming and easily produces under- or over-segmentation. To solve these problems, this paper proposes a three-dimensional fast segmentation algorithm based on the Level Set method, using the Level Set Fast Marching Method to obtain the two-dimensional segmented region, optimizing the boundary contour, using the Digital Differential Analyzer (DDA) method to extract contour pixels, and finally introducing the idea of scan-line seed filling to achieve fast three-dimensional segmentation. A segmentation experiment on actual clinical CT images of vertebrae shows that this method can quickly and accurately separate out the region of interest.

  3. Drilling Path Optimization Based on Particle Swarm Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Guangyu; ZHANG Weibo; DU Yuexiang

    2006-01-01

    This paper presents a new approach based on the particle swarm optimization (PSO) algorithm for solving the drilling path optimization problem, which belongs to a discrete space. Because the standard PSO algorithm is guaranteed to achieve neither global nor local convergence, the algorithm is improved, based on its mathematical model, by regenerating particles whose evolution has stalled, which gives it the ability to converge to the global optimum. The operators are also improved by establishing a duality transposition method and a handling scheme for the elements of the operator, so that the improved operator satisfies the integer-coding requirement of drilling path optimization. Experiments with small numbers of nodes indicate that the improved algorithm is easy to implement, converges quickly, and has better global convergence characteristics; hence the new PSO can play a role in solving the drilling path optimization problem in drilling holes.

  4. Automatic Image Registration Algorithm Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    LIU Qiong; NI Guo-qiang

    2006-01-01

    An automatic image registration approach based on wavelet transform is proposed. This proposed method utilizes multiscale wavelet transform to extract feature points. A coarse-to-fine feature matching method is utilized in the feature matching phase. A two-way matching method based on cross-correlation to get candidate point pairs and a fine matching based on support strength combine to form the matching algorithm. At last, based on an affine transformation model, the parameters are iteratively refined by using the least-squares estimation approach. Experimental results have verified that the proposed algorithm can realize automatic registration of various kinds of images rapidly and effectively.

  5. Collaborative Filtering Algorithms Based on Kendall Correlation in Recommender Systems

    Institute of Scientific and Technical Information of China (English)

    YAO Yu; ZHU Shanfeng; CHEN Xinmeng

    2006-01-01

    In this work, Kendall correlation based collaborative filtering algorithms for recommender systems are proposed. The Kendall correlation method measures the correlation among users by considering the relative order of their ratings. The Kendall-based algorithm rests on a more general model and thus could be more widely applied in e-commerce. Another finding of this work is that considering only positively correlated neighbors in prediction, in both the Pearson and Kendall algorithms, achieves higher accuracy than considering all neighbors, with only a small loss of coverage.
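
    As a rough illustration of the idea (not the authors' exact formulation), the sketch below computes a Kendall rank correlation between two users over their co-rated items and forms a prediction from positively correlated neighbours only; the data layout and function names are assumptions, and ties are simply ignored.

```python
from itertools import combinations

def kendall_tau(ratings_a, ratings_b):
    """Kendall correlation over the items both users rated (relative order only)."""
    common = [i for i in ratings_a if i in ratings_b]
    if len(common) < 2:
        return 0.0
    concordant = discordant = 0
    for i, j in combinations(common, 2):
        s = (ratings_a[i] - ratings_a[j]) * (ratings_b[i] - ratings_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    pairs = len(common) * (len(common) - 1) / 2
    return (concordant - discordant) / pairs

def predict(target, item, all_ratings):
    """Weighted prediction using only positively correlated neighbours."""
    num = den = 0.0
    for user, ratings in all_ratings.items():
        if user == target or item not in ratings:
            continue
        w = kendall_tau(all_ratings[target], ratings)
        if w > 0:                       # keep positive correlations only
            num += w * ratings[item]
            den += w
    return num / den if den else None

ratings = {
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 2, "c": 5, "d": 4},
    "u3": {"a": 1, "b": 5, "c": 2, "d": 1},
}
print(predict("u1", "d", ratings))   # only u2 is positively correlated -> 4.0
```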

  6. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one-dimensional information and then matches and identifies through one-dimensional correlation; moreover, because normalization is performed, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
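
    A minimal sketch of the projection idea, under the assumption that normalized cross-correlation is applied to row and column projections (the exact matching criterion of the paper is not given here); NumPy is used and all names are illustrative.

```python
import numpy as np

def projections(img):
    """Collapse a 2-D grayscale patch into its row and column sums."""
    img = np.asarray(img, dtype=float)
    return img.sum(axis=1), img.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation of two 1-D signals (invariant to offset and scale)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def match_by_projection(image, template):
    """Slide the template and score candidates with 1-D projections instead of full 2-D correlation."""
    H, W = image.shape
    h, w = template.shape
    t_row, t_col = projections(template)
    best, best_pos = -2.0, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            r, c = projections(image[y:y + h, x:x + w])
            score = 0.5 * (ncc(r, t_row) + ncc(c, t_col))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(1)
scene = rng.integers(0, 255, size=(60, 60)).astype(float)
tpl = scene[20:30, 35:45].copy() * 1.3   # amplitude scaling should not break the match
print(match_by_projection(scene, tpl))   # expected location (20, 35)
```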

  7. A PRESSURE-BASED ALGORITHM FOR CAVITATING FLOW COMPUTATIONS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ling-xin; ZHAO Wei-guo; SHAO Xue-ming

    2011-01-01

    A pressure-based algorithm for the prediction of cavitating flows is presented. The algorithm employs a set of equations including the Navier-Stokes equations and a cavitation model describing the phase change between liquid and vapor. A pressure-based method is used to construct the algorithm, and the coupling between pressure and velocity is considered. The pressure correction equation is derived from a new continuity equation which employs a source term related to the phase change rate instead of the material derivative of density Dρ/Dt. This pressure-based algorithm allows for the computation of steady or unsteady, 2-D or 3-D cavitating flows. Two 2-D cases, flows around a flat-nose cylinder and around a NACA0015 hydrofoil, are simulated respectively, and the periodic cavitation behaviors associated with the re-entrant jets are captured. This algorithm shows good capability for computing time-dependent cavitating flows.

  8. QPSO-Based Adaptive DNA Computing Algorithm

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available DNA (deoxyribonucleic acid) computing that is a new computation model based on DNA molecules for information storage has been increasingly used for optimization and data analysis in recent years. However, DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improvement of DNA computing is proposed. This new approach aims to perform DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). Some contributions provided by the proposed QPSO based on adaptive DNA computing algorithm are as follows: (1) parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of DNA computing algorithm are simultaneously tuned for adaptive process, (2) adaptive algorithm is performed using QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) numerical realization of DNA computing algorithm with proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate ability to provide effective optimization, considerable convergence speed, and high accuracy according to DNA computing algorithm.

  9. QPSO-based adaptive DNA computing algorithm.

    Science.gov (United States)

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing that is a new computation model based on DNA molecules for information storage has been increasingly used for optimization and data analysis in recent years. However, DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improvement of DNA computing is proposed. This new approach aims to perform DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). Some contributions provided by the proposed QPSO based on adaptive DNA computing algorithm are as follows: (1) parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of DNA computing algorithm are simultaneously tuned for adaptive process, (2) adaptive algorithm is performed using QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) numerical realization of DNA computing algorithm with proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate ability to provide effective optimization, considerable convergence speed, and high accuracy according to DNA computing algorithm.

  10. A New Base-6 FFT Algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new FFT algorithm has been deduced, called the base-6 FFT algorithm. For a complex sequence of length N = 6^r, the base-6 FFT algorithm requires M_r(N) = (14/3)·N·log_6 N − 4N + 4 real multiplications and A_r(N) = (23/3)·N·log_6 N − 2N + 2 real additions. The operation count for computing the DFT of a real sequence is half that of a complex sequence.
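
    Taking the record's operation counts at face value (and assuming the reconstructed length N = 6^r), a small check of the stated multiplication and addition counts might look like this; it only evaluates the formulas and is not an FFT implementation.

```python
import math

def base6_fft_counts(r):
    """Evaluate the stated real-operation counts for a length N = 6**r complex DFT."""
    N = 6 ** r
    log6N = math.log(N, 6)            # equals r
    mults = 14 / 3 * N * log6N - 4 * N + 4
    adds = 23 / 3 * N * log6N - 2 * N + 2
    return N, mults, adds

for r in range(1, 5):
    print(base6_fft_counts(r))
```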

  11. Analog Circuit Design Optimization Based on Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Mansour Barari

    2014-01-01

    Full Text Available This paper investigates an evolutionary designing system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and a particle swarm optimization (PSO) algorithm, are proposed to design analog ICs with practical user-defined specifications. On the basis of the combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.

  12. A Practical Localization Algorithm Based on Wireless Sensor Networks

    CERN Document Server

    Huang, Tao; Xia, Feng; Jin, Cheng; Li, Liang

    2010-01-01

    Many localization algorithms and systems have been developed by means of wireless sensor networks for both indoor and outdoor environments. To achieve higher localization accuracy, extra hardware equipment is utilized by most of the existing localization algorithms, which increases the cost and greatly limits the range of location-based applications. In this paper we present a method which can effectively meet the different localization accuracy requirements of most indoor and outdoor location services in realistic applications. Our algorithm is composed of two phases: a partition phase, in which the target region is split into small grids, and a localization refinement phase, in which a higher-accuracy location is generated by applying a trick algorithm. A realistic demo system using our algorithm has been developed to illustrate its feasibility and availability. The results show that our algorithm can improve the localization accuracy.

  13. Alternative Method for Solving Traveling Salesman Problem by Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Zuzana Čičková

    2008-06-01

    Full Text Available This article describes the application of the Self Organizing Migrating Algorithm (SOMA) to the well-known optimization problem, the Traveling Salesman Problem (TSP). SOMA is a relatively new optimization method based on Evolutionary Algorithms, originally aimed at solving non-linear programming problems with continuous variables. The TSP is a model problem in many branches of Operations Research because of its computational complexity; therefore the use of an Evolutionary Algorithm requires some special approaches to guarantee the feasibility of solutions. In this article two concrete TSP instances, an 8-city set and a 25-city set, are given to demonstrate the practical use of SOMA. First, a penalty approach is applied as a simple way to guarantee feasibility of solutions. Then a new approach that works only on feasible solutions is presented.

  14. Stator Current Harmonics Evaluation by Flexible Neural Network Method With Reconstruction Structure During Learning Step Based On CFE/SS Algorithm for ACEC Generator of Rey Power Plant

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Yousefi

    2010-07-01

    Full Text Available One method for on-line fault detection in a synchronous generator is stator current harmonics analysis. In this paper, a flexible neural network whose structure is reconstructed during learning is used to evaluate the stator current harmonics under different loads. Generator modeling, the finite element method, and a state-space model make up the training set of the flexible neural network; many points from the generator capability curve are used to complete this set. The flexible neural network used in this paper is a perceptron network with a single hidden layer, flexible hidden-layer neurons, and the back-propagation algorithm. Results show that the trained flexible neural network can identify the stator current harmonics for a desired load from the capability curve. The error is less than 10% compared to the values obtained directly from the CFE-SS algorithm. The parameters of the modeled generator are 43950 kVA, 11 kV, 3000 rpm, 50 Hz, PF = 0.5.

  15. Pupil location method based on improved OTSU algorithm

    Institute of Scientific and Technical Information of China (English)

    黄丽丽; 杨帆; 王东强; 唐云建

    2013-01-01

    To improve the accuracy and speed of pupil location in human-computer interaction, a method based on an improved OTSU (maximum between-class variance) algorithm is proposed in this paper. The image is pre-processed using light compensation and median filtering; the maximum between-class variance of the remaining pixels is then computed to obtain the optimal threshold for binarizing the image, and the pupil center and radius are determined by ellipse fitting. Experimental results show that the method is accurate and fast, laying a necessary foundation for eye tracking.
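
    A rough sketch of the pipeline described above (median filtering, Otsu-style between-class-variance thresholding, then fitting an ellipse to the dark pupil blob). OpenCV is assumed to be available; its use here, and the omission of the light-compensation step, are simplifications and not the authors' implementation.

```python
import cv2
import numpy as np

def locate_pupil(gray):
    """Rough pupil localization: median filter, Otsu threshold, ellipse fit.

    Assumes `gray` is an 8-bit grayscale eye image with a dark pupil.
    """
    smoothed = cv2.medianBlur(gray, 5)
    # Otsu picks the threshold that maximizes the between-class variance;
    # THRESH_BINARY_INV makes the dark pupil the foreground.
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV >= 4
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:                    # fitEllipse needs at least 5 points
        return None
    (cx, cy), (major, minor), _ = cv2.fitEllipse(pupil)
    return (cx, cy), (major / 2.0, minor / 2.0)   # center and semi-axes

if __name__ == "__main__":
    # synthetic test image: bright background with a dark disc as the "pupil"
    img = np.full((200, 200), 200, dtype=np.uint8)
    cv2.circle(img, (90, 110), 30, 20, -1)
    print(locate_pupil(img))
```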

  16. In Silico Calculation of Infinite Dilution Activity Coefficients of Molecular Solutes in Ionic Liquids: Critical Review of Current Methods and New Models Based on Three Machine Learning Algorithms.

    Science.gov (United States)

    Paduszyński, Kamil

    2016-08-22

    The aim of the paper is to address all the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs)-a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each of them based on distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify an impact of solute structure. Temperature is also included in the input data of the models so that they can be utilized to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for separation problems benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be a method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit an improved accuracy compared to the state-of-the-art model, namely, temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem

  17. Optimization of water-level monitoring networks in the eastern Snake River Plain aquifer using a kriging-based genetic algorithm method

    Science.gov (United States)

    Fisher, Jason C.

    2013-01-01

    Long-term groundwater monitoring networks can provide essential information for the planning and management of water resources. Budget constraints in water resource management agencies often mean a reduction in the number of observation wells included in a monitoring network. A network design tool, distributed as an R package, was developed to determine which wells to exclude from a monitoring network because they add little or no beneficial information. A kriging-based genetic algorithm method was used to optimize the monitoring network. The algorithm was used to find the set of wells whose removal leads to the smallest increase in the weighted sum of the (1) mean standard error at all nodes in the kriging grid where the water table is estimated, (2) root-mean-squared-error between the measured and estimated water-level elevation at the removed sites, (3) mean standard deviation of measurements across time at the removed sites, and (4) mean measurement error of wells in the reduced network. The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The network design tool was applied to optimize two observation well networks monitoring the water table of the eastern Snake River Plain aquifer, Idaho; these networks include the 2008 Federal-State Cooperative water-level monitoring network (Co-op network) with 166 observation wells, and the 2008 U.S. Geological Survey-Idaho National Laboratory water-level monitoring network (USGS-INL network) with 171 wells. Each water-level monitoring network was optimized five times: by removing (1) 10, (2) 20, (3) 40, (4) 60, and (5) 80 observation wells from the original network. An examination of the trade-offs associated with changes in the number of wells to remove indicates that 20 wells can be removed from the Co-op network with a relatively small degradation of the estimated water table map, and 40 wells

  18. Optimization of Water-Level Monitoring Networks in the Eastern Snake River Plain Aquifer Using a Kriging-Based Genetic Algorithm Method

    Science.gov (United States)

    Fisher, J. C.

    2013-12-01

    Long-term groundwater monitoring networks can provide essential information for the planning and management of water resources. Budget constraints in water resource management agencies often mean a reduction in the number of observation wells included in a monitoring network. A network design tool, distributed as an R package, was developed to determine which wells to exclude from a monitoring network because they add little or no beneficial information. A kriging-based genetic algorithm method was used to optimize the monitoring network. The algorithm was used to find the set of wells whose removal leads to the smallest increase in the weighted sum of the (1) mean standard error at all nodes in the kriging grid where the water table is estimated, (2) root-mean-squared-error between the measured and estimated water-level elevation at the removed sites, (3) mean standard deviation of measurements across time at the removed sites, and (4) mean measurement error of wells in the reduced network. The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The network design tool was applied to optimize two observation well networks monitoring the water table of the eastern Snake River Plain aquifer, Idaho; these networks include the 2008 Federal-State Cooperative water-level monitoring network (Co-op network) with 166 observation wells, and the 2008 U.S. Geological Survey-Idaho National Laboratory water-level monitoring network (USGS-INL network) with 171 wells. Each water-level monitoring network was optimized five times: by removing (1) 10, (2) 20, (3) 40, (4) 60, and (5) 80 observation wells from the original network. An examination of the trade-offs associated with changes in the number of wells to remove indicates that 20 wells can be removed from the Co-op network with a relatively small degradation of the estimated water table map, and 40 wells

  19. Vein image extraction method based on improved NIBLACK algorithm

    Institute of Scientific and Technical Information of China (English)

    郑均辉; 喻恒

    2015-01-01

    Vein recognition is an emerging biometric recognition technology. To meet the feature-extraction requirements of vein recognition, methods for extracting dorsal hand veins are studied in this paper. The hand vein image is first enhanced with the CLAHE algorithm; then, because the traditional NIBLACK binarization algorithm has some shortcomings, an improved algorithm combining a local static threshold with NIBLACK is proposed. Experimental results show that the method effectively eliminates the excessive noise and broken vein texture produced by traditional methods, overcomes the impact of light intensity on image extraction, and keeps the vein structure complete and clear, meeting the needs of subsequent recognition.
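
    For context, the standard NIBLACK rule sets a per-pixel threshold T(x, y) = m(x, y) + k·s(x, y) from the local mean and standard deviation. The sketch below implements that baseline plus a simple combination with a global threshold; the combination is only a guess at how a "local static threshold" might be paired with NIBLACK, not the paper's exact scheme, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(img, window=25, k=-0.2):
    """Classic Niblack: T = local_mean + k * local_std over a sliding window."""
    img = img.astype(float)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return mean + k * std

def binarize_veins(img, window=25, k=-0.2, static_quantile=0.6):
    """Hypothetical combination: keep a pixel as 'vein' only if it is dark
    relative to both the Niblack threshold and a fixed (static) global one."""
    t_local = niblack_threshold(img, window, k)
    t_static = np.quantile(img, static_quantile)
    return (img < t_local) & (img < t_static)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(150, 10, size=(128, 128))
    img[60:68, :] -= 40            # a dark horizontal "vein"
    mask = binarize_veins(img)
    print(mask[64].sum(), mask[10].sum())   # the vein row should dominate
```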

  20. A pruning method for the recursive least squared algorithm.

    Science.gov (United States)

    Leung, C S; Wong, K W; Sum, P F; Chan, L W

    2001-03-01

    The recursive least squared (RLS) algorithm is an effective online training method for neural networks. However, its conjunctions with weight decay and pruning have not been well studied. This paper elucidates how generalization ability can be improved by selecting an appropriate initial value of the error covariance matrix in the RLS algorithm. Moreover, how the pruning of neural networks can be benefited by using the final value of the error covariance matrix will also be investigated. Our study found that the RLS algorithm is implicitly a weight decay method, where the weight decay effect is controlled by the initial value of the error covariance matrix; and that the inverse of the error covariance matrix is approximately equal to the Hessian matrix of the network being trained. We propose that neural networks are first trained by the RLS algorithm and then some unimportant weights are removed based on the approximate Hessian matrix. Simulation results show that our approach is an effective training and pruning method for neural networks.
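
    A compact sketch of the ideas summarized above for a linear (single-output) model: RLS training in which the initial error covariance P(0) = (1/δ)·I plays the role of a weight decay, followed by pruning that uses the final P as an approximate inverse Hessian (saliency roughly w_i^2 / (2·P_ii), in the spirit of optimal brain damage). This is an illustrative reduction of the paper's neural-network setting; all constants are chosen arbitrarily.

```python
import numpy as np

def rls_train(X, y, delta=1e-2, lam=1.0):
    """Recursive least squares for a linear model y ~ X w.

    P starts at (1/delta) * I; a larger delta behaves like stronger weight decay.
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    P = np.eye(n_features) / delta
    for x_t, y_t in zip(X, y):
        Px = P @ x_t
        gain = Px / (lam + x_t @ Px)
        w = w + gain * (y_t - x_t @ w)
        P = (P - np.outer(gain, Px)) / lam
    return w, P

def prune(w, P, n_remove):
    """Remove the weights with the smallest approximate saliency w_i^2 / (2 * P_ii),
    treating P as a rough proxy for the inverse Hessian."""
    saliency = w ** 2 / (2.0 * np.diag(P))
    drop = np.argsort(saliency)[:n_remove]
    w_pruned = w.copy()
    w_pruned[drop] = 0.0
    return w_pruned, drop

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    true_w = np.array([2.0, -1.5, 0.0, 0.0, 0.5, 0.0])
    y = X @ true_w + 0.05 * rng.normal(size=500)
    w, P = rls_train(X, y)
    print(np.round(w, 3))
    print(prune(w, P, n_remove=3)[1])   # indices of the pruned (unimportant) weights
```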

  1. A Modularity Degree Based Heuristic Community Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Dongming Chen

    2014-01-01

    Full Text Available A community in a complex network can be seen as a subgroup of nodes that are densely connected. Discovery of community structures is a basic problem of research and can be used in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse the nodes partitions in order to optimize a given quality function. These optimization function based methods share the same drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm gives competitive accuracy with previous modularity optimization methods, even though it has less computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally improves the resolution limit in community detection.
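
    The "modularity degree" measure used by MDBH is specific to the paper, but the standard Newman modularity it relates to, Q = sum over communities c of [ e_c/m − (d_c/(2m))^2 ] (e_c internal edges, d_c total degree, m total edges), can be written down directly. The sketch below only evaluates this standard Q for a given partition; it is not the MDBH algorithm.

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as
    an adjacency dict {node: set(neighbours)}."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0   # number of edges
    q = 0.0
    for nodes in communities:
        nodes = set(nodes)
        internal = sum(1 for u in nodes for v in adj[u] if v in nodes) / 2.0
        degree_sum = sum(len(adj[u]) for u in nodes)
        q += internal / m - (degree_sum / (2.0 * m)) ** 2
    return q

# two triangles joined by a single bridge edge
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))   # about 0.357
```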

  2. Manipulator Neural Network Control Based on Fuzzy Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Three-layer feed-forward neural networks are used to establish the inverse kinematics models of robot manipulators. A fuzzy genetic algorithm (FGA) based on linear scaling of the fitness value is presented to update the weights of the neural networks. To increase the search speed of the algorithm, the crossover probability and the mutation probability are adjusted through fuzzy control, and the fitness is modified by the linear scaling method in the FGA. Simulations show that the proposed method considerably improves the precision of the inverse kinematics solutions for robot manipulators, guarantees rapid global convergence, and overcomes the drawbacks of the SGA and the BP algorithm.

  3. Haplotyping a single triploid individual based on genetic algorithm.

    Science.gov (United States)

    Wu, Jingli; Chen, Xixi; Li, Xianchen

    2014-01-01

    The minimum error correction model is an important combinatorial model for haplotyping a single individual. In this article, triploid individual haplotype reconstruction problem is studied by using the model. A genetic algorithm based method GTIHR is presented for reconstructing the triploid individual haplotype. A novel coding method and an effectual hill-climbing operator are introduced for the GTIHR algorithm. This relatively short chromosome code can lead to a smaller solution space, which plays a positive role in speeding up the convergence process. The hill-climbing operator ensures algorithm GTIHR converge at a good solution quickly, and prevents premature convergence simultaneously. The experimental results prove that algorithm GTIHR can be implemented efficiently, and can get higher reconstruction rate than previous algorithms.

  4. Fingerprint Image Segmentation Algorithm Based on Contourlet Transform Technology

    Directory of Open Access Journals (Sweden)

    Guanghua Zhang

    2016-09-01

    Full Text Available This paper briefly introduces two classic algorithms for fingerprint image processing: the wavelet-domain soft-threshold denoising algorithm and the fingerprint image enhancement algorithm based on the Gabor function. The Contourlet transform has good texture sensitivity and can be used for segmentation of the fingerprint image. The method proposed in this paper obtains the final fingerprint segmentation image by applying a modified denoising to the high-frequency coefficients after Contourlet decomposition, highlighting the fingerprint ridge line through modulus maxima detection, and finally connecting broken fingerprint lines using a directional value filter. It can obtain richer direction information than methods based on the wavelet transform and Gabor function and can make the positioning of detailed features more accurate, although its ridges should be more coherent. Experiments have shown that this algorithm is clearly superior in fingerprint feature detection.

  5. A novel bit-quad-based Euler number computing algorithm.

    Science.gov (United States)

    Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao

    2015-01-01

    The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis on bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by use of the information obtained during processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrated that our method outperforms significantly conventional Euler number computing algorithms.
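
    For reference, the classical bit-quad (Gray) formula computes the 4-connectivity Euler number from counts of three 2x2 patterns, E = (n(Q1) − n(Q3) + 2·n(QD)) / 4, where Q1 quads contain one foreground pixel, Q3 quads contain three, and QD quads are the two diagonal patterns. The sketch below is this textbook baseline, not the optimized two-pattern algorithm of the paper.

```python
import numpy as np

def euler_number_4(img):
    """Euler number (4-connectivity) of a binary image via bit-quad counts:
    E = (Q1 - Q3 + 2 * QD) / 4."""
    img = np.pad(np.asarray(img, dtype=int), 1)
    q1 = q3 = qd = 0
    for y in range(img.shape[0] - 1):
        for x in range(img.shape[1] - 1):
            quad = img[y:y + 2, x:x + 2]
            s = quad.sum()
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad[0, 0] == quad[1, 1]:
                qd += 1        # diagonal pair: [[1,0],[0,1]] or [[0,1],[1,0]]
    return (q1 - q3 + 2 * qd) // 4

# a filled square with a single-pixel hole: 1 component - 1 hole -> 0
square = np.ones((5, 5), dtype=int)
square[2, 2] = 0
print(euler_number_4(square))              # 0
print(euler_number_4([[1, 0], [0, 1]]))    # two 4-connected components -> 2
```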

  6. THE PARALLEL RECURSIVE AP ADAPTIVE ALGORITHM BASED ON VOLTERRA SERIES

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Aiming at the nonlinear system identification problem, a parallel recursive affine projection (AP) adaptive algorithm for nonlinear systems based on the Volterra series is presented in this paper. The algorithm identifies the Volterra kernel of each order in parallel, recursively estimates the inverse of the autocorrelation matrix of the Volterra input of each order, and remarkably improves the convergence speed of the identification process compared with the NLMS and the conventional AP adaptive algorithm based on the Volterra series. Simulation results indicate that the proposed method is efficient.

  7. [Heart rate measurement algorithm based on artificial intelligence].

    Science.gov (United States)

    Chengxian, Cai; Wei, Wang

    2010-01-01

    Based on the heart rate measurement method that uses time-lapse images of the human cheek, this paper proposes a novel measurement algorithm based on artificial intelligence. The algorithm, combined with fuzzy logic theory, locates heart beat points by using a defined fuzzy membership function of each sampled point and then calculates the heart rate by counting the heart beat points in a certain time period. Experiments show that the algorithm is satisfactory in operability, accuracy, and robustness, and has practical value.

  8. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D

  9. Efficient Satellite Scheduling Based on Improved Vector Evaluated Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Tengyue Mao

    2012-03-01

    Full Text Available Satellite scheduling is a typical multi-peak, many-valley, nonlinear multi-objective optimization problem, and how to implement it effectively is crucial research in the space area. This paper discusses the performance of VEGA (Vector Evaluated Genetic Algorithm) based on a study of its basic principles, its realization, and test functions, and then improves the VEGA algorithm by introducing vector coding, new crossover and mutation operators, and new methods to assign fitness and to retain good individuals. As a result, the diversity and convergence of the improved VEGA algorithm are significantly enhanced, and it is applied to Earth-Mars orbit optimization. The paper also analyzes the results of the improved VEGA; the performance analysis and evaluation show that although VEGA has had a profound impact on multi-objective evolutionary research, Pareto-based multi-objective evolutionary algorithms seem to be a more effective way to obtain non-dominated solutions from the perspective of the diversity and convergence of the experimental results. Finally, based on the Visual C++ integrated development environment, the improved vector evaluated algorithm is implemented for satellite scheduling.

  10. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method in which the real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed to overcome the defect that the gradient descent method makes the algorithm easily fall into local optima during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. So, RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm converges rapidly to a solution that satisfies the constraint conditions.

  11. A new method for constructing total energy conservation algorithms

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Based on a basic rule for the research on numerical methods which requires that the properties of the original problem should be preserved as much as possible under discretization, a new method for constructing total energy conservation algorithms is presented. By this method, all kinds of implicit schemes with energy conservation laws including many classical conservation schemes can be constructed from a kind of special function. Also, the concrete criterion for judging total energy conservation schemes is given. Numerical tests show that these new schemes are effective.

  12. Network Intrusion Detection based on GMKL Algorithm

    Directory of Open Access Journals (Sweden)

    Li Yuxiang

    2013-06-01

    Full Text Available According to the 31st statistical report of the China Internet Network Information Center (CNNIC), by the end of December 2012 the number of Chinese netizens had reached 564 million, and the number of mobile Internet users had reached 420 million. While the network brings great convenience to people's lives, it also brings huge threats. By collecting and analyzing information in a computer system or network, we can detect behaviors that may damage the availability, integrity, and confidentiality of computer resources and treat them in time, which is of great significance for improving the operating environment of networks and network services. At present, neural networks, Support Vector Machines (SVM), Hidden Markov Models, fuzzy inference, and genetic algorithms have been introduced into research on network intrusion detection in an attempt to build a healthy and secure network operating environment. However, most of these algorithms are based on the whole sample and assume that the number of samples is infinite, whereas the data collected in the network intrusion field often cannot meet these requirements: it typically exhibits high dimensionality, variability, and small-sample characteristics, for which traditional machine learning methods can hardly obtain ideal results. In view of this, this paper proposes applying a Generalized Multi-Kernel Learning (GMKL) method to network intrusion detection. Generalized Multi-Kernel Learning can be applied to large-scale sample data with complex dimensionality that contains a large amount of heterogeneous information. The experimental results show that applying GMKL to network attack detection yields high classification precision and low error rates in practice.

  13. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|2 + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
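
    The repelling-charge and spring model described in the abstract can be written down directly; the naive O(|V|^2 + |E|) loop below is the baseline that the fast multipole treatment accelerates, and the constants and cooling schedule here are arbitrary choices, not those of the ExaFMM-based implementation.

```python
import numpy as np

def force_directed_layout(n_vertices, edges, iterations=200, seed=0):
    """Naive Fruchterman-Reingold-style layout: O(|V|^2) repulsion between all
    vertex pairs plus spring attraction along edges. (An FMM variant replaces
    the pairwise repulsion loop with a far-field approximation.)"""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_vertices, 2))
    k = 1.0 / np.sqrt(n_vertices)              # ideal edge length
    step = 0.1
    for _ in range(iterations):
        disp = np.zeros_like(pos)
        # repulsion: every pair of "charged particles"
        for i in range(n_vertices):
            delta = pos[i] - pos
            dist = np.maximum(np.linalg.norm(delta, axis=1), 1e-9)
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        # attraction: edges act as springs
        for u, v in edges:
            delta = pos[u] - pos[v]
            dist = max(np.linalg.norm(delta), 1e-9)
            pull = delta / dist * (dist * dist / k)
            disp[u] -= pull
            disp[v] += pull
        lengths = np.maximum(np.linalg.norm(disp, axis=1), 1e-9)
        pos += disp / lengths[:, None] * np.minimum(lengths, step)[:, None]
        step *= 0.99                            # simple cooling schedule
    return pos

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(np.round(force_directed_layout(6, edges), 2))
```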

  14. Community Detection Algorithm Based on Cross-Entropy Method

    Institute of Scientific and Technical Information of China (English)

    于海; 赵玉丽; 崔坤; 朱志良

    2015-01-01

    Community detection is a significant research topic in complex network theory and can be applied to searching for and discovering community structures. In this paper, the concept of cross-entropy from the field of signal processing is introduced and a community detection algorithm based on the cross-entropy method is proposed. The algorithm uses modularity as the quality function and employs the importance sampling of the cross-entropy method to speed up convergence, so that the efficiency and accuracy of community detection are improved simultaneously. Compared with the Girvan-Newman algorithm on computer-generated networks, the proposed algorithm achieves a higher NMI and a higher proportion of correctly assigned nodes. Simulation results on real-world networks further show that the proposed algorithm attains a higher modularity value than the Girvan-Newman algorithm and no lower than the extremal optimization algorithm, further verifying that it divides communities more accurately than the existing Girvan-Newman and extremal optimization algorithms.

  15. Joint Estimation of the Electric Vehicle Power Battery State of Charge Based on the Least Squares Method and the Kalman Filter Algorithm

    Directory of Open Access Journals (Sweden)

    Xiangwei Guo

    2016-02-01

    Full Text Available An estimation of the power battery state of charge (SOC) is related to the energy management, the battery cycle life and the use cost of electric vehicles. When a lithium-ion power battery is used in an electric vehicle, the SOC displays a very strong time-dependent nonlinearity under the influence of random factors, such as the working conditions and the environment. Hence, research on estimating the SOC of a power battery for an electric vehicle is of great theoretical significance and application value. In this paper, according to the dynamic response of the power battery terminal voltage during a discharging process, the second-order RC circuit is first used as the equivalent model of the power battery. Subsequently, on the basis of this model, the least squares method (LS) with a forgetting factor and the adaptive unscented Kalman filter (AUKF) algorithm are used jointly in the estimation of the power battery SOC. Simulation experiments show that the joint estimation algorithm proposed in this paper has higher precision and convergence of the initial value error than a single AUKF algorithm.

  16. Genetic Algorithm based PID controller for Frequency Regulation Ancillary services

    Directory of Open Access Journals (Sweden)

    Sandeep Bhongade

    2010-12-01

    Full Text Available In this paper, the parameters of a Proportional, Integral and Derivative (PID) controller for Automatic Generation Control (AGC) suitable for a restructured power system are tuned according to Genetic Algorithm (GA) based performance indices. The key idea of the proposed method is to use a fitness function based on the Area Control Error (ACE). The functioning of the proposed Genetic Algorithm based PID (GAPID) controller has been demonstrated on a 75-bus Indian power system network, and the results have been compared with those obtained by using the Least Square Minimization method.

  17. Ant colony algorithm based on genetic method for continuous optimization problem

    Institute of Scientific and Technical Information of China (English)

    朱经纬; 蒙陪生; 王乘

    2007-01-01

    A new algorithm is presented that uses an ant colony algorithm based on a genetic method (ACG) to solve the continuous optimization problem. Each component has a seed set; each seed in the set has a component value, trail information, and fitness. An ant chooses a seed from the seed set with a probability determined by the trail information and fitness of the seed. The genetic method is used to form new solutions from the solutions obtained by the ants, and the best solutions are selected to update the seeds in the sets and the trail information of the seeds. In updating the trail information, a diffusion function is used to achieve the diffusion of trail information. The new algorithm is tested with 8 different benchmark functions.

  18. Overlap Removal Methods for Data Projection Algorithms

    OpenAIRE

    Spicker, Marc

    2011-01-01

    Projection algorithms map high dimensional data points to lower dimensions. However, when adding arbitrary shaped objects as representatives for these data points, they may intersect. The positions of these representatives have to be modified in order to remove existing overlaps. There are multiple algorithms designed to deal with this layout adjustment problem, which lead to very different results. These adjustment strategies are evaluated according to different measures for comparison: euclide...

  19. SIMULATED ANNEALING BASED POLYNOMIAL TIME QOS ROUTING ALGORITHM FOR MANETS

    Institute of Scientific and Technical Information of China (English)

    Liu Lianggui; Feng Guangzeng

    2006-01-01

    Multi-constrained Quality-of-Service (QoS) routing is a big challenge for Mobile Ad hoc Networks (MANETs), where the topology may change constantly. In this paper a novel QoS Routing Algorithm based on Simulated Annealing (SA_RA) is proposed. This algorithm first uses an energy function to translate multiple QoS weights into a single mixed metric and then seeks to find a feasible path by simulated annealing. The paper outlines the simulated annealing algorithm and analyzes the problems met when applying it to QoS Routing (QoSR) in MANETs. Theoretical analysis and experimental results demonstrate that the proposed method is an effective approximation algorithm, showing better performance than the other pertinent algorithm in seeking the (approximate) optimal configuration within polynomial time.

  20. Distribution network planning algorithm based on Hopfield neural network

    Institute of Scientific and Technical Information of China (English)

    GAO Wei-xin; LUO Xian-jue

    2005-01-01

    This paper presents a new algorithm based on a Hopfield neural network to find the optimal solution for an electric distribution network. The algorithm transforms the distribution power network-planning problem into a directed graph-planning problem. The Hopfield neural network is designed to decide the in-degree of each node and is applied in combination with an energy function. The new algorithm does not need to code city streets or normalize data, so the program is easier to realize. A case study applying the method to a district of 29 streets showed that an optimal solution for the planning of such a power system could be obtained in only 26 iterations. The energy function and algorithm developed in this work have the following advantages over many existing algorithms for electric distribution network planning: fast convergence and no need to code all possible lines.

  1. Image fusion based on expectation maximization algorithm and steerable pyramid

    Institute of Scientific and Technical Information of China (English)

    Gang Liu(刘刚); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Jianxun Li(李建勋); Zhenhua Li(李振华); Henry Leung

    2004-01-01

    In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and steerable pyramid is proposed. The registered images are first decomposed by using steerable pyramid.The EM algorithm is used to fuse the image components in the low frequency band. The selection method involving the informative importance measure is applied to those in the high frequency band. The final fused image is then computed by taking the inverse transform on the composite coefficient representations.Experimental results show that the proposed method outperforms conventional image fusion methods.

  2. Secure OFDM communications based on hashing algorithms

    Science.gov (United States)

    Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele

    2007-10-01

    In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.

  3. A novel image encryption algorithm based on DNA subsequence operation.

    Science.gov (United States)

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply uses the idea of DNA subsequence operations (such as the elongation operation, truncation operation, deletion operation, etc.) combined with the logistic chaotic map to scramble the locations and values of the pixels of the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large secret key space and strong sensitivity to the secret key, and can resist exhaustive attack and statistical attack.
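
    The DNA-subsequence operations themselves are specific to the paper, but the logistic-map ingredient they are combined with is standard: x_{n+1} = mu * x_n * (1 - x_n) with mu close to 4 generates a chaotic, key-dependent sequence that can permute pixel positions and mask pixel values. The sketch below shows only that chaotic scrambling step, not the DNA encoding, and the key values are arbitrary.

```python
import numpy as np

def logistic_sequence(x0, mu, n, burn_in=200):
    """Chaotic sequence x_{k+1} = mu * x_k * (1 - x_k); (x0, mu) act as the key."""
    x = x0
    out = np.empty(n)
    for k in range(burn_in + n):
        x = mu * x * (1.0 - x)
        if k >= burn_in:
            out[k - burn_in] = x
    return out

def scramble(img, x0=0.3456, mu=3.99):
    """Permute pixel positions and XOR pixel values with a key-dependent stream."""
    flat = np.asarray(img, dtype=np.uint8).ravel()
    chaos = logistic_sequence(x0, mu, flat.size)
    perm = np.argsort(chaos)                      # location scrambling
    mask = (chaos * 255).astype(np.uint8)         # value scrambling
    return flat[perm] ^ mask

def unscramble(cipher, x0=0.3456, mu=3.99):
    chaos = logistic_sequence(x0, mu, cipher.size)
    perm = np.argsort(chaos)
    mask = (chaos * 255).astype(np.uint8)
    flat = np.empty_like(cipher)
    flat[perm] = cipher ^ mask
    return flat

img = np.arange(16, dtype=np.uint8)
cipher = scramble(img)
print(np.array_equal(unscramble(cipher), img))    # True
```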

  4. A motion retargeting algorithm based on model simplification

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new motion retargeting algorithm is presented, which adapts the motion capture data to a new character. To make the resulting motion realistic, the physically-based optimization method is adopted. However, the optimization process is difficult to converge to the optimal value because of high complexity of the physical human model. In order to address this problem, an appropriate simplified model automatically determined by a motion analysis technique is utilized, and then motion retargeting with this simplified model as an intermediate agent is implemented. The entire motion retargeting algorithm involves three steps of nonlinearly constrained optimization: forward retargeting, motion scaling and inverse retargeting. Experimental results show the validity of this algorithm.

  5. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2012-01-01

    Full Text Available We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply uses the idea of DNA subsequence operations (such as the elongation operation, truncation operation, deletion operation, etc.) combined with the logistic chaotic map to scramble the locations and values of the pixels of the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large secret key space and strong sensitivity to the secret key, and can resist exhaustive attack and statistical attack.

  6. A Novel Heuristic Algorithm Based on Clark and Wright Algorithm for Green Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Mehdi Alinaghian

    2015-08-01

    Full Text Available A significant portion of the Gross Domestic Product (GDP) of any country belongs to the transportation system, and transportation equipment is a major consumer of oil products. Many attempts have been made to cut down the greenhouse gas (GHG) emissions of vehicles. In this paper a novel heuristic algorithm based on the Clark and Wright algorithm, called Green Clark and Wright (GCW), is presented for the Vehicle Routing Problem with respect to fuel consumption. The objective function accounts for fuel consumption, drivers, and vehicle usage. Compared with exact-method solutions for small-sized problems and with Differential Evolution (DE) algorithm solutions for large-scale problems, the results show the efficient performance of the proposed GCW algorithm.
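
    For orientation, the classical Clark and Wright savings step that GCW builds on merges the two routes whose join gives the largest saving s(i, j) = d(0, i) + d(0, j) − d(i, j); in GCW the saving would presumably be expressed in fuel terms rather than distance. The distance-based sketch below is the classical capacitated baseline only, not the authors' fuel-consumption objective, and the instance data are made up.

```python
import math
from itertools import combinations

def clarke_wright(depot, customers, demand, capacity):
    """Classical parallel savings algorithm for the capacitated VRP.

    depot/customers are (x, y) points; demand maps customer index -> load.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d0 = {i: dist(depot, p) for i, p in customers.items()}
    routes = {i: [i] for i in customers}          # start with one route per customer
    load = {i: demand[i] for i in customers}
    savings = sorted(
        ((d0[i] + d0[j] - dist(customers[i], customers[j]), i, j)
         for i, j in combinations(customers, 2)),
        reverse=True)
    for s, i, j in savings:
        if s <= 0:
            break
        ri = next(r for r in routes if i in routes[r])
        rj = next(r for r in routes if j in routes[r])
        # merge only if i and j sit at the outer ends of two different routes
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        if routes[ri][-1] == i and routes[rj][0] == j:
            routes[ri] += routes.pop(rj)
            load[ri] += load.pop(rj)
        elif routes[rj][-1] == j and routes[ri][0] == i:
            routes[rj] += routes.pop(ri)
            load[rj] += load.pop(ri)
    return list(routes.values())

customers = {1: (2, 3), 2: (3, 3), 3: (-2, 1), 4: (-3, 2), 5: (0, -4)}
demand = {1: 3, 2: 4, 3: 2, 4: 3, 5: 5}
print(clarke_wright((0, 0), customers, demand, capacity=8))
```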

  7. Knowledge Automatic Indexing Based on Concept Lexicon and Segmentation Algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG Lan-cheng; JIANG Dan; LE Jia-jin

    2005-01-01

    This paper is based on two existing theories for the automatic indexing of thematic knowledge concepts. A prohibited-word table with position information has been designed, and an improved Maximum Matching-Minimum Backtracking method has been researched. Moreover, an improved indexing algorithm and application technology based on rules and a thematic concept word table have been studied.

  8. Graph Drawing with Algorithm Engineering Methods (Dagstuhl Seminar 11191)

    OpenAIRE

    Demetrescu, Camil; Kaufmann, Michael; Kobourov, Stephen; Mutzel, Petra

    2011-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 11191 ``Graph Drawing with Algorithm Engineering Methods''. We summarize the talks, open problems, and working group discussions.

  9. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    Science.gov (United States)

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and the genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that promising substructures or partial solutions can be generated by using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for conducting simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method.

  10. Second Attribute Algorithm Based on Tree Expression

    Institute of Scientific and Technical Information of China (English)

    Su-Qing Han; Jue Wang

    2006-01-01

    One view of finding a personalized solution of reduct in an information system is grounded on the viewpoint that attribute order can serve as a kind of semantic representation of user requirements. Thus the problem of finding personalized solutions can be transformed into computing the reduct on an attribute order. The second attribute theorem describes the relationship between the set of attribute orders and the set of reducts, and can be used to transform the problem of searching solutions to meet user requirements into the problem of modifying reduct based on a given attribute order. An algorithm is implied based on the second attribute theorem, with computation on the discernibility matrix. Its time complexity is O(n^2 × m) (n is the number of the objects and m the number of the attributes of an information system). This paper presents another effective second attribute algorithm for facilitating the use of the second attribute theorem, with computation on the tree expression of an information system. The time complexity of the new algorithm is linear in n. This algorithm is proved to be equivalent to the algorithm on the discernibility matrix.

  11. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  12. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  13. A robust DCT domain watermarking algorithm based on chaos system

    Science.gov (United States)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images whose copyright is involved in transactions. There are many kinds of digital watermarking algorithms; however, existing algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations, and geometric transformations. Furthermore, anyone who has no knowledge of the key cannot find the position where the watermark is embedded; as a result, the watermark is not easy to modify, and the scheme is secure and robust.

  14. Efficient Iris Recognition Algorithm Using Method of Moments

    Directory of Open Access Journals (Sweden)

    Bimi Jain

    2012-09-01

    Full Text Available This paper presents an efficient biometric algorithm for iris recognition using the Fast Fourier Transform and moments. A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. The Fast Fourier Transform converts the image from the spatial domain to the frequency domain and also filters noise in the image, giving more precise information. Moments are area descriptors used to characterize the shape and size of the image; the moment values are invariant to the scale and orientation of the object under study and insensitive to rotation and scale transformations. Finally, the Euclidean distance formula is used for image matching. Experiments on the CASIA database demonstrate an efficient method for biometrics; as per the experimental results, the algorithm achieves a high correct recognition rate.

  15. Efficient Iris Recognition Algorithm Using Method of Moments

    Directory of Open Access Journals (Sweden)

    Bimi Jain

    2012-10-01

    Full Text Available This paper presents an efficient biometric algorithm for iris recognition using Fast Fourier Transform and moments. Biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. The Fast Fourier Transform converts image from spatial domain to frequency domain and also filters noise in the image giving more precise information. Moments are area descriptors used to characterize the shape and size of the image. The moments values are invariant to scale and orientation of the object under study, also insensitive to rotation and scale transformation. At last Euclidean distance formula is used for image matching. The CASIA database clearly demonstrates an efficient method for Biometrics. As per experimental result, the algorithm is achieving higher Correct Recognition Rate.

  16. Face detection based on multiple kernel learning algorithm

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun

    2016-09-01

    Face detection is important for face localization in face or facial expression recognition. The basic idea is to determine whether there is a face in an image and, if so, its location and size. It can be seen as a binary classification problem, which can be well solved by a support vector machine (SVM). Although SVM has strong model generalization ability, it has some limitations, which are analyzed in depth in the paper. To address them, we study the principle and characteristics of Multiple Kernel Learning (MKL) and propose an MKL-based face detection algorithm. We describe the proposed algorithm from the interdisciplinary perspective of machine learning and image processing. After analyzing the limitation of describing a face with a single feature, we apply several features. To fuse them well, we try different kernel functions on different features; the MKL method determines the weight of each single kernel function. Thus, we obtain the face detection model, which is the kernel of the proposed method. Experiments on a public data set and real-life face images are performed, and the performance of the proposed algorithm is compared with the single kernel-single feature based algorithm and the multiple kernels-single feature based algorithm. The effectiveness of the proposed algorithm is illustrated. Keywords: face detection, feature fusion, SVM, MKL

  17. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total...

  18. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed b

  19. An SQP Algorithm for Recourse-based Stochastic Nonlinear Programming

    Directory of Open Access Journals (Sweden)

    Xinshun Ma

    2016-05-01

    Full Text Available The stochastic nonlinear programming problem with complete recourse and nonlinear constraints is studied in this paper. We present a sequential quadratic programming method for solving the problem based on the certainty extended nonlinear model. This algorithm is obtained by combining the active set method and the filter method. The convergence of the method is established under some standard assumptions. Moreover, a practical design is presented and numerical results are provided.

  20. THE DISCRETE TIME, COST AND QUALITY TRADE-OFF PROBLEM IN PROJECT SCHEDULING: AN EFFICIENT SOLUTION METHOD BASED ON CELLDE ALGORITHM

    Directory of Open Access Journals (Sweden)

    Gh. Assadipour

    2012-01-01

    Full Text Available The trade-off between time, cost, and quality is one of the important problems of project management. This problem assumes that all project activities can be executed in different modes of cost, time, and quality. Thus a manager should select each activity's mode such that the project can meet the deadline with the minimum possible cost and the maximum achievable quality. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimisation method. The proposed algorithm provides project managers with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. Three metrics are employed for evaluating the performance of the algorithm, appraising the diversity and convergence of the achieved Pareto fronts. Finally a comparison is made between CellDE and another meta-heuristic available in the literature. The results show the superiority of CellDE.

  1. Analog Group Delay Equalizers Design Based on Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    M. Laipert

    2006-04-01

    Full Text Available This paper deals with a design method for the analog all-pass filter designated for equalization of the group delay frequency response of an analog filter. The method is based on the use of an evolutionary algorithm, the Differential Evolution algorithm in particular. We are able to design equalizers that achieve an equal-ripple group delay frequency response in the pass-band of the low-pass filter. The procedure works automatically without an input estimate. The method is presented by solving practical examples.

  2. Algorithms for Quantum Branching Programs Based on Fingerprinting

    CERN Document Server

    Ablayev, Farid; 10.4204/EPTCS.9.1

    2009-01-01

    In the paper we develop a method for constructing quantum algorithms for computing Boolean functions by quantum ordered read-once branching programs (quantum OBDDs). Our method is based on the fingerprinting technique and the representation of Boolean functions by their characteristic polynomials. We use circuit notation to present the resulting branching-program algorithms. For several known functions our approach provides optimal QOBDDs; namely, we consider functions such as Equality, Palindrome, and Permutation Matrix Test. We also propose a generalization of our method and apply it to the Boolean variant of the Hidden Subgroup Problem.

  3. Web mining based on chaotic social evolutionary programming algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Since the K-means clustering algorithm usually ends in a local optimum and rarely attains the global optimum, a new web clustering method based on the chaotic social evolutionary programming (CSEP) algorithm is presented. In this method, a cognitive agent inherits a clustering paradigm and acquires a chaotic mutation operator in the betrayal step. As proven in the experiment, this method not only effectively increases web clustering efficiency, but also practically improves the precision of web clustering.

  4. QRS Detection Based on an Advanced Multilevel Algorithm

    Directory of Open Access Journals (Sweden)

    Wissam Jenkal

    2016-01-01

    Full Text Available This paper presents an advanced multilevel algorithm used for the QRS complex detection. This method is based on three levels. The first permits the extraction of higher peaks using an adaptive thresholding technique. The second allows the QRS region detection. The last level permits the detection of Q, R and S waves. The proposed algorithm shows interesting results compared to recently published methods. The perspective of this work is the implementation of this method on an embedded system for a real time ECG monitoring system.

  5. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...

  6. Image Denoising Method Based on a Modified Dictionary Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    谢勤岚; 丁晶晶

    2014-01-01

    Two improvements of the MOD and K-SVD dictionary learning algorithms are proposed by modifying the two main stages of these algorithms: dictionary update and sparse coding. A new dictionary-update stage finds both the dictionary and the representations while keeping the supports intact. In the sparse-coding stage, the known representations are corrected and updated according to the partial coefficients produced by the previous pursuit. These two ideas are tested in practice, showing how they lead to faster training and better-quality output. Experimental results prove the effectiveness of the proposed method.

  7. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.

    Science.gov (United States)

    Ali, Ahmed F; Tawhid, Mohamed A

    2016-01-01

    The cuckoo search algorithm is a promising population-based metaheuristic that has been applied to many real-life problems. In this paper, we propose a new cuckoo search algorithm that combines the cuckoo search algorithm with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for a number of iterations; the best obtained solution is then passed to the Nelder-Mead algorithm as an intensification process in order to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of the cuckoo search algorithm and the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems, and compare it against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time.
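
    A minimal sketch of the hybrid idea described above, assuming a simplified cuckoo-search loop with heavy-tailed random steps followed by a Nelder-Mead refinement via SciPy; the objective, population sizes and step scaling are illustrative stand-ins, not the authors' HCSNM implementation.

```python
# Hypothetical sketch: cuckoo-search exploration followed by Nelder-Mead refinement.
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return float(np.sum(x ** 2))

def cuckoo_search(f, dim=5, n_nests=15, iters=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, size=(n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for _ in range(iters):
        # Generate new solutions with heavy-tailed (Levy-like) random steps.
        steps = rng.standard_cauchy(size=nests.shape) * 0.01
        candidates = nests + steps * (nests - nests[np.argmin(fitness)])
        cand_fit = np.array([f(x) for x in candidates])
        improved = cand_fit < fitness
        nests[improved], fitness[improved] = candidates[improved], cand_fit[improved]
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        worst = np.argsort(fitness)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-5, 5, size=(len(worst), dim))
        fitness[worst] = np.array([f(x) for x in nests[worst]])
    return nests[np.argmin(fitness)]

best = cuckoo_search(sphere)                             # global exploration
refined = minimize(sphere, best, method="Nelder-Mead")   # local intensification
print(refined.x, refined.fun)
```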

  8. Relevance Feedback Algorithm Based on Collaborative Filtering in Image Retrieval

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2010-12-01

    Full Text Available Content-based image retrieval is a very dynamic field of study, and how to improve retrieval speed and retrieval accuracy is a hot issue within it. Retrieval performance can be improved by applying relevance feedback to image retrieval and introducing human participation into the retrieval process. However, many existing image retrieval methods have the disadvantage that relevance feedback information is not fully saved and used, and their accuracy and flexibility are relatively poor. On this basis, collaborative filtering technology is combined with relevance feedback in this study, and an improved relevance feedback algorithm based on collaborative filtering is proposed. In the method, collaborative filtering is used not only to predict the semantic relevance between images in the database and the retrieval samples, but also to analyze feedback log files in image retrieval, which lets the image retrieval system make full use of the historical relevance feedback data and further improves the efficiency of feedback. The improved algorithm has been tested on a content-based image retrieval database, and its performance has been analyzed and compared with existing algorithms. The experimental results show that, compared with traditional feedback algorithms, this method can obviously improve the efficiency of relevance feedback and effectively increase the recall and precision of image retrieval.

  9. Optimal Hops-Based Adaptive Clustering Algorithm

    Science.gov (United States)

    Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong

    This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the clusters form so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load, and optimal distance theory is applied to discover the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs the life of the network, improves the utilization rate, and transmits more data because of the energy balance.

  10. Numerical Algorithms Based on Biorthogonal Wavelets

    Science.gov (United States)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  11. A new automatic alignment technology for single mode fiber-waveguide based on improved genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yu; CHEN Zhuang-zhuang; LI Ya-juan; DUAN Jian

    2009-01-01

    A novel automatic alignment algorithm for single-mode fiber-waveguide coupling based on an improved genetic algorithm is proposed. The genetic search uses a dynamic crossover operator and an adaptive mutation operator to overcome the premature convergence of the simple genetic algorithm. The improved genetic algorithm is combined with the hill-climbing method and a pattern search algorithm to overcome the low precision of the simple genetic algorithm in the later stage of the search. The simulation results indicate that the improved genetic algorithm can raise the alignment precision and reach a coupling loss of 0.01 dB while the platform moves through about 207 spatial points on average.

  12. A face recognition algorithm based on thermal and visible data

    Science.gov (United States)

    Sochenkov, Ilya; Tihonkih, Dmitrii; Vokhmintcev, Aleksandr; Melnikov, Andrey; Makovetskii, Artyom

    2016-09-01

    In this work we present an algorithm that fuses thermal infrared and visible imagery to identify persons. The proposed face recognition method contains several components, in particular rigid body image registration. The rigid registration is achieved by a modified variant of the iterative closest point (ICP) algorithm. We consider an affine transformation in three-dimensional space that preserves the angles between lines. The matching algorithm is inspired by recent results in the neurophysiology of vision. We also consider the ICP error-metric minimization stage for the case of an arbitrary affine transformation. Our face recognition algorithm also uses localized-contouring algorithms to segment the subject's face, and thermal matching based on partial least squares discriminant analysis. Thermal imagery face recognition methods are advantageous when there is no control over illumination or for detecting disguised faces. The proposed algorithm leads to good matching accuracies for different person recognition scenarios (near infrared, far infrared, thermal infrared, viewed sketch). The performance of the proposed face recognition algorithm in real indoor environments is presented and discussed.

  13. A multi-sequential number-theoretic optimization algorithm using clustering methods

    Institute of Scientific and Technical Information of China (English)

    XU Qing-song; LIANG Yi-zeng; HOU Zhen-ting

    2005-01-01

    A multi-sequential number-theoretic optimization method based on clustering was developed and applied to the optimization of functions with many local extrema. Details of the procedure used to generate the clusters and the sequential schedules are given. The algorithm was assessed by comparing its performance with a generalized simulated annealing algorithm on a difficult instructive example and a D-optimum experimental design problem. The two examples show the presented algorithm to be more effective and reliable.

  14. A Novel Algorithm Based on 3D-MUSIC Algorithm for Localizing Near-Field Source

    Institute of Scientific and Technical Information of China (English)

    SHAN Zhi-yong; ZHOU Xi-lang; PEN Gen-jiang

    2005-01-01

    A novel 3D-MUSIC algorithm based on the classical 3D-MUSIC algorithm for the location of near-field sources is presented. Under the far-field assumption for the actual near-field, two algebraic relations between the location parameters of the actual near-field sources and those of the far-field ones are derived. With Fourier transformation and polynomial-rooting methods, the elevation and azimuth of the far-field are obtained, the tracking paths can be developed, and the location parameters of the near-field source can be determined; more accurate results are then estimated using an optimization method. The computer simulation results prove that the algorithm for locating near-field sources is more accurate, effective and suitable for real-time applications.

  15. Web Based Genetic Algorithm Using Data Mining

    OpenAIRE

    Ashiqur Rahman; Asaduzzaman Noman; Md. Ashraful Islam; Al-Amin Gaji

    2016-01-01

    This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in an education web-based system. A combination of multiple classifiers leads to a significant improvement in classification performance. Through weighting the feature vectors using a Genetic Algorithm we can optimize the prediction accuracy and get a marked improvement over raw classification. It further shows that when the number of features is few; fea...

  16. AN OPTIMIZATION ALGORITHM BASED ON BACTERIA BEHAVIOR

    Directory of Open Access Journals (Sweden)

    Ricardo Contreras

    2014-09-01

    Full Text Available Paradigms based on competition have proven useful for solving difficult problems. In this paper we present a new approach for solving hard problems using a collaborative philosophy. A collaborative philosophy can produce paradigms as interesting as those found in algorithms based on a competitive philosophy. Furthermore, we show that the performance on problems with combinatorial explosion is comparable to the performance obtained using a classic evolutionary approach.

  17. Improved MPPT Algorithm Based on the Disturbance Observation Method

    Institute of Scientific and Technical Information of China (English)

    谢友慧; 戴永涛; 刘肇荣; 袁强

    2016-01-01

    The output power of photovoltaic cells (arrays) depends on irradiance and temperature, so reaching the maximum output power requires tracking the maximum power point (MPPT). This paper analyses and discusses the advantages and disadvantages of two commonly used control methods and proposes an improved algorithm that combines the constant voltage method with the fixed-step disturbance observation (perturb-and-observe) method. The simulation results show that the new method can track the maximum power point quickly and accurately.
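
    A minimal sketch of the fixed-step disturbance observation (perturb-and-observe) logic referred to above, with a toy photovoltaic power curve standing in for a real panel; the model, step size and variable names are assumptions for illustration only.

```python
# Hypothetical sketch of fixed-step perturb-and-observe (P&O) MPPT.
def pv_power(v):
    """Toy concave PV power curve standing in for a real panel: peak 240 W at 30 V."""
    return max(0.0, 240.0 - 0.6 * (v - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iterations=200):
    p_prev = pv_power(v)
    direction = +1.0
    for _ in range(iterations):
        v += direction * step          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                 # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"estimated MPP: V = {v_mpp:.2f}, P = {p_mpp:.2f}")
```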

  18. Eigenvalue based Spectrum Sensing Algorithms for Cognitive Radio

    CERN Document Server

    Zeng, Yonghong

    2008-01-01

    Spectrum sensing is a fundamental component of a cognitive radio. In this paper, we propose new sensing methods based on the eigenvalues of the covariance matrix of the signals received at the secondary users. In particular, two sensing algorithms are suggested: one is based on the ratio of the maximum eigenvalue to the minimum eigenvalue; the other is based on the ratio of the average eigenvalue to the minimum eigenvalue. Using recent results from random matrix theory (RMT), we quantify the distributions of these ratios and derive the probabilities of false alarm and detection for the proposed algorithms. We also find the thresholds of the methods for a given probability of false alarm. The proposed methods overcome the noise uncertainty problem, and can even perform better than the ideal energy detector when the signals to be detected are highly correlated. The methods can be used for various signal detection applications without requiring knowledge of the signal, the channel or the noise power. Simulations based ...
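
    A minimal sketch of the maximum-to-minimum eigenvalue ratio detector described above, assuming a small NumPy simulation; the smoothing factor and the threshold are illustrative choices, not the RMT-derived values of the paper.

```python
# Hypothetical sketch of the maximum/minimum-eigenvalue ratio detector.
import numpy as np

def mme_statistic(samples, smoothing=10):
    """samples: (n_receivers, n_samples) array of received signals."""
    n_rx, n = samples.shape
    # Stack delayed copies to build the smoothed sample covariance matrix.
    rows = [samples[:, i:n - smoothing + i + 1] for i in range(smoothing)]
    x = np.vstack(rows)
    cov = x @ x.conj().T / x.shape[1]
    eig = np.linalg.eigvalsh(cov)          # ascending eigenvalues
    return eig[-1] / eig[0]                # lambda_max / lambda_min

rng = np.random.default_rng(1)
noise_only = rng.normal(size=(2, 4000))
signal = np.repeat(rng.normal(size=(1, 4000)), 2, axis=0) + 0.5 * rng.normal(size=(2, 4000))
threshold = 1.8                            # illustrative; the paper derives it from RMT
print("H0 ratio:", mme_statistic(noise_only), "H1 ratio:", mme_statistic(signal))
print("detected:", mme_statistic(signal) > threshold)
```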

  19. A Reversible Image Steganographic Algorithm Based on Slantlet Transform

    Directory of Open Access Journals (Sweden)

    Sushil Kumar

    2013-07-01

    Full Text Available In this paper we present a reversible image steganography technique based on the Slantlet transform (SLT) and using the advanced encryption standard (AES) method. The proposed method first encodes the message using two source codes, viz., Huffman codes and a self-synchronizing variable length code known as T-code. Next, the encoded binary string is encrypted using an improved AES method. The encrypted data so obtained is embedded in the middle and high frequency sub-bands, obtained by applying 2 levels of SLT to the cover image, using a thresholding method. The proposed algorithm is compared with existing techniques based on the wavelet transform. The experimental results show that the proposed algorithm can extract the hidden message and recover the original cover image with low distortion. The proposed algorithm offers acceptable imperceptibility and security (two-layer security) and provides robustness against Gaussian and salt-and-pepper noise attacks.

  20. Genetic Algorithm-based Optimized Design Method for Casing String

    Institute of Scientific and Technical Information of China (English)

    何英明; 王瑞和; 雷杨; 臧艳彬; 何英君

    2012-01-01

    In light of the changeable external loads on casing under complex geologic conditions, an optimized casing string design method based on the genetic algorithm is established. Genetic algorithm theory is applied to determine the chromosome coding mode, the generation of the initial population, the chromosome evaluation function and the genetic operations, and an optimized casing string design model based on the genetic algorithm is thus built. This overcomes the defects of the traditional casing string strength design method under complex geologic conditions, namely a complex calculation process and the neglect of casing cost and risk factors. Design verification for a casing string in the Puguang Gasfield shows that the method is accurate and reliable; it improves the reliability of the casing string and helps to reduce costs. The present study offers a method for optimizing casing string design under complex geologic conditions.

  1. Application of genetic algorithm to hexagon-based motion estimation.

    Science.gov (United States)

    Kung, Chih-Ming; Cheng, Wan-Shu; Jeng, Jyh-Horng

    2014-01-01

    With the progress of science and technology, the development of networks, and the advent of HDTV, the demands on audio and video become more and more important, and video coding technology is the key to meeting these requirements. Motion estimation, which removes the redundancy between video frames, plays an important role in video coding; therefore, many experts devote themselves to the issue. Existing fast algorithms rely on the assumption that the matching error decreases monotonically as the searched point moves closer to the global optimum, but the genetic algorithm is not fundamentally limited by this restriction. This property helps the proposed scheme reach a mean square error closer to that of the full search algorithm than those fast algorithms do. The aim of this paper is to propose a new technique that combines the hexagon-based search algorithm, which is faster than diamond search, with the genetic algorithm. Experiments are performed to demonstrate the encoding speed and accuracy of the hexagon-based search pattern method and the proposed method.

  2. Multilevel Thresholding Segmentation Based on the Shuffled Frog Leaping Algorithm and Otsu Method

    Institute of Scientific and Technical Information of China (English)

    康杰红; 马苗

    2012-01-01

    In order to determine the optimal thresholds in multilevel image segmentation quickly and accurately, this paper proposes a multilevel thresholding image segmentation method that combines the shuffled frog leaping (SFL) algorithm with the Otsu method. The method treats the group of thresholds as a group of potential solutions of a multivariable combinatorial optimization problem and uses the multilevel Otsu criterion as the fitness function for the SFL algorithm. The search mechanism of the SFL algorithm, which combines global search over the whole swarm with local searches within subswarms, is then used to solve for the multiple thresholds in parallel. Experimental results show that, compared with the multilevel thresholding method based on the artificial fish swarm (AFS) algorithm, the proposed method obviously improves the speed and quality of image segmentation.
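
    A minimal sketch of the multilevel Otsu between-class variance that serves as the fitness function above; a plain random search stands in for the shuffled frog leaping optimizer, and the data and parameters are illustrative.

```python
# Hypothetical sketch: multilevel Otsu between-class variance as a fitness function.
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """hist: 256-bin gray-level histogram; thresholds: cut points in (0, 256)."""
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    edges = [0] + sorted(thresholds) + [256]
    var_between = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var_between += w * (mu - mu_total) ** 2
    return var_between

# Random search stands in here for the shuffled frog leaping optimizer.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128))
hist, _ = np.histogram(image, bins=256, range=(0, 256))
candidates = [sorted(rng.choice(255, size=2, replace=False) + 1) for _ in range(500)]
best = max(candidates, key=lambda t: otsu_between_class_variance(hist, t))
print("best thresholds:", best)
```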

  3. Majorization-minimization algorithms for wavelet-based image restoration.

    Science.gov (United States)

    Figueiredo, Mário A T; Bioucas-Dias, José M; Nowak, Robert D

    2007-12-01

    Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms, reveals their relative merits for different standard types of scenarios.

  4. Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm

    Directory of Open Access Journals (Sweden)

    H. Veladi

    2014-01-01

    Full Text Available A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm.

  5. Cloud-based Evolutionary Algorithms: An algorithmic study

    CERN Document Server

    Merelo, Juan-J; Mora, Antonio M; Castillo, Pedro; Romero, Gustavo; Laredo, JLJ

    2011-01-01

    After a proof of concept using Dropbox(tm), a free storage and synchronization service, showed that an evolutionary algorithm using several dissimilar computers connected via WiFi or Ethernet had a good scaling behavior in terms of evaluations per second, it remains to be proved whether that effect also translates to the algorithmic performance of the algorithm. In this paper we will check several different, and difficult, problems, and see what effects the automatic load-balancing and asynchrony have on the speed of resolution of problems.

  6. New Method for Voltage Flicker Parameter Identification Based on the Local Mean Decomposition Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡晓波; 杜娟丽

    2016-01-01

    To identify voltage flicker parameters accurately, and to address the large fluctuation of the amplitude-modulation wave and inter-harmonic parameters obtained by conventional algorithms, the local mean decomposition (LMD) algorithm is applied to voltage flicker parameter identification in power systems for the first time. A typical voltage flicker example is selected, and the LMD algorithm is compared with the HHT algorithm in simulation. The results show that because LMD obtains the PF components by dividing by the envelope function, fewer sifting iterations are needed and the end effect is smaller, so the identified flicker parameters remain essentially constant in the steady state. The method is simple, fast and accurate, which demonstrates the feasibility and accuracy of the LMD algorithm.

  7. Genetic algorithm-based evaluation of spatial straightness error

    Institute of Scientific and Technical Information of China (English)

    崔长彩; 车仁生; 黄庆成; 叶东; 陈刚

    2003-01-01

    A genetic algorithm (GA)-based approach is proposed to evaluate the straightness error of spatial lines. According to the mathematical definition of spatial straightness, a verification model is established for the straightness error, the fitness function of the GA is given, and the implementation techniques of the proposed algorithm are discussed in detail. The implementation techniques include real-number encoding, adaptive variable-range choosing, roulette-wheel and elitist combination selection strategies, heuristic crossover and single-point mutation schemes, etc. An application example is cited to validate the proposed algorithm. The computation results show that the GA-based approach is a superior nonlinear parallel optimization method: the performance of the evolving population can be improved through genetic operations such as reproduction, crossover and mutation until the optimum goal of the minimum zone solution is obtained. The quality of the solution is better and the efficiency of computation is higher than with other methods.

  8. The optimal time-frequency atom search based on a modified ant colony algorithm

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-feng; LI Yan-jun; YU Rui-xing; ZHANG Ke

    2008-01-01

    In this paper, a new optimal time-frequency atom search method based on a modified ant colony algorithm is proposed to improve the precision of traditional methods. First, the discretization formula for finite-length time-frequency atoms is derived in detail. Second, a modified ant colony algorithm in continuous space is proposed. Finally, the optimal time-frequency atom search algorithm based on the modified ant colony algorithm is described in detail and a simulation experiment is carried out. The results indicate that the developed algorithm is valid and stable, and that its precision is higher than that of the traditional method.

  9. CUDA-based Lattice Boltzmann Method: Algorithm Design and Program Optimization

    Institute of Scientific and Technical Information of China (English)

    黄昌盛; 张文欢; 侯志敏; 陈俊辉; 李明晶; 何南忠; 施保昌

    2011-01-01

    The lattice Boltzmann method (LBM) has become a powerful tool for modeling and simulating fluid flows thanks to its simple computation, natural parallelism, easy implementation and easy treatment of complex boundaries, which also make it well suited to large-scale fluid computation on graphics processing units (GPUs). Based on the CUDA (compute unified device architecture) programming platform, we design the corresponding LBM algorithm and, taking 2D cavity flow, 2D flow around a cylinder and 3D cavity flow as examples, focus on the role of memory-access optimization and other optimization techniques; the performance of the programs is also analyzed in detail. The results show that our algorithm achieves satisfactory acceleration and confirm that LBM is well matched to the GPU for large-scale parallel computation.

  10. Knowledge Template Based Multi-perspective Car Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Cai

    2010-12-01

    Full Text Available In order to solve problems caused by the vehicle-oriented society, such as traffic jams and traffic accidents, intelligent transportation systems (ITS) have been proposed and have become a research focus, with the purpose of giving people better and safer driving conditions and assistance. The core of an intelligent transportation system is vehicle recognition and detection, which is a prerequisite for the other related problems. Many existing vehicle recognition algorithms aim at one specific viewing direction, mostly the front/back and side views. To make the algorithm more robust, this paper presents a vehicle recognition algorithm for oblique views while also covering front/back and side views. The algorithm is designed based on common knowledge about cars, such as shape, structure and so on. Experimental results on many car images show that our method has fine accuracy in car recognition.

  11. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    Directory of Open Access Journals (Sweden)

    Tolga Güyer

    2015-04-01

    Full Text Available This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas, disorientation and string matching. String-matching algorithms provide a more convenient disorientation measurement than other techniques, in that they examine the similarity between an optimal path and learners' navigation paths. The algorithm particularly takes into account the contextual similarity between partly relevant web pages in a user's navigation path and pages in an optimal path. This study focuses on the reasons for and the steps required to use this algorithm for disorientation measurement. Examples of actual student activities and learning environment data are provided to illustrate the process.
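
    A minimal sketch of the Needleman-Wunsch scoring step underlying the method above, aligning a hypothetical optimal page path with a learner's navigation path; the scoring values and page names are illustrative assumptions.

```python
# Hypothetical sketch: Needleman-Wunsch alignment of an optimal path and a learner path.
def needleman_wunsch(optimal, navigated, match=1, mismatch=-1, gap=-1):
    m, n = len(optimal), len(navigated)
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i - 1][j - 1] + (match if optimal[i - 1] == navigated[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[m][n]

optimal_path = ["home", "unit1", "quiz1", "unit2"]
learner_path = ["home", "unit1", "forum", "unit1", "quiz1", "unit2"]
similarity = needleman_wunsch(optimal_path, learner_path)
# The lower the alignment score relative to a perfect match, the higher the disorientation.
print("alignment score:", similarity, "perfect score:", len(optimal_path))
```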

  12. Hybrid Collision Detection Algorithm based on Image Space

    Directory of Open Access Journals (Sweden)

    XueLi Shen

    2013-07-01

    Full Text Available Collision detection is an important application in the field of virtual reality, and completing collision detection efficiently has become a research focus. To address the poor real-time performance of collision detection, this paper presents a hybrid collision detection algorithm that quickly detects the sets of potentially colliding objects with a mixed bounding-volume hierarchy tree and then uses a streaming-pattern collision detection algorithm to make an accurate detection. With these methods, the load of the CPU and GPU can be balanced and the detection rate increased. The experimental results show that, compared with the classic RAPID algorithm, this algorithm can effectively improve the efficiency of collision detection.

  13. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  14. Hybrid Artificial Bee Colony Algorithm Based on the Gradient Method for Optimal Power Flow Calculation in Power Systems

    Institute of Scientific and Technical Information of China (English)

    杨琳; 孔峰

    2011-01-01

    Aiming at the optimal power flow calculation problem in power systems, this paper presents a new hybrid artificial bee colony algorithm based on the gradient method (GABC). The new algorithm first uses the fast search of the gradient method to obtain a local minimum, then uses the global search ability of the artificial bee colony algorithm to escape from that local minimum; the global optimum is finally reached through alternating iterative computation. Simulation experiments on the IEEE 5-bus system show that the improved algorithm deals better with the optimal power flow constraints, finds preferable results, and improves the global search ability and convergence precision of the basic bee colony algorithm. Its correctness and validity are proven by a series of tests and computations, and the algorithm can be widely applied to power system planning and operation.

  15. Stellar Population Analysis of Galaxies based on Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    Abdel-Fattah Attia; H.A.Ismail; I.M.Selim; A.M.Osman; I.A.Isaa; M.A.Marie; A.A.Shaker

    2005-01-01

    We present a new method, based on the genetic algorithm, for determining the age and relative contribution of different stellar populations in galaxies. We apply this method to the barred spiral galaxy NGC 3384, using CCD images in the U, B, V, R and I bands. This analysis indicates that the galaxy NGC 3384 is mainly inhabited by an old stellar population (age > 10^9 yr). Some problems were encountered when numerical simulations were used to determine the contribution of different stellar populations to the integrated color of a galaxy. The results show that the proposed genetic algorithm can search efficiently through the very large space of possible ages.

  16. ITO-based evolutionary algorithm to solve traveling salesman problem

    Science.gov (United States)

    Dong, Wenyong; Sheng, Kang; Yang, Chuanhua; Yi, Yunfei

    2014-03-01

    In this paper, an ITO algorithm inspired by the ITO stochastic process is proposed for the Traveling Salesman Problem (TSP). Many meta-heuristic methods have so far been applied successfully to the TSP, but as one of them, ITO needs further demonstration on this problem. Starting from the design of the key operators, which include the move operator, the wave operator, etc., the ITO-based method for the TSP is presented; moreover, the performance of the ITO algorithm under different parameter sets and the maintenance of population diversity information are also studied.

  17. Core Business Selection Based on Ant Colony Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lan

    2014-01-01

    Full Text Available The core business is the most important business for an enterprise with diversified operations. In this paper, we first introduce the definition and characteristics of the core business and then describe the ant colony clustering algorithm. In order to test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current state of the company's development, its core business can be identified by the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.

  18. A New Algorithm for Total Variation Based Image Denoising

    Institute of Scientific and Technical Information of China (English)

    Yi-ping XU

    2012-01-01

    We propose a new algorithm for the total variation based image denoising problem. The split Bregman method is used to convert the unconstrained minimization denoising problem to a linear system in the outer iteration. An algebraic multigrid method is applied to solve the linear system in the inner iteration. Furthermore, Krylov subspace acceleration is adopted to improve convergence in the outer iteration. Numerical experiments demonstrate that this algorithm is efficient even for images with a large signal-to-noise ratio.
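
    A minimal illustration of split Bregman TV denoising using the solver available in scikit-image (denoise_tv_bregman); it is not the multigrid- and Krylov-accelerated implementation of the paper, and the weight is an illustrative choice.

```python
# Hypothetical sketch: TV denoising with a split Bregman solver (scikit-image's
# implementation stands in for the multigrid/Krylov-accelerated method of the paper).
import numpy as np
from skimage import data, util
from skimage.restoration import denoise_tv_bregman

image = util.img_as_float(data.camera())
noisy = image + 0.1 * np.random.default_rng(0).standard_normal(image.shape)
denoised = denoise_tv_bregman(noisy, weight=5.0)   # larger weight = less smoothing
print("noisy MSE:", np.mean((noisy - image) ** 2),
      "denoised MSE:", np.mean((denoised - image) ** 2))
```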

  19. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel

    2015-01-01

    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…

  20. Gradient Gene Algorithm: a Fast Optimization Method to MST Problem

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The extension of the Minimum Spanning Tree (MST) problem is an NP-hard problem for which no polynomial-time algorithm exists. In this paper, a fast optimization method for the MST problem, the Gradient Gene Algorithm, is introduced. Compared with other evolutionary algorithms for the MST problem, it is more advanced: firstly, it is very simple and easy to implement; secondly, it is efficient and accurate; finally, it generalizes to other combinatorial optimization problems.

  1. Engine Fault Diagnosis Method Based on the PSO-RVM Algorithm

    Institute of Scientific and Technical Information of China (English)

    毕晓君; 柳长源; 卢迪

    2014-01-01

    To address the misfire fault problem of automobile engines, a new intelligent diagnosis method is put forward. A mapping relation is established between the volume fractions of the gases in the automobile exhaust and the cause of the misfire. Machine training is applied to normalized data, and the trained relevance vector machine model is applied to fault classification and diagnosis. Because the penalty factor and the radial basis kernel parameter of the algorithm greatly affect the classification accuracy, the particle swarm optimization algorithm is used to optimize these hyper-parameters. The optimized and trained relevance vector machine model is compared with the relatively mature genetically optimized neural network and support vector machine methods. The experimental results show that the new method improves both diagnosis accuracy and robustness compared with the traditional methods.
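
    A minimal sketch of the PSO-over-hyperparameters idea described above; since scikit-learn provides no relevance vector machine, an SVC stands in for the RVM, and the dataset, swarm size and coefficients are illustrative assumptions.

```python
# Hypothetical sketch: particle swarm search over (C, gamma) for a kernel classifier.
# An SVC stands in for the relevance vector machine, which scikit-learn does not provide.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def fitness(params):
    c, gamma = 10.0 ** params          # search in log10 space
    return cross_val_score(SVC(C=c, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, iters = 10, 20
pos = rng.uniform(-2, 2, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -4, 4)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmax(pbest_fit)].copy()
print("best log10(C, gamma):", gbest, "CV accuracy:", pbest_fit.max())
```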

  2. Neighborhood based Levenberg-Marquardt algorithm for neural network training.

    Science.gov (United States)

    Lera, G; Pinzolas, M

    2002-01-01

    Although the Levenberg-Marquardt (LM) algorithm has been extensively applied as a neural-network training method, it suffers from being very expensive, both in memory and number of operations required, when the network to be trained has a significant number of adaptive weights. In this paper, the behavior of a recently proposed variation of this algorithm is studied. This new method is based on the application of the concept of neural neighborhoods to the LM algorithm. It is shown that, by performing an LM step on a single neighborhood at each training iteration, not only significant savings in memory occupation and computing effort are obtained, but also, the overall performance of the LM method can be increased.
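
    A minimal sketch of a plain Levenberg-Marquardt update, delta = (J^T J + mu I)^{-1} J^T r, on a small curve-fitting problem, to illustrate the step that the neighborhood variant restricts to a subset of weights; it is not the authors' neural-network training code.

```python
# Hypothetical sketch: damped Gauss-Newton (Levenberg-Marquardt) updates for
# fitting y = a * exp(b * x) to noisy data.
import numpy as np

def residuals(params, x, y):
    a, b = params
    return y - a * np.exp(b * x)

def jacobian(params, x):
    a, b = params
    return np.column_stack([-np.exp(b * x), -a * x * np.exp(b * x)])

def lm_fit(x, y, params, mu=1e-2, iters=50):
    for _ in range(iters):
        r = residuals(params, x, y)
        J = jacobian(params, x)
        delta = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
        new_params = params + delta
        if np.sum(residuals(new_params, x, y) ** 2) < np.sum(r ** 2):
            params, mu = new_params, mu * 0.5    # accept step, relax damping
        else:
            mu *= 2.0                            # reject step, increase damping
    return params

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x) + 0.05 * rng.standard_normal(50)
print(lm_fit(x, y, np.array([1.0, 1.0])))       # converges near (2.0, 1.5)
```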

  3. TWO NEW FCT ALGORITHMS BASED ON PRODUCT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Guo Zhaoli; Shi Baochang; Wang Nengchao

    2001-01-01

    In this paper we present a product system and give a representation for cosine functions with the system. Based on this formula, two new algorithms are designed for computing the Discrete Cosine Transform. Both algorithms have a regular recursive structure and good numerical stability, and are easy to parallelize.

  4. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is one of the most important and widely used applications nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream, on the basis of its content, into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is first classified into speech and non-speech segments using bagged support vector machines; the non-speech segment is further classified into music and environment sound using artificial neural networks; and lastly, the speech segment is classified into silence and pure speech by a rule-based classifier. Minimal data is used for training the classifiers, ensemble methods are used to minimize the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
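
    A minimal sketch of the bagged-SVM speech/non-speech stage of the pipeline above, run on synthetic stand-in features; the ANN and rule-based stages are omitted and all data are illustrative.

```python
# Hypothetical sketch of the bagged-SVM speech/non-speech stage on synthetic features.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for frame-level audio features (e.g. energy, ZCR, spectral shape).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

bagged_svm = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=0)
bagged_svm.fit(X_train, y_train)
print("speech/non-speech accuracy:", bagged_svm.score(X_test, y_test))
```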

  5. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
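
    A minimal sketch of classical plain-LSB embedding and extraction on a NumPy image, as the classical analogue of the NEQR-based quantum circuit described above; it is not a quantum implementation, and the cover and message are illustrative.

```python
# Hypothetical sketch: classical plain-LSB embedding, the classical analogue of the
# plain-LSB quantum circuit described in the paper.
import numpy as np

def embed_lsb(cover, bits):
    stego = cover.flatten().copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits   # overwrite least significant bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, message)
assert np.array_equal(extract_lsb(stego, len(message)), message)
print("max pixel change:", int(np.max(np.abs(stego.astype(int) - cover.astype(int)))))
```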

  6. Network-based recommendation algorithms: A review

    CERN Document Server

    Yu, Fei; Gillard, Sebastien; Medo, Matus

    2015-01-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use - such as the possible influence of recommendation on the evolution of systems that use it - and finally discuss open research directions and challenges.

  7. Network-based recommendation algorithms: A review

    Science.gov (United States)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use-such as the possible influence of recommendation on the evolution of systems that use it-and finally discuss open research directions and challenges.

  8. Resizing Technique-Based Hybrid Genetic Algorithm for Optimal Drift Design of Multistory Steel Frame Buildings

    Directory of Open Access Journals (Sweden)

    Hyo Seon Park

    2014-01-01

    Full Text Available Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for the convergence, a genetic algorithm is combined with a resizing technique that is an efficient optimal technique to control the drift of buildings without the repetitive structural analysis. The resizing technique-based hybrid genetic algorithm proposed in this paper is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.

  9. The SPMP Algorithm: An Image Coding Method Based on SPIHT and Matching Pursuit

    Institute of Scientific and Technical Information of China (English)

    牛建伟; 高宾; 沈思思

    2011-01-01

    To make better use of the structural characteristics of images and improve the quality of image reconstruction, a hierarchical image coding method based on set partitioning in hierarchical trees (SPIHT) and matching pursuit (MP), named the SPMP algorithm, is proposed. The method first decomposes the original image into a low-frequency smooth layer and a high-frequency detail layer using the Laplacian pyramid algorithm; it then encodes the low-frequency component with the discrete wavelet transform and the SPIHT algorithm, and encodes the high-frequency detail layer with a matching pursuit algorithm based on clonal selection. The experimental results demonstrate that the method produces a bitstream with progressive PSNR and that the reconstructed image quality is clearly better than that of wavelet image coding, especially at high compression ratios.

  10. Web service composition method based on approximate skyline algorithm

    Institute of Scientific and Technical Information of China (English)

    段静珊; 周彦晖

    2014-01-01

    The vigorous development of cloud computing brings new opportunities and challenges to the development of Web service composition. Traditional service composition methods focus on achieving value-added services to meet the growing demands of users, but the Internet contains a large number of concrete services with the same functionality and different non-functional properties. How to select Web service compositions from these services in accordance with user preferences, especially compositions that meet the specific needs of the user, has become a problem to be solved. This paper uses the AHP algorithm to quantify user preferences and then integrates the resulting relative preference weights into an approximate skyline query algorithm to pick out suitable Web service compositions that satisfy the user's needs. The validity and extensibility of the method are verified through a series of experiments.

  11. AN IMPROVED FAST BLIND DECONVOLUTION ALGORITHM BASED ON DECORRELATION AND BLOCK MATRIX

    Institute of Scientific and Technical Information of China (English)

    Yang Jun'an; He Xuefan; Tan Ying

    2008-01-01

    In order to alleviate the shortcomings of most blind deconvolution algorithms, this paper proposes an improved fast algorithm for blind deconvolution based on a decorrelation technique and a broadband block matrix. Although the original algorithm can overcome the shortcomings of current blind deconvolution algorithms, it has the constraint that the number of source signals must be less than the number of channels. The improved algorithm removes this constraint by using the decorrelation technique. Besides, the improved algorithm increases the separation speed by improving the way the output signal matrix is computed. Simulation results demonstrate the validity and fast separation of the improved algorithm.

  12. A Single Pattern Matching Algorithm Based on Character Frequency

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Based on the study of single pattern matching, the MBF algorithm is proposed by imitating the string searching procedure of humans. The algorithm preprocesses the pattern by using the idea of the Quick Search algorithm together with information about the already-matched pattern prefix and suffix. In the searching phase, the algorithm makes use of character usage frequency and the continue-skip idea. Experiments show that the MBF algorithm is more efficient than other algorithms.
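
    Since the record builds on the Quick Search bad-character rule, here is a minimal, self-contained sketch of plain Quick Search (Sunday's algorithm); the MBF extensions (prefix/suffix reuse, character frequency, continue-skip) are not reproduced, and the example strings are illustrative.

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: the shift is decided by the character just after the window."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # bad-character shift: distance from the rightmost occurrence of c to the end, plus one
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, s = [], 0
    while s <= n - m:
        if text[s:s + m] == pattern:
            hits.append(s)
        if s + m >= n:
            break
        s += shift.get(text[s + m], m + 1)
    return hits

print(quick_search("GCATCGCAGAGAGTATACAGTACG", "GCAGAGAG"))   # [5]
```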

  13. Algorithmic Methods for Sponsored Search Advertising

    CERN Document Server

    Feldman, Jon

    2008-01-01

    Modern commercial Internet search engines display advertisements alongside the search results in response to user queries. Such sponsored search relies on market mechanisms to elicit prices for these advertisements, making use of an auction among advertisers who bid in order to have their ads shown for specific keywords. We present an overview of the current systems for such auctions and also describe the underlying game-theoretic aspects. The game involves three parties: advertisers, the search engine, and search users; we present example research directions that emphasize the role of each. The algorithms for bidding and pricing in these games use techniques from three mathematical areas: mechanism design, optimization, and statistical estimation. Finally, we present some challenges in sponsored search advertising.

  14. The relation of low frequency restoration methods to the Gerchberg-Papoulis algorithm.

    Science.gov (United States)

    Yan, H; Mao, J T

    1990-10-01

    In magnetic resonance imaging, low frequency components can be allowed to saturate the analog-to-digital converter to reduce the quantization noise. These components can be estimated using low frequency restoration methods based on least squares error estimation or using the iterative Gerchberg-Papoulis algorithm. In this paper, we show the relationship between the closed-form estimation methods and the iterative algorithm, propose a method for improving the speed of iteration, and discuss the advantages and disadvantages of the two types of methods.
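
    For orientation, the following is a minimal NumPy sketch of a Gerchberg-Papoulis-style iteration: unreliable (saturated) low-frequency k-space samples are refined by alternating between an image-domain constraint and re-imposition of the trusted measurements. The mask convention and the real/nonnegative image constraint are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gp_restore(kspace, known_mask, n_iter=50):
    """Gerchberg-Papoulis-style iteration (sketch).

    kspace     : measured k-space with unreliable (saturated) low-frequency samples
    known_mask : boolean array, True where the measurement is trusted
    The unknown samples are filled in by alternately enforcing an image-domain
    constraint (here: real-valued, nonnegative image) and the trusted k-space data.
    """
    estimate = kspace * known_mask
    for _ in range(n_iter):
        img = np.fft.ifft2(estimate)
        img = np.clip(img.real, 0, None)           # image-domain constraint
        estimate = np.fft.fft2(img)
        estimate[known_mask] = kspace[known_mask]   # re-impose trusted samples
    return np.fft.ifft2(estimate).real

# toy demonstration: "saturate" the low-frequency corners of the FFT of a box image
true_img = np.zeros((32, 32)); true_img[12:20, 12:20] = 1.0
full_k = np.fft.fft2(true_img)
mask = np.ones((32, 32), dtype=bool)
mask[:3, :3] = mask[-3:, :3] = mask[:3, -3:] = mask[-3:, -3:] = False
print(np.abs(gp_restore(full_k, mask) - true_img).mean())
```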

  15. Image Recovery Algorithm Based on Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Xinghui Zhu

    2014-01-01

    Full Text Available We propose a recovery scheme for image deblurring. The scheme is built on the framework of sparse representation and has three main contributions. Firstly, considering the sparsity of natural images, nonlocal overcomplete dictionaries are learned for clusters of image patches. The patches in each nonlocal cluster are then coded with the corresponding learned dictionary to recover the whole latent image. In addition, for practical applications, we also propose a method to estimate the blur kernel so that the algorithm can be used for blind image recovery. The experimental results demonstrate that the proposed scheme is competitive with some current state-of-the-art methods.

  16. CUDT: a CUDA based decision tree algorithm.

    Science.gov (United States)

    Lo, Win-Tsung; Chang, Yue-Shan; Sheu, Ruey-Kai; Chiu, Chun-Chieh; Yuan, Shyan-Ming

    2014-01-01

    Decision tree is one of the most famous classification methods in data mining, and much research has been devoted to improving its performance. However, those algorithms are developed for and run on traditional distributed systems, so the latency of processing the huge data generated by ubiquitous sensing nodes cannot be improved without the help of new technology. In order to reduce the data processing latency in huge data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the performance of CUDT and compared it with a traditional CPU version. The results show that CUDT is 5 to 55 times faster than Weka-j48 and achieves an 18 times speedup over SPRINT for large data sets.

  17. Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks

    Directory of Open Access Journals (Sweden)

    Ruiyun Yu

    2014-01-01

    Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so the communications are mainly carried out by the “store-carry-forward” strategy. Selfish behaviors of rejecting packet relay requests will severely worsen the network performance. Incentive is an efficient way to reduce selfish behaviors and hence improves the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI algorithm, which exploits the dynamic repeated gaming to motivate nodes relaying packets for other nodes. The NDI algorithm presents a mechanism of tolerating selfish behaviors of nodes. Reward and punishment methods are also designed based on the node dependence degree. Simulation results show that the NDI algorithm is effective in increasing the delivery ratio and decreasing average latency when there are a lot of selfish nodes in the opportunistic networks.

  18. Time-Based Dynamic Trust Model Using Ant Colony Algorithm

    Institute of Scientific and Technical Information of China (English)

    TANG Zhuo; LU Zhengding; LI Kai

    2006-01-01

    Trust in a distributed environment is uncertain and varies with many factors. This paper introduces TDTM, a model for time-based dynamic trust. Every entity in the distributed environment is endowed with a trust vector, which records the trust intensity between this entity and the others. The trust intensity changes dynamically with time and with the interactions between two entities; a method is proposed to quantify this change based on the idea of the ant colony algorithm, and an algorithm for the transfer of trust relations is also proposed. Furthermore, this paper analyses how a change of trust intensity between two entities influences the trust intensity among all entities, and presents an algorithm to resolve the problem. Finally, an example shows how trust changes with the lapse of time and with the interactions between entities.

  19. A New Method to Extract Customer Relational Graph Based on Modified FP-Growth Algorithm

    Institute of Scientific and Technical Information of China (English)

    汪欢文; 陆海良; 单宇翔

    2015-01-01

    Customer relationship graphs make the various relationships between an enterprise and its customers clearly visible, so business decision-makers can take specific measures to improve customer relationships. This paper presents a method for extracting the customer relationship graph based on an improved FP-Growth algorithm. All frequent itemsets are found using the minimum support, and the desired association rules are then filtered out with the minimum confidence, which considerably improves the efficiency of the algorithm. The method has been applied to the Zhejiang Tobacco CRM system, and the results show that the improved algorithm performs well.

  20. Multi-Agent Reinforcement Learning Algorithm Based on Action Prediction

    Institute of Scientific and Technical Information of China (English)

    TONG Liang; LU Ji-lian

    2006-01-01

    Multi-agent reinforcement learning algorithms are studied. A prediction-based multi-agent reinforcement learning algorithm is presented for a multi-robot cooperation task. A multi-robot cooperation experiment based on the multi-agent inverted pendulum is carried out to test the efficiency of the new algorithm, and the experimental results show that the new algorithm can achieve the cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.

  1. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joahim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...

  2. An endmember extraction method for hyperspectral remote sensing imagery based on improved artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    李冰; 孙辉; 孙宁; 王坤

    2015-01-01

    To solve the problem of endmember extraction for hyperspectral remote sensing imagery, a new endmember extraction method based on an improved artificial bee colony algorithm is proposed. First, a weighted bee-guided search strategy is used to balance exploration and exploitation in ABC, yielding a new algorithm named IABC. Experiments on 8 benchmark functions show that the performance of the new algorithm is significantly improved. Then, the core idea and main steps of IABC-based endmember extraction are introduced. Comparisons with ABC and conventional extraction algorithms on simulated and real hyperspectral data show that the new algorithm has better applicability.

  3. Navigation Algorithm Using Fuzzy Control Method in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Cviklovič Vladimír

    2016-03-01

    Full Text Available Navigation methods are being continuously developed worldwide. The aim of this article is to test a fuzzy control algorithm for track finding in mobile robotics. An autonomous mobile robot, EN20, has been designed to test its behaviour, using the odometry navigation method. The benefits of fuzzy control become evident when several physical variables are controlled at the same time on the basis of several input variables. In our case, there are two input variables, heading angle and distance, and two output variables, the angular velocities of the left and right wheels. The autonomous mobile robot moves with human-like logic.
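
    To illustrate the two-input/two-output fuzzy control idea described above, here is a minimal Mamdani-style sketch; the membership functions, rule base, wheel-speed levels and helper names are illustrative assumptions and not the controller implemented on the EN20 robot.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_wheel_speeds(heading_err, distance):
    """Minimal fuzzy-control sketch for differential-drive navigation.
    Inputs : heading error [rad] and distance to target [m].
    Outputs: angular velocities of the left and right wheels [rad/s]."""
    # fuzzify heading error: Negative / Zero / Positive
    e_neg, e_zero, e_pos = tri(heading_err, -2, -1, 0), tri(heading_err, -1, 0, 1), tri(heading_err, 0, 1, 2)
    # fuzzify distance: Near / Far
    d_near, d_far = tri(distance, -0.1, 0, 2), tri(distance, 1, 4, 8)

    # rule base: (firing strength, left-wheel level, right-wheel level)
    rules = [
        (min(e_zero, d_far),  3.0, 3.0),   # on course, far  -> drive fast straight
        (min(e_zero, d_near), 1.0, 1.0),   # on course, near -> slow down
        (e_neg,               1.0, 3.0),   # target to the left  -> speed up right wheel
        (e_pos,               3.0, 1.0),   # target to the right -> speed up left wheel
    ]
    w = sum(r[0] for r in rules) or 1e-9
    left  = sum(r[0] * r[1] for r in rules) / w   # weighted-average defuzzification
    right = sum(r[0] * r[2] for r in rules) / w
    return left, right

print(fuzzy_wheel_speeds(heading_err=0.5, distance=3.0))
```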

  4. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Directory of Open Access Journals (Sweden)

    Xiao Sun

    2015-01-01

    Full Text Available Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named localized ambient solidity separation (LASS), using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  5. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    Science.gov (United States)

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named localized ambient solidity separation (LASS), using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  6. A CUDA-based reverse gridding algorithm for MR reconstruction.

    Science.gov (United States)

    Yang, Jingzhu; Feng, Chaolu; Zhao, Dazhe

    2013-02-01

    MR raw data collected using a non-Cartesian method can be transformed onto Cartesian grids by the traditional gridding algorithm (GA) and reconstructed by Fourier transform. However, its runtime complexity is O(K×N²), where the resolution of the raw data is N×N and the size of the convolution window (CW) is K, and it involves a large number of matrix calculations including modulus, addition, multiplication and convolution. Therefore, a Compute Unified Device Architecture (CUDA)-based algorithm is proposed to improve the reconstruction efficiency of PROPELLER (a globally recognized non-Cartesian sampling method). Experiments show a write-write conflict among multiple CUDA threads, which induces inconsistent results when multiple k-space samples are convolved onto the same grid simultaneously. To overcome this problem, a reverse gridding algorithm (RGA) was developed. Different from the traditional GA, which generates a grid window for each trajectory, RGA calculates a trajectory window for each grid; this is what "reverse" means. For each k-space point in the CW, the contribution is accumulated onto that grid. Although this algorithm can easily be extended to reconstruct other non-Cartesian sampled raw data, we only implement it for PROPELLER. Experiments illustrate that this CUDA-based RGA has successfully solved the write-write conflict and that its reconstruction speed is 7.5 times higher than that of the traditional GA.
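
    The gather-style idea behind RGA can be sketched in a few lines of NumPy (not CUDA): each output grid point collects the samples inside its own window, so no two outputs are ever written concurrently. The Gaussian kernel, window size and function name below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reverse_gridding(samples, values, grid_size, half_width=2):
    """Gather-style ("reverse") gridding sketch.

    samples : (M, 2) non-Cartesian k-space coordinates, already scaled to grid units
    values  : (M,) complex sample values
    Each Cartesian grid point gathers contributions from the samples inside its
    own convolution window, which is the property that removes the write-write
    conflict on a GPU. The Gaussian kernel is an illustrative choice.
    """
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    for gy in range(grid_size):
        for gx in range(grid_size):
            d = samples - np.array([gx, gy])
            inside = np.all(np.abs(d) <= half_width, axis=1)
            if not np.any(inside):
                continue
            w = np.exp(-0.5 * np.sum(d[inside] ** 2, axis=1))   # convolution kernel
            grid[gy, gx] = np.sum(w * values[inside])
    return grid

rng = np.random.default_rng(0)
pts = rng.uniform(0, 15, size=(200, 2))
vals = np.exp(1j * rng.uniform(0, 2 * np.pi, 200))
print(reverse_gridding(pts, vals, grid_size=16).shape)   # (16, 16)
```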

  7. The Methods of Knowledge Database Integration Based on the Rough Set Classification and Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    郭平; 程代杰

    2003-01-01

    As the basis of an intelligent system, it is very important to guarantee the consistency and non-redundancy of knowledge in the knowledge database. Because knowledge comes from a variety of sources, knowledge with redundancy, inclusion and even contradiction must be handled during the integration of knowledge databases. This paper studies an integration method for multiple knowledge databases. Firstly, it finds the inconsistent knowledge sets between the knowledge databases by rough set classification and presents a method for eliminating the inconsistency using test data. Then, it regards the consistent knowledge sets as the initial population of a genetic calculation and constructs a genetic fitness function based on the accuracy, practicability and spreadability of the knowledge representation to carry out the genetic calculation. Lastly, classifying the results of the genetic calculation reduces the knowledge redundancy of the knowledge database. This paper also presents a framework for knowledge database integration based on rough set classification and the genetic algorithm.

  8. DIGITAL SPECKLE CORRELATION METHOD IMPROVED BY GENETIC ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    MaShaopeng; JillGuanchang

    2003-01-01

    The digital speckle correlation method is an important optical metrology technique for surface displacement and strain measurement. With this technique, whole-field deformation information can be obtained by tracking geometric points on the speckle images with a correlation-matching search. However, general search techniques suffer from great computational complexity when processing speckle images with large deformations, and from large random errors when processing images of poor quality. In this paper, an advanced approach to correlation-matching search based on genetic algorithms (GA) is developed. Benefiting from the global-optimum and parallel search abilities of GA, this new approach can complete the correlation-matching search with less computation and at high accuracy. Two experimental results from simulated speckle images have proved the efficiency of the new approach.

  9. TOA-BASED ROBUST LOCATION ALGORITHMS FOR WIRELESS CELLULAR NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Sun Guolin; Guo Wei

    2005-01-01

    Caused by the Non-Line-Of-Sight (NLOS) propagation effect, the non-symmetric contamination of measured Time Of Arrival (TOA) data leads to high inaccuracies in conventional TOA-based mobile location techniques. Robust position estimation methods based on bootstrapping M-estimation and the Huber estimator are proposed to mitigate the effect of NLOS propagation on the location error. Simulation results show the improvement over the traditional Least-Squares (LS) algorithm in location accuracy under different channel environments.

  10. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    Directory of Open Access Journals (Sweden)

    Zhiming Song

    2015-01-01

    Full Text Available As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m-1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize the regularity to design multiobjective optimization algorithms has become the research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m-1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on the nondominated sorting is used to choose the individuals to the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper.

  11. A novel multiobjective evolutionary algorithm based on regression analysis.

    Science.gov (United States)

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m - 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize the regularity to design multiobjective optimization algorithms has become the research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is (m - 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on the nondominated sorting is used to choose the individuals to the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper.

  12. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  13. Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm

    Institute of Scientific and Technical Information of China (English)

    Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu

    2011-01-01

    The photoacoustic tomography (PAT) method based on compressive sensing (CS) theory requires that, for CS reconstruction, the desired image have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise always exists. Therefore, the original sparse signal cannot be effectively recovered using a general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves superior performance compared with other state-of-the-art CS reconstruction algorithms.

  14. Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity

    Directory of Open Access Journals (Sweden)

    Xue Shan

    2015-01-01

    Full Text Available Although commercial recommendation systems have made certain achievements in travelling route development, they face a series of challenges because of people's increasing interest in travelling. The core of a recommendation system is its recommendation algorithm, and the quality of the algorithm largely determines the quality of the system. Based on this, this paper analyses the traditional collaborative filtering algorithm, illustrates its deficiencies, such as rating unicity and rating matrix sparsity, and proposes an improved algorithm that combines a user-based multi-similarity measure with a user-based element similarity measure, so as to compensate for the deficiencies of the traditional algorithm within a controllable range. Experimental results show that the improved algorithm has obvious advantages over the traditional one, and is effective in remedying rating matrix sparsity and rating unicity.
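
    For context, the baseline that such improvements start from is plain user-based collaborative filtering; the sketch below uses cosine similarity and a weighted average of the k most similar raters. The toy rating matrix, function name and fallback rule are illustrative assumptions, not the improved multi-similarity algorithm of the record.

```python
import numpy as np

def predict_rating(R, user, item, k=2):
    """User-based collaborative filtering sketch (cosine similarity).

    R : rating matrix (0 = unrated), shape (n_users, n_items).
    Predicts R[user, item] from the k most similar users who rated the item.
    """
    norms = np.linalg.norm(R, axis=1) + 1e-9
    sims = (R @ R[user]) / (norms * norms[user])     # cosine similarity to every user
    sims[user] = -1.0                                # exclude the target user
    raters = np.where(R[:, item] > 0)[0]             # users who rated the item
    if raters.size == 0:
        return R[R > 0].mean()                       # fall back to the global mean
    top = raters[np.argsort(sims[raters])[::-1][:k]]
    w = np.clip(sims[top], 0, None)
    if w.sum() == 0:
        return R[R > 0].mean()
    return float(w @ R[top, item] / w.sum())

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
print(predict_rating(R, user=0, item=2))
```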

  15. Matrix-based, finite-difference algorithms for computational acoustics

    Science.gov (United States)

    Davis, Sanford

    1990-01-01

    A compact numerical algorithm is introduced for simulating multidimensional acoustic waves. The algorithm is expressed in terms of a set of matrix coefficients on a three-point spatial grid that approximates the acoustic wave equation with a discretization error of O(h^5). The method is based on tracking a local phase variable and its implementation suggests a convenient coordinate splitting along with natural intermediate boundary conditions. Results are presented for oblique plane waves and compared with other procedures. Preliminary computations of acoustic diffraction are also considered.

  16. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Science.gov (United States)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication that is necessary to implement AES correctly [1]. This method can be applied to processors with a word length of 32 bits or above, to FPGAs and to other platforms, and can correspondingly be implemented in VHDL, Verilog, VB and other languages.
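
    The arithmetic that such lookup tables encode is multiplication in GF(2^8) with the AES reduction polynomial x^8+x^4+x^3+x+1. The sketch below shows a straightforward shift-and-xor multiplier and how small tables can be precomputed from it; the five specific tables of the proposed implementation are not reproduced.

```python
def gmul(a, b):
    """Multiplication in GF(2^8) with the AES reduction polynomial 0x11B."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B          # reduce modulo the AES polynomial
        b >>= 1
    return p

# precomputed tables in the spirit of a lookup-table implementation
MUL2 = [gmul(x, 2) for x in range(256)]
MUL3 = [gmul(x, 3) for x in range(256)]
print(hex(gmul(0x57, 0x83)))   # 0xc1, the worked example from the AES specification
```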

  17. METHOD OF CENTERS ALGORITHM FOR MULTI-OBJECTIVE PROGRAMMING PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Tarek Emam

    2009-01-01

    In this paper, we consider a method of centers for solving multi-objective programming problems, where the objective functions involved are concave functions and the set of feasible points is convex. The algorithm is defined so that the sub-problems that must be solved during its execution may be solved by finite-step procedures. Conditions are given under which the algorithm generates sequences of feasible points and constraint multiplier vectors that have accumulation points satisfying the KKT conditions. Finally, we establish convergence of the proposed method of centers algorithm for solving multiobjective programming problems.

  18. Crossover Method for Interactive Genetic Algorithms to Estimate Multimodal Preferences

    Directory of Open Access Journals (Sweden)

    Misato Tanaka

    2013-01-01

    Full Text Available We apply an interactive genetic algorithm (iGA) to generate product recommendations. iGAs search for a single optimum point based on a user's Kansei through the interaction between the user and the machine. However, especially in the domain of product recommendations, there may be numerous optimum points. Therefore, the purpose of this study is to develop a new iGA crossover method that concurrently searches for multiple optimum points for multiple user preferences. The proposed method estimates the locations of the optimum areas by a clustering method and then searches for the maximum values of each area by a probabilistic model. To confirm the effectiveness of this method, two experiments were performed. In the first experiment, a pseudo-user operated an experimental system that implemented the proposed and conventional methods, and the solutions obtained were evaluated using a set of pseudo multiple preferences. This experiment proved that when there are multiple preferences, the proposed method searches faster and more diversely than the conventional one. The second experiment was a subjective experiment; it showed that the proposed method was able to search concurrently for more preferences when subjects had multiple preferences.

  19. Image completion algorithm based on texture synthesis

    Institute of Scientific and Technical Information of China (English)

    Zhang Hongying; Peng Qicong; Wu Yadong

    2007-01-01

    A new algorithm is proposed for completing the missing parts caused by the removal of foreground or background elements from an image of natural scenery in a visually plausible way. The major contributions of the proposed algorithm are: (1) for most natural images there is a strong orientation of texture or color distribution, so a method is introduced to compute the main direction of the texture and to complete the image by limiting the search to one direction, which makes image completion quite fast; (2) there is a synthesis ordering for image completion: the searching order of the patches is defined to ensure that regions with more known information and with structures are completed before other regions are filled in; (3) to improve the visual effect of texture synthesis, an adaptive scheme is presented to determine the size of the template window for capturing features of various scales. A number of examples are given to demonstrate the effectiveness of the proposed algorithm.

  20. Asian Option Pricing Based on Genetic Algorithms

    Institute of Scientific and Technical Information of China (English)

    Yunzhong Liu; Huiyu Xuan

    2004-01-01

    The cross-fertilization between artificial intelligence and computational finance has resulted in some of the most active research areas in financial engineering. One direction is the application of machine learning techniques to pricing financial products, which is certainly one of the most complex issues in finance. In the literature, when the interest rate, the mean rate of return and the volatility of the underlying asset follow general stochastic processes, the exact solution is usually not available. In this paper, we illustrate how genetic algorithms (GAs), as a numerical approach, can be potentially helpful in dealing with pricing. In particular, we test the performance of basic genetic algorithms by using them to determine the prices of Asian options, whose exact solutions are known from Black-Scholes option pricing theory. The solutions found by basic genetic algorithms are compared with the exact solutions, and the performance of GAs is evaluated accordingly. Based on these evaluations, some limitations of GAs in option pricing are examined and possible extensions for future work are also proposed.

  1. Melody Extraction Method from MIDI Based on H-K Algorithm

    Institute of Scientific and Technical Information of China (English)

    刘勇; 林景栋; 穆伟力

    2011-01-01

    The main melody track contains much of the important melodic information of a piece of music; it is the basis of music feature recognition and a prerequisite for the computer-aided design of music light shows. The work involves melody representation, melody feature extraction and classification techniques. This paper presents a method for extracting the main melody from multi-track MIDI files. A feature vector space is constructed by extracting features that characterize the music melody, and a track classifier based on the H-K classification algorithm separates the main melody track from the accompaniment tracks, so that the main melody track can be extracted. Experimental results show that good results are achieved, providing the necessary groundwork for the automatic design of music light shows.

  2. Multi-objective community detection based on memetic algorithm.

    Directory of Open Access Journals (Sweden)

    Peng Wu

    Full Text Available Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels.

  3. The Books Recommend Service System Based on Improved Algorithm for Mining Association Rules

    Institute of Scientific and Technical Information of China (English)

    王萍

    2009-01-01

    The Apriori algorithm is a classical method of association rule mining. Based on an analysis of this theory, the paper provides an improved Apriori algorithm that combines a hash table technique with the reduction of candidate item sets to enhance the usage efficiency of resources as well as the individualized service of the library.
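
    As a reference point for the improvement described above, here is a minimal sketch of plain Apriori (level-wise candidate generation and support counting); the hash-table refinement of the record is not reproduced, and the toy transactions are illustrative.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Plain Apriori sketch: returns frequent itemsets with their support counts."""
    transactions = [frozenset(t) for t in transactions]
    counts = {}
    for t in transactions:                       # count 1-itemsets
        for item in t:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result, k = dict(frequent), 2
    while frequent:
        items = sorted({i for s in frequent for i in s})
        # candidate k-itemsets whose (k-1)-subsets are all frequent
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result

print(apriori([{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}], min_support=2))
```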

  4. Improved pulse laser ranging algorithm based on high speed sampling

    Science.gov (United States)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, including the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is developed which combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm; it includes a laser diode, a laser detector and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm is implemented in an FPGA chip based on the fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment is carried out with the hardware system to compare the ranging performance of the improved algorithm with that of the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the hardware system realizes high speed processing and high speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, which meets the expected effect and is consistent with the theoretical simulation.
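
    The fused timing chain can be sketched offline in a few lines: a matched filter raises the SNR of the sampled echo, and a constant-fraction discriminator then locates an amplitude-independent arrival time on its output. The pulse shape, delay, fraction and noise level below are illustrative assumptions, not the parameters of the FPGA implementation.

```python
import numpy as np

def pulse_arrival(samples, template, fraction=0.5, delay=3):
    """Matched filter followed by constant-fraction discrimination (sketch)."""
    mf = np.convolve(samples, template[::-1], mode="same")   # matched filter output
    cfd = fraction * mf - np.roll(mf, delay)                  # CFD shaping
    peak = int(np.argmax(mf))
    # search backwards from the peak for the zero crossing on the leading edge
    for i in range(peak, 0, -1):
        if cfd[i] < 0 <= cfd[i - 1]:
            # linear interpolation inside the crossing interval [i-1, i]
            return (i - 1) + cfd[i - 1] / (cfd[i - 1] - cfd[i])
    return float(peak)

np.random.seed(0)
template = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)   # assumed pulse shape
signal = 0.05 * np.random.randn(200)
signal[90:111] += 0.8 * template                              # echo arriving near sample 100
print(pulse_arrival(signal, template))
```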

  5. Multicast Routing Problem Using Tree-Based Cuckoo Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mahmood Sardarpour

    2016-06-01

    Full Text Available The problem of QoS multicast routing is to find a multicast tree with the least expense/cost which meets limitations such as bandwidth, delay and loss rate; this is an NP-complete problem. To solve the multicast routing problem, the complete routes from the source node to every destination node are often identified first and then integrated into a single multicast tree, but such methods are slow and complicated. The present paper introduces a new tree-based optimization method to overcome these weaknesses. The recommended method directly optimizes the multicast tree: a tree-based topology including several spanning trees is created, and the trees are combined two by two. For this purpose, the Cuckoo Algorithm is used, which converges well and computes quickly. Simulations conducted on different types of network topologies show that it is a practical and effective algorithm.

  6. The positioning algorithm based on feature variance of billet character

    Science.gov (United States)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

    In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of the billet characters. Using a recursive largest intra-cluster variance method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. Since there are three rows of characters on each steel billet, we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and then accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.

  7. Unified HMM-based layout analysis framework and algorithm

    Institute of Scientific and Technical Information of China (English)

    陈明; 丁晓青; 吴佑寿

    2003-01-01

    To handle the layout analysis problem for complex or irregular document images, a unified HMM-based layout analysis framework is presented in this paper. Based on the multi-resolution wavelet analysis results of the document image, we use HMM methods in both an inner-scale image model and a trans-scale context model to classify the pixel region properties, such as text, picture or background. In each scale, an HMM direct segmentation method is used to get a better inner-scale classification result. Then another HMM method is used to fuse the inner-scale results of each scale to obtain a better final segmentation result. The optimized algorithm uses a stop rule in the coarse-to-fine multi-scale segmentation process, so the speed is improved remarkably. Experiments prove the efficiency of the proposed algorithm.

  8. PARTIAL TRAINING METHOD FOR HEURISTIC ALGORITHM OF POSSIBLE CLUSTERIZATION UNDER UNKNOWN NUMBER OF CLASSES

    Directory of Open Access Journals (Sweden)

    D. A. Viattchenin

    2009-01-01

    Full Text Available A method for constructing a subset of labeled objects which is used in a heuristic algorithm of possible clusterization with partial training is proposed in the paper. The method is based on data preprocessing by the heuristic algorithm of possible clusterization using a transitive closure of a fuzzy tolerance. Method efficiency is demonstrated by way of an illustrative example.

  9. PCNN image segmentation method based on bacterial foraging optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    廖艳萍; 张鹏

    2015-01-01

    To handle the difficult task of setting the parameters of the pulse coupled neural network (PCNN) model, which are usually determined by experience and repeated experiments, an improved PCNN algorithm is proposed. It uses the maximum between-cluster variance function as the fitness function of the bacterial foraging optimization algorithm and adopts the bacterial foraging optimization algorithm to search for the optimal parameters, which eliminates the blindness of setting the parameters manually. Experimental results show that the proposed algorithm can effectively segment document images and that the segmentation results are clearly better than those of the compared algorithms.

  10. Determination of Selection Method in Genetic Algorithm for Land Suitability

    Directory of Open Access Journals (Sweden)

    Irfianti Asti Dwi

    2016-01-01

    Full Text Available The Genetic Algorithm is one alternative solution in the fields of optimization modeling, automatic programming and machine learning. The purpose of the study was to compare several types of selection methods in the Genetic Algorithm for land suitability. The contribution of this research is applying the best method to develop region-based horticultural commodities. Testing is done by comparing three selection methods: Roulette Wheel, Tournament Selection and Stochastic Universal Sampling. The location parameters used in the first test scenario include Temperature = 27°C, Rainfall = 1200 mm, Humidity = 30%, Cluster fruit = 4, Crossover Probability (Pc) = 0.6, Mutation Probability (Pm) = 0.2 and Epoch = 10. The second test includes location parameters consisting of Temperature = 30°C, Rainfall = 2000 mm, Humidity = 35%, Cluster fruit = 5, Crossover Probability (Pc) = 0.7, Mutation Probability (Pm) = 0.3 and Epoch = 10. The conclusion of this study is that the Roulette Wheel is the best method because it produces more stable and higher fitness values than the other two methods.
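
    For readers unfamiliar with the operators being compared, the sketch below shows fitness-proportionate (roulette wheel) selection next to tournament selection; the toy population, fitness values and tournament size are illustrative assumptions, and Stochastic Universal Sampling is omitted.

```python
import random

def roulette_wheel(population, fitness, rng=random):
    """Fitness-proportionate (roulette wheel) selection of one individual.
    Assumes non-negative fitness values."""
    total = sum(fitness)
    pick = rng.uniform(0, total)
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return ind
    return population[-1]

def tournament(population, fitness, k=3, rng=random):
    """Tournament selection: best of k randomly chosen individuals."""
    contenders = rng.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitness[i])]

pop = ["A", "B", "C", "D"]
fit = [1.0, 4.0, 2.0, 3.0]
random.seed(1)
print([roulette_wheel(pop, fit) for _ in range(5)], tournament(pop, fit))
```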

  11. An Initiative-Learning Algorithm Based on System Uncertainty

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jun

    2005-01-01

    Initiative-learning algorithms are characterized by, and benefit from, their independence of prior domain knowledge. Usually, their induced results express the potential characteristics and patterns of information systems more objectively. Initiative-learning processes can be effectively driven by system uncertainty, because uncertainty is an intrinsic common feature of, and an essential link between, information systems and their induced results. Obviously, the effectiveness of such an initiative-learning framework depends heavily on the accuracy of the system uncertainty measurements. Herein, a more reasonable method for measuring system uncertainty is developed based on rough set theory and the concept of information entropy; a new algorithm is then developed on the basis of the new system uncertainty measurement and Skowron's algorithm for mining propositional default decision rules. The proposed algorithm is a typical initiative-learning algorithm and adapts well to system uncertainty. As shown by simulation experiments, its overall performance is much better than that of comparable algorithms.

  12. A Method for Security Constrained Unit Commitment Based on Benders Algorithm

    Institute of Scientific and Technical Information of China (English)

    王楠; 张粒子; 袁喆; 张黎明; 李雪

    2012-01-01

    When security-constrained unit commitment (SCUC) is solved directly by a mixed integer programming algorithm, the calculation efficiency decreases considerably, and when SCUC is solved by the Benders algorithm, the solution efficiency also suffers because of algorithm oscillation and the restriction of system scale. A new Benders-algorithm-based method to solve SCUC is proposed. Building on the Benders algorithm, a link that corrects out-of-limit constraints is added after each iteration of the Benders master problem to control the search direction of the Benders cuts, and a link that identifies active constraints is added to reduce the search space of the Benders algorithm, thus improving the solution efficiency of the SCUC optimization problem. The effectiveness of the proposed method is verified by simulation results on a 6-machine 3-bus system and a 54-machine 118-bus system.

  13. Function Optimization Based on Quantum Genetic Algorithm

    OpenAIRE

    Ying Sun; Yuesheng Gu; Hegen Xiong

    2013-01-01

    The quantum genetic algorithm has the characteristics of good population diversity, rapid convergence, good global search capability and so on. It combines a quantum algorithm with a genetic algorithm. A novel quantum genetic algorithm is proposed, which is called the variable-boundary-coded quantum genetic algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded chromosomes. Therefore much shorter chromosome strings can be gained. The m...

  14. An Artificial Glowworm Swarm Optimization Algorithm Based on Powell Local Optimization Method

    Institute of Scientific and Technical Information of China (English)

    张军丽; 周永权

    2011-01-01

    In order to overcome the shortcomings of the artificial glowworm swarm optimization (GSO) algorithm, including slow convergence speed, easily falling into local optima, low computational accuracy and a low convergence success rate, an artificial GSO algorithm based on the Powell local optimization method is proposed. It exploits the powerful local optimization ability of the Powell method by embedding it into GSO as a local search operator. Experimental results on 8 typical test functions show that the proposed algorithm is superior to GSO in convergence efficiency, computational precision and stability.

  15. An Algorithm for Calculating the Value of the Mathematical Constant “e” Based on the Monte-Carlo Method

    Institute of Scientific and Technical Information of China (English)

    张乐成; 邵梅; 迟津愉; 宁宁宁

    2012-01-01

    The Monte-Carlo method is a very important numerical method guided by probability and statistics theory, and the Monte-Carlo evaluation of definite integrals is a common method for their approximate calculation. To calculate the value of the mathematical constant e (the base of the natural logarithm), the paper selects a special definite integral and evaluates it both with the Monte-Carlo method and with the Newton-Leibniz formula; comparing the two results yields a method for computing the mathematical constant e. The experimental results show that the algorithm is effective and has good accuracy and time efficiency.
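
    Since the record does not state which definite integral is used, the sketch below illustrates the same idea with an assumed one: the Monte-Carlo estimate of the integral of 1/x over [1, b] is compared with its Newton-Leibniz value ln(b), and b is bisected until the estimate equals 1, so that b approaches e. Sample sizes and bounds are illustrative.

```python
import random

def mc_integral_inverse_x(b, n=100_000, rng=random):
    """Monte-Carlo estimate of the integral of 1/x over [1, b];
    its Newton-Leibniz value is ln(b)."""
    return (b - 1.0) * sum(1.0 / rng.uniform(1.0, b) for _ in range(n)) / n

def estimate_e(lo=2.0, hi=4.0, iters=20):
    """Bisect on b until the Monte-Carlo estimate of ln(b) reaches 1, giving b ≈ e.
    The choice of integrand is an assumption; the cited paper's integral is not specified."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mc_integral_inverse_x(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
print(estimate_e())   # close to 2.718..., up to Monte-Carlo noise
```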

  16. Optimization design of a gating system for sand casting aluminium A356 using a Taguchi method and multi-objective culture-based QPSO algorithm

    Directory of Open Access Journals (Sweden)

    Wen-Jong Chen

    2016-04-01

    Full Text Available This article combines the Taguchi method and analysis of variance with culture-based quantum-behaved particle swarm optimization to determine the optimal gating-system models for an aluminium (Al) A356 sand casting part. First, the Taguchi method and analysis of variance were, respectively, applied to establish an L27(3^8) orthogonal array and to determine the significant process parameters, including riser diameter, pouring temperature, pouring speed, riser position and gating diameter. Subsequently, a response surface methodology was used to construct a second-order regression model, including filling time, solidification time and oxide ratio. Finally, the culture-based quantum-behaved particle swarm optimization was used to determine the multi-objective Pareto optimal solutions and identify the corresponding process conditions. The results showed that the proposed method, compared with the initial casting model, reduced the filling time, solidification time and oxide ratio by 68.14%, 50.56% and 20.20%, respectively. A confirmation experiment verified that the method can effectively reduce casting defects and improve casting quality.

  17. A New Control Method for Grid-Connected PV System Based on Quasi-Z-Source Cascaded Multilevel Inverter Using Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Hamid Reza Mohammadi

    2015-03-01

    Full Text Available In this paper, a new control method for a quasi-Z-source cascaded multilevel inverter based grid-connected photovoltaic (PV) system is proposed. The proposed method is capable of boosting the PV array voltage to a higher level and solves the imbalance problem of the DC-link voltage in traditional cascaded H-bridge inverters. The proposed control system adjusts the grid-injected current in phase with the grid voltage and achieves independent maximum power point tracking (MPPT) for the separate PV arrays. To achieve these goals, proportional-integral (PI) controllers are employed for each module. For the best performance, this paper presents an optimal approach to designing the controller parameters using particle swarm optimization (PSO). The primary design goal is to obtain a good response by minimizing the integral absolute error; the transient response is also guaranteed by minimizing the overshoot, settling time and rise time of the system response. The effectiveness of the new proposed control method has been verified through simulation studies based on a seven-level quasi-Z-source cascaded multilevel inverter.
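
    The tuning idea, PSO searching for PI gains that minimize the integral absolute error (IAE), can be sketched on a toy plant as follows; the first-order plant, swarm constants and gain bounds are illustrative assumptions and stand in for the far more detailed inverter model of the paper.

```python
import random

def iae(kp, ki, dt=0.01, t_end=5.0):
    """Integral of absolute error for a PI controller on a toy first-order plant
    dy/dt = -y + u, tracking a unit step."""
    y = err_int = integ = 0.0
    t = 0.0
    while t < t_end:
        e = 1.0 - y
        err_int += e * dt
        u = kp * e + ki * err_int
        y += (-y + u) * dt          # explicit Euler step of the plant
        integ += abs(e) * dt
        t += dt
    return integ

def pso_tune(n=20, iters=40, seed=0):
    """Bare-bones particle swarm optimisation over (Kp, Ki)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [iae(*p) for p in pos]
    g = pbest[min(range(n), key=lambda i: pcost[i])][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(10.0, max(0.0, pos[i][d] + vel[i][d]))
            c = iae(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
        g = pbest[min(range(n), key=lambda i: pcost[i])][:]
    return g, iae(*g)

print(pso_tune())   # (tuned [Kp, Ki], resulting IAE)
```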

  18. Exploration of a capability-focused aerospace system of systems architecture alternative with bilayer design space, based on RST-SOM algorithmic methods.

    Science.gov (United States)

    Li, Zhifei; Qin, Dongliang; Yang, Feng

    2014-01-01

    In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), its huge design space, a literature review of design space exploration was first conducted. Then, for the design space exploration of an aerospace system of systems, a bilayer mapping method was put forward based on existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data mining techniques of RST (rough set theory) and SOM (self-organized mapping), the alternative aerospace system of systems architecture was mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space). Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.

  19. Crime Busting Model Based on Dynamic Ranking Algorithms

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2013-01-01

    Full Text Available This paper proposes a crime busting model with two dynamic ranking algorithms to detect the likelihood that a person is a suspect and the possibility that a person is a leader in a complex social network. Notably, in order to obtain the priority list of suspects, an advanced network mining approach with a dynamic cumulative nominating algorithm is adopted, which is computationally much cheaper than most other topology-based approaches. Our method also greatly increases the accuracy of the solution through the enhancement of semantic learning filtering. Moreover, another dynamic algorithm based on node contraction is presented to help identify the leader among the conspirators. Test results are given to verify the theoretical results, and they show good performance on both small and large datasets.

  20. An Algorithm of Sensor Management Based on Dynamic Target Detection

    Institute of Scientific and Technical Information of China (English)

    LIU Xianxing; ZHOU Lin; JIN Yong

    2005-01-01

    The probability density of a stationary target evolves only at measurement updates, whereas the probability density of a dynamic target evolves both at measurement updates and between measurements; accordingly, this paper studies an algorithm for dynamic target detection. Firstly, it presents the evolution of the probability density at measurement updates by Bayes' rule and its evolution between measurements by the Fokker-Planck differential equation, respectively. Secondly, the method of obtaining the information entropy from the probability density is given, and sensor resources are distributed based on the evolution of the information entropy, i.e. the maximization of information gain. Simulation results show that, compared with a serial search algorithm, this algorithm is feasible and effective when used to detect dynamic targets.
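
    As a rough illustration of allocating a sensor by maximizing expected information gain, the sketch below maintains a Bernoulli "target present" probability per cell, applies Bayes' rule at the measurement update, and points the sensor at the cell with the largest expected entropy reduction. The detection/false-alarm probabilities and the cell priors are hypothetical, and the Fokker-Planck prediction step between measurements is omitted.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a Bernoulli 'target present' probability."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def bayes_update(p, detected, pd=0.9, pfa=0.1):
    """Posterior presence probability after one look (Bayes' rule at measurement update)."""
    like_present = pd if detected else 1 - pd
    like_absent = pfa if detected else 1 - pfa
    num = like_present * p
    return num / (num + like_absent * (1 - p))

def expected_gain(p, pd=0.9, pfa=0.1):
    """Expected entropy reduction from looking at a cell with prior p."""
    p_det = pd * p + pfa * (1 - p)
    h_post = (p_det * entropy(bayes_update(p, True, pd, pfa))
              + (1 - p_det) * entropy(bayes_update(p, False, pd, pfa)))
    return entropy(p) - h_post

def allocate(priors):
    """Point the sensor at the cell with maximal expected information gain."""
    gains = np.array([expected_gain(p) for p in priors])
    return int(gains.argmax()), gains

if __name__ == "__main__":
    priors = np.array([0.05, 0.30, 0.55, 0.80])   # presence probabilities per cell
    cell, gains = allocate(priors)
    print("look at cell", cell, "gains:", np.round(gains, 3))
```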

  1. An algorithm for motif-based network design

    CERN Document Server

    Mäki-Marttunen, Tuomo

    2016-01-01

    A determining property of the structure of a biological network is the distribution of local connectivity patterns, i.e., network motifs. In this work, a method for creating directed, unweighted networks while promoting a certain combination of motifs is presented. This motif-based network algorithm starts with an empty graph and randomly connects the nodes while promoting or discouraging the formation of chosen motifs. The in- or out-degree distribution of the generated networks can be explicitly chosen. The algorithm is shown to perform well in producing networks with high occurrences of the targeted motifs, both 3-node and 4-node ones. Moreover, the algorithm can also be tuned to bring about global network characteristics found in many natural networks, such as small-worldness and modularity.
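
    A minimal sketch of the growth idea, assuming the feed-forward loop as the promoted 3-node motif and a simple proportional sampling rule; the paper's exact acceptance mechanics and degree-distribution control are not reproduced here.

```python
import numpy as np

def count_ffl_completions(A, i, j):
    """Number of feed-forward loops (x->y, x->z, y->z) the new edge i->j would complete."""
    n = A.shape[0]
    c = 0
    for k in range(n):
        if k in (i, j):
            continue
        c += A[i, k] and A[k, j]   # i->k->j plus new i->j closes an FFL
        c += A[k, i] and A[k, j]   # k->i, k->j plus new i->j closes an FFL
        c += A[i, k] and A[j, k]   # i->k, j->k plus new i->j closes an FFL
    return c

def grow_network(n_nodes=30, n_edges=120, bias=3.0, seed=0):
    """Randomly add directed edges, favouring edges that complete the target motif."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n_nodes, n_nodes), dtype=int)
    candidates = [(i, j) for i in range(n_nodes) for j in range(n_nodes) if i != j]
    for _ in range(n_edges):
        free = [(i, j) for (i, j) in candidates if not A[i, j]]
        scores = np.array([1.0 + bias * count_ffl_completions(A, i, j) for i, j in free])
        i, j = free[rng.choice(len(free), p=scores / scores.sum())]
        A[i, j] = 1
    return A

if __name__ == "__main__":
    A = grow_network()
    print("edges:", A.sum())
```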

  2. Improving the performance of DCT-based fragile watermarking using intelligent optimization algorithms

    Science.gov (United States)

    Aslantas, Veysel; Ozer, Saban; Ozturk, Serkan

    2009-07-01

    The performance of a fragile watermarking method based on the discrete cosine transform (DCT) is improved in this paper by using intelligent optimization algorithms (IOA), namely the genetic algorithm, differential evolution algorithm, clonal selection algorithm and particle swarm optimization algorithm. In DCT-based fragile watermarking techniques, watermark embedding can usually be achieved by modifying the least significant bits of the transformation coefficients. After the embedding process is completed, transforming the modified coefficients from the frequency domain to the spatial domain produces some rounding errors due to the conversion of real numbers to integers. The rounding errors caused by this transformation process are corrected by the use of the intelligent optimization algorithms mentioned above. This paper gives experimental results which show the feasibility of using these optimization algorithms for fragile watermarking and demonstrate the accuracy of these methods. A performance comparison of the algorithms is also presented.

  3. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, this means that these tuning methods become inapplicable and a trial and error tuning approach must be used. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages to this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. As well, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study will be presented in order to illustrate the use of the tuning algorithm. This will include how different definitions of "optimum" control can arise, and how they are accounted for in the multi-objective decision making algorithm. The resulting tuning parameters from each of the definition sets will be compared, and in doing so show that the tuning parameters vary in order to meet each definition of optimum control, thus showing the generalized automated tuning algorithm approach for tuning MPCs is feasible.

  4. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic algorithm and the dynamic allocation algorithm, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate of the cognitive radio system and has a faster convergence speed.

  5. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    Science.gov (United States)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic algorithm and the dynamic allocation algorithm, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate of the cognitive radio system and has a faster convergence speed.
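
    A small sketch of how a coupled logistic map can supply chaotic sequences, for example to seed a GA population. The linear coupling form, the parameter values, and the mapping onto the population range are assumptions for illustration; the coupling used in the paper may differ.

```python
import numpy as np

def coupled_logistic(n_steps, x0=0.3, y0=0.7, mu=3.99, eps=0.1):
    """Two logistic maps with symmetric linear coupling (one common choice;
    the exact coupling used in the paper is an assumption here)."""
    xs = np.empty(n_steps); ys = np.empty(n_steps)
    x, y = x0, y0
    for t in range(n_steps):
        fx, fy = mu * x * (1 - x), mu * y * (1 - y)
        x = (1 - eps) * fx + eps * fy
        y = (1 - eps) * fy + eps * fx
        xs[t], ys[t] = x, y
    return xs, ys

def chaotic_population(pop_size, n_genes, low, high):
    """Map chaotic values in (0, 1) onto an initial GA population within [low, high]."""
    xs, ys = coupled_logistic(pop_size * n_genes)
    vals = 0.5 * (xs + ys)                      # mix the two coupled sequences
    return low + (high - low) * vals.reshape(pop_size, n_genes)

if __name__ == "__main__":
    print(chaotic_population(4, 3, 0.0, 1.0))
```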

  6. A NOVEL THRESHOLD BASED EDGE DETECTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    Y. RAMADEVI

    2011-06-01

    Full Text Available Image segmentation is the process of partitioning/subdividing a digital image into multiple meaningful regions, or sets of pixels, with respect to a particular application. Edge detection is one of the frequently used techniques in digital image processing. The level to which the subdivision is carried depends on the problem being viewed. Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. There are many ways to perform edge detection. In this paper different edge detection methods such as Sobel, Prewitt, Roberts, Canny and Laplacian of Gaussian (LoG) are used for segmenting the image. The Expectation-Maximization (EM) algorithm, Otsu thresholding and genetic algorithms are also used. A new edge detection technique is proposed which detects sharp and accurate edges that are not possible with the existing techniques. The proposed method is evaluated with threshold values ranging between 0 and 1 for a given input image, and it is observed that when the threshold value is 0.68 the sharp edges are recognised properly.

  7. A fast flexible docking method using an incremental construction algorithm.

    Science.gov (United States)

    Rarey, M; Kramer, B; Lengauer, T; Klebe, G

    1996-08-23

    We present an automatic method for docking organic ligands into protein binding sites. The method can be used in the design process of specific protein ligands. It combines an appropriate model of the physico-chemical properties of the docked molecules with efficient methods for sampling the conformational space of the ligand. If the ligand is flexible, it can adopt a large variety of different conformations. Each such minimum in conformational space presents a potential candidate for the conformation of the ligand in the complexed state. Our docking method samples the conformational space of the ligand on the basis of a discrete model and uses a tree-search technique for placing the ligand incrementally into the active site. For placing the first fragment of the ligand into the protein, we use hashing techniques adapted from computer vision. The incremental construction algorithm is based on a greedy strategy combined with efficient methods for overlap detection and for the search of new interactions. We present results on 19 complexes for which the binding geometry has been crystallographically determined. All considered ligands are docked in at most three minutes on a current workstation. The experimentally observed binding mode of the ligand is reproduced with 0.5 to 1.2 Å RMS deviation. It is almost always found among the highest-ranking conformations computed.

  8. Multifeature Fusion Vehicle Detection Algorithm Based on Choquet Integral

    Directory of Open Access Journals (Sweden)

    Wenhui Li

    2014-01-01

    Full Text Available Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of vehicle detection methods. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. This algorithm divides the vehicle detection problem into two phases: feature similarity measure and multifeature fusion. In the feature similarity measure phase, we first propose a taillight-based vehicle detection method, and then a vehicle taillight feature similarity measure is defined. Second, combining with the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused together by the Choquet integral. Evaluated on public test collections and our own test images, the experimental results show that our method achieves effective and robust multivehicle detection in complicated environments. Our method can not only improve the detection rate but also reduce the false alarm rate, which meets the engineering requirements of Advanced Driving Assistance Systems (ADAS).
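
    The fusion step can be illustrated with a discrete Choquet integral over the three similarity scores. The fuzzy measure values below are invented for illustration (in practice the measure would be designed or learned); the integral itself follows the standard definition.

```python
from itertools import combinations

# Hypothetical fuzzy measure over the three feature criteria; a valid measure is
# monotone with mu(empty set)=0 and mu(full set)=1 (the values here are illustrative).
CRITERIA = ("taillight", "symmetry", "hog_adaboost")
MU = {
    frozenset(): 0.0,
    frozenset(["taillight"]): 0.4,
    frozenset(["symmetry"]): 0.3,
    frozenset(["hog_adaboost"]): 0.45,
    frozenset(["taillight", "symmetry"]): 0.6,
    frozenset(["taillight", "hog_adaboost"]): 0.8,
    frozenset(["symmetry", "hog_adaboost"]): 0.7,
    frozenset(CRITERIA): 1.0,
}

def choquet(scores, mu=MU):
    """Discrete Choquet integral of per-criterion similarity scores in [0, 1]."""
    order = sorted(scores, key=scores.get)          # criteria by ascending score
    total, prev = 0.0, 0.0
    for idx, c in enumerate(order):
        coalition = frozenset(order[idx:])          # criteria scoring at least as high
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

if __name__ == "__main__":
    sims = {"taillight": 0.9, "symmetry": 0.6, "hog_adaboost": 0.75}
    print(f"fused vehicle score: {choquet(sims):.3f}")
```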

  9. The structured total least squares algorithm research for passive location based on angle information

    Institute of Scientific and Technical Information of China (English)

    WANG Ding; ZHANG Li; WU Ying

    2009-01-01

    Based on the constrained total least squares (CTLS) passive location algorithm with bearing-only measurements, this paper transforms the same passive location problem into a structured total least squares (STLS) problem. The solution of the STLS problem for passive location can be obtained using the inverse iteration method. It is also shown that the STLS algorithm and the CTLS algorithm have the same location mean square error under certain conditions. Finally, the article presents a location and tracking algorithm for moving targets by combining the STLS location algorithm with a Kalman filter (KF). The efficiency and superiority of the proposed algorithms are confirmed by computer simulation results.

  10. Evolutionary Algorithm Based on Immune Strategy

    Institute of Scientific and Technical Information of China (English)

    WANG Lei; JIAO Licheng

    2001-01-01

    A novel evolutionary algorithm, evolution-immunity strategies (EIS), is proposed with reference to the immune theory in biology. It constructs an immune operator accomplished in two steps, a vaccination and an immune selection. The aim of introducing immune concepts and methods into ES is to find ways of obtaining the optimal solution of difficult problems using local characteristic information. The detailed process of realizing EIS is presented in 6 steps. EIS is analyzed with Markovian theory and is proved to be convergent with probability 1. In EIS, an immune operator is an aggregation of specific operations and procedures, and methods of selecting vaccines and constructing an immune operator are given in this paper. It is shown with an example of the 442-city TSP that EIS can restrain the degeneration phenomenon during the evolutionary process, improve the searching capability and efficiency, and therefore greatly increase the convergence speed.
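
    A toy sketch of the two-step immune operator (vaccination followed by immune selection) wrapped around an ES-style mutation. The objective function, the vaccine values, and the annealing-style acceptance rule are illustrative assumptions, not the paper's TSP setup.

```python
import numpy as np

def fitness(x):
    """Toy objective to maximize; stands in for the real problem."""
    return -np.sum((x - 0.5) ** 2)

def vaccinate(individual, vaccine, genes, rng, rate=0.3):
    """Vaccination: overwrite a random subset of genes with prior 'good' values."""
    child = individual.copy()
    for g in genes:
        if rng.random() < rate:
            child[g] = vaccine[g]
    return child

def immune_select(parent, child, rng, temperature=0.05):
    """Immune selection: keep the child unless it degrades fitness; degraded
    children survive only with a small annealing-style probability."""
    df = fitness(child) - fitness(parent)
    if df >= 0 or rng.random() < np.exp(df / temperature):
        return child
    return parent

def evolve(pop_size=30, dim=10, gens=100, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, dim))
    vaccine = np.full(dim, 0.5)                 # prior knowledge about good gene values
    for _ in range(gens):
        for i in range(pop_size):
            child = pop[i] + rng.normal(0, sigma, dim)   # ES-style mutation
            child = vaccinate(child, vaccine, range(dim), rng)
            pop[i] = immune_select(pop[i], child, rng)
    best = max(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, f = evolve()
    print("best fitness:", round(f, 4))
```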

  11. Synthetic aperture sonar movement compensation algorithm based on time-delay and phase estimation

    Institute of Scientific and Technical Information of China (English)

    JIANG Nan; SUN Dajun; TIAN Tan

    2003-01-01

    The effects of movement errors on the imaging results of synthetic aperture sonar and the necessity of movement compensation are discussed. Based on an analysis of the so-called displaced phase center algorithm, an improved algorithm is proposed. In this method, the time delay is estimated first, then the phase is estimated for the residual error, so that the range of movement error suited to the algorithm is extended to some extent. Some computer simulation results and experimental results in a test tank using the proposed algorithm are given as well.

  12. Reliability evaluation algorithm for complex medium voltage radial distribution networks based on the fault-spreading method

    Institute of Scientific and Technical Information of China (English)

    谢开贵; 周平; 周家启; 孙渝江; 龙小平

    2001-01-01

    A reliability evaluation algorithm for medium voltage radial distribution networks is proposed. The algorithm is suitable for evaluating relatively complex systems which consist of many sub-feeders. It employs a forward-searching method to determine the area affected by breaker action and applies the fault-spreading method to determine the fault isolation area, based on which the failure types of the nodes can be determined. According to the failure types, the corresponding reliability indices of nodes, feeders and the system can then be calculated. The RBTS-Bus6 and RBTS-Bus2 distribution networks and a large number of networks in actual operation were evaluated using the algorithm, which verifies the effectiveness and practicality of the proposed method.

  13. A Genetic Algorithm-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Babatunde Oluleye

    2014-07-01

    Full Text Available This article details the exploration and application of a Genetic Algorithm (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the concerned classifiers. In this work, one hundred (100) features were extracted from the set of images found in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu 7 Moments (Hu7M), Texture Properties (TP) and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error) which enabled the GA to obtain a combinatorial set of features giving rise to optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software and were better in many ways than the WEKA feature selectors in terms of classification accuracy.
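
    A compact sketch of a binary GA feature selector with a kNN-classification-error fitness, in the spirit described above. It uses a leave-one-out 1-NN error on synthetic data rather than the Flavia features, and the GA operators and rates are generic choices rather than the paper's.

```python
import numpy as np

def knn_loo_error(X, y, k=1):
    """Leave-one-out k-NN classification error on the selected feature columns."""
    n = len(y)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    errors = 0
    for i in range(n):
        nn = np.argsort(d[i])[:k]
        errors += np.bincount(y[nn]).argmax() != y[i]
    return errors / n

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0                              # empty subsets are useless
    return 1.0 - knn_loo_error(X[:, mask.astype(bool)], y)

def ga_feature_select(X, y, pop_size=30, gens=40, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))
    for _ in range(gens):
        fits = np.array([fitness(ind, X, y) for ind in pop])
        # Tournament selection.
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: fits[i])
                       for _ in range(pop_size)]]
        # One-point crossover.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_feat)
            children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                        parents[i, cut:].copy())
        # Bit-flip mutation.
        flip = rng.random(children.shape) < p_mut
        children[flip] ^= 1
        pop = children
    fits = np.array([fitness(ind, X, y) for ind in pop])
    best = pop[fits.argmax()]
    return best.astype(bool), fits.max()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(80, 10))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)     # only features 0 and 3 are informative
    mask, acc = ga_feature_select(X, y)
    print("selected features:", np.where(mask)[0], "accuracy:", round(acc, 3))
```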

  14. Entropy-Based Search Algorithm for Experimental Design

    Science.gov (United States)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
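
    A minimal sketch of scoring candidate experiments by the Shannon entropy of their predicted outcome distribution under a weighted set of models. The toy models, the weights, and the discretised outcomes are assumptions, and the nested-sampling-style rising threshold of the actual algorithm is not reproduced here.

```python
import numpy as np

def predictive_entropy(models, weights, experiment):
    """Shannon entropy of the outcome distribution predicted by a weighted model set.
    Each model is a callable returning outcome probabilities (an assumption here)."""
    p = sum(w * m(experiment) for m, w in zip(models, weights))
    p = p / p.sum()
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def select_experiment(candidates, models, weights):
    """Pick the candidate whose predicted outcomes are most uncertain, i.e. the
    maximally informative experiment on average."""
    ents = [predictive_entropy(models, weights, e) for e in candidates]
    return candidates[int(np.argmax(ents))], ents

if __name__ == "__main__":
    def make_model(center, n_outcomes=8):
        # Toy model: outcome histogram peaked near `center`, shifted by the experiment.
        def model(x):
            bins = np.arange(n_outcomes)
            p = np.exp(-0.5 * ((bins - (center + x)) / 1.5) ** 2)
            return p / p.sum()
        return model

    models = [make_model(c) for c in (1.0, 3.0, 5.0)]
    weights = [0.3, 0.4, 0.3]                    # posterior model probabilities
    best, ents = select_experiment([0.0, 1.0, 2.0], models, weights)
    print("most informative experiment setting:", best)
```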

  15. Analytical and Computational Method of Heat Transfer Coefficient Using FEM-based Algorithm%基于有限元的换热系数反求法

    Institute of Scientific and Technical Information of China (English)

    张时锋; 李自良

    2011-01-01

    The heat transfer coefficient is the main parameter for assessing the cooling capacity of a quenching medium and a key parameter for establishing thermal boundary conditions. In the inverse method, the heat transfer coefficient is taken as the unknown variable to be solved for, which classifies the problem as an inverse heat conduction problem. Such problems are of great significance in practical engineering applications. This article presents a MATLAB program implementing the inverse method for the heat transfer coefficient. The program, based on the finite element method, was verified by combining Ansys simulations with experiments. The results show that the method described in this article is an effective way of calculating the heat transfer coefficient.

  16. Identification of Hammerstein Model Based on Quantum Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Hai Li

    2013-07-01

    Full Text Available Nonlinear system identification is a main topic of modern identification. A new method for nonlinear system identification is presented using a Quantum Genetic Algorithm (QGA). The nonlinear system identification problem is cast as function optimization over the parameter space, and the quantum genetic algorithm is adopted to solve the optimization problem. Simulation experiments show that, compared with the genetic algorithm, the quantum genetic algorithm is an effective swarm-intelligence algorithm: it requires few algorithm parameters and a small population size, and its use of quantum gates to update the population greatly improves the speed and accuracy of the identification. Simulation results show the effectiveness of the proposed method.

  17. DOBD Algorithm for Training Neural Network: Part I. Method

    Institute of Scientific and Technical Information of China (English)

    吴建昱; 何小荣

    2002-01-01

    Overfitting is one of the important problems that restrain the application of neural networks. The traditional OBD (Optimal Brain Damage) algorithm can avoid overfitting effectively, but it needs to train the network repeatedly, with low computational efficiency. In this paper, the Marquardt algorithm is incorporated into the OBD algorithm and a new method for pruning networks, Dynamic Optimal Brain Damage (DOBD), is introduced. This algorithm simplifies a network and obtains good generalization by dynamically deleting weight parameters with low sensitivity, defined as the change of the error function value with respect to a change of the weights. A simplified method is also presented through which sensitivities can be calculated during training with little extra computation. A rule to determine the lower limit of sensitivity for deleting unnecessary weights and other control methods used during pruning and training are introduced. The training course is analyzed theoretically, and the reason why the DOBD algorithm can obtain a much faster training speed than the OBD algorithm and avoid overfitting effectively is given.
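
    A simplified stand-in for the sensitivity-based deletion rule: sensitivity is approximated to first order as |gradient x weight| (the error change expected when a weight is set to zero), and weights below a threshold are zeroed. The paper's exact sensitivity definition, its in-training update, and the Marquardt training loop are not reproduced here.

```python
import numpy as np

def sensitivities(weights, grads):
    """First-order estimate of the error change from deleting each weight:
    |dE/dw * (0 - w)| = |g * w|. This is a simplified stand-in for the paper's
    sensitivity measure."""
    return np.abs(grads * weights)

def prune(weights, grads, keep_ratio=0.7, floor=1e-4):
    """Zero out weights whose sensitivity falls below both a quantile threshold and a
    lower limit, mimicking the 'delete low-sensitivity weights' rule."""
    s = sensitivities(weights, grads)
    threshold = max(np.quantile(s, 1.0 - keep_ratio), floor)
    mask = s >= threshold
    return weights * mask, mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.5, size=(8, 8))       # one layer's weight matrix
    g = rng.normal(scale=0.1, size=(8, 8))       # gradients accumulated during training
    pruned, mask = prune(w, g)
    print("kept", int(mask.sum()), "of", mask.size, "weights")
```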

  18. Facial expression feature selection method based on neighborhood rough set theory and quantum genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    冯林; 李聪; 沈莉

    2013-01-01

    Facial expression feature selection is one of the hot issues in the field of facial expression recognition. A novel facial expression feature selection method, named feature selection based on neighborhood rough set theory and quantum genetic algorithm (FSNRSTQGA), is proposed. First, based on neighborhood rough set theory, a fitness function is defined to evaluate the selected expression feature subsets. Then, combined with the evolutionary strategy of the quantum genetic algorithm, an approach to facial expression feature selection is proposed. Simulation results on the Cohn-Kanade expression dataset illustrate the effectiveness of the method.

  19. Computer Crime Forensics Based on Improved Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2014-04-01

    Full Text Available To find crime-related evidence and association rules among massive data, classic decision tree algorithms such as ID3 have been used for classification analysis in related prototype systems, so how to make them more suitable for computer forensics in variable environments has become a hot issue. When selecting classification attributes, ID3 relies on the computation of information entropy, and attributes with more values tend to be selected as classification nodes of the decision tree; such classifications are unrealistic in many cases. The ID3 algorithm also involves many logarithm computations, so it is complicated to handle datasets with many classification attributes. Therefore, to address the special demands of computer crime forensics, the ID3 algorithm is improved and a novel classification attribute selection method based on a Maclaurin-Priority Value First approach is proposed. It adopts the change-of-base formula and infinitesimal substitution to simplify the logarithms in ID3; for the errors generated in this process, an appropriate constant is introduced and multiplied by the simplified formulas for compensation. The idea of Priority Value First is introduced to solve the problem of value deviation. The performance of the improved method is strictly proved in theory. Finally, experiments verify that our scheme has advantages in computation time and classification accuracy compared with ID3 and two existing algorithms.

  20. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.

  1. Model-based multiobjective evolutionary algorithm optimization for HCCI engines

    OpenAIRE

    Ma, He; Xu, Hongming; Wang, Jihong; Schnier, Thorsten; Neaves, Ben; Tan, Cheng; Wang, Zhi

    2014-01-01

    Modern engines feature a considerable number of adjustable control parameters. With this increasing number of Degrees of Freedom (DoF) for engines, and the consequent considerable calibration effort required to optimize engine performance, traditional manual engine calibration or optimization methods are reaching their limits. An automated engine optimization approach is desired. In this paper, a self-learning evolutionary-algorithm-based multi-objective global optimization approach for a H...

  2. Adaptive algorithm for mobile user positioning based on environment estimation

    Directory of Open Access Journals (Sweden)

    Grujović Darko

    2014-01-01

    Full Text Available This paper analyzes the challenges of realizing an infrastructure-independent and low-cost positioning method in cellular networks based on the RSS (Received Signal Strength) parameter, an auxiliary timing parameter and environment estimation. The proposed algorithm has been evaluated using field measurements collected from a GSM (Global System for Mobile Communications) network, but it is technology independent and can also be applied in UMTS (Universal Mobile Telecommunication System) and LTE (Long-Term Evolution) networks.

  3. DELAUNAY-BASED SURFACE RECONSTRUCTION ALGORITHM IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Triangulation of scattered points is the first important step in reverse engineering. New concepts of the dynamic circle and the closed point are put forward based on the current basic method. These new concepts narrow the region that the triangulation process must search and optimize the triangles as they are produced. Dynamically updating the searching edges controls the progress of the triangulation. The intersection judgment between a new triangle and the produced triangles is changed into an intersection judgment between the new triangle and the searching edges. Examples illustrate the superiority of this new algorithm.

  4. SENSITIVITY ANALYSIS BASED ON LANCZOS ALGORITHM IN STRUCTURAL DYNAMICS

    Institute of Scientific and Technical Information of China (English)

    李书; 王波; 胡继忠

    2003-01-01

    The sensitivity calculating formulas in structural dynamics were developed by utilizing a mathematical theorem and new definitions of sensitivities, so the singularity problem of sensitivity with repeated eigenvalues is solved completely. To improve the computational efficiency, the reduced system is obtained based on Lanczos vectors. After incorporating the mathematical theory with the Lanczos algorithm, the approximate sensitivity solution can be obtained. A numerical example is presented to illustrate the performance of the method.

  5. A haplotype inference algorithm for trios based on deterministic sampling

    Directory of Open Access Journals (Sweden)

    Iliadis Alexandros

    2010-08-01

    Full Text Available Abstract Background In genome-wide association studies, thousands of individuals are genotyped in hundreds of thousands of single nucleotide polymorphisms (SNPs). Statistical power can be increased when haplotypes, rather than three-valued genotypes, are used in analysis, so the problem of haplotype phase inference (phasing) is particularly relevant. Several phasing algorithms have been developed for data from unrelated individuals, based on different models, some of which have been extended to father-mother-child "trio" data. Results We introduce a technique for phasing trio datasets using a tree-based deterministic sampling scheme. We have compared our method with the publicly available algorithms PHASE v2.1, BEAGLE v3.0.2 and 2SNP v1.7 on datasets of varying numbers of markers and trios. We have found that the computational complexity of PHASE makes it prohibitive for routine use; on the other hand 2SNP, though the fastest method for small datasets, was significantly inaccurate. We have shown that our method outperforms BEAGLE in terms of speed and accuracy for small to intermediate dataset sizes, in terms of number of trios, for all marker sizes examined. Our method is implemented in the "Tree-Based Deterministic Sampling" (TDS) package, available for download at http://www.ee.columbia.edu/~anastas/tds Conclusions Using a tree-based deterministic sampling technique, we present an intuitive and conceptually simple phasing algorithm for trio data. The trade-off between speed and accuracy achieved by our algorithm makes it a strong candidate for routine use on trio datasets.

  6. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    Science.gov (United States)

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify the mobile users into basic service, E-service, plus service, and total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and is simpler.

  7. User Equilibrium Exchange Allocation Algorithm Based on Super Network

    Directory of Open Access Journals (Sweden)

    Peiyi Dong

    2013-12-01

    Full Text Available Super-network theory is an effective method for analyzing various traffic networks that involve multiple levels of decision-making. It provides a favorable pricing decision tool because it combines a practical transport network with spatial pricing decisions. The spatial price equilibrium problem has always been an important research direction in transport economics and regional transportation planning. As to how to combine the two, this paper presents a user equilibrium exchange allocation algorithm based on the super network, which places the classical spatial price equilibrium (SPE) problem into a super-network analysis framework. Through super-network analysis, we add two virtual nodes to the network, corresponding to a virtual super-supply node and a virtual super-demand node, analyze the equivalence between user equilibrium and the SPE equilibrium, and give the concrete steps of the user exchange allocation algorithm based on super-network equilibrium. Finally, experiments were carried out for verification. The experiments show that, through the SPE user equilibrium exchange allocation algorithm based on the super network, a steady-state equilibrium solution can be obtained, which demonstrates that the algorithm is reasonable.

  8. Research on Wavelet-Based Algorithm for Image Contrast Enhancement

    Institute of Scientific and Technical Information of China (English)

    Wu Ying-qian; Du Pei-jun; Shi Peng-fei

    2004-01-01

    A novel wavelet-based algorithm for image enhancement is proposed in this paper. On the basis of multiscale analysis, the proposed algorithm efficiently solves the problem of noise over-enhancement, which commonly occurs in traditional methods for contrast enhancement. The decomposed coefficients at the same scale are processed by a nonlinear method, and the coefficients at different scales are enhanced to different degrees. During the procedure, the method takes full advantage of the properties of the human visual system so as to achieve better performance. The simulations demonstrate that these characteristics of the proposed approach enable it to fully enhance the content in images, to efficiently alleviate the enhancement of noise and to achieve a much better enhancement effect than the traditional approaches.

  9. 3D face recognition algorithm based on detecting reliable components

    Institute of Scientific and Technical Information of China (English)

    Huang Wenjun; Zhou Xuebing; Niu Xiamu

    2007-01-01

    Fisherfaces algorithm is a popular method for face recognition. However, there exist some unstable components that degrade recognition performance. In this paper, we propose a method based on detecting reliable components to overcome the problem and introduce it to 3D face recognition. The reliable components are detected within the binary feature vector, which is generated from the Fisherfaces feature vector based on statistical properties, and is used for 3D face recognition as the final feature vector. Experimental results show that the reliable components feature vector is much more effective than the Fisherfaces feature vector for face recognition.

  10. Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm

    Science.gov (United States)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen

    2017-02-01

    Seismic diffractions carry valuable information from subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on the imaging conditions. In fact, diffractors occupy a small share of the distribution in an imaging model and possess discontinuous characteristics. In mathematics, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term requests a least-squares error between modeled diffractions and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method has an advantage in improving the focusing ability of diffractions and reducing the migration artifacts.
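
    The reweighted L2/L1 model can be sketched with a simple solver. The code below substitutes reweighted iterative soft-thresholding (ISTA) for the paper's adaptive reweighting homotopy solver, so it is only a stand-in that solves the same optimization model on synthetic data.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_ista(A, b, lam=0.1, outer=5, inner=200, eps=1e-3):
    """Sparse recovery by reweighted L2/L1 minimisation:
    min ||A x - b||_2^2 + lam * sum_i w_i |x_i|, with w updated between rounds."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    w = np.ones_like(x)
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)          # emphasise small coefficients next round
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 3 * rng.normal(size=k)
    b = A @ x_true + 0.01 * rng.normal(size=m)
    x_hat = reweighted_ista(A, b)
    recovered = np.sum((np.abs(x_hat) > 0.1) & (np.abs(x_true) > 0))
    print("support recovered:", recovered, "of", k)
```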

  11. Infrared image gray adaptive adjusting enhancement algorithm based on gray redundancy histogram-dealing technique

    Science.gov (United States)

    Hao, Zi-long; Liu, Yong; Chen, Ruo-wang

    2016-11-01

    In view of the limitations of the histogram equalization algorithm for image enhancement in digital image processing, an infrared image gray adaptive adjusting enhancement algorithm based on a gray redundancy histogram-dealing technique is proposed. Based on an assessment of the overall gray level of the image, the algorithm raises or lowers the image's overall gray value by adding appropriate gray points, and then uses the gray-level redundancy histogram-equalization method to compress the gray scale of the image. The algorithm can enhance image detail information. Through MATLAB simulation, this paper compares the algorithm with the histogram equalization method and the algorithm based on the gray redundancy histogram-dealing technique, and verifies the effectiveness of the algorithm.
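
    A simplified reading of the gray-redundancy idea is sketched below: shift the overall gray level toward a target mean, then remap only the occupied gray levels onto a compact range so that redundant (unused) levels disappear. The target mean and the remapping rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def adjust_and_compress(img, target_mean=128):
    """Shift the overall gray level toward a target mean, then remove 'redundant'
    (unused) gray levels by remapping the occupied levels onto a compact range."""
    img = img.astype(np.int32)
    shifted = np.clip(img + int(target_mean - img.mean()), 0, 255)

    levels = np.unique(shifted)                       # occupied gray levels only
    lut = np.zeros(256, dtype=np.uint8)
    # Spread the occupied levels evenly over [0, 255]; gaps (redundant levels) vanish.
    lut[levels] = np.round(np.linspace(0, 255, len(levels))).astype(np.uint8)
    return lut[shifted]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic low-contrast 'infrared' frame occupying a narrow gray band.
    frame = rng.normal(loc=90, scale=8, size=(64, 64)).clip(0, 255).astype(np.uint8)
    out = adjust_and_compress(frame)
    print("input range:", frame.min(), frame.max(), "-> output range:", out.min(), out.max())
```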

  12. PI Parameter Optimization Method Based on the Floating-Point Coded Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    何同祥; 韩宁青; 李洪亮; 常保春

    2011-01-01

    This article introduces a PI parameter optimization method based on a floating-point coded genetic algorithm. The time-squared-weighted integral of the absolute error is used as the objective function for parameter selection, and the global search ability of the genetic algorithm is exploited to find the global optimum, which reduces the difficulty of PI parameter tuning and improves overall system performance. The simulation results show that PI parameter optimization by the floating-point coded genetic algorithm gives the system good dynamic quality and steady-state characteristics.

  13. CUDT: A CUDA Based Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Win-Tsung Lo

    2014-01-01

    Full Text Available The decision tree is one of the well-known classification methods in data mining. Many studies have been proposed focusing on improving the performance of decision trees. However, those algorithms were developed for and run on traditional distributed systems, and the latency of processing the huge volumes of data generated by ubiquitous sensing nodes obviously cannot be improved without the help of new technology. In order to improve the data processing latency in large-scale data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with the traditional CPU version. The results show that CUDT is 5 to 55 times faster than Weka-J48 and 18 times faster than SPRINT for large data sets.

  14. An Improved Image Segmentation Based on Mean Shift Algorithm

    Institute of Scientific and Technical Information of China (English)

    CHEN Hanfeng; QI Feihu

    2003-01-01

    Gray image segmentation is to segment an image into homogeneous regions, with only one gray level defined for each region in the result. These gray levels are called major gray levels. The mean shift algorithm (MSA) has shown its efficiency in image segmentation. An improved gray image segmentation method based on MSA is proposed in this paper, since usual image segmentation methods based on MSA often fail to segment images with weak edges. The corrupted block and its J-value are defined first in the proposed method. Then, the J-matrix obtained from the corrupted blocks is used to measure whether weak edges appear in the image. According to the J-matrix, the major gray levels obtained with usual MSA-based segmentation methods are augmented and the corresponding allocation windows are modified to detect weak edges. Experimental results demonstrate the effectiveness of the proposed method in gray image segmentation.

  15. A Text Categorization Algorithm Based on Sense Group

    Directory of Open Access Journals (Sweden)

    Jing Wan

    2013-02-01

    Full Text Available Giving further consideration to linguistic features, this study proposes an algorithm for Chinese text categorization based on sense groups. The algorithm extracts sense groups by analyzing the syntactic and semantic properties of Chinese texts and builds a category sense group library. An SVM is used for the text categorization experiment. The experimental results show that the precision and recall of the new algorithm based on sense groups are better than those of traditional algorithms.

  16. POWER OPTIMIZATION ALGORITHM BASED ON XNOR/OR LOGIC

    Institute of Scientific and Technical Information of China (English)

    Wang Pengjun; Lu Jingang; Xu Jian; Dai Jing

    2009-01-01

    Based on the investigation of the XNOR/OR logical expression and the propagation algorithm of signal probability, a low power synthesis algorithm based on the XNOR/OR logic is proposed in this paper. The proposed algorithm has been implemented with C language. Fourteen Microelectronics Center North Carolina (MCNC) benchmarks are tested, and the results show that the proposed algorithm not only significantly reduces the average power consumption up to 27% without area and delay compensations, but also makes the runtime shorter.

  17. Brain source localization: a new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements.

    Science.gov (United States)

    Vergallo, P; Lay-Ekuakille, A

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially when a seizure occurs, in which case it is important to identify it. The studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to know the pattern of brain activity. The inverse problem, in which the underlying source activity must be determined given the field sampled at different electrodes, is more difficult because the problem may not have a unique solution, or the search for the solution is made difficult by a low spatial resolution which may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or not detected, and a known source localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve a better resolution by exploiting sparsity: if the number of sources is small, then as a result the neural power vs. location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov, which calculates a solution that is the best compromise between two cost functions to minimize, one related to the fitting of the data, and another concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the model considered for the head and brain sources, the result obtained allows to

  18. A Method of Network Traffic Identification Based on Improved Clustering Algorithms

    Institute of Scientific and Technical Information of China (English)

    王宇科; 黎文伟; 苏欣

    2011-01-01

    The automatic detection of the applications associated with network traffic is very important for network security and traffic management. Unfortunately, because applications such as P2P and VoIP use dynamic port numbers, masquerading techniques, and encryption, it is difficult to identify them using simple port-based analysis or classification of packet payloads. Many research works have proposed clustering algorithms for traffic identification, but these algorithms have some defects in how the cluster centers and the number of clusters are chosen. In this paper, we first use the weighted D2 algorithm to improve the selection of the initial cluster centers and use the NMI (Normalized Mutual Information) value to determine the number of clusters, obtaining an improved clustering algorithm, and we then propose an application-level traffic identification method based on this algorithm. The experimental results, especially for P2P application identification, show that this method reaches an identification rate of 90% or more, with low false positive and false rejection rates.
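
    A small sketch of the two ingredients named above: weighted D2 (k-means++-style) seeding for the initial cluster centers, and NMI against labeled flows to choose the number of clusters. The toy "flow features", the availability of fully labeled data, and the range of k are assumptions for illustration.

```python
import numpy as np

def d2_seed(X, k, rng):
    """Weighted D^2 seeding: each new center is drawn with probability proportional
    to the squared distance from the nearest existing center (k-means++ style)."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d2 = np.min(((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = d2_seed(X, k, rng)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels

def nmi(a, b):
    """Normalized mutual information between two labelings."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    ua, ub = np.unique(a), np.unique(b)
    joint = np.array([[np.sum((a == i) & (b == j)) for j in ub] for i in ua]) / n
    pa, pb = joint.sum(1), joint.sum(0)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / (pa[:, None] * pb[None, :])[nz]))
    ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    return mi / np.sqrt(ha * hb)

def choose_k(X, y_labeled, k_range=range(2, 9)):
    """Pick the cluster count whose clustering best matches the labeled flows (by NMI)."""
    scores = {k: nmi(kmeans(X, k), y_labeled) for k in k_range}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy 'flow features' from three hypothetical applications.
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ((0, 0), (3, 0), (0, 3))])
    y = np.repeat([0, 1, 2], 50)
    best_k, scores = choose_k(X, y)
    print("chosen number of clusters:", best_k)
```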

  19. A TDOA location algorithm based on data fusion

    Institute of Scientific and Technical Information of China (English)

    LIU Jun-min; ZHANG Chen; LIU Shi

    2006-01-01

    A new positioning method in mobile networks is presented. Based on data fusion technology, it performs multi-layer information fusion on the location estimates obtained by the Chan algorithm, which effectively increases mobile positioning accuracy using only measured time difference of arrival (TDOA) signals. The method is simple and practical, especially when the location estimates are corrupted by non-line-of-sight (NLOS) errors. It not only has high positioning accuracy, but also reduces the location failure probability. Results from computer simulation show that the proposed method is effective in various environments.

  20. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lifang

    2010-01-01

    Full Text Available We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose an improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos, whose parameters are selected by the genetic algorithm. Shuffling the message's bit order provides us with a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.
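
    A minimal sketch of chaos-based bit shuffling before LSB embedding. The logistic-map parameters are fixed here (the paper selects them with a genetic algorithm), and plain integer arrays stand in for quantized JPEG DCT coefficients; the adaptive embedding rule itself is not reproduced.

```python
import numpy as np

def logistic_permutation(n, x0=0.6137, mu=3.99):
    """Derive a permutation of n message-bit positions from a logistic-map orbit."""
    x, orbit = x0, np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        orbit[i] = x
    return np.argsort(orbit)                 # sorting the orbit yields the shuffle

def embed_lsb(coeffs, bits, perm):
    """Embed the chaos-shuffled message bits into the LSBs of the first cover values."""
    stego = coeffs.copy()
    shuffled = bits[perm]
    stego[:len(bits)] = (stego[:len(bits)] & ~1) | shuffled
    return stego

def extract_lsb(stego, n_bits, perm):
    shuffled = stego[:n_bits] & 1
    bits = np.empty(n_bits, dtype=np.int64)
    bits[perm] = shuffled                    # invert the chaotic shuffle
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coeffs = rng.integers(-40, 40, size=1024)   # stand-in for quantised DCT coefficients
    message = rng.integers(0, 2, size=200)
    perm = logistic_permutation(len(message))
    stego = embed_lsb(coeffs, message, perm)
    assert np.array_equal(extract_lsb(stego, len(message), perm), message)
    print("message recovered correctly")
```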

  1. An interactive segmentation method based on superpixel

    DEFF Research Database (Denmark)

    Yang, Shu; Zhu, Yaping; Wu, Xiaoyu

    2015-01-01

    This paper proposes an interactive image-segmentation method based on superpixels. To achieve fast segmentation, the method establishes a graph-cut model using superpixels as nodes, and a new energy function is proposed. Experimental results demonstrate that the authors' method has excellent performance in terms of segmentation accuracy and computational efficiency compared with other segmentation algorithms based on pixels.

  2. Research on Space 3-station Moving Maritime Target Tracking Method Based on TDOA Location Algorithm

    Institute of Scientific and Technical Information of China (English)

    李悦; 冯新建; 宋庆雷; 秦洋

    2015-01-01

    This paper focuses on tracking a moving maritime target through repeated location fixes obtained by space-based three-station moving TDOA location. A tracking algorithm for the location points based on the Kalman filter (KF) and a single/double TDOA tracking algorithm based on the SR-UKF (square-root nonlinear Kalman filter) are given. The method for computing the initial location point and the initial covariance matrix, which affect the filtering performance, is also given. The simulation results show that the SR-UKF tracking method performs better under conditions of incomplete TDOA data (single TDOA data).

  3. Performance evaluation of sensor allocation algorithm based on covariance control

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The covariance control capability of sensor allocation algorithms based on a covariance control strategy is an important index for evaluating the performance of these algorithms. Owing to the lack of standard performance metrics to evaluate covariance control capability, sensor allocation ratio, etc., there are no guides to follow in the design of sensor allocation algorithms in practical applications. To meet these demands, three quantified performance metric indices are presented: the average covariance misadjustment quantity (ACMQ), the average sensor allocation ratio (ASAR) and the matrix metric influence factor (MMIF), where ACMQ, ASAR and MMIF quantify the covariance control capability, the usage of sensor resources and the robustness of the sensor allocation algorithm, respectively. Meanwhile, a covariance adaptive sensor allocation algorithm based on a new objective function is proposed to improve the covariance control capability of the algorithm based on information gain. The experimental results show that the proposed algorithm has an advantage over the preceding sensor allocation algorithm in covariance control capability and robustness.

  4. A Novel Method to Implement the Matrix Pencil Super Resolution Algorithm for Indoor Positioning

    Directory of Open Access Journals (Sweden)

    Tariq Jamil Saifullah Khanzada

    2011-10-01

    Full Text Available This article presents the estimation results for the algorithms implemented to estimate the delays and distances for an indoor positioning system. The data sets for the transmitted and received signals were captured at typical outdoor and indoor areas, and super resolution estimation algorithms were applied. Different state-of-the-art and super resolution techniques are applied to obtain optimal estimates of the delays and distances between the transmitted and received signals, and a novel method for the Matrix Pencil algorithm is devised. The algorithms perform variably at different scenarios of transmitter and receiver positions. Two scenarios were examined: for the single-antenna scenario, super resolution techniques such as ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) and the Matrix Pencil algorithm give optimal performance compared to the conventional techniques. In the two-antenna scenario, Root-MUSIC and the Matrix Pencil algorithm performed better than the other algorithms for distance estimation; however, the accuracy of all the algorithms is worse than in the single-antenna scenario. In all cases our devised Matrix Pencil algorithm achieved the best estimation results.

  5. A Location-Based Business Information Recommendation Algorithm

    Directory of Open Access Journals (Sweden)

    Shudong Liu

    2015-01-01

    Full Text Available Recently, much research on location-based information recommendation (e.g., POIs, ads) has been done in both research and industry. In this paper, we first construct a region-based location graph (RLG), in which region nodes connect with user nodes and business information nodes, and then we propose a location-based recommendation algorithm based on the RLG, which combines users' short-range mobility formed by daily activity with their long-distance mobility formed by social network ties, and can accordingly recommend both local and long-distance business information to users. Moreover, it combines user-based collaborative filtering with item-based collaborative filtering, and it can alleviate the cold-start problem from which traditional recommender systems often suffer. Empirical studies on large-scale real-world data from Yelp demonstrate that our method outperforms other methods in terms of recommendation accuracy.

  6. Elimination Algorithm for the Spike of the Measured Digital Signal Based on the Racetrack Method

    Institute of Scientific and Technical Information of China (English)

    闫楚良; 段垚奇; 刘扬; 刘克格

    2012-01-01

    Due to the influence of the electromagnetic field and the operating environment, spikes appear in the measured digital signals of carrier machinery. Accurately eliminating these spikes is a key task in digital signal processing. A spike elimination algorithm based on the racetrack method is put forward. The algorithm detects spikes according to the changing trend of the data and clearly distinguishes spikes from normal data. Compared with the processing results of amplitude threshold detection and differential threshold detection, the results show that the spike elimination algorithm based on the racetrack method performs better than the other two and yields reliable results.

  7. Review: Image Encryption Using Chaos Based algorithms

    Directory of Open Access Journals (Sweden)

    Er. Ankita Gaur

    2014-03-01

    Full Text Available Due to developments in network technology and multimedia applications, every minute thousands of messages, which can be text, images, audio or video, are created and transmitted over wireless networks. Improper delivery of a message may lead to the leakage of important information, so encryption is used to provide security. In the last few years, a variety of image encryption algorithms based on chaotic systems have been proposed to protect images from unauthorized access. A 1-D chaotic system using logistic maps has weak security and a small key space, and because of floating-point pixel values some data loss occurs and proper decryption of the image becomes impossible. In this paper, different chaotic maps such as the Arnold cat map, sine map, logistic map and tent map are studied.
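
    As a simple example of the logistic-map family discussed in the reviewed algorithms, the sketch below derives a byte keystream from a logistic orbit and XORs it with the image. The key value, burn-in length, and quantization rule are illustrative choices, not any specific published scheme.

```python
import numpy as np

def logistic_keystream(n, x0, mu=3.999, burn_in=200):
    """Byte keystream from a logistic-map orbit; (x0, mu) act as the secret key."""
    x = x0
    for _ in range(burn_in):                  # discard the transient
        x = mu * x * (1 - x)
    stream = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1 - x)
        stream[i] = int(x * 256) % 256
    return stream

def chaos_xor(image, key=0.3456789):
    """Encrypt/decrypt (XOR is its own inverse) a grayscale image with the keystream."""
    flat = image.ravel()
    ks = logistic_keystream(flat.size, key)
    return (flat ^ ks).reshape(image.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
    enc = chaos_xor(img)
    dec = chaos_xor(enc)
    assert np.array_equal(dec, img)
    print("round trip OK; cipher mean:", enc.mean().round(1))
```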

  8. A Method for Path Planning of UAVs Based on Improved Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    鲁艺; 吕跃; 罗燕; 张亮; 赵志强; 唐隆

    2012-01-01

    An improved genetic algorithm is proposed to solve the problem of UAV path planning in an actual battlefield. First, the planning search space is generated by means of a skeleton algorithm, the information in the search space is extracted, and the kill probability at path points in the search space is calculated. Then, with the information of the planning search space and by using a special gene coding scheme, K candidate paths are obtained with the genetic algorithm, which improves the efficiency of path planning. According to the path selection rules, the optimal path is obtained and smoothed using different step lengths, finally yielding a flyable path that meets the safety and maneuverability requirements of the UAV.

  9. Beam Pattern Synthesis Based on Hybrid Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    YU Yan-li; WANG Ying-min; LI Lei

    2010-01-01

    As conventional methods for beam pattern synthesis cannot always obtain the desired optimum pattern for arbitrary underwater acoustic sensor arrays, a hybrid numerical synthesis method based on the adaptive principle and a genetic algorithm is presented in this paper. First, based on adaptive theory, a given array is treated as an adaptive array and its sidelobes are reduced by assigning a number of interference signals in the sidelobe region. An initial beam pattern is obtained after several iterations and adjustments of the interference intensity, and based on its parameters, a desired pattern is created. Then, an objective function based on the difference between the designed and desired patterns can be constructed. The pattern can be optimized by using the genetic algorithm to minimize the objective function. A design example for a double-circular array demonstrates the effectiveness of this method. Compared with previous approaches, the proposed method can reduce the sidelobes effectively and achieve a smaller synthesis magnitude error in the mainlobe. The method can search for the optimum attainable pattern for the specific elements if the desired pattern cannot be found.

  10. An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique

    Institute of Scientific and Technical Information of China (English)

    SHI Yan; HUANG Cong-ming

    2006-01-01

    An improved particle swarm optimization (PSO) algorithm based on an ensemble technique is presented. The algorithm combines several previous best positions (pbest) of the particles into an ensemble position (Epbest), which is used in place of the global best position (gbest). It is compared, on three different benchmark functions, with the standard PSO algorithm of Kennedy and Eberhart and with several improved PSO algorithms. The simulation results show that the ensemble-based PSO obtains better solutions than the standard PSO and the other improved algorithms in all test cases.
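
    A minimal sketch of the idea, assuming (since the abstract does not specify the combination rule) that the ensemble position Epbest is the average of the k best personal-best positions; the benchmark used here is the sphere function, one common choice, not necessarily one of the paper's three.

```python
import numpy as np

def sphere(x):
    return np.sum(x * x, axis=-1)

def ensemble_pso(fn=sphere, dim=10, swarm=30, iters=300, k=5,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO where the usual gbest is replaced by an ensemble of the k best pbests.
    Averaging the k best pbest positions is an assumption made for this sketch."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (swarm, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), fn(x)
    for _ in range(iters):
        top = np.argsort(pbest_val)[:k]
        epbest = pbest[top].mean(axis=0)             # ensemble position (Epbest)
        r1, r2 = rng.random((2, swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (epbest - x)
        x = x + v
        val = fn(x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
    return pbest_val.min()

print(ensemble_pso())   # should approach 0 on the sphere benchmark
```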

  11. Localization method for the picking point of apple targets based on a smoothed-contour symmetry axis algorithm

    Institute of Scientific and Technical Information of China (English)

    王丹丹; 徐越; 宋怀波; 何东健

    2015-01-01

    Accurate localization of the picking point is a key problem that picking robots must solve and the first step in executing a picking task. Given the good symmetry of apples, and exploiting the translation and rotation invariance of the moment of inertia and the fact that it reaches an extremum along the symmetry axis, a method based on the contour symmetry axis is proposed to locate the picking point of apple targets. To address the low localization accuracy caused by the rough edges of segmented apple targets, a contour-smoothing method is also presented. To verify the algorithm, 20 randomly selected images of single, unoccluded apples were processed with and without contour smoothing. Without smoothing, the average localization error was 20.678°, whereas with smoothing it was 4.542°, a reduction of 78.035%; the average running time dropped from 10.2 ms to 7.5 ms, a reduction of 25.839%. The results show that contour smoothing improves both localization accuracy and computational efficiency, and that the smoothed-contour symmetry-axis algorithm can reliably find the symmetry axis of apple targets and locate the picking point, indicating that the method is feasible for symmetry-axis extraction and picking-point localization of apple targets.
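
    The contour-smoothing step is not reproduced here. The sketch below only illustrates the geometric core: because the moment of inertia of a point set about a line through its centroid is extremal along the principal axes of the second-moment matrix, a symmetry-axis candidate can be read off an eigen-decomposition. The elliptical toy contour is an assumed stand-in for a segmented apple boundary.

```python
import numpy as np

def principal_axis(points):
    """Return the centroid and the unit direction along which the moment of
    inertia of the 2-D contour points is minimal (the candidate symmetry axis)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)            # second central moments
    eigvals, eigvecs = np.linalg.eigh(cov)
    return centroid, eigvecs[:, np.argmax(eigvals)]   # axis of largest spread

# Toy contour: an ellipse whose long axis is rotated by 30 degrees.
t = np.linspace(0, 2 * np.pi, 200)
ellipse = np.c_[3 * np.cos(t), np.sin(t)]
rot = np.radians(30)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
center, axis = principal_axis(ellipse @ R.T)
print(np.degrees(np.arctan2(axis[1], axis[0])))   # close to 30 (or -150)
```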

  12. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph-based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph-based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use, and to reduce the risk of spreading antibiotic-resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram-positive and Gram-negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimens. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.
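
    The antibiotic/bacteria fitness model cannot be reproduced from the abstract, but the defining feature of a graph-based evolutionary algorithm, restricting mating to neighbours on a graph so that information spreads slowly and diversity is preserved, can be sketched. The ring topology, toy fitness function, and operator settings below are assumptions for illustration only.

```python
import random

rng = random.Random(0)
GENES, POP = 30, 20

def fitness(ind):
    """Toy objective standing in for the antibiotic-regimen model."""
    return sum(ind)

population = [[rng.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for step in range(5000):
    i = rng.randrange(POP)
    j = (i + rng.choice([-1, 1])) % POP            # mate only with a ring neighbour
    cut = rng.randrange(1, GENES)
    child = population[i][:cut] + population[j][cut:]   # one-point crossover
    if rng.random() < 0.1:                              # point mutation
        k = rng.randrange(GENES)
        child[k] ^= 1
    # Replace the worse of the two neighbours, keeping selection local as well.
    worse = i if fitness(population[i]) < fitness(population[j]) else j
    if fitness(child) >= fitness(population[worse]):
        population[worse] = child

print(max(fitness(ind) for ind in population))
```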

  13. aTrunk—An ALS-Based Trunk Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Sebastian Lamprecht

    2015-08-01

    Full Text Available This paper presents a rapid multi-return ALS-based (Airborne Laser Scanning) tree trunk detection approach. The multi-core divide-and-conquer algorithm uses a CBH (Crown Base Height) estimation and 3D-clustering approach to isolate the points associated with single trunks. For each trunk, a principal-component-based linear model is fitted, while a deterministic modification of LO-RANSAC is used to identify an optimal model. The algorithm returns a vector-based model for each identified trunk, with parameters such as the ground position, zenith orientation, azimuth orientation, and length of the trunk. The algorithm performed well for a study area of 109 trees (about 2/3 Norway Spruce and 1/3 European Beech) with a point density of 7.6 points per m², reaching a detection rate of about 75% and an overall accuracy of 84%. Compared to crown-based tree detection methods, the aTrunk approach has the advantages of high reliability (5% commission error) and high tree positioning accuracy (0.59 m average difference and 0.78 m RMSE). The use of overlapping segments of parametrizable size allows seamless detection of the tree trunks.
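
    The CBH estimation, 3D clustering, and LO-RANSAC refinement are beyond what can be reconstructed from the abstract; the sketch below only shows the principal-component line model fitted to the points of one isolated trunk, together with the zenith angle derived from it. The noisy synthetic trunk segment is an assumed stand-in for real ALS returns.

```python
import numpy as np

def fit_trunk_line(points):
    """Fit a 3-D line to candidate trunk points with PCA: returns a point on the
    line (the centroid), the unit direction, and the zenith angle in degrees."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    direction = vt[0]
    if direction[2] < 0:                      # make the direction point upward
        direction = -direction
    zenith = np.degrees(np.arccos(direction[2]))
    return centroid, direction, zenith

# Toy cluster: a slightly tilted, noisy vertical trunk segment.
rng = np.random.default_rng(3)
z = np.linspace(0, 10, 200)
trunk = np.c_[0.05 * z, 0.02 * z, z] + 0.03 * rng.standard_normal((200, 3))
print(fit_trunk_line(trunk)[2])    # small zenith angle, i.e. nearly vertical
```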

  14. NOVEL QUANTUM-INSPIRED GENETIC ALGORITHM BASED ON IMMUNITY

    Institute of Scientific and Technical Information of China (English)

    Li Ying; Zhao Rongchun; Zhang Yanning; Jiao Licheng

    2005-01-01

    A novel algorithm, the Immune Quantum-inspired Genetic Algorithm (IQGA), is proposed by introducing immune concepts and methods into the Quantum-inspired Genetic Algorithm (QGA). While preserving the advantages of QGA, IQGA exploits the characteristics of and knowledge about the problem at hand to suppress repeated and ineffective operations during evolution, thereby improving the efficiency of the algorithm. Experimental results on the knapsack problem show that the performance of IQGA is superior to that of the Conventional Genetic Algorithm (CGA), the Immune Genetic Algorithm (IGA), and QGA.

  15. A New Aloha Anti-Collision Algorithm Based on CDMA

    Science.gov (United States)

    Bai, Enjian; Feng, Zhu

    Tag collision is a common problem in RFID (radio frequency identification) systems, and it compromises the integrity of data transmission during communication in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the grouped dynamic framed slotted Aloha algorithm with code division multiple access (CDMA) technology and can effectively reduce the collision probability between tags. For the same number of tags, the algorithm reduces the reader identification time and improves the overall system throughput rate.
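
    The grouping policy and code assignment of the proposed algorithm are not given in the abstract. The toy simulation below only illustrates why combining framed slotted Aloha with CDMA helps: if a slot can separate up to codes_per_slot simultaneous replies, far fewer frames are needed to read all tags. The tag count, frame size, and the assumption that any codes_per_slot replies in a slot are separable are simplifications.

```python
import random

def framed_slotted_aloha(num_tags, frame_size, codes_per_slot=1, seed=0):
    """Return the number of frames needed to identify all tags when a slot is
    readable only if it holds at most `codes_per_slot` replies (CDMA codes)."""
    rng = random.Random(seed)
    unread, frames = num_tags, 0
    while unread > 0:
        frames += 1
        slots = [0] * frame_size
        for _ in range(unread):
            slots[rng.randrange(frame_size)] += 1     # each tag picks a random slot
        unread -= sum(n for n in slots if 0 < n <= codes_per_slot)
    return frames

print(framed_slotted_aloha(200, 64, codes_per_slot=1))   # plain framed slotted Aloha
print(framed_slotted_aloha(200, 64, codes_per_slot=4))   # with 4 spreading codes
```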

  16. Solving SAT by Algorithm Transform of Wu's Method

    Institute of Scientific and Technical Information of China (English)

    贺思敏; 张钹

    1999-01-01

    Recently, algorithms for solving the propositional satisfiability problem (SAT) have aroused great interest, and more attention has been paid to transformation-based problem solving. The commonly used transformation is representation transform, but since its intermediate computing procedure is a black box from the viewpoint of the original problem, this approach has many limitations. In this paper, a new approach called algorithm transform is proposed and applied to solving SAT by Wu's method, a general algorithm for solving polynomial equations. By establishing the correspondence between the primitive operation in Wu's method and clause resolution in SAT, it is shown that Wu's method, when used for solving SAT, is essentially a restricted clause-resolution procedure. While Wu's method introduces entirely new concepts into the resolution procedure, e.g. the characteristic set of clauses, the complexity results for resolution suggest an exponential lower bound for Wu's method on general polynomial equations. Moreover, this algorithm transform can help achieve a more efficient implementation of Wu's method, since it avoids complex manipulation of polynomials and can make full use of domain-specific knowledge.
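
    For readers unfamiliar with the clause-resolution step that the paper relates to Wu's primitive operation, a minimal sketch is given below; clauses are represented as sets of signed integers, a common convention, and the example is not taken from the paper.

```python
def resolve(clause_a, clause_b, literal):
    """Resolve two clauses (frozensets of signed integer literals) on `literal`;
    clause_a must contain the literal and clause_b its negation."""
    if literal not in clause_a or -literal not in clause_b:
        raise ValueError("clauses do not clash on the given literal")
    return frozenset((clause_a - {literal}) | (clause_b - {-literal}))

# (x1 or x2) and (not x1 or x3) resolve on x1 to give (x2 or x3).
print(sorted(resolve(frozenset({1, 2}), frozenset({-1, 3}), 1)))
```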

  17. A research on fast FCM algorithm based on weighted sample

    Institute of Scientific and Technical Information of China (English)

    KUANG Ping; ZHU Qing-xin; WANG Ming-wen; CHEN Xu-dong; QING Li

    2006-01-01

    To improve the computational performance of the fuzzy C-means (FCM) algorithm when clustering large datasets, the concepts of equivalent samples and weighted samples, based on the eigenvalue distribution of the samples in the feature space, are introduced, and a fast clustering algorithm named weighted fuzzy C-means (WFCM), derived from the traditional FCM algorithm, is put forward. It is proved that the two algorithms, WFCM and FCM, produce equivalent clustering results on the same dataset, while WFCM has better computational performance than the ordinary FCM algorithm. An experiment on gray-image segmentation shows that WFCM is a fast and effective clustering algorithm.
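
    A minimal sketch of a weighted FCM update, assuming the usual formulation in which each representative sample carries a weight that multiplies its fuzzy membership in the cluster-center update; the initialization, stopping rule, and test data are arbitrary, and the paper's eigenvalue-based construction of the weighted samples is not reproduced.

```python
import numpy as np

def weighted_fcm(X, weights, c=2, m=2.0, iters=100, eps=1e-6, seed=0):
    """Fuzzy C-means in which each (representative) sample carries a weight,
    so many similar samples can be replaced by one weighted sample."""
    rng = np.random.default_rng(seed)
    n = len(X)
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)
    w = np.asarray(weights, dtype=float)[:, None]
    for _ in range(iters):
        um = (u ** m) * w                                   # weighted memberships
        centers = um.T @ X / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_u = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(new_u - u)) < eps:
            u = new_u
            break
        u = new_u
    return centers, u

X = np.vstack([np.random.default_rng(1).normal(0, 0.2, (50, 2)),
               np.random.default_rng(2).normal(3, 0.2, (50, 2))])
# With unit weights this reduces to ordinary FCM on the toy data.
centers, u = weighted_fcm(X, weights=np.ones(len(X)), c=2)
print(np.round(centers, 2))
```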

  18. APPLICATION OF GPU-BASED GRAPHICS ACCELERATION ALGORITHM IN DISCRETE ELEMENT METHOD

    Institute of Scientific and Technical Information of China (English)

    鲍春永; 赵啦啦; 刘万英; 杨康康

    2016-01-01

    The particle discrete element method (DEM) is a numerical simulation method widely used to study the mechanical behaviour of granular materials, and computational efficiency is one of the main factors restricting its development and application. In this paper, a hopper model is built with the Pro/E software, and the Stream DEM software is used to carry out discrete element simulations of the particle filling process of the hopper. The computing procedures and results of the CPU-based and GPU-based acceleration algorithms are compared. The results show that the GPU-based graphics acceleration algorithm can dramatically improve the computational efficiency of particle DEM simulations: when the number of filled particles reaches 130,000, the computational efficiency is more than 10 times that of the CPU-based algorithm.

  19. A RBF Network Learning Scheme Using Immune Algorithm Based on Information Entropy

    Institute of Scientific and Technical Information of China (English)

    GONG Xin-bao; ZANG Xiao-gang; ZHOU Xi-lang

    2005-01-01

    A hybrid learning method combining an immune algorithm and the least-squares method is proposed to design radial basis function (RBF) networks. The immune algorithm, based on information entropy, is used to determine the structure and parameters of the nonlinear RBF hidden layer, and the weights of the linear RBF output layer are computed with the least-squares method. By introducing diversity control and an immune memory mechanism, the algorithm improves efficiency and overcomes the premature convergence problem of genetic algorithms. Computer simulations demonstrate that RBF networks designed with this method converge quickly and perform well.
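
    The immune-algorithm search for the hidden-layer structure is not reproduced here; the sketch below only shows the second half of the hybrid scheme, computing the linear output-layer weights by least squares once Gaussian centers and widths are given. The centers, width, and toy regression data are assumptions.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian hidden-layer outputs for every sample / center pair."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * width ** 2))

def fit_output_weights(X, y, centers, width):
    """Least-squares solution for the linear output layer (with a bias column),
    assuming the hidden-layer structure has already been chosen elsewhere."""
    H = rbf_design_matrix(X, centers, width)
    H = np.hstack([H, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
centers = np.linspace(-3, 3, 10)[:, None]   # stand-in for immune-selected centers
w = fit_output_weights(X, y, centers, width=0.8)
print(w.shape)    # 10 hidden-unit weights + 1 bias
```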

  20. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    Science.gov (United States)

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve this high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm uses the crowdsourced information provided by a large number of users as they walk through buildings as the source of location-fingerprint data. From the variation characteristics of the users' smartphone sensors, indoor anchors (doors) are identified and their locations are taken as reference positions for the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints and acquire representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction while maintaining localization accuracy. The proposed method does not require users' explicit participation, which effectively solves the resource-consumption problem of establishing a location-fingerprint database. PMID:27070623
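
    A small sketch of the clustering stage, assuming that the "AP-Cluster method" corresponds to affinity propagation (as implemented in scikit-learn); the fingerprints, anchor positions, and the way exemplars would later be linked to reference positions are all made up for illustration.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation   # assuming AP-Cluster ~ affinity propagation

# Toy crowdsourced fingerprints: one RSSI value per visible access point,
# collected near two hypothetical anchors (doors).
rng = np.random.default_rng(0)
near_door_a = rng.normal([-40, -70, -80], 2.0, size=(30, 3))
near_door_b = rng.normal([-75, -45, -60], 2.0, size=(30, 3))
fingerprints = np.vstack([near_door_a, near_door_b])

# Cluster the crowdsourced fingerprints; the exemplars act as the
# representative fingerprints stored in the radio-map.
ap = AffinityPropagation(random_state=0).fit(fingerprints)
representatives = ap.cluster_centers_
labels = ap.labels_

print(len(representatives), "representative fingerprints")
print("cluster sizes:", np.bincount(labels))
# In the full pipeline each representative would then be attached to a
# reference position (a detected door) to build the radio-map.
```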