Model-based Bayesian signal extraction algorithm for peripheral nerves
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of
Phase shift extraction algorithm based on Euclidean matrix norm.
Deng, Jian; Wang, Hankun; Zhang, Desi; Zhong, Liyun; Fan, Jinping; Lu, Xiaoxu
2013-05-01
In this Letter, the character of the Euclidean matrix norm (EMN) of the intensity difference between phase-shifting interferograms, which varies sinusoidally with the phase shifts, is presented. Based on this character, an EMN phase-shift extraction algorithm is proposed. Both simulation and experimental research show that phase shifts can be determined easily and with high precision using the proposed EMN algorithm. Importantly, the proposed EMN algorithm supplies a powerful tool for the rapid calibration of phase shifts.
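The sinusoidal character is easy to reproduce numerically. Below is a minimal NumPy sketch (not the authors' code): for frames I(δ) = a + b·cos(φ + δ), the difference I(0) − I(δ) = 2b·sin(φ + δ/2)·sin(δ/2), so its Frobenius (Euclidean matrix) norm tracks |sin(δ/2)| whenever the phase map φ is well spread. The random test phase map and fringe parameters here are assumptions for illustration.

```python
import numpy as np

def emn(i1, i2):
    """Euclidean (Frobenius) matrix norm of the intensity difference."""
    return np.linalg.norm(i1 - i2)

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, (128, 128))   # well-spread test phase map
a, b = 1.0, 0.8

def frame(delta):
    return a + b * np.cos(phi + delta)

deltas = np.linspace(0.1, 2.0 * np.pi - 0.1, 40)
norms = np.array([emn(frame(0.0), frame(d)) for d in deltas])

# I(0) - I(d) = 2b sin(phi + d/2) sin(d/2), so the norm tracks |sin(d/2)|
scale = norms[20] / abs(np.sin(deltas[20] / 2.0))
assert np.allclose(norms, scale * np.abs(np.sin(deltas / 2.0)), rtol=0.02)
```

Inverting this relation on the measured norms is what allows the phase shifts to be calibrated rapidly.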
A Block-Based Multi-Scale Background Extraction Algorithm
Directory of Open Access Journals (Sweden)
Seyed H. Davarpanah
2010-01-01
Problem statement: To extract moving objects, vision-based surveillance systems subtract the current image from a predefined background image. The efficiency of these systems depends mainly on the accuracy of the extracted background image, which should adapt to changes continuously; moreover, especially in real-time applications, the time complexity of this adaptation is critical. Approach: In this study, a combination of blocking and multi-scale methods is presented to extract an adaptive background. Being less sensitive to local movements, block-based techniques are well suited to handling non-stationary object movements, especially in outdoor applications, and can reduce the effect of these objects on the extracted background. We also used the blocking method to intelligently select the regions to which temporal filtering is applied. In addition, an amended multi-scale algorithm is introduced: a hybrid combination of nonparametric and parametric filters. It uses a nonparametric filter in the spatial domain to initiate two primary backgrounds; subsequently, two adapted two-dimensional filters are used to extract the final background. Results: The qualitative and quantitative results of our experiments certify that the quality of the final extracted background is acceptable and that its time consumption is approximately half that of similar methods. Conclusion: Multi-scale filtering and applying the filters only to selected non-overlapped blocks reduce the time consumption of the background extraction algorithm.
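The block-gated update described above can be sketched as follows. This is a hypothetical minimal version, not the authors' implementation; the block size, motion threshold, and learning rate are illustrative:

```python
import numpy as np

def update_background(bg, frame, block=8, motion_thresh=20.0, alpha=0.1):
    """Running-average background update applied only to motion-free blocks.

    Blocks whose mean absolute difference from the background exceeds
    motion_thresh are assumed to contain moving objects and are skipped,
    so foreground objects do not bleed into the background.
    """
    h, w = bg.shape
    out = bg.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = bg[y:y + block, x:x + block]
            f = frame[y:y + block, x:x + block]
            if np.mean(np.abs(f - b)) < motion_thresh:
                out[y:y + block, x:x + block] = (1 - alpha) * b + alpha * f
    return out

bg = np.zeros((32, 32))
frame = np.full((32, 32), 5.0)     # small global illumination change
frame[8:16, 8:16] = 105.0          # one block occupied by a moving object
out = update_background(bg, frame)
assert np.isclose(out[0, 0], 0.5)  # static block blended toward the frame
assert out[10, 10] == 0.0          # object block left untouched
```

The paper's multi-scale and nonparametric stages would then refine what this coarse block gate produces.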
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
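The augmentation idea can be illustrated compactly. The sketch below is an assumption-laden simplification, not the paper's parallel algorithm: chordality is tested with maximum-cardinality search plus the standard perfect-elimination check, and the subgraph is augmented greedily from the empty edge set rather than from a spanning chordal subgraph. Repeated passes are needed because an edge rejected early may become addable once its chord appears:

```python
def is_chordal(adj):
    """Chordality test: maximum-cardinality search + perfect-elimination check."""
    weight = {v: 0 for v in adj}
    order, numbered = [], set()
    for _ in range(len(adj)):
        v = max((u for u in adj if u not in numbered), key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for u in adj[v]:
            if u not in numbered:
                weight[u] += 1
    order.reverse()                      # candidate perfect elimination order
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        if later:
            m = min(later, key=lambda u: pos[u])
            if any(u != m and u not in adj[m] for u in later):
                return False
    return True

def maximal_chordal_subgraph(vertices, edges):
    """Greedily add edges that keep the subgraph chordal, repeating passes
    until none can be added (so the result is maximal)."""
    adj = {v: set() for v in vertices}
    remaining, changed = list(edges), True
    while changed:
        changed, still = False, []
        for (u, v) in remaining:
            adj[u].add(v); adj[v].add(u)
            if is_chordal(adj):
                changed = True
            else:
                adj[u].discard(v); adj[v].discard(u)
                still.append((u, v))
        remaining = still
    return adj

cycle = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
sub = maximal_chordal_subgraph("abcd", cycle)
assert sum(len(s) for s in sub.values()) // 2 == 3   # one cycle edge dropped
assert is_chordal(sub)
k4 = cycle + [("a", "c"), ("b", "d")]
assert sum(len(s) for s in maximal_chordal_subgraph("abcd", k4).values()) // 2 == 6
```

On the chordless 4-cycle one edge must be dropped, while on K4 (already chordal) every edge survives; starting from a spanning chordal subgraph, as the paper does, only changes the initialization of this loop.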
HTML Extraction Algorithm Based on Property and Data Cell
Purnamasari, Detty; Wayan Simri Wicaksana, I.; Harmanto, Suryadi; Yuniar Banowosari, Lintang
2013-06-01
The data available on the Internet comes in various models and formats. One common form of data representation is the table. Table extraction is needed to process multiple tables drawn from different Internet sources; currently this is done by copy-and-paste, which is not an automatic process. This article presents an approach to prepare the table area so that tables in HTML format can be extracted and converted into a database, making it easier to combine data from many resources. The approach was tested with Algorithm 1, used to determine the actual number of columns and rows of the table, and Algorithm 2, used to determine the boundary line of the property. Tests conducted on 100 HTML tables show that Algorithm 1 achieves 99.9% accuracy and Algorithm 2 achieves 84% accuracy.
Fingerprint Feature Extraction Algorithm
Directory of Open Access Journals (Sweden)
Mehala, G.
2014-03-01
The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and to extract true minutiae.
Key Frames Extraction Based on the Improved Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
ZHOU Dong-sheng; JIANG Wei; YI Peng-fei; LIU Rui
2014-01-01
In order to overcome the poor local search ability of the genetic algorithm, which makes the basic genetic algorithm time-consuming and weak in late-evolutionary search, we use Gray coding instead of binary coding at the start of the encoding, and we replace the original single-point crossover operation with multi-point crossover. Finally, experiments show that the improved genetic algorithm not only has a strong search capability but also has effectively improved stability.
Heterogeneous Web Data Extraction Algorithm Based On Modified Hidden Conditional Random Fields
Cui Cheng
2014-01-01
As it is of great importance to extract useful information from heterogeneous Web data, in this paper we propose a novel heterogeneous Web data extraction algorithm using a modified hidden conditional random fields model. Considering that traditional linear-chain conditional random fields cannot effectively solve the problem of complex and heterogeneous Web data extraction, we modify the standard hidden conditional random fields in three aspects, which are 1) Using the hidden Markov mo...
RBF neural network and active circles based algorithm for contours extraction
Institute of Scientific and Technical Information of China (English)
Zhou Zhiheng; Zeng Delu; Xie Shengli
2007-01-01
For contour extraction from images, active contour models and self-organizing-map-based approaches are popular nowadays, but they still face the problems that the optimization of the energy function can be trapped in local minima and that contour evolution depends greatly on the initial contour selection. To address these problems, a contour extraction algorithm based on an RBF neural network is proposed here. A series of circles with adaptive radii and centers is first used to search for image feature points that are scattered enough. After the feature points are clustered, a group of radial basis functions is constructed. Using the pixels' intensities and gradients as the input vector, the final object contour can be obtained through the predictive ability of the neural network. The RBF neural network based algorithm is tested on three kinds of images: those with changing topology, complicated backgrounds, and blurred or noisy boundaries. Simulation results show that the proposed algorithm performs contour extraction well.
Directory of Open Access Journals (Sweden)
A F M Saifuddin Saif
Fast and computationally inexpensive feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in computer vision research. The types of features used in current studies on moving object detection are typically chosen to improve the detection rate rather than to provide fast and computationally inexpensive feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
[Research on non-rigid medical image registration algorithm based on SIFT feature extraction].
Wang, Anna; Lu, Dan; Wang, Zhe; Fang, Zhizhen
2010-08-01
For non-rigid registration of medical images, this paper presents a practical feature-point matching algorithm: an image registration algorithm based on the Scale-Invariant Feature Transform (SIFT). The algorithm makes use of the invariance of image features to translation, rotation and affine transformation in scale space to extract image feature points. A bidirectional matching algorithm is chosen to establish the matching relations between images, which improves registration accuracy. On this basis, an affine transform is chosen to complete the non-rigid registration, and a normalized mutual information measure and PSO optimization are chosen to optimize the registration process. The experimental results show that the method achieves better registration results than a method based on mutual information alone.
Directory of Open Access Journals (Sweden)
Xu Sun
2015-12-01
Mixed pixels are common in hyperspectral remote sensing images, and endmember extraction is a key step in spectral unmixing. The linear spectral mixture model (LSMM) constitutes a geometric approach commonly used for this purpose. This paper introduces artificial bee colony (ABC) algorithms for spectral unmixing. First, the objective function of the external minimum volume model is improved to enhance the robustness of the results; then, the ABC-based endmember extraction process is presented. Depending on the characteristics of the objective function, two algorithms are proposed: Artificial Bee Colony Endmember Extraction-RMSE (ABCEE-R) and ABCEE-Volume (ABCEE-V). Finally, two sets of experiments on synthetic data and one set of experiments on a real hyperspectral image are reported. Comparative experiments reveal that ABCEE-R and ABCEE-V achieve better endmember extraction results than other algorithms when processing data with a low signal-to-noise ratio (SNR). ABCEE-R does not require high accuracy in the number of endmembers and always obtains the result with the best root mean square error (RMSE); when the number of endmembers extracted does not match the true number, the RMSE of the ABCEE-V results is usually not as good as that of ABCEE-R, but the endmembers extracted by ABCEE-V are closer to the true endmembers.
Comparisons of feature extraction algorithm based on unmanned aerial vehicle image
Xi, Wenfei; Shi, Zhengtao; Li, Dongsheng
2017-07-01
Feature point extraction has become a research hotspot in photogrammetry and computer vision. Commonly used point feature extraction operators include the SIFT, Forstner, Harris and Moravec operators. With its high spatial resolution, a UAV image differs from traditional aerial images. Based on these characteristics of unmanned aerial vehicle (UAV) imagery, this paper uses the operators referred to above to extract feature points from building, grassland, shrubbery, and vegetable greenhouse images. Through practical case analysis, the performance, advantages, disadvantages and adaptability of each algorithm are compared and analyzed with respect to speed and accuracy. Finally, suggestions on how to adapt the different algorithms to diverse environments are proposed.
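Of the operators compared above, the Harris operator is the simplest to sketch. The following is a minimal NumPy illustration; the window size, the constant k = 0.04, and the box smoothing are conventional choices, not parameters taken from this paper:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)            # image gradients
    def box(a, r=1):                     # small box filter for the structure tensor
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2
    ixx, iyy, ixy = box(gx * gx), box(gy * gy), box(gx * gy)
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2

# A bright square: the strongest responses sit near its four corners
img = np.zeros((20, 20))
img[6:14, 6:14] = 1.0
r = harris_response(img)
y, x = np.unravel_index(int(np.argmax(r)), r.shape)
corners = [(6, 6), (6, 13), (13, 6), (13, 13)]
assert min(max(abs(y - cy), abs(x - cx)) for cy, cx in corners) <= 2
```

Edges score negative and flat areas score zero under this response, which is why the operator isolates corners; the paper's comparison then measures how such detectors behave on different UAV land-cover types.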
A New Feature Extraction Algorithm Based on Entropy Cloud Characteristics of Communication Signals
Directory of Open Access Journals (Sweden)
Jingchao Li
2015-01-01
Identifying communication signals in low-SNR environments has become more difficult due to the increasingly complex communication environment. Most of the relevant literature revolves around signal recognition under stable SNR and is not applicable in time-varying SNR environments. To solve this problem, we propose a new feature extraction method based on entropy cloud characteristics of communication modulation signals. The proposed algorithm first extracts the Shannon entropy and index entropy characteristics of the signals and then effectively combines entropy theory and cloud model theory. Compared with traditional feature extraction methods, the proposed algorithm can further extract the instability distribution characteristics of the signals' entropy features from the cloud model's digital characteristics in low-SNR environments, which improves recognition significantly. The numerical simulations show that the entropy cloud feature extraction algorithm achieves better signal recognition; even at an SNR of −11 dB, the recognition rate can still reach 100%.
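As a rough illustration of the entropy side of the method (the paper's "index entropy" and its exact cloud construction are not reproduced here), the sketch below computes the Shannon entropy of an amplitude histogram together with one common backward cloud generator for the digital characteristics (Ex, En, He):

```python
import numpy as np

def shannon_entropy(signal, bins=64):
    """Shannon entropy (bits) of the signal's amplitude histogram."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def backward_cloud(samples):
    """Backward cloud generator: digital characteristics (Ex, En, He)."""
    samples = np.asarray(samples, dtype=float)
    ex = samples.mean()
    en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(samples - ex))
    he = np.sqrt(max(samples.var(ddof=1) - en ** 2, 0.0))
    return ex, en, he

rng = np.random.default_rng(0)
assert shannon_entropy(np.ones(1000)) == 0.0            # constant: no uncertainty
assert shannon_entropy(rng.uniform(size=100000)) > 5.9  # near log2(64) = 6 bits
ex, en, he = backward_cloud(rng.normal(0.0, 2.0, 100000))
assert abs(ex) < 0.05 and abs(en - 2.0) < 0.1
```

In the paper's scheme, entropy features computed over many signal segments would be fed to such a cloud generator, and the resulting (Ex, En, He) triple characterizes how the entropy itself fluctuates under time-varying SNR.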
Rule Extraction from Trained Artificial Neural Network Based on Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
WANG Wen-jian; ZHANG Li-xia
2002-01-01
This paper discusses how to extract symbolic rules from a trained artificial neural network (ANN) in classification domains using genetic algorithms (GA). Previous methods based on an exhaustive analysis of network connections and output values have already been demonstrated to be intractable, in that the scale-up factor increases with the number of nodes and connections in the network. Experiments demonstrating the effectiveness of the presented method are also given.
A wavelet based algorithm for DTM extraction from airborne laser scanning data
Xu, Liang; Yang, Yan; Tian, Qingjiu
2007-06-01
The automatic extraction of a Digital Terrain Model (DTM) from point clouds acquired by airborne laser scanning (ALS) equipment remains a problem in ALS data filtering nowadays. Many filter algorithms have been developed to remove object points and outliers and to extract the DTM automatically. However, it is difficult to filter in areas where few points have the morphological or geological features that characterize the bare earth. Especially in sloped terrain covered by dense vegetation, points representing bare earth are often identified as noisy data below ground. To extract the terrain surface in these areas, a new algorithm is proposed. First, the point clouds are cut into profiles based on a scan-line segmentation algorithm. In each profile, a 1D filtering procedure derived from wavelet theory, which is superior at detecting high-frequency discontinuities, is applied. After combining profiles from different directions, interpolated grid data representing the DTM are generated. To evaluate the performance of this new approach, we applied it to the data set used in the ISPRS filter test of 2003. Two samples containing mostly vegetation on slopes were processed by the proposed algorithm. It filtered most objects, such as vegetation and buildings, in the sloped areas and smoothed the hilly terrain closer to its real surface.
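A one-level 1D example of why wavelet details flag terrain discontinuities (a toy sketch with a Haar wavelet; the paper's actual wavelet, decomposition depth, and thresholding are not specified here):

```python
import numpy as np

def haar_detail(profile):
    """One-level Haar transform of a 1-D elevation profile; large detail
    coefficients mark height discontinuities (object edges)."""
    x = np.asarray(profile, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# A gentle slope with a 5 m object on it: the step dominates the details
ground = np.linspace(0.0, 10.0, 64)
profile = ground.copy()
profile[31:40] += 5.0
_, d = haar_detail(profile)
assert int(np.argmax(np.abs(d))) == 15   # pair (30, 31) straddles the step
```

Note that a jump aligned exactly with a pair boundary is missed at a single level, which is one reason multi-level decompositions and profiles cut in several directions, as in the paper, are useful.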
Directory of Open Access Journals (Sweden)
Hanane FROUD
2013-11-01
The goal of document clustering algorithms is to create clusters that are internally coherent but clearly different from each other. The useful expressions in documents are often accompanied by a large amount of noise caused by unnecessary words, so it is indispensable to eliminate this noise and keep just the useful information. Keyphrase extraction systems for Arabic are a new phenomenon, and a number of text mining applications can use them to improve their results. Keyphrases are defined as phrases that capture the main topics discussed in a document; they offer a brief and precise summary of its content and can therefore be a good solution for removing the existing noise from documents. In this paper, we propose a new method to solve the problem cited above, especially for documents in Arabic, one of the most complex languages, using a new keyphrase extraction algorithm based on the suffix tree data structure (KpST). To evaluate our approach, we conduct an experimental study on Arabic document clustering using the most popular hierarchical approach, the agglomerative hierarchical algorithm, with seven linkage techniques and a variety of distance functions and similarity measures. The obtained results show that our approach to extracting keyphrases improves the clustering results.
Institute of Scientific and Technical Information of China (English)
WANG Gang; RAO NiNi; ZHANG Ying
2008-01-01
The analysis and characterization of atrial fibrillation (AF) requires, in a previous key step, the extraction of the atrial activity (AA) from the 12-lead electrocardiogram (ECG). This contribution proposes a novel non-invasive approach for AA estimation in AF episodes. The method is based on blind source extraction (BSE) using higher-order statistics (HOS). The validity and performance of this algorithm are confirmed by extensive computer simulations and experiments on real-world data. In contrast to blind source separation (BSS) methods, BSE extracts only one desired signal, and it is easy for the machine to judge whether the extracted signal is the AA source by calculating its spectral concentration, while it is hard for a machine using a BSS method to judge which of the twelve separated signals is the AA source. Therefore, the proposed method is expected to have great potential in clinical monitoring.
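One standard instance of HOS-based blind source extraction is a one-unit fixed-point iteration with the kurtosis (fourth-order) contrast, sketched below on synthetic two-lead data. The paper's exact contrast function and ECG model are not specified here; the square/sine sources and mixing matrix are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0.0, 10.0, n)
s1 = np.sign(np.sin(2 * np.pi * 1.3 * t))   # spiky, sub-Gaussian "AA-like" source
s2 = np.sin(2 * np.pi * 0.4 * t)            # smooth interfering source
X = np.array([[1.0, 0.8], [0.6, 1.0]]) @ np.vstack([s1, s2])  # two observed leads

# Whitening
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# One-unit fixed-point iteration with the kurtosis contrast g(u) = u^3
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    u = w @ Z
    w_new = (Z * u ** 3).mean(axis=1) - 3.0 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(w_new @ w) > 1.0 - 1e-12   # sign flips count as converged
    w = w_new
    if converged:
        break
y = w @ Z                                      # the single extracted component

def corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

assert max(corr(y, s1), corr(y, s2)) > 0.9     # one source recovered cleanly
```

As the abstract notes, extracting one component at a time is what lets a spectral-concentration test decide whether the extracted signal is the AA source, instead of inspecting twelve separated outputs.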
Evaluation of Rule Extraction Algorithms
Directory of Open Access Journals (Sweden)
Tiruveedula GopiKrishna
2014-05-01
For the data mining domain, the lack of explanation facilities seems to be a serious drawback for techniques based on artificial neural networks or, for that matter, any technique producing opaque models. In particular, the ability to generate even limited explanations is absolutely crucial for user acceptance of such systems. Since the purpose of most data mining systems is to support decision making, the need for explanation facilities in these systems is apparent. The task for the data miner is thus to identify the complex but general relationships that are likely to carry over to production data, and the explanation facility makes this easier. The quality of the extracted rules, i.e., how well the required explanation is performed, is also a focus. In this research some important rule extraction algorithms are discussed, and their algorithmic complexity, i.e., how efficient the underlying rule extraction algorithm is, is identified.
Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)
Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao
2017-04-01
This study proposes a novel method to extract endmembers from hyperspectral images based on a discrete firefly algorithm (EE-DFA). Endmembers are the input of many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem aimed at the best spectral unmixing results, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted, on synthetic hyperspectral datasets with different SNRs and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters of the proposed method was tested on both the synthetic datasets and the AVIRIS Cuprite dataset, and a recommended parameter setting is proposed. The results demonstrate that the proposed EE-DFA method performs better than the existing popular methods and that EE-DFA is robust under different SNR conditions.
An efficient method of key-frame extraction based on a cluster algorithm.
Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng
2013-12-18
This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data.
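A stripped-down version of the idea, with a plain k-means-style iteration standing in for ISODATA and deterministic initialization (the adaptive threshold selection and split/merge steps of the paper are omitted):

```python
import numpy as np

def key_frames(frames, k=2, iters=20):
    """Cluster frame feature vectors; return the index of the frame nearest
    to each cluster centre as a key frame."""
    frames = np.asarray(frames, dtype=float)
    idx = np.linspace(0, len(frames) - 1, k).astype(int)
    centres = frames[idx].copy()         # deterministic, evenly spaced init
    for _ in range(iters):
        d = np.linalg.norm(frames[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = frames[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    d = np.linalg.norm(frames[:, None, :] - centres[None, :, :], axis=2)
    return sorted(int(d[:, j].argmin()) for j in range(k))

# Two well-separated motion segments -> one key frame from each
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                    rng.normal(10.0, 0.1, (10, 2))])
keys = key_frames(frames, k=2)
assert len(keys) == 2 and keys[0] < 10 <= keys[1]
```

Replacing the fixed k with ISODATA's split/merge rules is what lets the paper's method handle different motion types without user-specified parameters.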
Directory of Open Access Journals (Sweden)
Lingli Cui
2014-09-01
This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite-dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost or improve the efficiency of the decomposition stage; therefore, the optimized composite dictionary single-atom matching pursuit algorithm (CD-SaMP) is proposed. Second, a termination condition for the iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition constantly during decomposition to avoid noise. Third, the composite dictionaries are enriched with a modulation dictionary, which reflects one of the important structural characteristics of gear fault signals. Meanwhile, the iteration termination settings, sub-feature dictionary selections and operational efficiency of CD-MaMP and CD-SaMP are discussed using simulated gear vibration signals with noise. The simulated sensor-based vibration signal results show that the attenuation-coefficient-based termination condition greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has a great advantage in sparsity and efficiency over CD-MaMP. Sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and
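The single-atom greedy decomposition at the heart of such methods can be sketched as follows. This is plain matching pursuit over a simple cosine (Fourier-type) dictionary; the paper's composite/modulation dictionaries and attenuation-coefficient stopping rule are not reproduced:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy single-atom matching pursuit over unit-norm dictionary rows."""
    residual = np.asarray(signal, dtype=float).copy()
    recon = np.zeros_like(residual)
    picked = []
    for _ in range(n_atoms):
        corr = dictionary @ residual          # one inner product per atom
        k = int(np.argmax(np.abs(corr)))      # best-matching single atom
        recon += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
        picked.append(k)
    return recon, residual, picked

# Cosine dictionary: rows are unit-norm cosines of integer frequency
n = 128
t = np.arange(n) / n
dictionary = np.array([np.cos(2 * np.pi * f * t) for f in range(1, 20)])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

signal = 2.0 * dictionary[4] + 1.0 * dictionary[11]
recon, residual, picked = matching_pursuit(signal, dictionary, n_atoms=2)
assert set(picked) == {4, 11}
assert np.linalg.norm(residual) < 1e-10
```

Because each iteration costs one inner product per atom, picking a single atom per iteration (as CD-SaMP does) rather than a multi-atom subset is where the decomposition-stage speedup comes from; a data-driven stopping rule would replace the fixed n_atoms here.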
Chen, Lei; Li, Dehua; Yang, Jie
2007-12-01
Constructing a virtual international strategy environment needs many kinds of information, such as economic, political, military, diplomatic, cultural, and scientific information. It is therefore very important to build a highly efficient system for automatic information extraction, classification, recombination and analysis as the foundation and a component of a military strategy hall. This paper first uses an improved Boost algorithm to classify the obtained initial information, and then uses a strategy intelligence extraction algorithm to extract strategy intelligence from the initial information to help strategists analyze it.
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture up to 1000 frames per second. To process the captured images on the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on NCC is proposed to reduce the computation time and increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrate the high accuracy and efficiency of the camera system in extracting vibration signals.
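The core NCC matching (without the subpixel refinement or the paper's modified local search) can be sketched as:

```python
import numpy as np

def ncc(template, patch):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom > 0 else 0.0

def track(image, template):
    """Exhaustive NCC search for the best-matching template position."""
    th, tw = template.shape
    best, pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc(template, image[y:y + th, x:x + tw])
            if s > best:
                best, pos = s, (y, x)
    return pos, best

rng = np.random.default_rng(3)
image = rng.normal(size=(30, 30))
template = image[12:18, 7:13].copy()
pos, score = track(image, template)
assert pos == (12, 7) and score > 0.999
```

Restricting the (y, x) loops to a small window around the previous match position is essentially what a local search modification does to gain its speedup, since a vibrating target moves only a few pixels between consecutive frames.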
Improved LMD algorithm based on extraction of extrema of envelope curve
Song, Yuqian; Zhao, Jun; Guo, Tiantai; Kong, Ming; Wang, Yingjun; Shan, Liang
2015-02-01
Local mean decomposition (LMD) is a time-frequency analysis approach for dealing with complex multi-frequency signals. However, because the decomposition process is sensitive to noise, its use is distinctly limited when applied to vibration signals of machinery with serious background noise. An improved LMD algorithm based on extracting the extrema of the envelope curve is put forward to reduce the influence of high-frequency noise effectively. To verify its effect, three different de-noising methods, i.e., the band-pass filter method, the wavelet method and the lifting wavelet method, are used for comparison. The comparison of the four methods shows that the proposed method has satisfactory reproducibility. The new algorithm is then applied to a real bearing signal, and experimental results show that it is effective and reliable. The method also has significance for subsequent eigenvector research in intelligent fault diagnosis.
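The core smoothing step of LMD, forming upper and lower envelopes from the local extrema and taking their local mean, can be sketched as follows. This is a simplified illustration with linear interpolation and endpoint padding; actual LMD implementations use moving-average smoothing and iterate this step to sift out product functions.

```python
import numpy as np

def envelope_mean(signal):
    """Upper/lower envelopes from local extrema (linear interpolation)
    and their pointwise mean, the smoothing step at the core of LMD."""
    x = np.asarray(signal, dtype=float)
    idx = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] >= x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i] < x[i - 1] and x[i] <= x[i + 1]]
    # pad with the endpoints so the interpolation is defined everywhere
    up = np.interp(idx, [0] + maxima + [len(x) - 1], x[[0] + maxima + [len(x) - 1]])
    lo = np.interp(idx, [0] + minima + [len(x) - 1], x[[0] + minima + [len(x) - 1]])
    return (up + lo) / 2.0
```

High-frequency noise corrupts exactly the extrema this step depends on, which is why the paper's improvement targets the envelope-extrema extraction.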
A fingerprint feature extraction algorithm based on curvature of Bezier curve
Institute of Scientific and Technical Information of China (English)
[no author listed]
2007-01-01
Fingerprint feature extraction is a key step of fingerprint identification. A novel feature extraction algorithm is proposed in this paper, which describes fingerprint features using the bending information of fingerprint ridges. In the algorithm, ridges in a specific region of the fingerprint image are traced first, and these ridges are then fit with Bezier curves. Finally, the point of maximal curvature on each Bezier curve is defined as a feature point. Experimental results demonstrate that such feature points characterize the bending trend of fingerprint ridges effectively and are robust to noise; in addition, the extraction precision of this algorithm is better than that of conventional approaches.
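The maximal-curvature step can be illustrated for a single cubic Bezier curve: sample the curve, evaluate the curvature κ = |B′ × B″| / |B′|³ from the analytic derivatives, and take the sample of maximum curvature. This is a generic sketch, not the paper's ridge-tracing pipeline.

```python
import numpy as np

def bezier_max_curvature(ctrl, samples=1000):
    """Return (t, point) of maximum curvature on a cubic Bezier curve.
    ctrl: four 2-D control points."""
    p = np.asarray(ctrl, dtype=float)
    t = np.linspace(0.0, 1.0, samples)[:, None]
    # analytic first and second derivatives of the cubic Bezier polynomial
    d1 = (3 * (1 - t) ** 2 * (p[1] - p[0]) + 6 * (1 - t) * t * (p[2] - p[1])
          + 3 * t ** 2 * (p[3] - p[2]))
    d2 = 6 * (1 - t) * (p[2] - 2 * p[1] + p[0]) + 6 * t * (p[3] - 2 * p[2] + p[1])
    cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    speed = np.hypot(d1[:, 0], d1[:, 1])
    kappa = np.abs(cross) / np.maximum(speed ** 3, 1e-12)
    i = int(np.argmax(kappa))
    ti = t[i, 0]
    point = ((1 - ti) ** 3 * p[0] + 3 * (1 - ti) ** 2 * ti * p[1]
             + 3 * (1 - ti) * ti ** 2 * p[2] + ti ** 3 * p[3])
    return ti, point
```

For a symmetric arch-shaped control polygon the maximum curvature falls at the apex, i.e. at t = 0.5.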
Du, Yanqin; Huang, Hua
2011-10-01
The fetal electrocardiogram (FECG) is an objective index of fetal cardiac electrophysiological activity. The acquired FECG is interfered with by the maternal electrocardiogram (MECG), so extracting the fetal ECG quickly and effectively has become an important research topic. Among non-invasive FECG extraction algorithms, independent component analysis (ICA) is considered the best method, but existing algorithms for obtaining the demixing matrix have poor convergence properties. Quantum particle swarm optimization (QPSO) is an intelligent optimization algorithm with global convergence. In order to extract the FECG signal effectively and quickly, we propose a method combining ICA and QPSO. The results show that this approach can extract the useful signal more clearly and accurately than other non-invasive methods.
A new algorithm to extract hidden rules of gastric cancer data based on ontology.
Mahmoodi, Seyed Abbas; Mirzaie, Kamal; Mahmoudi, Seyed Mostafa
2016-01-01
Cancer is the leading cause of death in economically developed countries and the second leading cause of death in developing countries. Gastric cancers are among the most devastating and incurable forms of cancer, and their treatment may be excessively complex and costly. Data mining, a technology used to produce analytically useful information, has been employed successfully with medical data. Although traditional data mining techniques such as association rules help to extract knowledge from large data sets, the set of rules obtained is sometimes so large that it becomes a major problem in itself; indeed, one disadvantage of this technique is the large number of meaningless and redundant rules that arise from a lack of attention to the concept and meaning of the items or samples. This paper presents a new method for discovering association rules that uses ontology to solve these problems. It reports an ontology-based data mining study on a medical database containing clinical data on patients referred to the Imam Reza Hospital at Tabriz. The data set used in this paper was gathered from 490 random visitors to the Imam Reza Hospital at Tabriz who were suspected of having gastric cancer. The proposed ontology-based data mining algorithm makes rules more intuitive, appealing and understandable, eliminates wasteful and useless rules, and, as a minor result, significantly reduces the Apriori algorithm's running time. The experimental results confirm the efficiency and advantages of this algorithm.
The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm
Noh, Myoung-Jong; Howat, Ian M.
2017-07-01
Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a-priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes a geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.
STATISTICAL PROBABILITY BASED ALGORITHM FOR EXTRACTING FEATURE POINTS IN 2-DIMENSIONAL IMAGE
Institute of Scientific and Technical Information of China (English)
Guan Yepeng; Gu Weikang; Ye Xiuqing; Liu Jilin
2004-01-01
An algorithm for automatically extracting feature points is developed after the area of feature points in a 2-dimensional (2D) image is located using probability theory, correlation methods and an abnormality criterion. Feature points in a 2D image can be extracted simply by calculating the standard deviation of gray levels within sampled pixel areas. While extracting feature points, the need to set a threshold by trial and error from a-priori information about the processed image is avoided. The proposed algorithm is shown to be valid and reliable by extracting feature points on natural images with abundant and weak texture, including multiple objects on complex backgrounds. It can meet the demand of automatically extracting feature points of 2D images in machine vision systems.
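A minimal sketch of the core idea, flagging pixels whose local gray-level standard deviation is unusually high, might look like this; the window size and the factor `k` are illustrative stand-ins for the paper's statistically derived criterion.

```python
import numpy as np

def stddev_feature_points(image, win=5, k=2.0):
    """Candidate feature points: pixels whose win x win neighborhood has a
    gray-level standard deviation above k times the image-wide mean of
    those standard deviations (an automatic, data-driven threshold)."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    r = win // 2
    sd = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            sd[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].std()
    thresh = k * sd[r:h - r, r:w - r].mean()
    ys, xs = np.nonzero(sd > thresh)
    return list(zip(ys.tolist(), xs.tolist()))
```

Because the threshold is derived from the image's own statistics, no per-image manual tuning is needed, which is the point the abstract makes.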
Zhang, Yan-jun; Liu, Wen-zhe; Fu, Xing-hu; Bi, Wei-hong
2015-10-01
To exploit the high-precision extraction characteristics of the scattering spectrum in a Brillouin optical time-domain reflectometry fiber sensing system, this paper proposes a new algorithm based on the fly optimization algorithm with adaptive mutation and a generalized regression neural network (GRNN). The method takes advantage of the GRNN's approximation ability, learning speed and generalization, and uses the strong search ability of the fly optimization algorithm with adaptive mutation to enhance the learning ability of the neural network, thus improving the fitting of the Brillouin scattering spectrum and the accuracy of frequency shift extraction. Models of actual Brillouin spectra are constructed by adding Gaussian white noise to the theoretical spectrum, whose center frequency is 11.213 GHz and whose linewidths are 40-50, 30-60 and 20-70 MHz, respectively. Compared with the Levenberg-Marquardt fitting method based on finite element analysis, the hybrid particle swarm optimization with Levenberg-Marquardt, and the least squares method, the maximum frequency shift error of the new algorithm is 0.4 MHz, the fitting degree is 0.9912 and the root mean square error is 0.0241. The simulation results show that the proposed algorithm achieves a good fitting degree and the minimum absolute error. The algorithm can therefore be used in distributed optical fiber sensing systems based on Brillouin optical time-domain reflectometry, effectively improving the fitting of the Brillouin scattering spectrum and the precision of frequency shift extraction.
Optical phase extraction algorithm based on the continuous wavelet and the Hilbert transforms
Bahich, Mustapha; Barj, Elmostafa
2010-01-01
In this paper we present an algorithm for optical phase evaluation based on the wavelet transform technique. The main advantage of this method is that it requires only one fringe pattern. The algorithm uses a second, π/2 phase-shifted fringe pattern that is calculated via the Hilbert transform. To test its validity, the algorithm was used to demodulate a simulated fringe pattern, giving the phase distribution with good accuracy.
Rajashekararadhya, S. V.; Ranjan, P. Vanaja
India is a multi-lingual, multi-script country with eighteen officially accepted scripts and over a hundred regional languages. In this paper we propose a zone-based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of south Indian scripts. The character centroid is computed and the (character/numeral) image is divided into n equal zones. The average distance and average angle from the character centroid to the pixels present in each zone are computed (two features), and the zone centroid is computed (two more features). This procedure is repeated sequentially for all the zones/grids/boxes present in the numeral image. Some zones may be empty, in which case the value of that zone in the feature vector is zero. Finally, 4*n such features are extracted. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained recognition rates of 97.55%, 94%, 92.5% and 95.2% for Kannada, Telugu, Tamil and Malayalam numerals, respectively.
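The zone-feature computation can be sketched as follows. For simplicity the sketch splits the image into n horizontal bands rather than the paper's grid of zones, and all parameter names are illustrative.

```python
import numpy as np

def zone_features(img, n=4):
    """4*n features: per zone, the average distance and average angle of
    foreground pixels from the character centroid, plus the zone centroid
    (x, y). Empty zones contribute zeros, as the abstract describes."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()                  # character centroid
    bands = np.array_split(np.arange(img.shape[0]), n)
    feats = []
    for rows in bands:
        m = np.isin(ys, rows)
        if not m.any():                            # empty zone -> zeros
            feats += [0.0, 0.0, 0.0, 0.0]
            continue
        zy, zx = ys[m], xs[m]
        dist = np.hypot(zy - cy, zx - cx).mean()
        ang = np.arctan2(zy - cy, zx - cx).mean()
        feats += [dist, ang, zx.mean(), zy.mean()]
    return np.array(feats)
```

The resulting 4*n vector would then feed the nearest neighbor classifier mentioned in the abstract.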
Rules Extraction with an Immune Algorithm
Directory of Open Access Journals (Sweden)
Deqin Yan
2007-12-01
Full Text Available In this paper, a method of extracting rules from information systems with immune algorithms is proposed. The immune algorithm is designed around a sharing mechanism for rule extraction: the principle of sharing and competing for resources in the sharing mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, new concepts of flexible confidence and rule measurement are introduced. Experiments demonstrate that the proposed method is effective.
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo
2013-09-01
An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is adopted to seek potential intrinsic low-dimensional structures of high-dimensional hyperspectral data and reduce the original data to a low-dimensional space. Secondly, spatial preprocessing is applied by enhancing each pixel vector in spatially homogeneous areas, according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method increases the precision of endmember extraction by addressing the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperforms geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and the spatial preprocessing N-FINDR method (SPPNFINDR).
Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong
2016-09-01
With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has greatly increased, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent transportation. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing images has the advantages of high resolution and wide coverage, which is of great guiding significance to urban planning, transportation management, travel route choice and so on. First, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, histogram equalization and linear enhancement were applied to the preprocessing results to obtain the optimal threshold for image segmentation; on the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. The two processing results were then combined, and geometric characteristics were used to complete the road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright-vehicle and dark-vehicle extraction, and the extraction results of the two kinds of vehicles were combined to obtain the final results. The experimental results demonstrated that the proposed algorithm has high precision in vehicle information extraction for different high-resolution remote sensing images: the average fault detection rate was about 5.36%, the average residual rate was about 13.60% and the average accuracy was approximately 91.26%.
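The NDVI/NDWI suppression step can be sketched directly from the index definitions, NDVI = (NIR − R)/(NIR + R) and NDWI = (G − NIR)/(G + NIR); the thresholds below are illustrative, not values from the paper.

```python
import numpy as np

def suppress_vegetation_water(nir, red, green, ndvi_t=0.3, ndwi_t=0.3):
    """Boolean mask of pixels that are neither vegetation (high NDVI)
    nor water (high NDWI); candidate road/vehicle pixels survive."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    green = np.asarray(green, dtype=float)
    eps = 1e-9                                   # avoid division by zero
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    return (ndvi < ndvi_t) & (ndwi < ndwi_t)
```

Vegetation reflects strongly in NIR (NDVI high) and water absorbs NIR (NDWI high), so thresholding both indices leaves mostly built surfaces.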
Ballistic missile precession frequency extraction based on the Viterbi & Kalman algorithm
Wu, Longlong; Xie, Yongjie; Xu, Daping; Ren, Li
2015-12-01
Radar micro-Doppler signatures have great potential for target detection, classification and recognition. In the mid-course phase, warheads flying outside the atmosphere are usually accompanied by precession. Precession may induce additional frequency modulations on the returned radar signal, which can be regarded as a unique signature and provide additional information complementary to existing target recognition methods. The main purpose of this paper is to establish a more realistic precession model of a conical ballistic missile warhead and to extract the precession parameters using a Viterbi & Kalman algorithm, which evidently improves the precession frequency estimation accuracy, especially at low SNR.
Text Region Extraction: A Morphological Based Image Analysis Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Dhirendra Pal Singh
2015-01-01
Image analysis belongs to the area of computer vision and pattern recognition. These areas are also part of digital image processing, where researchers pay great attention to retrieving content from images with complex, low-contrast or multi-spectral backgrounds. Such content may take forms such as texture, shape and objects. Text region extraction from an image is a class of problems in digital image processing that aims to provide information widely used in many fields: medical imaging, pattern recognition, robotics, intelligent transport systems, etc. Extracting text data has become a challenging task, since text is very useful for identifying and analyzing the information an image carries. In this paper, we therefore propose a unified framework combining morphological operations and genetic algorithms for extracting and analyzing text regions embedded in an image under a variety of text conditions: font, size, skew angle, distortion by slant and tilt, the shape of the object the text lies on, etc. We evaluate the proposed method on gray-level image sets, make qualitative and quantitative comparisons with other existing methods, and conclude that the proposed method outperforms them.
Algorithm research on infrared imaging target extraction based on GAC model
Li, Yingchun; Fan, Youchen; Wang, Yanqing
2016-10-01
Good target detection and tracking techniques are significant for increasing infrared target detection distance and enhancing resolution capacity. For the target detection problem in infrared imaging, firstly, the basic principles of the level set method and the GAC model are analyzed in detail. Secondly, a "convergent force" is added to address the defect that the GAC model stagnates outside deep concave regions and cannot reach deep concave edges, yielding a promoted GAC model. Lastly, a self-adaptive detection method combining the Sobel operator and the GAC model is put forward, exploiting the complementary advantages that the rough position of the target can be detected with the Sobel operator while the continuous edge of the target can be obtained through the GAC model. To verify the effectiveness of the model, two groups of experiments are carried out on images under different noise levels, with comparative analysis against the LBF and LIF models. The experimental results show that under slight noise the target can be locked well by the LIF and LBF algorithms, with segmentation accuracy above 0.8. Under strong noise, however, the target and noise cannot be distinguished by the GAC, LIF and LBF algorithms, so many non-target parts are extracted during the iterative process and the segmentation accuracy falls below 0.8. The algorithm proposed in this paper extracts the accurate target position, with segmentation accuracy above 0.8.
Computerized lung nodule detection using 3D feature extraction and learning based algorithms.
Ozekes, Serhat; Osman, Onur
2010-04-01
In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied to extract regions of interest (ROIs). Then, 3D feature extraction was performed, including 3D connected component labeling, straightness calculation, thickness calculation, determination of the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves are given for all methods, and 100% detection sensitivity was reached with all methods except naive Bayes.
Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui
2016-10-01
A novel particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. First, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm: based on the current iteration number and the fitness value of each particle, the algorithm changes the weight coefficient and adjusts the speed at which particles search the space, enhancing the local optimization ability. Second, a logistic self-mapping chaotic search is carried out within the particle swarm optimization algorithm, which helps it jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg-Marquardt algorithm, the particle swarm optimization-Levenberg-Marquardt algorithm and the plain particle swarm optimization algorithm while varying the linewidth, signal-to-noise ratio and linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to feature extraction of Brillouin scattering spectra at different temperatures. Simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs and linear weight ratios. It can therefore be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflectometry, effectively improving the accuracy of Brillouin frequency shift extraction.
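A generic PSO with a linearly decreasing inertia weight illustrates the adaptive-weight idea; note that the paper's weight also adapts to each particle's fitness, and the chaotic-search stage is omitted here. All parameter values are conventional defaults, not the paper's.

```python
import numpy as np

def pso_minimize(f, bounds, n=30, iters=200, w_max=0.9, w_min=0.4,
                 c1=2.0, c2=2.0, seed=0):
    """Particle swarm minimization with an inertia weight that decreases
    linearly from w_max to w_min over the iterations: large weight favors
    global exploration early, small weight favors local refinement late."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In the paper's setting, `f` would be the misfit between a candidate Brillouin spectrum model and the measured spectrum.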
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
A motif extraction algorithm based on hashing and modulo-4 arithmetic.
Sheng, Huitao; Mehrotra, Kishan; Mohan, Chilukuri; Raina, Ramesh
2008-01-01
We develop an algorithm to identify cis-elements in the promoter regions of coregulated genes. The algorithm searches for subsequences of a desired length whose frequency of occurrence is relatively high, while accounting for slightly perturbed variants using a hash table and modulo arithmetic. Motifs are evaluated using profile matrices and a higher-order Markov background model. Simulation results show that our algorithm discovers more of the motifs present in the test sequences than two well-known motif-discovery tools (MDScan and AlignACE). The algorithm also produces very promising results on a real data set; its output contained many known motifs.
New algorithm of extraction of palmprint region of interest (ROI)
Harun, Nurzalina; Rahman, Wan Eny Zarina Wan Abd; Zaleha Zainal Abidin, Siti; Jaya Othman, Puwira
2017-09-01
The palmprint contains numerous patterns that are unique and distinctive for establishing human identity. Extracting the palmprint ROI is very important, since its result provides the initial information for personal identification. Numerous algorithms for obtaining the palmprint ROI have been proposed by past researchers. However, to the best of our knowledge, the ROIs extracted by these algorithms cover only a small part of the palmprint. Due to this limitation, some important features that could be used for individual identification might be neglected. In this research, a new algorithm for extracting the palmprint ROI is proposed. Its performance is compared to that of two existing algorithms, with all three tested on the location and size of the extracted ROI. The results show that the proposed algorithm successfully extracts a larger palmprint ROI than the existing algorithms, providing a platform for further use in identifying and verifying the identity of an individual.
A TOA estimation algorithm based on envelope extraction
Institute of Scientific and Technical Information of China (English)
刘倩; 夏斌; 彭荣群; 陈乃澍
2012-01-01
Noise can degrade localization precision in ultra-wideband wireless sensor networks. To address this problem, a time of arrival (TOA) estimation algorithm based on envelope extraction is proposed. First, the noise component of the signal is effectively removed by the multi-resolution analysis of the wavelet transform. Then the envelope of the denoised signal is extracted by the Hilbert transform. Finally, the maximum value of the first envelope is used as the TOA estimate. Simulation results show that the algorithm suppresses the influence of noise on TOA estimation and improves estimation precision.
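The envelope-extraction chain (Hilbert transform, then the peak of the first prominent envelope lobe) can be sketched as follows; the wavelet denoising stage is omitted and the 0.5 gate is an illustrative choice, not a value from the paper.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the standard Hilbert-transform construction:
    zero the negative frequencies, double the positive ones)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def toa_estimate(signal, fs, gate=0.5):
    """TOA as the peak time of the first envelope lobe rising above
    `gate` times the global envelope maximum."""
    env = np.abs(analytic_signal(signal))
    above = env >= gate * env.max()
    start = int(np.argmax(above))            # first sample above the gate
    end = start
    while end < len(env) and above[end]:     # walk to the end of the lobe
        end += 1
    return (start + int(np.argmax(env[start:end]))) / fs
```

On a noisy received pulse, the wavelet denoising step the paper applies first would sharpen this envelope before the peak is picked.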
Watermark Extraction Optimization Using PSO Algorithm
Directory of Open Access Journals (Sweden)
Mohammad Dehghani Soltani
2013-04-01
In this study we propose an improved watermarking method based on an ML detector; compared with similar methods, this scheme is more robust against attacks for the same embedded logo length. The watermark is embedded in the low-frequency wavelet transform coefficients of high-entropy blocks (blocks which carry more information). In the watermark extraction step, the watermark is extracted using a PSO algorithm that optimizes the Lagrange factor in the Neyman-Pearson test, obtaining maximum quality in comparison with previous methods. Finally, the performance of the proposed scheme is investigated and the accuracy of the results is shown by simulation.
The analysis and detection of hypernasality based on a formant extraction algorithm
Qian, Jiahui; Fu, Fanglin; Liu, Xinyi; He, Ling; Yin, Heng; Zhang, Han
2017-08-01
In clinical practice, the effective assessment of cleft palate speech disorders is important. For hypernasal speech, the resonance between the nasal cavity and the oral cavity creates an additional nasal formant, so the formant frequency is a crucial cue for judging hypernasality in cleft palate speech. Due to the existence of the nasal formant, peak merging occurs more often in the spectrum of nasal speech; however, peak merging cannot be resolved by the classical linear prediction coefficient root extraction method. In this paper, a method is proposed to detect the additional nasal formant in the low-frequency region and obtain its frequency. Experimental results show that the proposed method locates the nasal formant reliably. Moreover, the formants are used as features for the detection of hypernasality. 436 phonemes collected from the Hospital of Stomatology were used to carry out the experiment, and the detection accuracy of hypernasality in cleft palate speech reached 95.2%.
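The classical LPC root-extraction baseline that the paper improves on can be sketched as follows: solve the Yule-Walker equations for the prediction coefficients, then convert the angles of the complex roots of the prediction polynomial to formant frequencies. The LPC order and the diagonal loading are illustrative choices.

```python
import numpy as np

def lpc_formants(x, fs, order=12):
    """Classical formant estimation: LPC by the autocorrelation method,
    then the angles of the upper-half-plane roots of the prediction
    polynomial converted to Hz."""
    x = np.asarray(x, dtype=float) * np.hamming(len(x))
    r = np.correlate(x, x, "full")[len(x) - 1:len(x) + order]
    # Yule-Walker normal equations R a = -r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    R += 1e-6 * r[0] * np.eye(order)          # diagonal loading for stability
    a = np.concatenate(([1.0], np.linalg.solve(R, -r[1:order + 1])))
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 1e-6]      # one root per conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))
```

When two resonances merge into one spectral peak, their root estimates collide, which is exactly the failure mode the paper's method addresses.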
Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou
2014-02-01
Tree crown projection area and crown volume are important parameters for estimating biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated crown structures or differing morphological characteristics, and their accuracy is difficult to validate through conventional measurement methods. To allow tree crown projection and crown volume to be extracted automatically by a computer program despite these problems, this paper proposes an automatic non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a convex hull algorithm on plane-scattered data points together with a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments cover 22 common tree species of Beijing, China. The results show that the correlation coefficient between the crown projection calculated by the new method and by the conventional method reaches 0.964. Based on the 3D LiDAR point cloud data of an individual tree, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of the individual tree were extracted by this automatic non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.
A method for extracting fetal ECG based on EMD-NMF single channel blind source separation algorithm.
He, Pengju; Chen, Xiaomeng
2015-01-01
Nowadays, detecting the fetal ECG from abdominal signals is a commonly used method, but the fetal ECG signal is affected by the maternal ECG. Current FECG extraction algorithms are mainly aimed at multi-channel signals; they often assume there is only one fetus and do not consider multiple births. This paper proposes a single-channel blind source separation algorithm to process a single abdominally acquired signal. The algorithm decomposes the single abdominal signal into multiple intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). The correlation matrix of the IMFs is calculated and the number of independent ECG signals is estimated using an eigenvalue method. A nonnegative matrix is constructed according to the determined number and the decomposed IMFs, and separation of the MECG and FECG is achieved using nonnegative matrix factorization (NMF). Experiments used a four-channel synthetic signal and two channels of ECG to verify the correctness and feasibility of the proposed algorithm. The results show that the proposed algorithm can determine the number of independent signals in a single acquired signal, that the FECG can be extracted from a single-channel observed signal, and that the algorithm can be used to separate the MECG and FECG.
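The eigenvalue step, estimating how many independent signals underlie the decomposed components, can be sketched with a simple energy criterion on the eigenvalues of the components' correlation matrix; the 95% energy threshold is an illustrative surrogate for the paper's criterion.

```python
import numpy as np

def estimate_source_count(components, energy=0.95):
    """Estimate the number of independent sources underlying a set of
    decomposed components (e.g. EMD IMFs) as the smallest number of
    leading eigenvalues of their correlation matrix that captures
    `energy` of the total eigenvalue mass."""
    C = np.corrcoef(components)
    w = np.sort(np.linalg.eigvalsh(C))[::-1]   # eigenvalues, descending
    frac = np.cumsum(w) / w.sum()
    return int(np.searchsorted(frac, energy) + 1)
```

If the components are mixtures of two independent sources (say, maternal and fetal ECG), the correlation matrix has rank two and the estimate is 2, which then fixes the inner dimension of the NMF step.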
Directory of Open Access Journals (Sweden)
Yoichi Hayashi
2016-01-01
Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and improving the performance of computerized credit assessments could prevent the loss of vast amounts of capital. A number of approaches for computing credit scores have been developed over the last several decades, but these methods have been considered too complex and poorly interpretable, and have therefore not been widely adopted. In this study, we therefore provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva, on four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly extended types of the Re-RX algorithm and high-performance classifiers from a Pareto-optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto-optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of the extracted rules and the potential for credit scoring with Big Data.
Endmember extraction algorithms from hyperspectral images
Directory of Open Access Journals (Sweden)
M. C. Cantero
2006-06-01
Full Text Available During the last years, several high-resolution sensors have been developed for hyperspectral remote sensing applications, and some of them are already available on space-borne devices. Space-borne sensors are currently acquiring a continual stream of hyperspectral data, and new efficient unsupervised algorithms are required to analyze the great amount of data produced by these instruments. The identification of image endmembers is a crucial task in hyperspectral data exploitation. Once the individual endmembers have been identified, several methods can be used to map their spatial distribution, associations and abundances. This paper reviews the Pixel Purity Index (PPI), N-FINDR and Automatic Morphological Endmember Extraction (AMEE) algorithms, developed to accomplish the task of finding appropriate image endmembers, by applying them to real hyperspectral data. In order to compare the performance of these methods, a metric based on the Root Mean Square Error (RMSE) between the estimated and reference abundance maps is used.
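As an illustration of how one of the reviewed methods works, here is a minimal, generic sketch of the Pixel Purity Index idea (projecting pixels onto random "skewers" and counting how often each pixel is an extreme). It is not the paper's implementation, and the spectra below are synthetic.

```python
import numpy as np

def pixel_purity_index(pixels, n_skewers=500, seed=0):
    """Toy PPI: pixels is (n_pixels, n_bands); returns extremity counts."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(pixels), dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=pixels.shape[1])
        skewer /= np.linalg.norm(skewer)
        proj = pixels @ skewer
        counts[np.argmax(proj)] += 1   # extreme pixels are endmember candidates
        counts[np.argmin(proj)] += 1
    return counts

# Three pure spectra plus convex mixtures of them: PPI should flag the pure ones.
endmembers = np.eye(3)
rng = np.random.default_rng(1)
abundances = rng.dirichlet([1, 1, 1], size=200)
pixels = np.vstack([endmembers, abundances @ endmembers])
counts = pixel_purity_index(pixels)
print(np.argsort(counts)[-3:])   # indices of the three purest pixels
```

Since every linear projection of a simplex attains its extremes at the vertices, all the counts accumulate on the three pure spectra (indices 0, 1, 2).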
A Line Extraction Algorithm for Hand Drawings
Institute of Scientific and Technical Information of China (English)
赵明
1995-01-01
This paper presents an algorithm for extracting lines from hand drawings. It starts from contour pixel tracing, fits the pixels into contour segments, and then extracts skeleton lines from the contour segments. The algorithm finds all contours in one scan of the input matrix without detecting and marking multiple pixels. In line extraction, the method of Elastic Contour Segment Tracing is proposed, which extracts lines by referring to the contour segments at both sides, overcoming noise and passing through blotted areas by fitting and extrapolation. Experiments on freehand mechanical drawings, sketches, letters/numerals, as well as Chinese characters, are carried out and satisfactory results are achieved.
Liu, Cheng; Cai, Guowei; Yang, Deyou; Sun, Zhenglong
2016-08-01
In this paper, a robust online approach based on the wavelet transform and matrix pencil (WTMP) is proposed to extract the dominant oscillation mode and its parameters (frequency, damping, and mode shape) of a power system from wide-area measurements. For accurate and robust extraction of the parameters, WTMP is verified as an effective identification algorithm for output-only modal analysis. First, singular value decomposition (SVD) is used to reduce the covariance signals obtained by the natural excitation technique. Second, the model orders and the corresponding frequency ranges are determined by SVD of the positive power spectrum matrix. Finally, the modal parameters are extracted from each mode of the reduced signals using the matrix pencil algorithm in the different frequency ranges. Compared with the original algorithm, the proposed method reduces the amount of data to be computed and can extract the mode shape. The effectiveness of the scheme for accurate extraction of the dominant oscillation mode and its parameters is thoroughly studied and verified using response signal data generated from 4-generator 2-area and 16-generator 5-area test systems.
A Fingerprint Feature Extraction Algorithm Based on Wavelet Transform
Institute of Scientific and Technical Information of China (English)
李峰岳; 李星野
2012-01-01
A fingerprint feature extraction algorithm based on the wavelet transform is proposed. First, an effective rectangular region centered on the core point of the fingerprint image is segmented. The segmented region is then decomposed by a two-dimensional wavelet transform, and the energy of each sub-band is extracted as a feature value for recognition. Compared with conventional algorithms based on minutiae, the proposed algorithm reduces the computational load to some extent, places lower demands on fingerprint image quality, and still achieves a high correct recognition rate.
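The sub-band energy idea can be illustrated with a single-level Haar decomposition written directly in NumPy. This is a hedged stand-in for the paper's wavelet analysis (the actual wavelet and decomposition depth are not specified here), and the random 64x64 patch stands in for the region segmented around the core point.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform; image dimensions must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def energy_features(img):
    """Mean energy of each sub-band: a 4-element feature vector."""
    return [float(np.mean(b ** 2)) for b in haar_dwt2(img)]

rng = np.random.default_rng(0)
patch = rng.random((64, 64))   # stand-in for the region around the core point
print(energy_features(patch))  # [E_LL, E_LH, E_HL, E_HH]
```

In a real matcher, the energy vectors of two fingerprints would be compared with a simple distance measure instead of matching individual minutiae.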
Key Frame Extraction Based on Improved K-means Algorithm
Institute of Scientific and Technical Information of China (English)
孙淑敏; 张建明; 孙春梅
2012-01-01
To overcome the sensitivity of traditional clustering algorithms to initial parameters during key frame extraction, a key frame extraction algorithm based on an improved K-means method is proposed. In the Artificial Fish Swarm Algorithm (AFSA), the extracted feature vectors are clustered in a self-organized way under the guidance of group similarity; following the principle of greatest progress, the artificial fish gather at several extreme points. The artificial fish with the highest group similarity at each extreme point is taken as an initial cluster center, the K-means algorithm is then executed to obtain the final clustering result, and the key frames are extracted. Experimental results show that the algorithm achieves high accuracy and expresses the main content of the video well.
Institute of Scientific and Technical Information of China (English)
无
2010-01-01
Extraction of impervious surfaces is one of the necessary processes in urban change detection. This paper derives a unified conceptual model (UCM) from the vegetation-impervious surface-soil (VIS) model to make the extraction more effective and accurate. UCM uses a decision tree algorithm with indices of spectrum, texture, etc. In this model, we found both dependent and independent indices for multi-source satellite imagery according to their similarity and dissimilarity. The purpose of the indices is to remove the other land-use and land-cover types (e.g., vegetation and soil) from the imagery and to delineate the impervious surfaces as the result. UCM follows the same steps as the decision tree algorithm. The Landsat-5 TM image (30 m) and the Satellite Probatoire d'Observation de la Terre (SPOT-4) image (20 m) of Chaoyang District (Beijing) from 2007 were used in this paper. The results show an overall accuracy of 88% for the Landsat-5 TM image and 86.75% for the SPOT-4 image. The method is appropriate for meeting the demands of urban change detection.
Case study of isosurface extraction algorithm performance
Energy Technology Data Exchange (ETDEWEB)
Sutton, P M; Hansen, C D; Shen, H; Schikore, D
1999-12-14
Isosurface extraction is an important and useful visualization method. Over the past ten years, numerous isosurface extraction techniques have been published, leaving the user in a quandary about which one to use. Some papers have published complexity analyses of the techniques, yet empirical evidence comparing different methods is lacking. This case study presents a comparative study of several representative isosurface extraction algorithms. It reports and analyzes empirical measurements of execution time and memory behavior for each algorithm. The results show that asymptotically optimal techniques may not be the best choice when implemented on modern computer architectures.
Kim, Keo Sik; Seo, Jeong Hwan; Kang, Jin U; Song, Chul Gyu
2009-05-01
Vibroarthrographic (VAG) signals, generated by human knee movement, are non-stationary and multi-component in nature, and their time-frequency distribution (TFD) provides a powerful means of analyzing such signals. The objective of this paper is to improve the classification accuracy of the features obtained from the TFD of normal and abnormal VAG signals, using segmentation by dynamic time warping (DTW) and denoising by singular value decomposition (SVD). VAG and knee angle signals, recorded simultaneously during one flexion and one extension of the knee, were segmented and normalized at 0.5 Hz by the DTW method. The noise within the TFD of the segmented VAG signals was reduced by the SVD algorithm, and a back-propagation neural network (BPNN) was used to classify the normal and abnormal VAG signals. The characteristic parameters of the VAG signals consist of the energy, energy spread, frequency and frequency spread extracted from the TFD. A total of 1408 segments (1031 normal, 377 abnormal) were used for training and evaluating the BPNN. The average classification accuracy was 91.4% (standard deviation ±1.7%). The proposed method shows good potential for the non-invasive diagnosis and monitoring of joint disorders such as osteoarthritis.
Xiong, Kai; Yang, Kai; Wang, Yu-Xiang
2017-08-01
Extracting a high-quality data space (the so-called kinematic invariants) is a key factor in a successful implementation of stereo-tomography. The structure tensor algorithm has proven to be a robust tool for picking the kinematic invariants for stereo-tomography. However, if the data contain many diffractions and other noise, extracting the data space directly from the data domain can be risky. Meanwhile, for any reflector, we try to pick as many of the relevant primary reflections as possible over a wide offset range. To achieve this, we design a scheme for extracting a high-quality data space for stereo-tomography based on a 3D structure tensor and kinematic de-migration. First, we apply automatic, dense volumetric picking of the residual move-out (RMO) and the structural dip in the depth-migrated domain with an advanced 3D structure tensor algorithm. Then, a set of key horizons is picked manually in a few selected depth-migrated common offset gathers. Finally, all the picked horizons are extrapolated along the offset axis based on the previously picked RMO information. The initial high-density picks in the depth-migrated volume are thus greatly refined. After this processing, the final, refined data space for stereo-tomography is extracted through kinematic de-migration. We demonstrate the correctness and robustness of the presented scheme with synthetic and real data examples.
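The core of a 3D structure tensor computation is smoothing the outer products of the volume gradient and taking a per-voxel eigen-decomposition; the dominant eigenvector then gives the local structural orientation (and hence the dip). The sketch below is a generic illustration of that step, not the authors' advanced picking algorithm, and the smoothing scale is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_3d(vol, sigma=2.0):
    """Smoothed 3-D structure tensor; returns an array of shape vol.shape + (3, 3)."""
    gz, gy, gx = np.gradient(vol)
    grads = [gz, gy, gx]
    T = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
    return T

def dominant_orientation(T):
    """Per-voxel eigenvector of the largest eigenvalue (the gradient direction)."""
    w, v = np.linalg.eigh(T)   # eigenvalues in ascending order
    return v[..., -1]          # last column: dominant direction

# Synthetic volume whose value increases along the z axis (a "flat reflector")
vol = np.arange(32, dtype=float)[:, None, None] * np.ones((1, 32, 32))
n = dominant_orientation(structure_tensor_3d(vol))[16, 16, 16]
print(np.round(np.abs(n), 3))   # ≈ [1, 0, 0]: the normal points along z
```

On real migrated volumes, the ratio of the tensor's eigenvalues additionally gives a coherence measure that can be used to reject unreliable (e.g. diffraction-dominated) picks.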
Noh, M. J.; Howat, I. M.; Porter, C. C.; Willis, M. J.; Morin, P. J.
2016-12-01
The Arctic is undergoing rapid change associated with climate warming. Digital Elevation Models (DEMs) provide critical information for change measurement and infrastructure planning in this vulnerable region, yet the existing quality and coverage of DEMs in the Arctic is poor. Low-contrast and repetitively textured surfaces, such as snow, glacial ice and mountain shadows, all common in the Arctic, challenge existing stereo-photogrammetric techniques. Submeter-resolution stereoscopic satellite imagery with high geometric and radiometric quality and wide spatial coverage is becoming increasingly accessible to the scientific community. To utilize this imagery for extracting DEMs at large scale over glaciated and high-latitude regions, we developed the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the satellite rational polynomial coefficients (RPCs). Using SETSM, we have generated a large number of DEMs (> 100,000 scene pairs) from WorldView, GeoEye and QuickBird stereo images collected by DigitalGlobe Inc. and archived by the Polar Geospatial Center (PGC) at the University of Minnesota through an academic licensing program maintained by the US National Geospatial-Intelligence Agency (NGA). SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM program, whose objective is to generate high-resolution (2-8 m) topography for the entire Arctic landmass, including seamless DEM mosaics and repeat DEM strips for change detection. ArcticDEM is a collaboration between multiple US universities, governmental agencies and private companies, as well as international partners assisting with quality control and registration. ArcticDEM is being produced using the petascale Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois. In this paper, we introduce the SETSM algorithm.
Key Frame Extraction Algorithm Based on Rough Set in Compressed Domain
Institute of Scientific and Technical Information of China (English)
史丽春; 蔡静之; 张明新
2011-01-01
This paper proposes a key frame extraction algorithm based on rough sets (RS) in the compressed domain. The algorithm extracts I-frames from the compressed-domain data stream and constructs an information system whose rows are the differences between adjacent I-frames and whose columns are feature attributes extracted from the decompressed I-frames. The information system is then normalized and discretized; the attribute reduction theory of RS is applied to obtain a redundancy-free attribute core, and key frames are extracted using the indiscernibility relation. Performance comparison with the pixel-difference and DC-coefficient methods shows that the proposed algorithm has lower computational complexity and is applicable to different types of video.
Key Frame Extraction Algorithm Based on Video Clustering
Institute of Scientific and Technical Information of China (English)
刘华咏; 郝会芬; 李涛
2014-01-01
Key frames can dramatically reduce the amount of data needed for video indexing and are fundamental to video analysis and retrieval. To overcome the sensitivity of traditional clustering algorithms to initial parameters during key frame extraction, an improved key frame extraction algorithm based on video clustering is proposed. First, features of the video frames are extracted, and the frames are hierarchically clustered according to inter-frame similarity to obtain an initial clustering result. The K-means algorithm is then applied to optimize the initial result and obtain the final clustering, and the center frame of each cluster is extracted as a key frame. Experimental results show that the precision and recall of the proposed algorithm are greatly improved and that the extracted key frames express the main content of the video well.
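The two-stage scheme described above (hierarchical clustering for initialization, K-means for refinement, cluster centers as key frames) can be sketched as follows. This is a generic illustration; the frame features here are random stand-ins for real color histograms, and the distance and linkage choices are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def key_frames(features, k, iters=20):
    """features: (n_frames, d) array. Hierarchical init, k-means refinement;
    returns the indices of the frames closest to the final cluster centers."""
    labels = fcluster(linkage(features, method='ward'), k, criterion='maxclust')
    centers = np.array([features[labels == c].mean(axis=0)
                        for c in range(1, k + 1)])
    for _ in range(iters):                      # plain k-means refinement
        dist = np.linalg.norm(features[:, None] - centers[None], axis=2)
        assign = dist.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = features[assign == c].mean(axis=0)
    dist = np.linalg.norm(features[:, None] - centers[None], axis=2)
    return sorted(int(dist[:, c].argmin()) for c in range(k))

# Toy "video": three shots of 30 frames each with distinct feature statistics
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(m, 0.05, size=(30, 8)) for m in (0.1, 0.5, 0.9)])
print(key_frames(feats, 3))   # one representative frame index per shot
```

With well-separated shots, the routine returns exactly one frame from each shot, which is the behavior a key frame extractor is expected to approximate on real video.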
Robustness of Tree Extraction Algorithms from LIDAR
Dumitru, M.; Strimbu, B. M.
2015-12-01
Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type associates a tree crown with an inverted watershed (subsequently referred to as watershed-based), while the second is based on simultaneously representing each tree crown as an individual entity and modeling its relation to neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (a mature loblolly pine plantation) and one heterogeneous (an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm is robust to its parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is therefore a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
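A core step of watershed-based crown delineation is detecting tree tops as local maxima of a canopy height model (CHM); the inverted watershed is then flooded from those tops. The sketch below illustrates only the tree-top step on synthetic Gaussian crowns; it is a generic illustration, not one of the algorithms tested in the paper, and the window size and height threshold are assumed parameters.

```python
import numpy as np
from scipy.ndimage import maximum_filter, gaussian_filter

def tree_tops(chm, window=5, min_height=2.0):
    """Local-maxima tree-top detection on a canopy height model.

    A cell is a top if it equals the maximum of its window x window
    neighbourhood and exceeds min_height. Returns (row, col) index arrays."""
    smooth = gaussian_filter(chm, 1.0)   # suppress within-crown noise
    peaks = (smooth == maximum_filter(smooth, size=window)) & (smooth > min_height)
    return np.nonzero(peaks)

# Two synthetic Gaussian crowns on flat ground
yy, xx = np.mgrid[0:60, 0:60]
chm = (10 * np.exp(-((yy - 15) ** 2 + (xx - 15) ** 2) / 40.0)
       + 8 * np.exp(-((yy - 40) ** 2 + (xx - 45) ** 2) / 40.0))
rows, cols = tree_tops(chm)
print(list(zip(rows.tolist(), cols.tolist())))   # the two crown apexes
```

On real CHMs the smoothing scale and window size trade off omission against commission errors, which is exactly the parameter sensitivity the paper evaluates.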
SIFT based algorithm for point feature tracking
Directory of Open Access Journals (Sweden)
Adrian BURLACU
2007-12-01
Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted with the SIFT algorithm, a descriptor is computed from information in its neighborhood. Point features are then tracked throughout the image sequence with an algorithm based on minimizing the distance between descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
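Matching features between frames by minimizing inter-descriptor distance can be sketched as follows. The ratio test is a common safeguard (Lowe's criterion) and is an assumption here, as are the random 128-D vectors that stand in for real SIFT descriptors.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test.

    desc_a, desc_b: (n, d) arrays of descriptors (e.g. 128-D SIFT vectors).
    Returns a list of (index_in_a, index_in_b) accepted matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:   # keep unambiguous matches only
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(0)
desc_a = rng.random((5, 128))
desc_b = desc_a + rng.normal(0, 0.01, size=desc_a.shape)  # same features, jittered
print(match_descriptors(desc_a, desc_b))
# → [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

Chaining such matches frame to frame yields feature tracks through the whole sequence.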
AN ALGORITHM FOR EXTRACTING CHARACTER FEATURE POINTS BASED ON CIRCULAR TEMPLATE
Institute of Scientific and Technical Information of China (English)
曹中华; 郭斌; 王艳
2011-01-01
This paper presents a new algorithm for extracting character feature points based on a circular template. By analyzing the number and the relevant characteristics of the background regions into which the circular template is divided by the character strokes, the algorithm determines the positions of the character's feature points; the fork positions are then located precisely using a maximum-circle criterion. Experiments show that the algorithm extracts the feature point information quickly, accurately and robustly.
Improved Text Feature Extraction Algorithm Based on N-Gram
Institute of Scientific and Technical Information of China (English)
余小军; 刘峰; 张春
2012-01-01
An improved N-Gram text feature extraction algorithm is proposed. The algorithm incorporates part-of-speech analysis and weight filtering into the extraction of N-Gram feature vectors, which effectively addresses the poor adaptability of N-Gram, the redundancy of the feature vectors, and their independence from text properties. Experimental results show that the improved algorithm describes text features more accurately, so it is well suited to Chinese information processing tasks such as text feature processing and Web text data mining.
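Character-level n-gram extraction with a crude frequency-based weight filter can be sketched as follows. This is a generic illustration; the paper's part-of-speech analysis is not reproduced, and the top-k cutoff is an assumed stand-in for its weight filtering.

```python
from collections import Counter

def char_ngrams(text, n=2):
    """Frequency-weighted character n-gram features of a text."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def top_features(text, n=2, k=3):
    """Keep only the k highest-weight n-grams (a crude weight filter)."""
    return [g for g, _ in char_ngrams(text, n).most_common(k)]

doc = "data mining and text mining"
print(char_ngrams(doc, 2)["in"])   # → 4
print(top_features(doc, 2))
```

For Chinese text the same code applies unchanged, since character bigrams are a common word-segmentation-free feature choice.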
Gray Cerebrovascular Image Skeleton Extraction Algorithm Using Level Set Model
Directory of Open Access Journals (Sweden)
Jian Wu
2010-06-01
Full Text Available The ambiguity and complexity of medical cerebrovascular images make the skeletons obtained by conventional skeleton algorithms discontinuous, sensitive at weak edges, poorly robust, and prone to burrs. This paper proposes a cerebrovascular image skeleton extraction algorithm based on a level set model, using a Euclidean distance field and an improved gradient vector flow to obtain two different energy functions. The first energy function controls the acquisition of the topological nodes from which the skeleton curves start; the second controls the extraction of the skeleton surface. The algorithm avoids locating and classifying the skeleton connection points that would otherwise guide the extraction, and because all its parameters are obtained by analysis and reasoning, no manual intervention is needed.
MERIT: Minutiae Extraction using Rotation Invariant Algorithm
Directory of Open Access Journals (Sweden)
Avinash Pokhriyal
2010-07-01
Full Text Available Thinning a fingerprint makes its ridges as thin as one pixel while still retaining the basic structure. Many algorithms have been devised to extract the skeleton of a fingerprint image, but they produce different results for different rotations of the same fingerprint, which leads to inefficient minutiae extraction. In this paper, a new way of thinning a fingerprint image is proposed, called MERIT (Minutiae Extraction using Rotation Invariant Thinning), as it thins a fingerprint image irrespective of the fingerprint's orientation and then extracts minutiae points from the image. First, we binarize the fingerprint image into a 0-1 pattern. Then we apply morphological operations such as dilation and erosion, along with if-then rules governing a 3x3 mask that is convolved across the image to skeletonize it. Finally, some postprocessing is applied to the thinned image to remove false minutiae structures, and genuine minutiae points are extracted from the thinned fingerprint image along with their directions. Results show that the proposed method extracts genuine minutiae points even from low-quality fingerprint images.
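For illustration, here is the standard Zhang-Suen thinning algorithm, a widely used mask-based baseline of the kind the rotation-invariant MERIT method improves upon; this sketch is not the paper's method.

```python
import numpy as np

def zhang_suen_thin(img):
    """Standard Zhang-Suen thinning on a 0/1 image (1 = ridge pixel)."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                     # the two sub-iterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # neighbours P2..P9, clockwise from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                  # neighbour count
                    a = sum(p[k] == 0 and p[(k + 1) % 8] == 1   # 0->1 transitions
                            for k in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:              # delete only after the full scan
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

# A 5-pixel-wide bar thins down to a roughly one-pixel line
bar = np.zeros((9, 20), dtype=np.uint8)
bar[2:7, 2:18] = 1
thin = zhang_suen_thin(bar)
print(int(thin.sum()), int(bar.sum()))   # far fewer ridge pixels after thinning
```

Because the two sub-iterations use direction-dependent masks, the result of this baseline depends on the pattern's orientation, which is precisely the weakness a rotation-invariant thinning scheme targets.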
Feature extraction algorithm for CT images based on NSCT-GLCM
Institute of Scientific and Technical Information of China (English)
张人上
2014-01-01
Feature extraction is a key problem in the segmentation of massive CT images. A novel feature extraction algorithm for CT images is proposed based on the Non-Subsampled Contourlet Transform (NSCT) and the Gray Level Co-occurrence Matrix (GLCM). First, the CT image is decomposed at multiple scales and in multiple directions by the NSCT, and the co-occurrence features of the sub-band images are extracted with the GLCM. The redundant features are then eliminated by principal component analysis, and the remaining features are composed into feature vectors. Finally, the CT image is segmented by a support vector machine in the multi-feature vector space. Experimental results show that the proposed algorithm extracts CT image features well and improves the segmentation accuracy of CT images, so it can provide supporting information for diagnosis.
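The GLCM step can be sketched directly in NumPy. The quantised random image below stands in for an NSCT sub-band, and the three Haralick-style statistics are common choices, not necessarily the exact feature set used in the paper.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for pixel offset (dy, dx)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1   # count co-occurring levels
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy and homogeneity of a normalised GLCM."""
    i, j = np.indices(p.shape)
    return {"contrast": float(np.sum(p * (i - j) ** 2)),
            "energy": float(np.sum(p ** 2)),
            "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j))))}

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(32, 32))   # stand-in for a quantised sub-band
print(glcm_features(glcm(img)))
```

Computing these statistics for several offsets and for every NSCT sub-band yields the texture feature vector that is then reduced by PCA and fed to the classifier.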
Shortcut Algorithm for Simulation of Batch Extractive Distillation
Institute of Scientific and Technical Information of China (English)
WU Huixiong; XU Shimin; HU Hui; XIAO Bin
2007-01-01
The batch extractive distillation (BED) process has the advantages of both batch and extractive distillation. It is one of the most promising means for separating azeotropic and close-boiling systems. However, this process has so far not been applied in industry because of its complexity. A new shortcut model is proposed to simulate batch extractive distillation operations. The algorithm is based on the assumption that the batch extractive distillation column can be considered a continuous extractive distillation column whose feed changes at every moment: the whole batch process is simulated as a succession of a finite number of short steady states, during which the holdup is treated as a constant molar quantity. For each time period, the batch extractive distillation process is solved with the algorithm for continuous extractive distillation. Finally, the practical implementation of the shortcut model is discussed, and data from the laboratory and the literature are presented. The model shows better adaptability, more satisfactory accuracy and a lower computational load than previous rigorous models, so the algorithm for simulating BED is verified.
Directory of Open Access Journals (Sweden)
T. Karpagam
2012-01-01
Full Text Available Problem statement: Network topology design problems find application in several real-life scenarios. Approach: Most past designs optimize for a single criterion, such as shortest path, cost minimization or maximum flow. Results: This study discusses a multi-objective network topology design problem for a realistic traffic model, specifically in pipeline transportation. The flow-based algorithm presented here aims to transport liquid goods at maximum capacity over the shortest distance, and it is developed in the spirit of basic PERT and the critical path method. Conclusion/Recommendations: The flow-based algorithm yields optimal results for transporting maximum capacity at minimum cost. It could be used in juice factories and the milk industry, and it is a good alternative for the vehicle routing problem.
Gradient Algorithm on Stiefel Manifold and Application in Feature Extraction
Directory of Open Access Journals (Sweden)
Zhang Jian-jun
2013-09-01
Full Text Available To improve the computational efficiency of system feature extraction, reduce the occupied memory space, and simplify the program design, a modified gradient descent method on the Stiefel manifold is proposed, based on the optimization framework of geometric methods on Riemannian manifolds. Different geodesic calculation formulas are used for different scenarios, and a polynomial approximation to the geodesic equations is also employed. The Qin Jiushao-Horner polynomial algorithm, together with line-search strategies and adjustment of the iteration step size, is adopted. As an example of system feature extraction, the gradient descent algorithm on the Stiefel manifold applied to Principal Component Analysis (PCA) is discussed in detail. Theoretical analysis and simulation experiments show that the new method achieves superior performance in both convergence rate and computational efficiency while ensuring the orthonormality of the columns. In addition, it is easier to implement in software or hardware.
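Gradient optimisation on the Stiefel manifold can be sketched for the PCA example as below. The tangent-space projection and the QR retraction are standard textbook choices, not necessarily the geodesic formulas used by the authors, and the step size is an assumed parameter.

```python
import numpy as np

def pca_on_stiefel(A, p, lr=0.05, iters=500, seed=0):
    """Gradient ascent of tr(X^T A X) on the Stiefel manifold St(n, p)
    with a QR retraction; converges to the top-p eigenspace of A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.normal(size=(n, p)))   # random orthonormal start
    for _ in range(iters):
        G = 2 * A @ X                              # Euclidean gradient
        sym = (X.T @ G + G.T @ X) / 2
        xi = G - X @ sym                           # project onto tangent space
        Q, R = np.linalg.qr(X + lr * xi)           # retract back to the manifold
        X = Q * np.sign(np.diag(R))                # fix the QR sign ambiguity
    return X

# Covariance matrix with a known dominant 2-D eigenspace
A = np.diag([5.0, 3.0, 0.5, 0.1])
X = pca_on_stiefel(A, 2)
print(np.round(X.T @ X, 6))                 # identity: columns stay orthonormal
print(np.round(np.trace(X.T @ A @ X), 3))   # ≈ 8.0 = 5 + 3
```

The retraction keeps the iterate exactly orthonormal at every step, which is the column-orthogonality guarantee the abstract emphasizes.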
Research on incremental extraction based on MD5 and HASH algorithm
Institute of Scientific and Technical Information of China (English)
郭亮; 杨金民
2014-01-01
To achieve rapid incremental extraction from a database, an algorithm that incorporates MD5 into a linear HASH scan is proposed, based on an analysis of traditional incremental extraction methods. Each record in the database can be viewed as a character string; a hash table is generated from the backup records, and the original records probe this table, so that a single linear scan obtains the increment and the number of comparisons is reduced. Meanwhile, the MD5 algorithm generates a fingerprint of each record, which shortens the strings involved in each hash computation and comparison and improves efficiency. The proposed algorithm was tested in an ORACLE database, and the results show that its computational efficiency is greatly improved compared with traditional methods.
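The fingerprint-and-probe idea can be sketched with Python's standard hashlib; the record layout below is hypothetical, and the sketch detects inserts and deletes (an update appears as one of each).

```python
import hashlib

def fingerprint(record):
    """MD5 fingerprint of one record (a tuple of field values)."""
    return hashlib.md5("|".join(map(str, record)).encode()).hexdigest()

def incremental_changes(old_rows, new_rows):
    """Hash-based change detection: returns (inserted, deleted) records.

    Rows whose fingerprints match are skipped without a field-by-field
    comparison, so only short digests are compared."""
    old = {fingerprint(r): r for r in old_rows}
    new = {fingerprint(r): r for r in new_rows}
    inserted = [r for h, r in new.items() if h not in old]
    deleted = [r for h, r in old.items() if h not in new]
    return inserted, deleted

backup = [(1, "alice", "HR"), (2, "bob", "IT")]
current = [(1, "alice", "HR"), (2, "bob", "R&D"), (3, "carol", "IT")]
ins, dele = incremental_changes(backup, current)
print(ins)    # → [(2, 'bob', 'R&D'), (3, 'carol', 'IT')]
print(dele)   # → [(2, 'bob', 'IT')]
```

Comparing fixed-length 128-bit digests instead of full rows is what keeps the scan linear regardless of record width.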
Institute of Scientific and Technical Information of China (English)
赵媛媛; 王力
2014-01-01
This paper studies speech feature extraction based on the ISOP manifold learning algorithm, applying the algorithm to the feature extraction module of a speech recognition model. Simulation results show that the proposed algorithm achieves a higher recognition rate than traditional feature extraction algorithms such as MFCC and LPCC.
Parameter selection of pocket extraction algorithm using interaction interface
Institute of Scientific and Technical Information of China (English)
KIM Chong-Min; WON Chung-In; RYU Joonghyun; CHO Cheol-Hyung; BHAK Jonghwa; KIM Deok-Soo
2006-01-01
Pockets in proteins are known to be very important for life processes. Several past studies have automatically extracted pockets from the structural information of known proteins, but studies comparing the precision of the extracted pockets against known pockets on the protein are hard to find. In this paper, we propose an algorithm for extracting pockets from protein structure data and analyze its quality by comparing the extracted pockets with known pockets. The results in this paper can be used to set the parameter values of the pocket extraction algorithm to obtain better results.
Moving object extraction algorithm based on graph-cuts in compressed domain
Institute of Scientific and Technical Information of China (English)
鲁建飞; 刘渊; 谢振平; 吴昊天
2015-01-01
With the increasing volume of video data in applications such as surveillance, how to process and analyze video content quickly and effectively remains an open problem. Moving object extraction is usually performed with pixel-domain analysis, which achieves good subjective and objective performance but is limited in practical applications by its high computational complexity. This paper proposes a new moving object extraction algorithm based on graph cuts in the compressed domain. Background modeling is performed on the 4x4 blocks of the compressed domain to obtain an initial probability for each block. A graph-cuts energy function is then constructed from the initial probabilities and the motion vector (MV) information associated with each block, and the graph-cuts algorithm refines the foreground region so that moving objects can be extracted quickly. Comparative experiments with other moving-region extraction algorithms show that the proposed algorithm achieves higher accuracy with lower computational complexity, which is important for real applications.
AN ANT COLONY ALGORITHM FOR MINIMUM UNSATISFIABLE CORE EXTRACTION
Institute of Scientific and Technical Information of China (English)
Zhang Jianmin; Shen Shengyu; Li Sikun
2008-01-01
Explaining the causes of infeasibility of Boolean formulas has many practical applications in electronic design automation and formal verification of hardware. Furthermore, a minimum explanation of infeasibility that excludes all irrelevant information is generally of interest. A smallest-cardinality unsatisfiable subset, called a minimum unsatisfiable core, can provide a succinct explanation of infeasibility and is valuable for applications. However, little attention has been devoted to the extraction of minimum unsatisfiable cores. In this paper, the relationship between maximal satisfiability and minimum unsatisfiability is presented and proved, and an efficient ant colony algorithm is proposed to derive an exact or nearly exact minimum unsatisfiable core based on this relationship. Finally, experimental results on practical benchmarks are reported and compared with the best known approach; the results show that the ant colony algorithm strongly outperforms the best previous algorithm.
Institute of Scientific and Technical Information of China (English)
陆军; 彭仲涛; 董东来; 宋景豪
2014-01-01
Three-dimensional point cloud registration has wide application prospects in fields such as robotic environment perception and modeling and reverse engineering. An optimized FPFH feature extraction and registration method is designed to solve the registration problem for point cloud data acquired from different views. Based on the extracted key points, the normal vector calculation used in computing the FPFH feature descriptor is optimized: the surface normals of each key point and its neighboring points are estimated from the measurement point and its neighborhood. The SAC-IA algorithm is used to obtain the initial point cloud coordinate transformation matrix, and finally the ICP algorithm is used for precise registration. Three registration schemes are designed. The experimental results show that the scheme which only calculates the normals of key points and their neighbors is faster and has high precision.
Feature extraction and classification algorithms for high dimensional data
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized
Parameter Extraction of Solar Photovoltaic Modules Using Gravitational Search Algorithm
Directory of Open Access Journals (Sweden)
R. Sarjila
2016-01-01
Parameter extraction of a solar photovoltaic system is a nonlinear problem. Many optimization algorithms have been implemented for this purpose but fail to give good results at low irradiance levels. This article presents a novel method for parameter extraction using the gravitational search algorithm (GSA). The proposed method evaluates the parameters of different PV panels at various irradiance levels. A critical evaluation and comparison of the gravitational search algorithm with other optimization techniques, such as the genetic algorithm, is given. Extensive simulation analyses carried out on the proposed method show that GSA is well suited to the parameter extraction problem.
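A minimal sketch of a gravitational search step for such a parameter-fitting problem is given below. It follows the common GSA formulation (decaying gravitational constant, fitness-derived masses, pairwise attractive forces); the constants `g0` and `alpha` and the update rules are generic assumptions, not necessarily the article's settings, and the objective here is a toy function rather than a PV model.

```python
import numpy as np

def gsa_minimize(f, dim, n=20, iters=100, lo=-5.0, hi=5.0,
                 g0=100.0, alpha=20.0, seed=0):
    """Gravitational search algorithm sketch: agents attract each other
    with forces proportional to fitness-derived masses."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        b = fit.argmin()
        if fit[b] < best_f:
            best_f, best_x = fit[b], x[b].copy()
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)   # raw masses
        M = m / (m.sum() + 1e-12)                    # normalized masses
        g = g0 * np.exp(-alpha * t / iters)          # decaying constant
        a = np.zeros((n, dim))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r = np.linalg.norm(x[i] - x[j]) + 1e-12
                a[i] += rng.random() * g * M[j] * (x[j] - x[i]) / r
        v = rng.random((n, dim)) * v + a
        x = np.clip(x + v, lo, hi)
    return best_x, best_f
```

For a real PV module, `f` would be the error between measured and modeled I-V curves as a function of the diode-model parameters.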
Online Feature Extraction Algorithms for Data Streams
Ozawa, Seiichi
Along with the development of network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (texts, images, sound, etc.) are captured in real time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for identification. Those face images are considered a stream of data. Therefore, in order to identify a person more accurately under realistic environments, a high-performance feature extraction method for streaming data, which can autonomously adapt to changes in the data distribution, is required. In this review paper, we discuss recent trends in online feature extraction for streaming data. A variety of feature extraction methods for streaming data have been proposed recently; due to space limitations, we focus here on incremental principal component analysis.
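Oja's rule is among the simplest online estimators of the kind this review surveys: it updates a principal-component estimate one sample at a time, never storing the stream. The sketch below is illustrative only (not a specific method from the paper) and estimates just the first principal component.

```python
import numpy as np

def oja_first_pc(stream, dim, eta=0.01, seed=0):
    """Streaming estimate of the first principal component via Oja's
    rule: w += eta * y * (x - y * w), with y = w.x."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for x in stream:
        y = w @ x                   # projection onto current estimate
        w += eta * y * (x - y * w)  # Oja update, keeps ||w|| near 1
        w /= np.linalg.norm(w)      # explicit renormalization for safety
    return w
```

Each face-image frame in the example above would play the role of one vector `x` in the stream.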
Sakashita, Makiko; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Nawano, Shigeru
2007-03-01
This paper presents a method for extracting multi-organs from four-phase contrasted CT images taken at different contrast timings (non-contrast, early, portal, and late phases). First, we apply a median filter to each CT image and align four-phase CT images by performing non-rigid volumetric image registration. Then, a three-dimensional joint histogram of CT values is computed from three-phase (early-, portal-, and late-) CT images. We assume that this histogram is a mixture of normal distributions corresponding to the liver, spleen, kidney, vein, artery, muscle, and bone regions. The EM algorithm is employed to estimate each normal distribution. Organ labels are assigned to each voxel using the Mahalanobis distance measure. Connected component analysis is applied to correct the shape of each organ region. After that, the pancreas region is extracted from non-contrasted CT images in which other extracted organs and vessel regions are excluded. The EM algorithm is also employed for estimating the distribution of CT values inside the pancreas. We applied this method to seven cases of four-phase CT images. Extraction results show that the proposed method extracted multi-organs satisfactorily.
A Novel Multithreaded Algorithm For Extracting Maximal Chordal Subgraphs
Energy Technology Data Exchange (ETDEWEB)
Halappanavar, Mahantesh; Feo, John T.; Dempsey, Kathryn; Ali, Hesham; Bhowmick, Sanjukta
2012-10-25
Chordal graphs are triangulated graphs where any cycle larger than three is bisected by a chord. Many combinatorial optimization problems such as computing the minimum fill-in, the size of the maximum clique and the chromatic number are NP-hard on general graphs but have polynomial time solutions on chordal graphs. In this paper, we present a novel multithreaded algorithm to extract a maximal chordal subgraph from a general graph. Our algorithm is based on an iterative approach where each thread can asynchronously update a subset of edges that are dynamically assigned to it. We implement our algorithm on two different multithreaded architectures: Cray XMT, a massively multithreaded platform, and AMD Magny-Cours, a shared memory multicore platform. In addition to the proof of correctness, we present the performance of our algorithm using a test set of carefully generated synthetic graphs with up to half a billion edges and real-world networks from gene correlation studies. We demonstrate that our algorithm achieves high scalability for all inputs on both types of architectures.
Improved texture feature extraction algorithm based on GLCM
Institute of Scientific and Technical Information of China (English)
龚家强; 李晓宁
2011-01-01
After studying the GLCM and its improved algorithms, especially their excessive computational burden, a novel method based on the grey level co-occurrence hybrid structure (GLCHS) and the discrete Fourier transform is presented for texture feature extraction. First, the spectrum image obtained by the Fourier transform is divided into several blocks, which reduces the number of gray levels. Then, gray normalization is performed to reduce the range of gray values. Finally, the GLCHS is used to compute a five-dimensional vector describing the image's texture. The experimental results indicate that the improved method reduces the computational complexity and greatly reduces the time of feature extraction.
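For reference, the plain pixel-domain GLCM that this work improves upon can be sketched with a five-feature descriptor (contrast, energy, homogeneity, entropy, correlation). This is the classic baseline, not the paper's hybrid-structure variant, and the function and parameter names are illustrative.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for offset (dx, dy), summarized
    by five common Haralick-style texture features."""
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)  # quantize
    h, w = q.shape
    glcm = np.zeros((levels, levels))
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    p = glcm / glcm.sum()                        # joint probabilities
    ii, jj = np.indices(p.shape)
    contrast = ((ii - jj) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1 + np.abs(ii - jj))).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    mu_i, mu_j = (ii * p).sum(), (jj * p).sum()
    si = np.sqrt(((ii - mu_i) ** 2 * p).sum())
    sj = np.sqrt(((jj - mu_j) ** 2 * p).sum())
    corr = ((ii - mu_i) * (jj - mu_j) * p).sum() / (si * sj + 1e-12)
    return np.array([contrast, energy, homogeneity, entropy, corr])
```

The quadratic cost in image size and gray levels visible here is exactly the burden the GLCHS approach aims to reduce.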
HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM
Directory of Open Access Journals (Sweden)
Narendran Rajagopalan
2012-01-01
Performance of a Wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The Media Access Control layer is responsible for key functions such as access control and flow control. During contention, a backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying different variations of backoff algorithms that have been proposed, a new variant called the History based Probabilistic Backoff algorithm is proposed. Mathematical analysis and simulation results using NS-2 show that the proposed History based Probabilistic Backoff algorithm performs better than the Binary Exponential Backoff algorithm.
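One plausible reading of "history based probabilistic backoff" is a contention window scaled by the collision rate observed over a sliding history window, instead of doubling on every collision. The sketch below is a guess at such a scheme under that assumption; the class, parameter names, and scaling rule are hypothetical and not taken from the paper.

```python
import random
from collections import deque

class HistoryBackoff:
    """Backoff sketch: the contention window (CW) grows with the
    collision probability estimated from recent history."""
    def __init__(self, cw_min=16, cw_max=1024, history=16):
        self.cw_min, self.cw_max = cw_min, cw_max
        self.history = deque(maxlen=history)   # 1 = collision, 0 = success

    def contention_window(self):
        if not self.history:
            return self.cw_min
        p = sum(self.history) / len(self.history)  # estimated collision prob.
        cw = int(self.cw_min * (1 + p * (self.cw_max / self.cw_min - 1)))
        return min(max(cw, self.cw_min), self.cw_max)

    def backoff_slots(self, rng=random):
        """Draw a uniform backoff count from the current window."""
        return rng.randrange(self.contention_window())

    def record(self, collided):
        self.history.append(1 if collided else 0)
```

Unlike binary exponential backoff, the window here shrinks again as successes displace collisions in the history.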
Research on a New Key-Frame Extraction Algorithm of Printing Video
You, Fucheng; Chen, Yujie
The scenes and contents of printing video mostly describe the operation, usage and running of printing machines. Based on these features of printing video, this paper introduces several approaches to key-frame extraction. Among commonly used traditional algorithms, key-frame extraction based on the average histogram and based on shot boundaries are described in detail. Moreover, this paper proposes a novel approach to key-frame extraction based on the average frame-difference. The experimental results indicate that the new approach is very effective and efficient for extracting key-frames of printing video.
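The average frame-difference idea can be sketched as thresholding per-frame intensity differences against their mean. This is one plausible reading of the approach; the specific selection rule (mean plus `k` standard deviations) is an assumption for illustration.

```python
import numpy as np

def key_frames_by_avg_diff(frames, k=1.0):
    """Select key frames where the mean absolute difference to the
    previous frame exceeds mean + k*std of all frame differences."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(frames[1:] - frames[:-1]).mean(axis=(1, 2))
    thresh = diffs.mean() + k * diffs.std()
    # frame 0 is always a key frame; others are kept on large change
    return [0] + [i + 1 for i, d in enumerate(diffs) if d > thresh]
```

On a printing-machine video, a large average difference would correspond to a change in the operation being shown.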
Vein image extraction method based on improved NIBLACK algorithm
Institute of Scientific and Technical Information of China (English)
郑均辉; 喻恒
2015-01-01
Vein recognition is an emerging biometric feature recognition technology. In order to meet the demands of feature extraction in hand vein recognition, a study on hand vein extraction methods is made in this paper. The hand vein image is first enhanced with the CLAHE algorithm. Then, because the traditional NIBLACK algorithm for image binarization has some flaws, an improved algorithm combining a local static threshold method with the NIBLACK algorithm is proposed. Experimental results show that the method can effectively eliminate the phenomena of excessive noise and texture breaks found in traditional methods, and overcome the impact of light intensity on image extraction, keeping the vein structure clear and complete so as to meet the needs of subsequent recognition work.
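The baseline NIBLACK local threshold T = m + k·s that the paper improves upon can be sketched directly; only the classic algorithm is shown here, not the proposed combination with a local static threshold, and the window size and `k` are illustrative defaults.

```python
import numpy as np

def niblack_binarize(img, window=15, k=-0.2):
    """Classic Niblack binarization: each pixel is compared against
    T = mean + k*std computed over a sliding local window."""
    img = img.astype(float)
    h, w = img.shape
    r = window // 2
    pad = np.pad(img, r, mode='reflect')
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + window, j:j + window]
            t = win.mean() + k * win.std()   # Niblack threshold
            out[i, j] = 255 if img[i, j] > t else 0
    return out
```

The well-known weakness visible in this sketch (noise in smooth regions, where the local std is near zero) is one of the flaws the paper's hybrid scheme targets.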
TreeWrap- Data Extraction Using Tree Matching Algorithm
Directory of Open Access Journals (Sweden)
Fariza Fauzi
2010-06-01
In this paper, we develop a non-visual automatic wrapper to extract data records from search engine results pages, which contain important information for computer users. Our wrapper consists of a series of data filters to detect and remove irrelevant data from the web page. In the filtering stages, we incorporate two main algorithms which check the similarity of data records and detect and extract the correct data region based on component sizes. To evaluate the performance of our algorithm, we carry out experimental and deletion tests. Experimental tests show that our wrapper outperforms existing state-of-the-art wrappers such as ViNT and DEPTA. Deletion studies, in which our novel techniques are replaced with conventional state-of-the-art techniques, show that our wrapper design is efficient and can robustly extract data records from search engine results pages. With its speed advantages, our wrapper could be beneficial in processing large amounts of web site data, which could be helpful in meta search engine development.
A Text Categorization Algorithm Based on Sense Group
Directory of Open Access Journals (Sweden)
Jing Wan
2013-02-01
Giving further consideration to linguistic features, this study proposes an algorithm for Chinese text categorization based on sense groups. The algorithm extracts sense groups by analyzing the syntactic and semantic properties of Chinese texts and builds a category sense group library. SVM is used for the text categorization experiment. The experimental results show that the precision and recall of the new sense-group-based algorithm are better than those of traditional algorithms.
Cryptographic Protocols Based on Root Extracting
DEFF Research Database (Denmark)
Koprowski, Maciej
In this thesis we design new cryptographic protocols, whose security is based on the hardness of root extracting or, more specifically, the RSA problem. First we study the problem of root extraction in finite Abelian groups, where the group order is unknown. This is a natural generalization of the...... complexity of root extraction, even if the algorithm can choose the "public exponent" itself. In other words, both the standard and the strong RSA assumption are provably true w.r.t. generic algorithms. The results hold for arbitrary groups, so security w.r.t. generic attacks follows for any cryptographic...... construction based on root extracting. As an example of this, we modify the Cramer-Shoup signature scheme such that it becomes a generic algorithm. We then discuss implementing it in RSA groups without the original restriction that the modulus must be a product of safe primes. It can also be implemented in class......
Web Based Genetic Algorithm Using Data Mining
Ashiqur Rahman; Asaduzzaman Noman; Md. Ashraful Islam; Al-Amin Gaji
2016-01-01
This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in an education web-based system. A combination of multiple classifiers leads to a significant improvement in classification performance. Through weighting the feature vectors using a Genetic Algorithm we can optimize the prediction accuracy and get a marked improvement over raw classification. It further shows that when the number of features is few; fea...
Institute of Scientific and Technical Information of China (English)
李冰; 孙辉; 孙宁; 王坤
2015-01-01
To solve the problem of endmember extraction from hyperspectral remote sensing imagery, a new endmember extraction method based on an improved artificial bee colony (ABC) algorithm is proposed. First, a weighted, guided search strategy is used to balance exploration and exploitation in ABC, yielding a new algorithm named IABC. Experiments carried out on 8 benchmark functions show that the performance of the new algorithm is significantly improved. Then, the core idea and the main steps of IABC-based endmember extraction are introduced. The results show that the new algorithm has better applicability than ABC and conventional extraction algorithms on both simulated and real hyperspectral data.
Research on Endmember Extraction Algorithm Based on Spectral Classification
Institute of Scientific and Technical Information of China (English)
高晓惠; 相里斌; 魏儒义; 吕群波; 卫俊霞
2011-01-01
Spectral unmixing is an important task in hyperspectral remote sensing data processing, comprising the extraction of pure spectra (endmembers) and the calculation of their abundance values. The most established endmember extraction algorithms (EEAs) are designed based on convex geometry, such as the pixel purity index (PPI), the N-finder algorithm (N-FINDR) and VCA. These algorithms choose pure spectra from all pixels of an image, so they suffer from slow processing speed and poor precision; some algorithms also require spectral dimensionality reduction, which hinders the extraction of small-target information. This paper proposes an algorithm that first classifies the hyperspectral image, using a spectral classification based on spatial features, into classes that are spatially adjacent and spectrally similar, and takes the mean spectrum of each class as its standard spectrum. Pure spectra are then extracted from the standard spectra of all classes, which markedly reduces computation and lowers the influence of noise, greatly enhancing the speed and precision of endmember extraction. An endmember extraction algorithm based on spectral redundancy is also adopted, which does not require the number of endmembers to be set in advance and is therefore more reasonable than PPI, N-FINDR and similar algorithms. Comparison with the results of the SMACC algorithm in ENVI shows that the proposed algorithm extracts endmembers accurately, with good spatial continuity and strong noise resistance.
Accurate and efficient maximal ball algorithm for pore network extraction
Arand, Frederick; Hesser, Jürgen
2017-04-01
The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
The Result Integration Algorithm Based on Matching Strategy
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
This paper provides a new algorithm: a result integration algorithm based on a matching strategy. The algorithm extracts the title and the abstract of Web pages, calculates the relevance between the query string and the Web pages, decides which Web pages are accepted or rejected, and sorts them in the user interface. The experimental results clearly indicate that the new algorithm improves the precision of the meta-search engine. This technique is very useful for meta-search engines.
A boundary extraction algorithm based on scattered point cloud
Institute of Scientific and Technical Information of China (English)
吴禄慎; 晏海平; 陈华伟; 高项清
2014-01-01
The boundary of a point cloud is one of the most important features of the surface; extracting the boundary line quickly and accurately is important for improving the efficiency and quality of surface reconstruction. First, we use a kd-tree based search to establish the topological relationships in the point cloud and to perform fast K-neighbourhood search, fit a local tangent plane using each sampling point and its K neighbourhood as the local surface reference, and project these points onto the tangent plane. Second, we set up a local coordinate system on the plane and parameterise the projected points, and identify boundary feature points based on the principle that the sum of the field forces exerted by the neighbourhood point set on a sampling point represents the average effect of the point set. Finally, to improve the continuity of the boundary lines, we use NURBS curve interpolation to connect them. Experimental results show that the algorithm can extract the boundary feature points of point clouds quickly and effectively and obtain boundary lines with C2 continuity, meeting the requirements of surface reconstruction.
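The field-force boundary criterion can be illustrated in simplified form: sum the unit vectors from a point to its neighbours; for interior points the vectors cancel, while for boundary points they do not. This sketch makes two simplifications the paper does not: a brute-force neighbour search stands in for the kd-tree, and the tangent-plane projection step is omitted. The threshold value is an illustrative assumption.

```python
import numpy as np

def boundary_points(points, k=8, thresh=0.3):
    """Flag boundary points of a point cloud by the magnitude of the
    summed unit vectors to the k nearest neighbours ('field force')."""
    pts = np.asarray(points, dtype=float)
    # brute-force pairwise distances (a kd-tree would be used at scale)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    flags = np.zeros(len(pts), dtype=bool)
    for i in range(len(pts)):
        idx = np.argsort(d[i], kind='stable')[1:k + 1]  # skip the point itself
        vecs = pts[idx] - pts[i]
        units = vecs / np.linalg.norm(vecs, axis=1)[:, None]
        # interior: forces cancel; boundary: a net outward-facing force remains
        flags[i] = np.linalg.norm(units.sum(axis=0)) / k > thresh
    return flags
```

On a regular grid the criterion flags exactly the perimeter, as the test below checks.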
Variables Bounding Based Retiming Algorithm
Institute of Scientific and Technical Information of China (English)
宫宗伟; 林争辉; 陈后鹏
2002-01-01
Retiming is a technique for optimizing sequential circuits. In this paper, we discuss this problem and propose an improved retiming algorithm based on variable bounding. Through the computation of lower and upper bounds on the variables, the algorithm can significantly reduce the number of constraints and speed up the execution of retiming. Furthermore, the elements of the matrices D and W are computed in a demand-driven way, which reduces memory usage. Experimental results on the ISCAS89 benchmarks show that our algorithm is very effective for large-scale sequential circuits.
Institute of Scientific and Technical Information of China (English)
尹辉增; 孙轩; 聂振钢
2012-01-01
An efficient automated extraction algorithm for power lines based on Airborne Laser Scanning (ALS) data is put forward. The algorithm adopts point cloud classification based on local height-histogram distribution patterns, line extraction in Hough space prioritizing a global direction feature, mathematical estimation of hanging point positions, and local piecewise polynomial fitting. Four key problems are solved by the algorithm: point cloud classification, extraction of the planar position of power lines, extraction of power line hanging points, and power line fitting. Finally, the applicability of the algorithm is demonstrated on practical engineering data.
A New Matlab De-noising Algorithm for Signal Extraction
Institute of Scientific and Technical Information of China (English)
ZHANG Fu-ming; WU Song-lin
2007-01-01
The goal of a de-noising algorithm is to reconstruct a signal from its noise-corrupted observations. Perfect reconstruction is seldom possible and performance is measured under a given fidelity criterion. In a recent work, the authors presented a new Matlab algorithm for de-noising. A key step of the algorithm is selecting an optimal basis from a library of wavelet bases for ideal de-noising. The algorithm was implemented using Matlab's Wavelet Toolbox. The experimental results show that the new algorithm is efficient in signal de-noising.
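A single-level Haar soft-threshold de-noiser illustrates the wavelet-basis idea behind such algorithms. This is a minimal sketch, not the paper's method: the optimal-basis search over a wavelet library is not reproduced, and the universal threshold sigma·sqrt(2·ln n) is a standard choice assumed here for illustration.

```python
import numpy as np

def haar_denoise(x, sigma):
    """One-level Haar wavelet de-noising with soft thresholding of the
    detail coefficients at the universal threshold."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)           # approximation coeffs
    d = (x[0::2] - x[1::2]) / np.sqrt(2)           # detail coeffs
    t = sigma * np.sqrt(2 * np.log(n))             # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0)  # soft thresholding
    out = np.empty(n)
    out[0::2] = (a + d) / np.sqrt(2)               # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

A full library-based method would run such a transform over several candidate bases and keep the one minimizing the de-noising risk.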
A Robust Algorithm of Contour Extraction for Vehicle Tracking
Institute of Scientific and Technical Information of China (English)
FANZhimin; ZHOUJie; GAODashan
2003-01-01
Contour extraction of moving vehicles is an important and challenging issue in traffic surveillance. In this paper, a robust algorithm is proposed for contour extraction and moving vehicle tracking. First, we establish a modified snake model and utilize the directional information of the edge map to guide the snaxels' behavior. Then an adaptive shape restriction is embedded into the algorithm to govern the scope of the snake's motion, and a Kalman filter is employed to estimate the spatio-temporal relationship between successive frames. In addition, multiple refinements are suggested to compensate for the snake's vulnerability to fake edges. All of these contribute to a robust overall performance in contour extraction and vehicle tracking. Experimental results in real traffic scenes prove the effectiveness of our algorithm. A comparison with conventional snakes is also provided.
Institute of Scientific and Technical Information of China (English)
张建明; 张小丽; 任鑫博
2011-01-01
Traditional key frame extraction algorithms in the compressed domain suffer from single-feature selection, low accuracy of the extracted key frames, and low efficiency. This paper proposes a key frame extraction algorithm in the compressed domain based on curvature detection. The proposed algorithm marks significant changes of a curve using its high-curvature points. It then constructs a feature similarity curve using the AC and DC coefficients of the DCT, which are inherent characteristics of compressed video. Finally, the algorithm extracts key frames by performing two rounds of high-curvature-point detection on the feature similarity curve. The experimental results show that the proposed algorithm can automatically extract key frames quickly and efficiently, and improves the precision and recall of the extracted key frames.
A FAST AND ROBUST ALGORITHM FOR ROAD EDGES EXTRACTION FROM LIDAR DATA
Directory of Open Access Journals (Sweden)
K. Qiu
2016-06-01
Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges fast and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads have a difference in elevation around their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes which only contain pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, the rough and refined planes are often extracted badly due to rough roads and varying point cloud density. To eliminate the influence of rough roads, a technique similar to the difference of the DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method which adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g. urban roads, highways, and some rural roads). We use the same parameters throughout the experiments and our algorithm achieves real-time processing speeds.
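The rough-plane step can be sketched with a standard RANSAC plane fit: sample three points, hypothesize a plane, and keep the hypothesis with the most inliers. Only this first step of the pipeline is shown; the refinement into multiple pavement planes and the edge extraction are omitted, and the iteration count and inlier tolerance are illustrative.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Fit the dominant plane with RANSAC. Returns (unit normal, d)
    such that n . p + d = 0 for points p on the plane."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < tol
        if inliers.sum() > best_inliers:
            best_inliers, best = inliers.sum(), (n, d)
    return best
```

On mobile LiDAR data, the inliers of this dominant plane would correspond to the road surface, with off-road points rejected as outliers.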
A Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data
Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen
2016-06-01
Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges fast and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads have a difference in elevation around their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes which only contain pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, the rough and refined planes are often extracted badly due to rough roads and varying point cloud density. To eliminate the influence of rough roads, a technique similar to the difference of the DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method which adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g. urban roads, highways, and some rural roads). We use the same parameters throughout the experiments and our algorithm achieves real-time processing speeds.
Institute of Scientific and Technical Information of China (English)
袁晖; 元昌安; 覃晓; 彭昱忠
2013-01-01
The technology of key frame extraction is a research focus in the video data processing domain. A video shot boundary detection algorithm with adaptive division based on wavelet edge detection is presented to overcome the drawbacks of available shot detection algorithms. First, we obtain the video shot segmentation by detecting video shot changes. We then extract image features from the video frames and cluster them using the automatic clustering capability of Gene Expression Programming (GEP), proposing and implementing a video key frame extraction algorithm based on GEP automatic clustering (KFC-GEP). Experimental results demonstrate that the proposed method extracts key frames from video sequences efficiently and effectively.
A Mining Algorithm for Extracting Decision Process Data Models
Directory of Open Access Journals (Sweden)
Cristina-Claudia DOLEAN
2011-01-01
The paper introduces an algorithm that mines logs of user interaction with simulation software. It outputs a model that explicitly shows the data perspective of the decision process, namely the Decision Data Model (DDM). In the first part of the paper we focus on how the DDM is extracted by our mining algorithm. We introduce it as pseudo-code and then provide explanations and examples of how it actually works. In the second part of the paper, we use a series of small case studies to prove the robustness of the mining algorithm and how it deals with the most common patterns we found in real logs.
Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms
Turroni, Francesco
2012-01-01
The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerp...
Institute of Scientific and Technical Information of China (English)
周杉; 黄襄念; 汤翾
2014-01-01
Against the application background of sign language recognition, the main hand features of the user are extracted on the Microsoft Kinect platform. Building on the CLTree and Integrating Boosting algorithms, a new algorithm, Double Mixing, is proposed to process the extracted features, yielding a set of sign-language hand-shape features that a computer can recognize and laying an experimental foundation for subsequent sign language recognition work.
Evolutionary algorithm based index assignment algorithm for noisy channel
Institute of Scientific and Technical Information of China (English)
李天昊; 余松煜
2004-01-01
A globally optimal solution to vector quantization (VQ) index assignment on a noisy channel, the evolutionary algorithm based index assignment algorithm (EAIAA), is presented. The algorithm yields a significant reduction in average distortion due to channel errors over conventional arbitrary index assignment, as confirmed by experimental results over the memoryless binary symmetric channel (BSC) for any bit error rate.
Melody Extraction Method from MIDI Based on H-K Algorithm
Institute of Scientific and Technical Information of China (English)
刘勇; 林景栋; 穆伟力
2011-01-01
The main-melody track contains the most important melodic information in a piece of music; it is the basis of music feature recognition and a prerequisite for the automatic design of music light shows. The work involves melody representation, melodic feature extraction, and classification techniques. For multi-track MIDI files, a main-melody identification method is introduced: melodic feature quantities are extracted to build a feature vector space, a track classifier model is constructed with the H-K classification algorithm, and the main-melody track is separated from the accompaniment tracks, thereby extracting the main melody of the MIDI file. Experimental results show good performance, providing the necessary groundwork for the automatic design of music light shows.
Feature-extraction algorithms for the PANDA electromagnetic calorimeter
Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B
2009-01-01
The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs.
Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have had a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and are currently often achieved with ontology techniques, in which automatic extraction technology is crucial. Because general text-mining algorithms perform poorly on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and a designed weight optimizes the TF-IDF output values; the highest-scoring terms are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates.
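The TF-IDF scoring at the heart of the AECKP abstract can be illustrated with a plain, unweighted TF-IDF ranker. The function name and the pre-tokenized input are assumptions, and the paper's VSM-based weight optimization is omitted; this is only a sketch of the scoring idea.

```python
import math
from collections import Counter

def top_knowledge_points(docs, k=3):
    """Score terms by plain TF-IDF and return the top-k per document.

    docs: list of token lists (already segmented, e.g. by a Chinese
    word segmenter). Terms that appear in every document get idf = 0
    and so never rank as knowledge points.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    results = []
    for doc in docs:
        tf = Counter(doc)
        scores = {t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf}
        ranked = sorted(scores.items(), key=lambda kv: -kv[1])
        results.append([t for t, _ in ranked[:k]])
    return results
```

In the paper, the scores would additionally be reweighted before the highest-scoring terms are kept as knowledge points.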
Active Sonar Detection in Reverberation via Signal Subspace Extraction Algorithm
Directory of Open Access Journals (Sweden)
Ma Xiaochuan
2010-01-01
Full Text Available This paper presents a new algorithm, Signal Subspace Extraction (SSE), for detecting and estimating target echoes in reverberation. The algorithm can be regarded as an extension of Principal Component Inverse (PCI): it retains the benefits of PCI while performing better thanks to a more realistic reverberation model. In the SSE approach, the best low-rank estimate of a target echo is extracted by decomposing the returns into short-duration subintervals and invoking the Eckart-Young theorem twice. Although CW waveforms are less effective than broadband waveforms at low Doppler in spectral methods, the subspace methods detect well regardless of Doppler; hence the signal emitted by the active sonar in the new algorithm is CW, and it performs well in detection and estimation even when the Doppler is low. Further, a block forward matrix is proposed to extend the algorithm to sensor arrays, and the block forward matrix, the conventional matrix, and the three-mode array are compared. Echo separation is also provided by the new algorithm. Examples are presented using both real active-sonar data and simulated data.
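The Eckart-Young step invoked by the SSE abstract is the best low-rank approximation of a data matrix, obtained from a truncated SVD. The sketch below shows only that core operation, not the paper's subinterval decomposition or block forward matrix.

```python
import numpy as np

def low_rank(X, r):
    """Best rank-r approximation of X in the least-squares sense
    (Eckart-Young theorem), via truncated SVD. In PCI/SSE-style
    processing the low-rank part models the strong reverberation
    or target subspace."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

Subtracting `low_rank(X, r)` from `X` leaves the residual outside the retained subspace, which is the separation idea behind these methods.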
Institute of Scientific and Technical Information of China (English)
黄蕾; 邹海
2015-01-01
Image feature extraction is a key problem in digital image processing and pattern recognition, and feature extraction methods continue to proliferate. Phase-congruency feature extraction uses local phase information and has the advantage of invariance to brightness and contrast, but it is weak at extracting contour features. Taking the influence of multi-resolution, multi-scale analysis into account, this paper proposes a multi-scale pyramid feature extraction algorithm based on phase congruency; its key steps are Laplacian pyramid decomposition and the fusion of multi-scale feature images. Experimental results show that the algorithm outperforms the traditional phase-congruency feature extraction algorithm at extracting image contour features.
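The Laplacian pyramid decomposition named in this abstract can be sketched with NumPy alone. A box blur stands in for the usual Gaussian kernel, which is an assumption of this sketch; the phase-congruency computation and feature fusion are not shown.

```python
import numpy as np

def blur(img):
    # 3x3 box blur with edge padding (stand-in for a Gaussian kernel).
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def laplacian_pyramid(img, levels=3):
    """Decompose img into band-pass detail levels plus a low-pass residual."""
    pyr = []
    cur = img.astype(float)
    for _ in range(levels):
        down = blur(cur)[::2, ::2]
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
        up = up[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)     # band-pass detail at this scale
        cur = down
    pyr.append(cur)              # low-pass residual
    return pyr

def reconstruct(pyr):
    """Invert the decomposition exactly by re-adding each detail level."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        up = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1)
        up = up[:detail.shape[0], :detail.shape[1]]
        cur = detail + up
    return cur
```

Because each detail level stores exactly what upsampling loses, the reconstruction is lossless, which is what makes the pyramid suitable as a feature-fusion substrate.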
Water body extraction from remote sensing image based on AdaBoost algorithm
Institute of Scientific and Technical Information of China (English)
李长春; 张光胜; 慎利
2013-01-01
Previous algorithms for water body extraction from remote sensing images build a sophisticated classifier on a single feature, which cannot make full use of the image information, and such complex classifiers are difficult to train. The AdaBoost algorithm constructs a simple classifier from each feature and combines the trained simple classifiers into a strong classifier. The algorithm exploits the strengths of the individual classifiers while avoiding their weaknesses, yielding a better decision and improved classification accuracy. In this paper, a strong classifier for extracting water body information is constructed by combining the gray level of each band, a water index, and inter-spectral relationships. Using TM imagery as study data, the experimental results show that the algorithm extracts water body information effectively and with high accuracy.
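The "one simple classifier per feature" idea in this abstract matches AdaBoost with single-feature threshold stumps. The sketch below is a generic AdaBoost implementation under that assumption, not the authors' remote-sensing pipeline; feature meanings (band gray levels, water index) are up to the caller.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=20):
    """AdaBoost with threshold stumps on single features.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump minimizing weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)                 # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)        # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)
```

For water extraction, each row of `X` would hold the per-pixel features (band values, water index, spectral relations) and `y` the water / non-water label.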
Road network extraction in classified SAR images using genetic algorithm
Institute of Scientific and Technical Information of China (English)
肖志强; 鲍光淑; 蒋晓确
2004-01-01
Because of complicated backgrounds and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extracting the road network from high-resolution SAR images. First, fuzzy C-means is used to classify the filtered SAR image without supervision, and the road pixels are isolated from the image to simplify the extraction of the road network. Second, from the features of roads and the membership of pixels to roads, a road model is constructed that reduces road-network extraction to a global search for optimal continuous curves passing through seed points. Finally, treating the curves as individuals and coding each chromosome as integer offsets relative to coordinates, genetic operations are used to search for the globally optimal roads. The experimental results show that the algorithm can effectively extract road networks from high-resolution SAR images.
Application of detecting algorithm based on network
Institute of Scientific and Technical Information of China (English)
张凤斌; 杨永田; 江子扬; 孙冰心
2004-01-01
Because current intrusion detection systems cannot detect undefined intrusion behavior effectively, and given the robustness and adaptability of genetic algorithms, this paper integrates genetic algorithms into an intrusion detection system and proposes a detection algorithm based on network traffic. The algorithm is a real-time, self-learning algorithm that can detect undefined intrusion behaviors effectively.
New Iris Localization Method Based on Chaos Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
Jia Dongli; Muhammad Khurram Khan; Zhang Jiashu
2005-01-01
This paper presents a new method based on the Chaos Genetic Algorithm (CGA) to localize the human iris in a given image. First, the iris image is preprocessed to estimate the range of the iris location, and then CGA is used to extract the boundary of the iris. Simulation results show that the proposed algorithm is efficient and robust, and can achieve sub-pixel precision. Because genetic algorithms (GAs) can search a large space, the algorithm does not need an accurate estimate of the iris center for subsequent localization, and hence lowers the requirements on the original iris image processing. In this respect, the present localization algorithm is superior to Daugman's algorithm.
LSB Based Quantum Image Steganography Algorithm
Jiang, Nan; Zhao, Na; Wang, Luo
2016-01-01
Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
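The plain-LSB embedding that this abstract transfers to quantum circuits has a simple classical analogue, sketched below for a flattened uint8 pixel array. This illustrates only the bit-substitution idea, not the NEQR representation or the quantum circuits themselves.

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Embed a bit sequence into the least significant bits of a
    flattened uint8 pixel array (classical analogue of plain LSB)."""
    out = pixels.copy()
    msg = np.asarray(bits, dtype=np.uint8)
    # Clear each target pixel's LSB, then OR in the message bit.
    out[:len(msg)] = (out[:len(msg)] & 0xFE) | msg
    return out

def lsb_extract(pixels, n_bits):
    """Recover the message bits from the stego pixels' LSBs."""
    return (pixels[:n_bits] & 1).tolist()
```

As in the paper's blind scheme, extraction needs only the stego cover, and each pixel changes by at most 1, which is what keeps the embedding invisible.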
Institute of Scientific and Technical Information of China (English)
汤海林
2013-01-01
The quantity and accuracy of extracted fingerprint feature points directly affect the efficiency of fingerprint feature matching. Based on an analysis of fingerprint image features, this paper proposes a Poincare index algorithm based on the mean value of the orientation field, and describes the algorithm's basic model, representation, and implementation. Experimental results prove that the algorithm can extract the singular points of a fingerprint image quickly and accurately, and that it adapts well to a variety of images.
Meghanathan, Natarajan; Isokpehi, Raphael; Cohly, Hari; 10.5121/ijcsit.2010.2307
2010-01-01
The high-level contribution of this paper is the development and implementation of an algorithm to self-extract secondary keywords and their combinations (combo words) from abstracts collected using standard primary keywords for research areas from reputed online digital libraries such as IEEE Xplore and PubMed Central. Given a collection of N abstracts, we arbitrarily select M abstracts (M << N; M/N as low as 0.15) and parse each of the M abstracts word by word. Upon the first appearance of a word, we query the user to classify the word into an Accept-List or a non-Accept-List. The effectiveness of the training approach is evaluated by measuring the percentage of words for which the user is queried for classification as the algorithm parses the words of each of the M abstracts. We observed that as M grows larger, the percentage of words for which the user is queried reduces drastically. After the list of acceptable words is built by parsing the M abstracts, we ...
Diversity-Based Boosting Algorithm
Directory of Open Access Journals (Sweden)
Jafar A. Alzubi
2016-05-01
Full Text Available Boosting is a well known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to Boosting algorithm called DivBoosting algorithm is proposed and studied. Experiments on several data sets are conducted on both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe that it has many advantages over traditional boosting method because its mechanism is not solely based on selecting the most accurate base classifiers but also based on selecting the most diverse set of classifiers.
Cryptographic Protocols Based on Root Extracting
DEFF Research Database (Denmark)
Koprowski, Maciej
In this thesis we design new cryptographic protocols whose security is based on the hardness of root extracting, or more specifically the RSA problem. First we study the problem of root extraction in finite Abelian groups where the group order is unknown. This is a natural generalization of the...... construction based on root extracting. As an example of this, we modify the Cramer-Shoup signature scheme so that it becomes a generic algorithm. We then discuss implementing it in RSA groups without the original restriction that the modulus must be a product of safe primes. It can also be implemented in class......, providing a currently acceptable level of security. This allows us to propose the first practical blind signature scheme that is provably secure without relying on the heuristic called the random oracle model (ROM). We obtain the protocol for issuing blind signatures by implementing our modified Fischlin's signing algorithm...
Bolton, Adam S
2009-01-01
We describe a new algorithm for the "perfect" extraction of one-dimensional spectra from two-dimensional CCD images of optical fiber spectrographs, based on accurate two-dimensional (2D) forward modeling of the raw pixel data. The algorithm is correct for arbitrarily complicated 2D point-spread functions (PSFs), as compared to the traditional optimal extraction algorithm, which is only correct for a limited class of separable PSFs. The algorithm results in statistically independent extracted samples in the 1D spectrum, and preserves the full native resolution of the 2D spectrograph without degradation. Both the statistical errors and the 1D resolution of the extracted spectrum are precisely determined. Using a model PSF similar to that found in the red channel of the Sloan Digital Sky Survey spectrograph, we compare the performance of our algorithm to that of cross-section based optimal extraction, and also demonstrate that our method allows coaddition and foreground estimation to be carried out as an integra...
Key frame extraction based on an improved ant algorithm and agglomerative clustering
Institute of Scientific and Technical Information of China (English)
张建明; 刘海燕; 孙淑敏
2013-01-01
Key frame extraction is very important for content-based video retrieval. To extract key frames efficiently from different types of video, an efficient method based on an improved ant algorithm and agglomerative clustering is proposed. The color and edge feature vectors of the video frames are extracted, and the improved ant algorithm clusters them in a self-organizing way to obtain an initial clustering; agglomerative clustering then optimizes the initial result into a final clustering, and the frame closest to each cluster center is extracted as a key frame. The experimental results show that the key frames extracted by this algorithm adequately express the primary content of the video, and that an appropriate number of key frames can be extracted as the video content changes.
A Genetic Algorithm-Based Feature Selection
Directory of Open Access Journals (Sweden)
Babatunde Oluleye
2014-07-01
Full Text Available This article details the exploration and application of a Genetic Algorithm (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the classifiers concerned. In this work, one hundred (100) features were extracted from the set of images in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu 7 Moments (Hu7M), Texture Properties (TP), and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error) that enables the GA to obtain a combinatorial set of features giving optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software, and in many respects achieved better classification accuracy than the WEKA feature selectors.
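The binary-GA-with-kNN-fitness design in this abstract can be sketched compactly. This is a simplified stand-in for the paper's MATLAB GA Toolbox setup: the population size, mutation rate, elitism scheme, and 1-NN (rather than general kNN) fitness are all assumptions of the sketch.

```python
import numpy as np

def knn_error(X, y, mask):
    """Leave-one-out 1-NN error using only the features where mask == 1."""
    if mask.sum() == 0:
        return 1.0                      # empty subset: worst fitness
    Xs = X[:, mask.astype(bool)]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)         # exclude the point itself
    pred = y[d.argmin(axis=1)]
    return float((pred != y).mean())

def ga_select(X, y, pop=20, gens=30, seed=0):
    """Binary GA feature selector with 1-NN error as the fitness."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    P = rng.integers(0, 2, (pop, d))
    for _ in range(gens):
        fit = np.array([knn_error(X, y, m) for m in P])
        P = P[np.argsort(fit)]          # sort: best chromosomes first
        children = []
        while len(children) < pop // 2:
            a, b = P[rng.integers(0, pop // 2, 2)]   # parents from top half
            cut = rng.integers(1, d)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(d) < 0.05              # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        P = np.vstack([P[:pop - len(children)], children])
    fit = np.array([knn_error(X, y, m) for m in P])
    return P[fit.argmin()]
```

The returned mask picks the feature subset with the lowest leave-one-out error, mirroring the paper's idea of letting classification error drive the search.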
Institute of Scientific and Technical Information of China (English)
王玉德; 赵焕利; 薛乃玉
2012-01-01
This paper investigates how to extract recognition information from a face image. From an optimization point of view, a feature extraction and recognition algorithm for face images based on block wavelet transform and two-dimensional principal component analysis (2DPCA) is proposed. First, a block wavelet transform is applied to the face image and the high- and low-frequency components of each block are combined into wavelet-coefficient features. Then 2DPCA is applied to the wavelet-coefficient features, and the block features are fused into the final discriminant features. Finally, a support vector machine (SVM) classifies and recognizes the features on the ORL face database. Experimental results demonstrate that the algorithm effectively improves face recognition performance, with short recognition time and higher recognition accuracy than traditional face recognition methods.
Institute of Scientific and Technical Information of China (English)
汪欢文; 陆海良; 单宇翔
2015-01-01
A customer relationship graph shows the various relationships between an enterprise and its customers clearly, helping business decision-makers take targeted measures to improve customer relationships. This paper presents a method for extracting the customer relationship graph based on an improved FP-Growth algorithm: all frequent itemsets are found via a minimum support threshold, and the desired association rules are then filtered out with a minimum confidence threshold, which considerably improves the efficiency of the algorithm. The method has been applied to the Zhejiang Tobacco CRM system, and the results show that the improved algorithm is very effective.
Institute of Scientific and Technical Information of China (English)
李国芳
2014-01-01
Facial feature extraction is one of the key technologies, and one of the difficulties, in face recognition systems. Manifold learning, a class of nonlinear dimensionality-reduction methods, has in recent years been applied to both face recognition and speech recognition. Through a systematic study of face recognition systems, a new feature extraction method is proposed based on 2DPCA (Two-Dimensional PCA) and the manifold learning algorithm LPP (Locality Preserving Projections), which may provide a good reference for further study of face recognition technology. Simulation experiments show that this algorithm achieves a better recognition rate than the traditional PCA and LDA feature extraction algorithms.
Institute of Scientific and Technical Information of China (English)
唐成华; 汤申生; 谢逸
2014-01-01
To address the difficulty and inefficiency of sample index extraction in conventional reduction algorithms, rough set theory is introduced into situation index extraction. The decision-table information is reduced through discernibility-matrix compression and classified selection, and the index selection is adjusted with importance measures derived from expert knowledge. A situation index extraction algorithm based on an improved discernibility matrix and expert knowledge is proposed, and it is analyzed and verified on an example situation index system. Experimental results show that the proposed algorithm reduces the situation indexes effectively and that the extracted indexes are reasonable for network security assessment, providing a feasible approach to effective situation index extraction.
Rule Extraction Algorithm for Deep Neural Networks: A Review
Hailesilassie, Tameru
2016-01-01
Despite achieving the highest classification accuracy in a wide variety of application areas, artificial neural networks have one disadvantage: the way such a network reaches a decision is not easily comprehensible. This lack of explanatory ability reduces the acceptability of neural networks in data mining and decision systems. This drawback is the reason why researchers have proposed many rule extraction algorithms to solve the problem. Recently, Deep Neural Networks (DNN) have been achieving a profound result ove...
RGANN: An Efficient Algorithm to Extract Rules from ANNs
Kamruzzaman, S M
2010-01-01
This paper describes an efficient rule generation algorithm, called rule generation from artificial neural networks (RGANN), to generate symbolic rules from ANNs. Classification rules are sought in many areas, from automatic knowledge acquisition to data mining and ANN rule extraction, because classification rules possess some attractive features: they are explicit, understandable, and verifiable by domain experts, and they can be modified, extended, and passed on as modular knowledge. A standard three-layer feedforward ANN is the basis of the algorithm, and a four-phase training algorithm is proposed for backpropagation learning. Comparison with the symbolic rules generated by other methods supports the explicitness of the generated rules, which are comparable with other methods in terms of the number of rules, the average number of conditions per rule, and predictive accuracy. Extensive experimental studies were conducted on several benchmark classification problems, including breast cancer, wine, season, golf-playing, and ...
A cooperative fast annealing coevolutionary algorithm for protein motif extraction
Institute of Scientific and Technical Information of China (English)
CHEN Chao; TIAN YuanXin; ZOU XiaoYong; CAI PeiXiang; MO JinYuan
2007-01-01
By integrating the cooperative approach with the fast annealing coevolutionary algorithm (FAEA), a so-called cooperative fast annealing coevolutionary algorithm (CFACA) is presented in this paper for the purpose of solving high-dimensional problems. After the partition of the search space in CFACA, each smaller one is then searched by a separate FAEA. The fitness function is evaluated by combining sub-solutions found by each of the FAEAs. It demonstrates that the CFACA outperforms the FAEA in the domain of function optimization, especially in terms of convergence rate. The current algorithm is also applied to a real optimization problem of protein motif extraction. And a satisfactory result has been obtained with the accuracy of prediction achieving 67.0%, which is in agreement with the result in the PROSITE database.
GPU-accelerated phase extraction algorithm for interferograms: a real-time application
Zhu, Xiaoqiang; Wu, Yongqian; Liu, Fengwei
2016-11-01
Optical testing, with its merits of being non-destructive and highly sensitive, provides vital guidance for optical manufacturing. But the testing process is often computationally intensive and expensive, usually taking up to a few seconds, which is unacceptable for dynamic testing. In this paper, a GPU-accelerated phase extraction algorithm based on the advanced iterative algorithm is proposed. The accelerated algorithm can extract the correct phase distribution from thirteen 1024x1024 fringe patterns with arbitrary phase shifts in 233 milliseconds on average using an NVIDIA Quadro 4000 graphics card, a 12.7x speedup over the same algorithm executed on the CPU and a 6.6x speedup over a Matlab implementation on a DWANING W5801 workstation. This performance improvement meets the demands of computational accuracy and real-time application.
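The phase-extraction step being accelerated here can be illustrated with the classic four-step phase-shifting formula. Note the assumption: the paper's advanced iterative algorithm handles arbitrary, unknown phase shifts across thirteen patterns, whereas this sketch assumes the fixed shifts 0, pi/2, pi, 3pi/2.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Recover the wrapped phase from four fringe patterns
    I_k = A + B*cos(phi + delta_k) with shifts delta_k = 0, pi/2, pi, 3pi/2.

    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so atan2 of the
    pair returns phi on (-pi, pi].
    """
    return np.arctan2(I4 - I2, I1 - I3)
```

On a GPU, this per-pixel arithmetic is embarrassingly parallel, which is why the fringe-to-phase step benefits so strongly from the acceleration reported in the abstract.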
Exploration of Main-Melody Track Extraction from MIDI Files Based on the H-K Algorithm
Institute of Scientific and Technical Information of China (English)
马青阁; 曹西征; 赵宛; 汪旭彬; 关金晨
2016-01-01
The main melody is an important component of a piece of music's melodic information. By extracting feature vectors that characterize the melody, a track classifier model is built with the H-K classification algorithm to classify the melody tracks and accompaniment tracks of a MIDI file; the main-melody track is then extracted from the candidate tracks. Research shows that this method raises the accuracy of main-melody extraction from 75% to 90% for files with up to 10 tracks, and reaches 95% accuracy for files with up to 2 tracks.
Institute of Scientific and Technical Information of China (English)
蔡云骧; 薛士强
2011-01-01
Determining the main colors of the background is a key issue in camouflage pattern design. To address the shortcomings of existing extraction methods, a main-color extraction algorithm based on octree color quantization and linked-list statistics is presented. The background images first undergo octree color quantization; the quantized colors are then counted and stored in a linked list; finally, the main colors are determined according to human visual characteristics and corresponding rules. The method can process multiple images in one pass and greatly shortens the running time. Experiments show that it meets the color-selection needs of camouflage.
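The quantize-then-count idea in this abstract can be sketched with a flat histogram over coarsely quantized colors. This is a simplified stand-in: a real octree adapts its cell sizes, whereas the sketch below truncates each channel to a fixed number of bits, and the perceptual selection rules are omitted.

```python
import numpy as np
from collections import Counter

def main_colors(pixels, bits=3, k=2):
    """Extract dominant colors by coarse quantization and counting.

    pixels: (N, 3) uint8 RGB array. Each channel is truncated to
    `bits` bits (a fixed-depth analogue of octree quantization),
    the bins are counted, and the k most frequent bin centers are
    returned as the main colors.
    """
    shift = 8 - bits
    quant = (pixels >> shift)                 # map each pixel to its bin
    counts = Counter(map(tuple, quant))
    half = 1 << (shift - 1)                   # offset to the cell center
    return [tuple(int(c) for c in (np.array(b) << shift) + half)
            for b, _ in counts.most_common(k)]
```

In the camouflage application, the returned colors would then be screened against human-vision rules before being adopted as the pattern's main colors.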
Bolton, Adam S.; Schlegel, David J.
2010-02-01
We describe a new algorithm for the “perfect” extraction of one-dimensional (1D) spectra from two-dimensional (2D) digital images of optical fiber spectrographs, based on accurate 2D forward modeling of the raw pixel data. The algorithm is correct for arbitrarily complicated 2D point-spread functions (PSFs), as compared to the traditional optimal extraction algorithm, which is only correct for a limited class of separable PSFs. The algorithm results in statistically independent extracted samples in the 1D spectrum, and preserves the full native resolution of the 2D spectrograph without degradation. Both the statistical errors and the 1D resolution of the extracted spectrum are accurately determined, allowing a correct χ2 comparison of any model spectrum with the data. Using a model PSF similar to that found in the red channel of the Sloan Digital Sky Survey spectrograph, we compare the performance of our algorithm to that of cross-section based optimal extraction, and also demonstrate that our method allows coaddition and foreground estimation to be carried out as an integral part of the extraction step. This work demonstrates the feasibility of current and next-generation multifiber spectrographs for faint-galaxy surveys even in the presence of strong night-sky foregrounds. We describe the handling of subtleties arising from fiber-to-fiber cross talk, discuss some of the likely challenges in deploying our method to the analysis of a full-scale survey, and note that our algorithm could be generalized into an optimal method for the rectification and combination of astronomical imaging data.
Algorithm for Feature Line Extraction Based on 3D Point Cloud Models
Institute of Scientific and Technical Information of China (English)
刘倩; 耿国华; 周明全; 赵璐璐; 李姬俊男
2013-01-01
To address the problems of previous algorithms, which cannot distinguish sharp from non-sharp feature points, extract feature points that depend on the viewing perspective, and leave feature points unconnected, a sharp feature line extraction algorithm for 3D point cloud models based on Gaussian mapping and curvature analysis is proposed. The algorithm first performs a discrete Gaussian mapping of the point cloud data and clusters the mapped point sets. An adaptive iterative procedure then finds the sharp feature points, which lie on the intersection lines of two or more surfaces where the curvature or normal vector changes abruptly, and which are independent of the viewing perspective. Finally, an improved feature polyline growing algorithm connects the feature points into smooth feature lines. Experiments show that the algorithm has good adaptability, noise immunity, and accuracy, and is an effective feature line extraction algorithm for 3D models.
Enterprise Human Resources Information Mining Based on Improved Apriori Algorithm
Directory of Open Access Journals (Sweden)
Lei He
2013-05-01
With the continuing development of information technology in modern society, enterprises' demand for human resources information mining keeps growing. Addressing this situation, this paper proposes an improved Apriori-based model for enterprise human resources information mining. The model builds on data mining technology and the traditional Apriori algorithm, dividing the original algorithm's association rule mining task into two subtasks: generating frequent itemsets and generating rules. SQL techniques are used to generate the frequent itemsets directly, and a table-based method extracts the information of interest to users. Experimental results show that the improved model is more efficient than the original algorithm, and practical application tests show that the improved algorithm is practical and effective.
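The first of the two subtasks described above is level-wise frequent itemset mining; a minimal plain-Python sketch of that Apriori core is shown below. It uses in-memory support counting rather than the paper's SQL-based generation, and the function and data names are illustrative.

```python
def apriori(transactions, min_support):
    """Classic level-wise Apriori: generate candidate itemsets of size k+1
    by joining frequent size-k sets, keep those meeting min_support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        # fraction of transactions containing the itemset
        return sum(1 for t in transactions if itemset <= t) / n

    items = {i for t in transactions for i in t}
    current = {s for s in (frozenset([i]) for i in items) if support(s) >= min_support}
    result = {}
    k = 1
    while current:
        result.update({s: support(s) for s in current})
        # join step: unions of two frequent k-sets that have size k+1
        candidates = {a | b for a in current for b in current if len(a | b) == k + 1}
        current = {c for c in candidates if support(c) >= min_support}
        k += 1
    return result

freq = apriori(
    [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}, {"milk", "eggs"}],
    min_support=0.5,
)
```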
Institute of Scientific and Technical Information of China (English)
YE YaLan; SHEU Phillip C-Y; ZENG JiaZhi; WANG Gang; LU Ke
2009-01-01
In many applications, such as biomedical engineering, it is often required to extract a desired signal instead of all source signals. This can be achieved by blind source extraction (BSE) or semi-blind source extraction, a powerful technique emerging from the neural network field. In this paper, we propose an efficient semi-blind source extraction algorithm that extracts a desired source signal as its first output signal by using a priori information about its kurtosis range. The algorithm is robust to outliers and spiky noise because it adopts a classical robust contrast function. It is also robust to estimation errors in the kurtosis range of the desired signal, provided the errors are not large. The algorithm has good extraction performance, even in difficult situations where the kurtosis values of some source signals are very close to each other. Its convergence stability and robustness are theoretically analyzed. Simulations and experiments on artificially generated data and real-world data have confirmed these results.
Institute of Scientific and Technical Information of China (English)
王超; 徐杰锋
2012-01-01
This paper presents an approach to Web page segmentation and text extraction based on the CURE clustering algorithm. The main idea is to add attributes to the nodes of a normalized DOM tree, converting it into an extended DOM tree carrying node offset information. The CURE algorithm is then used to cluster the information nodes, with each resulting cluster representing a different block of the page. Finally, three main features of the text blocks are extracted to construct an information-weight formula that identifies the text blocks.
Topic extraction method using RED-NMF algorithm for detecting outbreak of some disease on Twitter
Iskandar, Afif Akbar
2017-03-01
Indonesia has one of the largest social media user bases, which can be useful for detecting popular trends, including disease outbreaks, through topic extraction methods such as NMF. However, the texts being spread are unstructured and need to be cleaned before processing. One common cleaning method uses regular expressions, but social media texts vary widely, so the regular expressions must be adapted to each dataset to be cleaned; hence, an algorithm is needed to "learn" the form of the texts requiring cleaning. In this paper, we propose a framework for cleaning Twitter data and extracting topics from it, called RED-NMF: feature extraction and filtering based on a regular expression discovery algorithm for data cleaning, and non-negative matrix factorization for topic extraction.
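The topic extraction half of the framework rests on non-negative matrix factorization; a minimal NumPy sketch using the standard Lee-Seung multiplicative updates is given below. This is generic NMF, not the RED-NMF pipeline itself, and the toy document-term matrix is illustrative.

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H with
    W, H >= 0; rows of H act as 'topics' over the vocabulary."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)   # update topic-term weights
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)   # update document-topic weights
    return W, H

# toy document-term matrix with two clearly separated "topics"
V = np.array([[3., 3., 0., 0.],
              [4., 4., 0., 0.],
              [0., 0., 5., 5.],
              [0., 0., 2., 2.]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
```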
Directory of Open Access Journals (Sweden)
Claudionor Ribeiro da Silva
2010-07-01
The aim of this paper is the semi-automatic extraction of local (vicinal) roads. The research is divided into two phases. In the first, a method to determine road width is proposed; in the second, a fitness function for genetic algorithms is proposed that makes possible the detection of candidate segments for local road axes. Three digital images are used in the experiments. The results were validated against a reference image created through manual vectorization, and accuracy was evaluated using the completeness, correctness, and RMS indexes. The results obtained indicate that the proposed methodology is promising.
Cross-Validation-Based Robust SL0 Algorithm for Target Parameter Extraction
Institute of Scientific and Technical Information of China (English)
贺亚鹏; 庄珊娜; 张燕洪; 朱晓华
2012-01-01
Exploiting the spatial sparsity of radar targets, a compressive sensing based pseudo-random step frequency radar (CS-PRSFR) is studied. First, the CS-PRSFR target echo is analyzed and a target parameter extraction model is constructed. To address the problem that traditional sparse signal reconstruction algorithms are inapplicable when the noise statistics are unknown, a cross-validation based robust SL0 (CV-RSL0) target parameter extraction algorithm is proposed. Owing to the strong incoherence of its sensing matrix, the CS-PRSFR achieves higher joint range-velocity resolution. The proposed algorithm requires no prior knowledge of the noise statistics, and its parameter extraction performance rapidly approaches the lower bound of the optimal estimator as the signal-to-noise ratio improves. Simulation results demonstrate the correctness and effectiveness of the method.
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspace and optimization theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of the input signal. Using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple MC extraction. The global convergence of the proposed algorithms is analyzed by the Lyapunov method. The proposed algorithms extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
AN SVAD ALGORITHM BASED ON FNNKD METHOD
Institute of Scientific and Technical Information of China (English)
Chen Dong; Zhang Yan; Kuang Jingming
2002-01-01
The capacity of mobile communication systems is improved by using Voice Activity Detection (VAD) technology. In this letter, a novel VAD algorithm, an SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with the traditional algorithm recommended by ITU G.729B in different situations. The simulation results show that the SVAD algorithm performs better.
Human resource recommendation algorithm based on ensemble learning and Spark
Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie
2017-08-01
Aiming at the problem of "information overload" in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in a real business setting. First, the algorithm uses two ensemble learning methods, Bagging and Boosting; the outputs of both are then merged to form a user interest model, from which job recommendations are generated for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been performed and analysed. The proposed algorithm achieves significant improvements in accuracy, recall rate, and coverage compared with recommendation algorithms such as UserCF and ItemCF.
SVD-TLS extending Prony algorithm for extracting UWB radar target feature
Institute of Scientific and Technical Information of China (English)
Liu Donghong; Hu Wenlong; Chen Zhijie
2008-01-01
A new method, an SVD-TLS extended Prony algorithm, is introduced for extracting UWB radar target features. The method is a classical Prony method modified with singular value decomposition and total least squares, which improves the robustness of spectrum estimation. Simulation results show that the poles and residues of the target echo can be extracted effectively using this method while random noise is simultaneously suppressed to some degree. The method is applicable to target feature extraction for UWB radar and other high range-resolution radars.
Brand, Jonathan; Zhang, Zheming; Agarwal, Ramesh K.
2014-02-01
A simple but reasonably accurate battery model is required for simulating the performance of electrical systems that employ a battery, for example an electric vehicle, as well as for investigating batteries' potential as energy storage devices. In this paper, a relatively simple equivalent-circuit-based model is employed for modeling the performance of a battery. A computer code utilizing a multi-objective genetic algorithm is developed for extracting the battery performance parameters. The code is applied to several existing industrial batteries as well as to two recently proposed high-performance batteries currently in an early research and development stage. The results demonstrate that, with the optimally extracted performance parameters, the equivalent-circuit-based battery model can accurately predict the performance of batteries of different sizes, capacities, and materials. Several test cases demonstrate that the multi-objective genetic algorithm can serve as a robust and reliable tool for extracting the battery performance parameters.
A New Page Ranking Algorithm Based On WPRVOL Algorithm
Directory of Open Access Journals (Sweden)
Roja Javadian Kootenae
2013-03-01
The amount of information on the web is always growing, so powerful search tools are needed to search such a large collection. Search engines help users find the information they want within this massive volume more easily. What matters most in a search engine, and what distinguishes one from another, is its page ranking algorithm. This paper proposes a new page ranking algorithm for search engines based on the Weighted Page Rank based on Visits of Links (WPRVOL) algorithm, called WPR'VOL for short, which considers the number of visits of first- and second-level in-links. The original WPRVOL algorithm takes into account the number of visits of a page's first-level in-links and distributes rank scores based on page popularity, whereas the proposed algorithm considers both the in-links of the page itself (first level) and the in-links of the pages that point to it (second level) when calculating its rank, so more relevant pages are displayed at the top of the search result list. In summary, the proposed algorithm assigns higher rank to pages for which both the page itself and the pages pointing to it are important.
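The visit-weighted rank distribution underlying this family of algorithms can be sketched as follows. This is an illustrative first-level-only stand-in: each page splits its rank among its out-links in proportion to their visit counts (the proposed WPR'VOL additionally weights second-level in-links, which is omitted here), and all page names and visit numbers are hypothetical.

```python
def visit_weighted_pagerank(links, visits, d=0.85, iters=50):
    """PageRank variant where a page distributes its rank to out-links in
    proportion to observed link visit counts instead of uniformly.
    `links[p]` lists out-links of p; `visits[(p, q)]` is the visit count
    of the link p -> q."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p in pages:
            total = sum(visits[(p, q)] for q in links[p]) or 1
            for q in links[p]:
                # share of p's rank flowing to q, weighted by link visits
                new[q] += d * rank[p] * visits[(p, q)] / total
        rank = new
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
visits = {("A", "B"): 9, ("A", "C"): 1, ("B", "C"): 5, ("C", "A"): 5}
rank = visit_weighted_pagerank(links, visits)
```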
Institute of Scientific and Technical Information of China (English)
亢华爱
2015-01-01
The web contains massive amounts of text, which must be extracted efficiently and reasonably to enable web text data mining. Because of the high dimensionality of web text data, automatic classification and matching during text extraction is difficult. This paper proposes a web text extraction algorithm based on resonance-dense pairing of the hidden nodes of an RBF neural network. Web text features are sampled and the associated principal features are mined; at each step, resonance-dense pairings of RBF hidden nodes are formed to obtain an optimal path for text feature selection, and an RBF neural network classifier is built, yielding an improved feature extraction algorithm based on the ant colony algorithm. Experimental results show that the algorithm effectively realizes resonance-dense pairing of the hidden nodes, tracks mined features well, and guarantees mining performance; the feature components it extracts differ little from the other fuzzy components, and its text extraction recall is higher than that of traditional methods, giving it reliable application value for web text extraction.
DNA Coding Based Knowledge Discovery Algorithm
Institute of Scientific and Technical Information of China (English)
LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang
2002-01-01
A novel DNA coding based knowledge discovery algorithm was proposed, an example which verified its validity was given. It is proved that this algorithm can discover new simplified rules from the original rule set efficiently.
Directory of Open Access Journals (Sweden)
Rudy A.G. Gultom
2011-01-01
Problem statement: Nowadays, various types of data in web tables can easily be extracted from the Internet, although not all web tables are relevant to users. Most web pages are in unstructured HTML format, making web table extraction very time consuming and costly: HTML focuses only on presentation, not on an underlying database structure. Users therefore need a tool for this process. Approach: This research proposed an approach for implementing web table extraction and building a Mashup from HTML web pages using the Xtractorz application. It also discussed how to integrate web table extraction into the stages of building a Mashup: Data Retrieval, Data Source Modeling, Data Cleaning/Filtering, Data Integration, and Data Visualization. The main issue lies in the data modeling stage, in which Xtractorz must automatically render a Document Object Model (DOM) tree according to the HTML tags of the web page from which the table is extracted. To achieve this, Xtractorz is equipped with an algorithm and rules enabling it to analyze the HTML tags specifically and extract the data into a new table format. The algorithm uses a recursive technique within a user-friendly GUI. Results: The approach was evaluated by conducting experiments using Xtractorz and similar applications such as RoboMaker and Karma. The results showed that Xtractorz is more efficient, requiring fewer steps to complete the experimental tasks. Conclusion: Xtractorz contributes a new algorithmic technique and approach to web table extraction and Mashup building, with a core algorithm that extracts web data tables recursively while rendering the DOM tree model automatically.
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically artificial bee colony (ABC), a variant ABC, and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms; some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for MOSFET model parameter extraction, while the implementation of the ABC algorithm is shown to be simpler than that of the PSO algorithm.
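The extraction loop common to these methods, minimising the error between measured and modelled data, can be sketched with a basic PSO. The sketch fits a toy quadratic "device" model rather than the surface-potential MOSFET model, and all parameter names, bounds, and swarm coefficients are illustrative.

```python
import numpy as np

def pso_fit(model, x, y_meas, bounds, n_particles=30, iters=100, seed=1):
    """Minimal particle swarm optimisation for parameter extraction:
    minimise squared error between measured data and model(x, params).
    Standard inertia + cognitive/social velocity update."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    cost = lambda p: np.sum((model(x, p) - y_meas) ** 2)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)       # keep particles in bounds
        c = np.array([cost(p) for p in pos])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# toy "device": quadratic curve with known true parameters
true = np.array([2.0, -1.0, 0.5])
x = np.linspace(0, 3, 40)
y = true[0] * x**2 + true[1] * x + true[2]
model = lambda x, p: p[0] * x**2 + p[1] * x + p[2]
params = pso_fit(model, x, y, bounds=[(-5, 5)] * 3)
```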
Distance Concentration-Based Artificial Immune Algorithm
Institute of Scientific and Technical Information of China (English)
LIU Tao; WANG Yao-cai; WANG Zhi-jie; MENG Jiang
2005-01-01
The diversity, adaptation, and memory of the biological immune system attract much attention from researchers, and several optimization algorithms based on the immune system have been proposed. In this paper, a distance concentration-based artificial immune algorithm (DCAIA) is proposed to overcome defects of the classical artificial immune algorithm (CAIA). Compared with the genetic algorithm (GA) and CAIA, DCAIA is better at avoiding premature convergence, preserving antibody diversity, and improving the convergence rate.
QRS Detection Based on an Advanced Multilevel Algorithm
Wissam Jenkal; Rachid Latif; Ahmed Toumanari; Azzedine Dliou; Oussama El B’charri; Fadel Mrabih Rabou Maoulainine
2016-01-01
This paper presents an advanced multilevel algorithm for QRS complex detection. The method is based on three levels. The first extracts the higher peaks using an adaptive thresholding technique. The second detects the QRS region. The last level detects the Q, R, and S waves. The proposed algorithm shows interesting results compared to recently published methods. The perspective of this work is the implementation of this method on an embedded system ...
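The first level, adaptive amplitude thresholding, can be sketched as below. This is a generic illustration rather than the authors' exact rule set; the QRS-region and Q/S localisation levels are not reproduced, and the synthetic signal and constants are illustrative.

```python
import numpy as np

def detect_r_peaks(sig, fs, frac=0.6, refractory=0.2):
    """Pick candidate R peaks with an adaptive amplitude threshold
    (a fraction of a running peak estimate) and suppress double
    detections with a refractory period (in seconds)."""
    peaks = []
    thresh = frac * np.max(sig[: int(fs)])   # initialise on first second
    last = -np.inf
    for i in range(1, len(sig) - 1):
        is_local_max = sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]
        if sig[i] > thresh and is_local_max and i - last > refractory * fs:
            peaks.append(i)
            last = i
            thresh = 0.5 * thresh + 0.5 * frac * sig[i]  # adapt threshold
    return peaks

# synthetic "ECG": low-amplitude baseline with spikes at known positions
fs = 250
sig = 0.05 * np.sin(np.arange(fs * 4) * 0.1)
for p in (100, 350, 600, 850):
    sig[p] += 1.0
peaks = detect_r_peaks(sig, fs)
```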
Institute of Scientific and Technical Information of China (English)
王兴梅; 印桂生; 门志国; 仇晨光
2011-01-01
To extract moving targets precisely in dynamic scenes, a novel moving target extraction algorithm based on Markov random fields (MRF) is proposed. The initial MRF parameters of an isotropic, double-scale, second-order neighborhood MRF model are computed automatically from the initial segmentation result by the least squares method. The iterated conditional modes (ICM) algorithm is then used to estimate the maximum a posteriori (MAP) solution and obtain the MRF detection result. Holes and discontinuous regions are filled by the morphological closing operation, and finally the moving target region is extracted accurately according to the vertex coordinates of the horizontal and vertical projections of the binary image. Experiments on the standard Coastguard image sequence and on a practical dynamic scene sequence demonstrate that the proposed algorithm is highly accurate, adaptive, and effective.
Rasmussen, Luke V; Thompson, Will K; Pacheco, Jennifer A; Kho, Abel N; Carrell, David S; Pathak, Jyotishman; Peissig, Peggy L; Tromp, Gerard; Denny, Joshua C; Starren, Justin B
2014-10-01
Design patterns, in the context of software development and ontologies, provide generalized approaches and guidance to solving commonly occurring problems, or addressing common situations typically informed by intuition, heuristics and experience. While the biomedical literature contains broad coverage of specific phenotype algorithm implementations, no work to date has attempted to generalize common approaches into design patterns, which may then be distributed to the informatics community to efficiently develop more accurate phenotype algorithms. Using phenotyping algorithms stored in the Phenotype KnowledgeBase (PheKB), we conducted an independent iterative review to identify recurrent elements within the algorithm definitions. We extracted and generalized recurrent elements in these algorithms into candidate patterns. The authors then assessed the candidate patterns for validity by group consensus, and annotated them with attributes. A total of 24 electronic Medical Records and Genomics (eMERGE) phenotypes available in PheKB as of 1/25/2013 were downloaded and reviewed. From these, a total of 21 phenotyping patterns were identified, which are available as an online data supplement. Repeatable patterns within phenotyping algorithms exist, and when codified and cataloged may help to educate both experienced and novice algorithm developers. The dissemination and application of these patterns has the potential to decrease the time to develop algorithms, while improving portability and accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
Institute of Scientific and Technical Information of China (English)
徐君; 徐富红; 蔡体健; 王彩玲; 黄德昌; 李伟平
2015-01-01
In hyperspectral unmixing, the PPI algorithm is relatively mature, but each projection vector in PPI is generated randomly, so the extracted endmembers are not stable: repeated runs of PPI on the same image can yield different endmembers. Based on the convex geometry description of the linear spectral mixing model, and exploiting the fact that endmembers are the vertices of the simplex enclosing the hyperspectral image feature space, this paper proposes a novel maximum-distance pure pixel index algorithm for endmember extraction. The mean of the spectral vectors of all sample points is taken as the center of a hypersphere; the Euclidean distances of all sample points to this center are computed, and the hypersphere is given a radius equal to or greater than the maximum distance so that it encloses all sample points. Reference points are selected evenly on the surface of the hypersphere, and for each reference point the farthest sample point is found by Euclidean distance. The number of times each sample point is farthest from a reference point is recorded as an index of whether it is an endmember. Experiments on the AVIRIS Cuprite, Nevada data show that the endmember extraction precision of the proposed algorithm is generally better than that of the N-FINDR and VCA algorithms; moreover, it is robust and overcomes the instability of PPI caused by random projection.
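The scoring loop described above can be sketched directly in NumPy. The reference points here are drawn randomly on the enclosing hypersphere rather than "evenly selected", and the toy three-endmember data are illustrative; otherwise the counting of farthest samples follows the description. Because distance to a fixed point is convex, the farthest sample is always a vertex of the data simplex, so only pure pixels accumulate counts.

```python
import numpy as np

def endmember_scores(X, n_ref=200, seed=0):
    """Maximum-distance pure-pixel scoring: place reference points on a
    hypersphere enclosing all spectra, and count how often each sample
    is the farthest one from a reference point."""
    rng = np.random.default_rng(seed)
    center = X.mean(axis=0)
    radius = np.linalg.norm(X - center, axis=1).max()
    # reference directions on the enclosing hypersphere (random, not even)
    dirs = rng.normal(size=(n_ref, X.shape[1]))
    refs = center + radius * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    counts = np.zeros(len(X), dtype=int)
    for r in refs:
        counts[np.linalg.norm(X - r, axis=1).argmax()] += 1
    return counts

# mixtures of 3 "endmember" spectra; the pure pixels are rows 0-2
E = np.eye(3)
A = np.vstack([np.eye(3), np.random.default_rng(1).dirichlet([2, 2, 2], 60)])
X = A @ E
scores = endmember_scores(X)
```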
Chaos-Based Multipurpose Image Watermarking Algorithm
Institute of Scientific and Technical Information of China (English)
ZHU Congxu; LIAO Xuefeng; LI Zhihua
2006-01-01
To achieve the goals of image content authentication and copyright protection simultaneously, this paper presents a novel image dual watermarking method based on a chaotic map. Firstly, the host image is split into many nonoverlapping small blocks, and the block-wise discrete cosine transform (DCT) is computed. Secondly, robust watermarks, shuffled by chaotic sequences, are embedded in the DC coefficients of the blocks for copyright protection, while semi-fragile watermarks, generated by a chaotic map, are embedded in the AC coefficients of the blocks for image authentication. Both can be extracted without the original image. Simulation results demonstrate the effectiveness of the algorithm in terms of robustness and fragility.
Institute of Scientific and Technical Information of China (English)
胡宏伟; 王泽湘; 王哲; 王向红; 杜剑
2015-01-01
To address the difficulty of defect extraction caused by noise interference and limited instrument precision in ultrasonic phased array testing, a new defect extraction algorithm for ultrasonic phased array B-scan images combining an iterative method with an erosion algorithm is proposed. Using ultrasonic phased array B-scan images of side through-holes in a rim test block, defect extraction and quantification are studied by combining the iterative method with the erosion algorithm, and the results are compared with the conventional iterative method, the Otsu method, and the Bernsen method. The influence of aperture size on the extracted defect characteristics, including defect area, perimeter, and the sizes of the long and short axes, is discussed. The results show that when the aperture size is greater than 20, the relative error of the defect feature values extracted by this method is less than 12%, far smaller than that of the conventional iterative, Otsu, and Bernsen methods. As the aperture increases, detection precision improves, though at a decreasing rate. The results provide a reference for improving ultrasonic phased array imaging quality and the accuracy of defect feature extraction and quantification.
Neural Network-Based Hyperspectral Algorithms
2016-06-07
Walter F. Smith, Jr. and Juanita Sandidge, Naval Research Laboratory, Code 7340, Bldg 1105, Stennis Space Center. Our effort is the development of robust numerical inversion algorithms, which will retrieve inherent optical properties of the water column as well as ... We validate the resulting inversion algorithms with in-situ data and provide estimates of the error bounds associated with the inversion algorithm.
Area Variation Based Color Snake Algorithm for Moving Object Tracking
Institute of Scientific and Technical Information of China (English)
Shoum-ik ROYCHOUDHURY; Young-joon HAN
2010-01-01
The snake algorithm is known for its strength in extracting the exact contour of an object, but it is apt to be influenced by scattered edges around the control points. Since the shape of a moving object in a 2D image changes greatly due to its rotation and translation in 3D space, the conventional algorithm, which assumes slowly moving objects, cannot provide an appropriate solution. To utilize the advantages of the snake algorithm while minimizing its drawbacks, this paper proposes an area variation based color snake algorithm for moving object tracking. The proposed algorithm includes a new energy term used to preserve the shape of an object between two consecutive images. It can also segment interesting objects precisely in complex images because it is based on color information. Experimental results show that the proposed algorithm is very effective in various environments.
CUDA Based Speed Optimization of the PCA Algorithm
Directory of Open Access Journals (Sweden)
Salih Görgünoğlu
2016-05-01
Principal Component Analysis (PCA) is an algorithm involving heavy mathematical operations with matrices. The data extracted from face images are usually very large, and processing them is time consuming. To reduce the execution time of these operations, parallel programming techniques are used. CUDA is a general-purpose parallel programming architecture supported by graphics cards. In this study we implemented the PCA algorithm using both the classical programming approach and a CUDA-based implementation with different configurations. The algorithm is subdivided into its constituent calculation steps, and each step is evaluated for the benefits of parallelization; the parts of the algorithm that cannot be improved by parallelization are thereby identified. It is also shown that the CUDA-based approach enables dramatic improvements in the overall performance of the algorithm.
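As an illustrative aside (not the authors' CUDA implementation), the linear-algebra core that such a study parallelizes can be sketched in plain NumPy; the function name and data here are hypothetical:

```python
import numpy as np

def pca(X, k):
    """Project data onto the top-k principal components.

    X: (n_samples, n_features) matrix (e.g. flattened face images).
    Returns (projections, components, eigenvalues).
    """
    Xc = X - X.mean(axis=0)              # center the data
    C = np.cov(Xc, rowvar=False)         # covariance matrix
    vals, vecs = np.linalg.eigh(C)       # eigendecomposition (ascending order)
    order = np.argsort(vals)[::-1][:k]   # indices of the top-k eigenvalues
    W = vecs[:, order]
    return Xc @ W, W, vals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))
proj, W, ev = pca(X, 3)
```

Each step (centering, covariance, eigendecomposition, projection) is a matrix operation, which is exactly why the workload maps well onto a GPU.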
Repetitive transients extraction algorithm for detecting bearing faults
He, Wangpeng; Ding, Yin; Zi, Yanyang; Selesnick, Ivan W.
2017-02-01
Rolling-element bearing vibrations are random cyclostationary signals. This paper addresses the problem of noise reduction with simultaneous component extraction in vibration signals for bearing fault diagnosis. The observed vibration signal is modeled as the sum of two components contaminated by noise, each composed of repetitive transients. To extract the two components simultaneously, an approach based on solving an optimization problem is proposed. The problem adopts a convex sparsity-based regularization scheme for decomposition, and non-convex regularization is used to further promote sparsity while preserving global convexity. A synthetic example illustrates the performance of the proposed approach for repetitive feature extraction. The performance and effectiveness of the method are further demonstrated by application to compound-fault and single-fault diagnosis of a locomotive bearing. The results show the proposed approach can effectively extract the features of outer and inner race defects.
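The paper's sparsity-based regularization is considerably more elaborate, but the basic mechanism behind such methods, shrinking small (noise) coefficients to zero while keeping large (transient) ones, can be sketched with the L1 proximal operator; the toy signal below is purely illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy repetitive-transient signal: sparse spikes buried in noise.
rng = np.random.default_rng(1)
n = 500
clean = np.zeros(n)
clean[::50] = 5.0                        # repetitive transients every 50 samples
noisy = clean + 0.3 * rng.normal(size=n)
extracted = soft_threshold(noisy, 1.0)   # threshold set well above noise level
```

Soft thresholding is the building block of ISTA-style sparse solvers; the non-convex penalties the paper uses reduce the bias that this operator introduces on large coefficients.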
Medical Images Watermarking Algorithm Based on Improved DCT
Directory of Open Access Journals (Sweden)
Yv-fan SHANG
2013-12-01
Targeting the persistent security problems of digital information management systems in modern medicine, this paper presents a robust watermarking algorithm for medical images based on the Arnold transformation and DCT. The algorithm first deploys scrambling technology to encrypt the watermark information and then combines it with the visual feature vector of the image to generate a binary logic sequence through a hash function. The sequence is taken as a key and stored with a third party to establish ownership of the original image. Requiring no manual selection of a region of interest, imposing no capacity constraint, and needing no participation of the original medical image, this kind of watermark extraction solves the security and speed problems of watermark embedding and extraction. The simulation results also show that the algorithm is simple to operate and excellent in robustness and invisibility; in short, it is more practical than other algorithms.
An online peak extraction algorithm for ion mobility spectrometry data.
Kopczynski, Dominik; Rahmann, Sven
2015-01-01
Ion mobility (IM) spectrometry (IMS), coupled with multi-capillary columns (MCCs), has been gaining importance for biotechnological and medical applications because of its ability to detect and quantify volatile organic compounds (VOC) at low concentrations in the air or in exhaled breath at ambient pressure and temperature. Ongoing miniaturization of spectrometers creates the need for reliable data analysis on-the-fly in small embedded low-power devices. We present the first fully automated online peak extraction method for MCC/IMS measurements consisting of several thousand individual spectra. Each individual spectrum is processed as it arrives, removing the need to store the measurement before starting the analysis, as is currently the state of the art. Thus the analysis device can be an inexpensive low-power system such as the Raspberry Pi. The key idea is to extract one-dimensional peak models (with four parameters) from each spectrum and then merge these into peak chains and finally two-dimensional peak models. We describe the different algorithmic steps in detail and evaluate the online method against state-of-the-art peak extraction methods.
Institute of Scientific and Technical Information of China (English)
孙志强; 陈延平
2013-01-01
Burg-algorithm-based AR model spectral estimation was used to analyze the output signals of a vortex flowmeter, with water as the measured medium over flow rates from 3 to 35 m3/h, in order to extract the vortex shedding frequency accurately and efficiently from noisy signals. The effect of the AR model order on the estimation of the vortex shedding frequency was discussed, and a fitting correlation was established between the minimum AR model order and the vortex shedding frequency with an error below 3%. The results show that the vortex shedding frequency is extracted with high accuracy by Burg-based AR spectral estimation. The model order has a marked influence on the spectral estimation accuracy and the computational efficiency of the Burg algorithm, so an appropriate order should be selected when estimating the vortex shedding frequency from vortex flowmeter output signals. The minimum AR model order decreases as the vortex shedding frequency increases.
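For reference, the classical Burg recursion and the AR spectral estimate it feeds can be sketched as follows; this is a textbook formulation, not the paper's code, and the 100 Hz "shedding" tone is a made-up stand-in for flowmeter output:

```python
import numpy as np

def burg(x, order):
    """Burg's method: AR coefficients a (with a[0] = 1) and residual variance E."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    a = np.array([1.0])
    E = np.dot(x, x) / len(x)
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coeff.
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = (f + k * b)[1:], (b + k * f)[:-1]                 # update errors
        E *= 1.0 - k * k
    return a, E

def ar_psd(a, E, nfft=4096):
    """AR power spectral density over [0, fs/2] as E / |A(e^jw)|^2."""
    A = np.fft.rfft(a, nfft)
    return E / np.abs(A) ** 2

# Synthetic example: recover a 100 Hz tone from noisy samples at fs = 1000 Hz.
fs, nfft = 1000.0, 4096
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 100.0 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
a, E = burg(x, order=8)
f_hat = np.argmax(ar_psd(a, E, nfft)) * fs / nfft
```

The sharp AR spectral peak is what makes this approach attractive for picking a single shedding frequency out of broadband flow noise.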
A New Page Ranking Algorithm Based On WPRVOL Algorithm
Roja Javadian Kootenae; Seyyed Mohsen Hashemi; mehdi afzali
2013-01-01
The amount of information on the web is always growing, so powerful search tools are needed to search such a large collection. Search engines help users find their desired information within this massive volume more easily, but what matters in a search engine, and distinguishes one from another, is the page ranking algorithm it uses. In this paper a new page ranking algorithm based on "Weighted Page Ranking based on Visits of ...
Automatic Image Registration Algorithm Based on Wavelet Transform
Institute of Scientific and Technical Information of China (English)
LIU Qiong; NI Guo-qiang
2006-01-01
An automatic image registration approach based on the wavelet transform is proposed. The method uses a multiscale wavelet transform to extract feature points, and a coarse-to-fine strategy in the feature matching phase: a two-way matching method based on cross-correlation yields candidate point pairs, and a fine matching based on support strength completes the matching algorithm. Finally, based on an affine transformation model, the parameters are iteratively refined using the least-squares estimation approach. Experimental results verify that the proposed algorithm can register various kinds of images rapidly and effectively.
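The final least-squares step of such a pipeline, fitting an affine transform to matched point pairs, is compact enough to sketch; the points and transform below are synthetic, not from the paper:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points.

    src, dst: (n, 2) arrays of matched feature points.
    Returns a 2x3 matrix M with dst ≈ src @ M[:, :2].T + M[:, 2].
    """
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                  # shape (2, 3)

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(40, 2))
true_M = np.array([[0.9, -0.2, 5.0], [0.3, 1.1, -7.0]])
dst = src @ true_M[:, :2].T + true_M[:, 2]      # apply a known affine map
M_hat = fit_affine(src, dst)
```

With noisy matches, the same solve averages out localization error; iterative refinement then rejects outlier pairs and re-fits.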
Institute of Scientific and Technical Information of China (English)
纪滨; 杨盼盼; 申元霞
2016-01-01
In pathological analysis of leukocyte microscopy images, the leukocyte areas are the regions of interest (ROI). The ITTI visual model is an effective method for extracting ROIs from an image. To improve extraction accuracy, an object extraction method combining an improved ITTI visual model with a particle swarm optimization (PSO) algorithm is proposed and applied to extracting leukocyte regions from bone marrow cell images. First, orientation, brightness and color saliency features are computed from the original image using Gaussian filtering and multi-scale normalization. Then, since the human eye is not equally sensitive to different features, the saliency map is obtained by fusing the three feature maps with adaptive coefficients. Finally, the ROIs are extracted from the saliency map with an Otsu method based on the improved PSO algorithm and post-processed with morphological operations. Experimental results show that the method extracts complete leukocyte regions well, which helps improve the efficiency of pathological analysis.
PDE Based Algorithms for Smooth Watersheds.
Hodneland, Erlend; Tai, Xue-Cheng; Kalisch, Henrik
2016-04-01
Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks.
Kernel method-based fuzzy clustering algorithm
Institute of Scientific and Technical Information of China (English)
Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping
2005-01-01
The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, noisy data, data with mixtures of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm combined with the kernel method. The results of experiments with synthetic and real data show that, in contrast to FCM, the FKCM algorithm is universal and can effectively perform unsupervised analysis of datasets with varied structures. Kernel-based clustering is thus one of the important research directions of fuzzy clustering analysis.
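For context, the non-kernel FCM baseline that FKCM extends alternates between membership and center updates; a minimal NumPy sketch on synthetic two-cluster data (not the paper's datasets):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means. Returns (centers, U) with U[i, j] the
    membership of sample i in cluster j (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])   # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(10, 0.5, (100, 2))])
centers, U = fcm(X, 2)
```

The kernel variant replaces the Euclidean distances `d` with distances induced by a Mercer kernel in feature space, which is what lets it handle non-hyperspherical clusters.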
ILU preconditioning based on the FAPINV algorithm
Directory of Open Access Journals (Sweden)
Davod Khojasteh Salkuyeh
2015-01-01
A technique for computing an ILU preconditioner based on the factored approximate inverse (FAPINV) algorithm is presented. We show that this algorithm is well-defined for H-matrices. Moreover, when used in conjunction with Krylov-subspace-based iterative solvers such as the GMRES algorithm, it results in reliable solvers. Numerical experiments on some test matrices are given to show the efficiency of the new ILU preconditioner.
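The general pattern of pairing an ILU preconditioner with GMRES can be shown with SciPy's built-in (non-FAPINV) incomplete LU; the tridiagonal test system is only illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A sparse, diagonally dominant test system (1-D discrete Laplacian).
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization wrapped as a preconditioner operator M ≈ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), ilu.solve)

x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
```

On this tridiagonal matrix the ILU factorization is essentially exact, so preconditioned GMRES converges almost immediately; the FAPINV-based construction in the paper targets the harder general sparse case.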
An Improved Blind Source Extraction Algorithm Based on Second Order Statistics
Institute of Scientific and Technical Information of China (English)
李飞; 李国林; 谢鑫
2016-01-01
To address the saddle-point problem of previous blind source extraction algorithms, an improved blind source extraction algorithm based on second-order statistics is proposed. The target signal vector is estimated with an autoregressive model, a new cost function is built from the difference between the estimated vector and the extracted signal vector, and the effectiveness of this cost function is proved. The optimal extraction vector is computed by applying the steepest descent method to the extraction vector and the FIR filter weight vector. Finally, computer simulations show that the improved algorithm is more reliable than the two previous algorithms and maintains good extraction performance and high accuracy in low-SNR environments.
An algorithm for automatic parameter adjustment for brain extraction in BrainSuite
Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.
2017-02-01
Brain extraction (classification of brain and non-brain tissue) in MRI brain images is a crucial pre-processing step for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can yield significant improvements in the definition of the brain mask.
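The Dice coefficient used for evaluation above is a standard overlap measure; a minimal sketch on toy 2-D masks (real usage would be on 3-D brain masks):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True                 # 36 "brain" pixels
pred = np.zeros((10, 10), dtype=bool)
pred[3:8, 2:8] = True                  # slightly under-segmented: 30 pixels
score = dice(truth, pred)              # 2*30 / (36 + 30)
```

A perfect segmentation scores 1.0, so the paper's move from ~0.897 to ~0.951 is a substantial reduction in mask error.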
Institute of Scientific and Technical Information of China (English)
易晨辉; 刘梦赤; 胡婕
2016-01-01
Conventional information extraction methods for semi-structured pages usually assume that valid data have relatively strong structural similarity, divide the page into data records and data regions with similar characteristics, and then extract from them. However, faculty list pages of universities are mostly written and filled in manually rather than generated automatically from templates, so their structure is not rigorous. This paper proposes a faculty information extraction method based on a lowest-common-ancestor (LCA) segmentation algorithm, introduces the connection between the LCA and semantic relatedness into Web page segmentation, and presents the new concepts of basic semantic blocks and effective semantic blocks. After the page is converted into a DOM (document object model) tree and pre-processed, it is first divided into basic semantic blocks with the LCA algorithm. The basic semantic blocks are then merged into effective semantic blocks containing complete personnel information. Finally, by aligning the effective semantic blocks, all faculty information mapped by the relationships in the page is obtained. Experiments on a large number of real university faculty list pages show that, compared with the MDR (mining data records) algorithm, the proposed method maintains high precision and recall in both segmentation and extraction.
A new algorithm for frequency-dependent shear-wave splitting parameters extraction
Zhang, Jian-li; Wang, Yun; Lu, Jun
2013-10-01
In the exploration of a fractured reservoir, it is very important for reservoir engineers to obtain information about fracture sizes, because macro-scale fractures are more significant for controlling reservoir storability and fluid flow, even though both micro-scale cracks and macro-scale fractures contribute to the dominant anisotropy. Recently, a poroelastic equivalent medium model was proposed by Chapman, which describes the frequency-dependent anisotropy effect with fracture size as one of the key parameters. Based on this model, geophysicists have worked on measuring fracture sizes from seismic data; however, frequency-dependent anisotropy must be extracted before inverting for fracture size. In this paper, a new algorithm is developed for extracting frequency-dependent anisotropic parameters from surface multi-component seismic data, especially from a common-receiver gather. Compared with the conventional method, which extracts splitting parameters only for different frequency bands, the new algorithm makes it possible to extract splitting parameters for each frequency. To check the reliability of the algorithm, a common-receiver-all-azimuth gather is synthesized by the vector convolution method, with the splitting parameters dependent on frequency. Test results show that the frequency-dependent splitting parameters are extracted accurately at a typical noise level (signal-to-noise ratio, SNR, of 3 for the shot). More importantly, under the joint constraints of multi-azimuth data, a satisfactory result is obtained even when the noise is significant (SNR of 1). The good performance of the algorithm in the model test indicates its potential for field applications.
Multicast Routing Based on Hybrid Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
CAO Yuan-da; CAI Gui
2005-01-01
A new multicast routing algorithm based on the hybrid genetic algorithm (HGA) is proposed. A coding pattern based on the number of routing paths is used, and a fitness function that is easy to compute and makes the algorithm converge quickly is proposed, together with a new approach for setting the HGA's parameters. The simulation shows that the approach can greatly increase the convergence ratio, and that the fitted parameter values of this algorithm differ from those of the original algorithms: the optimal mutation probability of the HGA was 0.50 in the experiment, versus 0.07 for the SGA. It is concluded that the population size has a significant influence on the HGA's convergence ratio when the mutation probability is larger; an algorithm with a small population size has a high average convergence rate, while the population size has little influence on the HGA at lower mutation probabilities.
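The role of parameters such as mutation probability and population size can be seen in even the smallest genetic algorithm; the sketch below solves the toy one-max problem rather than multicast routing, and all names and parameter values are illustrative:

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60, p_mut=0.05, seed=0):
    """Minimal GA maximizing the number of 1-bits (one-max)."""
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < p_mut else bit
                     for bit in child]                   # per-bit mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Raising `p_mut` slows convergence on this easy landscape but helps escape local optima on rugged ones, which is exactly the trade-off the paper's parameter study quantifies for multicast routing.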
Algorithms and semantic infrastructure for mutation impact extraction and grounding.
Laurila, Jonas B; Naderi, Nona; Witte, René; Riazanov, Alexandre; Kouznetsov, Alexandre; Baker, Christopher J O
2010-12-02
Mutation impact extraction is a hitherto unaccomplished task in state of the art mutation extraction systems. Protein mutations and their impacts on protein properties are hidden in scientific literature, making them poorly accessible for protein engineers and inaccessible for phenotype-prediction systems that currently depend on manually curated genomic variation databases. We present the first rule-based approach for the extraction of mutation impacts on protein properties, categorizing their directionality as positive, negative or neutral. Furthermore protein and mutation mentions are grounded to their respective UniProtKB IDs and selected protein properties, namely protein functions to concepts found in the Gene Ontology. The extracted entities are populated to an OWL-DL Mutation Impact ontology facilitating complex querying for mutation impacts using SPARQL. We illustrate retrieval of proteins and mutant sequences for a given direction of impact on specific protein properties. Moreover we provide programmatic access to the data through semantic web services using the SADI (Semantic Automated Discovery and Integration) framework. We address the problem of access to legacy mutation data in unstructured form through the creation of novel mutation impact extraction methods which are evaluated on a corpus of full-text articles on haloalkane dehalogenases, tagged by domain experts. Our approaches show state of the art levels of precision and recall for Mutation Grounding and respectable level of precision but lower recall for the task of Mutant-Impact relation extraction. The system is deployed using text mining and semantic web technologies with the goal of publishing to a broad spectrum of consumers.
A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing
Directory of Open Access Journals (Sweden)
Seo Kwang-deok
2006-01-01
We propose a robust formant extraction algorithm that combines spectral peak picking, examination of formant locations to check for peak mergers, and root extraction. The spectral peak picking method is employed to locate the formant candidates, and root extraction is used to resolve peak mergers. The locations of, and distances between, the extracted formants are also utilized to efficiently identify suspected peak mergers. The proposed algorithm does not require much computation and is shown to be superior to previous formant extraction algorithms through extensive tests on the TIMIT speech database.
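The root-extraction half of such a method typically maps LPC polynomial roots to formant frequencies and bandwidths; a sketch, with a hand-built polynomial whose resonances are known (not TIMIT data), showing the conversion:

```python
import numpy as np

def roots_to_formants(a, fs, min_hz=90.0, max_bw_hz=400.0):
    """Convert LPC polynomial roots to formant frequencies in Hz.

    a: LPC coefficients with a[0] = 1. Roots close to the unit circle
    (narrow bandwidth) with positive angle are kept as formants.
    """
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]             # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bws = -np.log(np.abs(roots)) * fs / np.pi     # 3 dB bandwidth estimate
    keep = (freqs > min_hz) & (bws < max_bw_hz)
    return np.sort(freqs[keep])

# Build a polynomial with known resonances at 500 and 1500 Hz (fs = 8 kHz).
fs = 8000.0
poles = []
for f in (500.0, 1500.0):
    z = 0.97 * np.exp(2j * np.pi * f / fs)
    poles += [z, np.conj(z)]
a = np.poly(poles).real
formants = roots_to_formants(a, fs)
```

When two formants merge into one spectral peak, peak picking alone sees a single maximum, but the polynomial still has two distinct root pairs, which is why combining the two methods is robust.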
Face Recognition Algorithms Based on Transformed Shape Features
Directory of Open Access Journals (Sweden)
Sambhunath Biswas
2012-05-01
Human face recognition is, indeed, a challenging task, especially under illumination and pose variations. In the present paper we examine the effectiveness of two simple algorithms using Coiflet packet and Radon transforms to recognize human faces from databases of still gray-level images under illumination and pose variations. Both algorithms convert 2-D gray-level training face images into their respective depth maps, or physical shape, which are then transformed by the Coiflet packet and Radon transforms to compute energy features. Experiments show that such transformed shape features are robust to illumination and pose variations. With the extracted features, training classes are optimally separated through linear discriminant analysis (LDA), while test face images are classified with a k-NN classifier based on the L1 norm and the Mahalanobis distance. The proposed algorithms are then tested on face images that differ in illumination, expression or pose, obtained from three databases, namely the ORL, Yale and Essex-Grimace databases, and the results are compared with two existing algorithms; performance using Daubechies wavelets is also examined. The proposed Coiflet packet and Radon transform based algorithms perform significantly well, especially under different illumination conditions and pose variations, and the comparison shows that they are superior.
Information criterion based fast PCA adaptive algorithm
Institute of Scientific and Technical Information of China (English)
Li Jiawen; Li Congxin
2007-01-01
The novel information criterion (NIC) algorithm can find the principal subspace quickly, but it is not a true principal component analysis (PCA) algorithm and hence cannot find the orthonormal eigenspace corresponding to the principal components of the input vector. This defect limits its application in practice. By weighting the neural network's output of NIC, a modified novel information criterion (MNIC) algorithm is presented. MNIC extracts the principal components and corresponding eigenvectors in a parallel online learning program and overcomes NIC's defect. It is proved to have a single global optimum and a nonquadratic convergence rate, which is superior to conventional online PCA algorithms such as Oja and LMSER. The relationship among Oja, LMSER and MNIC is exhibited. Simulations show that MNIC converges to the optimum quickly, which proves its validity.
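For context on the Oja baseline mentioned above, the classic online rule that drives a weight vector toward the first principal direction can be sketched as follows; the data and learning rate are illustrative:

```python
import numpy as np

def oja(X, eta=0.005, epochs=20, seed=0):
    """Oja's online PCA rule: w converges to the first principal direction."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)   # Hebbian term with self-normalizing decay
    return w / np.linalg.norm(w)

# Data whose dominant variance lies along u = (1, 1)/sqrt(2).
rng = np.random.default_rng(1)
u = np.array([1.0, 1.0]) / np.sqrt(2)
s = rng.normal(size=(500, 1)) * 2.0
X = s @ u[None, :] + 0.3 * rng.normal(size=(500, 2))
w = oja(X)
```

Oja's rule recovers only the top direction (up to sign); extracting several orthonormal components online is exactly the gap that algorithms like MNIC address.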
Institute of Scientific and Technical Information of China (English)
於实
2012-01-01
In this paper, we propose a novel heterogeneous Web data extraction algorithm based on a modified hidden conditional random fields model. The hidden conditional random fields model is modified to compute the hidden variables more accurately and to overcome the model's heavy dependence on the choice of initial parameters; moreover, the proposed model does not require a large amount of manually labelled sample data for training. Experimental results show that, compared with existing methods, the proposed algorithm achieves satisfactory recall, precision and F1 scores when extracting data both from websites with default attributes and from websites with multi-attribute features.
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing
Directory of Open Access Journals (Sweden)
Chunhua Li
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work on developing automated algorithms for knowledge base refinement; automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.
A Dither Modulation Audio Watermarking Algorithm Based on HAS
Directory of Open Access Journals (Sweden)
Yi-bo Huang
2012-11-01
In this study, we propose a dither modulation audio watermarking algorithm based on the human auditory system, applying the theory of dither modulation. The algorithm first converts the binary image watermark into a one-dimensional sequence and scrambles it with the Fibonacci transform. The audio is then divided into segments, a discrete wavelet transform is applied to each segment, and each segment adaptively chooses its quantization step. Finally, the watermark is embedded into the transformed low-frequency coefficients using dither modulation. The watermark is extracted without the original audio, realizing blind extraction. Experimental results show that the algorithm has good robustness against attacks from noise addition, compression, low-pass filtering and re-sampling.
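The quantization-index-modulation idea behind dither modulation is simple to sketch: each bit selects one of two interleaved quantizer lattices, and blind extraction picks whichever lattice is nearer. The coefficients below are random stand-ins for DWT low-band values, not the paper's scheme in detail:

```python
import numpy as np

def qim_embed(coeffs, bits, delta=0.5):
    """Quantization index modulation: bit 0 → lattice k*delta, bit 1 → k*delta + delta/2."""
    d = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
    return np.round((coeffs - d) / delta) * delta + d

def qim_extract(coeffs, delta=0.5):
    """Blind extraction: choose the lattice whose nearest point is closer."""
    e0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    e1 = np.abs(coeffs - (np.round((coeffs - delta / 2) / delta) * delta + delta / 2))
    return (e1 < e0).astype(int)

rng = np.random.default_rng(0)
host = rng.normal(size=64)                 # stand-in for DWT low-band coefficients
bits = rng.integers(0, 2, size=64)
marked = qim_embed(host, bits)
noisy = marked + rng.normal(scale=0.02, size=64)   # mild attack, well under delta/4
recovered = qim_extract(noisy)
```

Extraction stays correct as long as the perturbation stays below delta/4, which is why an adaptively chosen quantization step per segment trades robustness against audibility.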
Automatic Foreground Extraction Based on Difference of Gaussian
Directory of Open Access Journals (Sweden)
Yubo Yuan
2014-01-01
A novel algorithm for automatic foreground extraction based on difference of Gaussian (DoG) is presented. In our algorithm, DoG is employed to find the candidate keypoints of an input image in different color layers. Then, a keypoint filtering algorithm is proposed to obtain the final keypoints by removing pseudo-keypoints and rebuilding important keypoints. Finally, normalized cut (Ncut) is used to segment the image into several regions, and the foreground is located using the number of keypoints in each region. Experiments on the given image data set demonstrate the effectiveness of our algorithm.
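The DoG operator itself is just the difference of two Gaussian blurs at different scales, acting as a band-pass filter; a minimal sketch on a synthetic impulse image (parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussian(img, sigma1=1.0, sigma2=2.0):
    """Band-pass filter: small-scale blur minus large-scale blur."""
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

# A bright point responds positively at its center in the DoG image.
img = np.zeros((32, 32))
img[16, 16] = 1.0
dog = difference_of_gaussian(img)
```

Keypoint candidates are taken at local extrema of such responses; running the filter per color layer, as the paper does, catches blobs that contrast in chroma but not in luminance.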
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment compares the convergence domains of the modified algorithm and the classical algorithm.
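The eigenvalue modification described above fits in a few lines of NumPy; the indefinite Hessian and gradient below are a made-up example showing why the fix matters:

```python
import numpy as np

def modified_newton_direction(H, g):
    """Replace negative Hessian eigenvalues by their absolute values,
    then solve for the search direction d = -H_mod^{-1} g."""
    vals, vecs = np.linalg.eigh(H)
    vals_mod = np.abs(vals)
    vals_mod[vals_mod < 1e-10] = 1e-10        # guard against tiny eigenvalues
    H_mod = vecs @ np.diag(vals_mod) @ vecs.T  # reconstructed positive definite Hessian
    return -np.linalg.solve(H_mod, g)

H = np.array([[2.0, 0.0], [0.0, -1.0]])       # indefinite Hessian
g = np.array([1.0, 1.0])
d = modified_newton_direction(H, g)            # descent: g @ d < 0
```

With this `H` and `g`, the plain Newton direction `-H^{-1} g = (-0.5, 1.0)` has a positive inner product with the gradient (an ascent component), while the modified direction `(-0.5, -1.0)` is guaranteed to descend.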
Web Based Genetic Algorithm Using Data Mining
Directory of Open Access Journals (Sweden)
Ashiqur Rahman
2016-09-01
This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in a web-based education system. A combination of multiple classifiers leads to a significant improvement in classification performance, and weighting the feature vectors using a genetic algorithm optimizes prediction accuracy, giving a marked improvement over raw classification. It is further shown that when the number of features is small, feature weighting works better than feature selection alone. Many leading educational institutions are working to establish an online teaching and learning presence, and several systems with different capabilities and approaches have been developed to deliver online education in academic settings. In particular, Michigan State University (MSU) has pioneered some of these systems to provide an infrastructure for online instruction. The research presented here was performed on part of the latest online educational system developed at MSU, the Learning Online Network with Computer-Assisted Personalized Approach (LON-CAPA).
Fingerprint Feature Extraction Based on Macroscopic Curvature
Institute of Scientific and Technical Information of China (English)
Zhang Xiong; He Gui-ming; Zhang Yun
2003-01-01
In an Automatic Fingerprint Identification System (AFIS), extracting fingerprint features is very important. The local curvature of fingerprint ridges is irregular, which makes it difficult to effectively extract curve features that describe a fingerprint. This article proposes a novel algorithm that combines information from several nearby fingerprint ridges to extract a new characteristic describing the curvature feature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics it extracts clearly reveal the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.
A New Multi-tree and Dual Index based Firewall Optimization Algorithm
Directory of Open Access Journals (Sweden)
Cuixia Ni
2013-05-01
Using a statistical analysis strategy, large-scale firewall log files are analyzed and two main characteristics, the protocol field and the IP address field, are extracted in this paper. Based on the extracted features and the characteristics of the multi-tree and dual-index strategy, we design an improved firewall optimization algorithm. Compared with the Stochastic Distribution Multibit-trie (SDMTrie) algorithm, the proposed algorithm greatly decreases the preprocessing time and improves the searching and filtering process.
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters such as ocean waves, clouds or sea fog usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI at the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.
Immune Based Intrusion Detector Generating Algorithm
Institute of Scientific and Technical Information of China (English)
DONG Xiao-mei; YU Ge; XIANG Guang
2005-01-01
Immune-based intrusion detection approaches are studied. Methods for constructing the self set and generating mature detectors are investigated and improved, and a binary-encoding-based self set construction method is applied. First, the traditional mature detector generating algorithm is improved to generate mature detectors and detect intrusions faster. Then, a novel mature detector generating algorithm is proposed based on the negative selection mechanism. With this algorithm, fewer mature detectors are needed to detect abnormal activities in the network, so the speed of both detector generation and intrusion detection is improved. Compared with systems based on existing algorithms, the intrusion detection system based on this algorithm has higher speed and accuracy.
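The negative selection mechanism mentioned above can be illustrated with the classic r-contiguous-bits matching rule: random candidate detectors are kept only if they match no string in the self set. The encoding length, matching radius, and function names below are illustrative assumptions, since the abstract does not give these details:

```python
import random

def matches(detector, sample, r):
    """r-contiguous-bits rule: detector matches sample if they agree
    on at least r consecutive positions."""
    run = 0
    for a, b in zip(detector, sample):
        run = run + 1 if a == b else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r, seed=0):
    """Negative selection: keep random binary candidates that match
    no string in the self set; survivors become mature detectors."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = ''.join(rng.choice('01') for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

self_set = ['0000000000', '1111111111']
dets = generate_detectors(self_set, n_detectors=5, length=10, r=6)
# every mature detector fails to match every self string
```

A sample is then flagged as anomalous when any mature detector matches it under the same rule.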
Algorithms for the extraction of various diameter vessels.
Tremblais, B; Capelle-Laize, A S; Augereau, B
2007-01-01
In this communication we propose a new, automatic strategy for the multi-scale extraction of vessels. The objective is to obtain a good representation of the vessels, that is to say a precise characterization of their centerlines and diameters. The adopted solution requires the generation of an image scale-space in which the various levels of detail allow arteries of any diameter to be processed. The proposed method is implemented using the formalisms of Partial Differential Equations (PDEs) and differential geometry. Differential geometry allows, through the computation of a new valley response, the characterization of vessel centerlines as the bottom lines of the valleys of the image surface. The centerline and valley response information at different scales is used to obtain the 2D multi-scale centerlines of the arteries. To that end, we construct a multi-scale adjacency graph which keeps the K strongest detections; the resulting detection is then coded as an attributed graph. The suggested algorithm is applied to two kinds of angiograms: coronary and retinal angiograms.
A robust DCT domain watermarking algorithm based on chaos system
Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo
2009-10-01
Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as the copyright of digital images in transactions. Many kinds of digital watermarking algorithms exist; however, existing algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations and geometric transformations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark, so the watermark is not easy to modify and the scheme is secure and robust.
Institute of Scientific and Technical Information of China (English)
李传江; 费敏锐; 胡豁生; 张自强; 邓丽
2012-01-01
In a high-precision dynamic balance measurement system, the signal-to-noise ratio is very low. Conventional methods, such as the fast Fourier transform (FFT), cannot extract the unbalance signal accurately. An unbalance signal extraction method based on the harmonic wavelet and the Prony algorithm is proposed. Firstly, an adaptive harmonic wavelet band-pass filter is used to improve the signal-to-noise ratio. Then, with the filtered data as the sample, the Prony method is adopted to identify the amplitude and phase of the unbalance signal, which effectively solves the problem of being unable to accurately extract the unbalance signal under adjacent-frequency interference. Simulation and experiment results show that the proposed method has high precision and good repeatability, and is especially suitable for high-precision dynamic balance measurement systems with low signal-to-noise ratio.
Use of Genetic Algorithm for Cohesive Summary Extraction to Assist Reading Difficulties
Directory of Open Access Journals (Sweden)
K. Nandhini
2013-01-01
Learners with reading difficulties normally face significant challenges in understanding text-based learning materials. In this regard, there is a need for an assistive summary to help such learners approach learning documents with minimal difficulty. An important issue in extractive summarization is extracting a cohesive summary from the text; existing summarization approaches focus mostly on informative sentences rather than cohesive sentences. We considered several existing features, including sentence location, cardinality, title similarity, and keywords, to extract important sentences. Moreover, learner-dependent readability-related features such as average sentence length, percentage of trigger words, percentage of polysyllabic words, and percentage of noun entity occurrences are considered for summarization. The objective of this work is to use a genetic algorithm to extract the optimal combination of sentences that increases readability through sentence cohesion. The results show that summary extraction using the proposed approach performs better in F-measure, readability, and cohesion than the baseline (lead) approach and the corpus-based approach. A task-based evaluation shows the effect of assistive summaries in enhancing readability for learners with reading difficulties.
Lane Detection Based on Machine Learning Algorithm
National Research Council Canada - National Science Library
Chao Fan; Jingbo Xu; Shuai Di
2013-01-01
In order to improve accuracy and robustness of the lane detection in complex conditions, such as the shadows and illumination changing, a novel detection algorithm was proposed based on machine learning...
QPSO-based adaptive DNA computing algorithm.
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, DNA computing algorithms have limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed, which aims to run the DNA computing algorithm with parameters adapted toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptation is performed by the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the plain DNA computing algorithm.
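For reference, the quantum-behaved PSO used here as the tuning engine follows the standard QPSO scheme: each particle is drawn toward a local attractor between its personal best and the global best, with a contraction-expansion step around the mean-best position. The sketch below is textbook QPSO applied to a toy sphere function, not the authors' parameter-tuning code; all names and constants are illustrative:

```python
import math
import random

def qpso_minimize(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Standard quantum-behaved particle swarm optimization (minimization)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters   # contraction-expansion coefficient, 1.0 -> 0.5
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                phi = rng.random()
                p = phi * pbest[i][d] + (1 - phi) * gbest[d]   # local attractor
                u = 1.0 - rng.random()                          # u in (0, 1]
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                X[i][d] = p + step if rng.random() < 0.5 else p - step
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

# Toy run: minimize the 3-D sphere function
best, val = qpso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the abstract's setting, the decision vector would hold the DNA-computing parameters (population size, mutation rates, and so on) instead of sphere-function coordinates.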
Evolutionary algorithm based configuration interaction approach
Chakraborty, Rahul
2016-01-01
A stochastic configuration interaction method based on an evolutionary algorithm is designed as an affordable approximation to full configuration interaction (FCI). The algorithm comprises initiation, propagation and termination steps, where the propagation step is performed with cloning, mutation and cross-over, taking inspiration from the genetic algorithm. We have tested its accuracy on the 1D Hubbard problem and a molecular system (the symmetric bond breaking of the water molecule). We have tested two different fitness functions, based on the energy of the determinants and on the CI coefficients of the determinants. We find that the absolute value of the CI coefficients is a more suitable fitness function when combined with a fixed selection scheme.
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
A new FFT algorithm has been deduced, called the base-6 FFT algorithm. The number of real-number operations required to calculate the DFT of a complex sequence of length N = 6^r by the base-6 FFT algorithm is Mr(N) = (14/3)N*log6(N) - 4N + 4 multiplications and Ar(N) = (23/3)N*log6(N) - 2N + 2 additions. The operation count for the DFT of a real sequence is about half that for a complex sequence.
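The operation counts quoted above are easy to restate in code; for the smallest transform (r = 1, N = 6) they evaluate to 8 real multiplications and 36 real additions:

```python
import math

def mul_count(n):
    # M_r(N) = (14/3) * N * log6(N) - 4N + 4 real multiplications
    return 14 / 3 * n * math.log(n, 6) - 4 * n + 4

def add_count(n):
    # A_r(N) = (23/3) * N * log6(N) - 2N + 2 real additions
    return 23 / 3 * n * math.log(n, 6) - 2 * n + 2

# Tabulate (N, multiplications, additions) for r = 1, 2, 3
counts = [(6 ** r, round(mul_count(6 ** r)), round(add_count(6 ** r)))
          for r in range(1, 4)]
# counts[0] == (6, 8, 36)
```

This simply evaluates the abstract's formulas; it does not implement the transform itself.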
Lee, K J; Jenet, F A; Martinez, J; Dartez, L P; Mata, A; Lunsford, G; Cohen, S; Biwer, C M; Rohr, M; Flanigan, J; Walker, A; Banaszak, S; Allen, B; Barr, E D; Bhat, N D R; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Cordes, J; Crawford, F; Deneva, J; Desvignes, G; Ferdman, R D; Freire, P; Hessels, J W T; Karuppusamy, R; Kaspi, V M; Knispel, B; Kramer, M; Lazarus, P; Lynch, R; Lyne, A; McLaughlin, M; Ransom, S; Scholz, P; Siemens, X; Spitler, L; Stairs, I; Tan, M; van Leeuwen, J; Zhu, W W
2013-01-01
Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-made radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected to determine whether they are real pulsars, a process that can be labor intensive. In this paper, we introduce an algorithm called PEACE (Pulsar Evaluation Algorithm for Candidate Extraction) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning-based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68% of the student-identified pulsars within the top 0.17% of sorted candidates, 95% ...
A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM
Directory of Open Access Journals (Sweden)
W. Lu
2017-09-01
In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, to avoid amplifying noise in subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, the approach not only maintains the richness of features but also reduces image noise. Simulation results show that the proposed algorithm achieves a better matching effect.
QRS Detection Based on an Advanced Multilevel Algorithm
Directory of Open Access Journals (Sweden)
Wissam Jenkal
2016-01-01
This paper presents an advanced multilevel algorithm for QRS complex detection. The method is based on three levels. The first level extracts the highest peaks using an adaptive thresholding technique. The second level detects the QRS region. The last level detects the Q, R and S waves. The proposed algorithm shows interesting results compared to recently published methods. The perspective of this work is the implementation of the method on an embedded system for real-time ECG monitoring.
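The first level (adaptive-thresholding peak extraction) can be sketched as follows. The window size and threshold fraction are illustrative assumptions, since the abstract does not specify them; a sample is kept as a candidate peak when it is a local maximum and exceeds a fraction of the signal maximum over a sliding window centred on it:

```python
def find_peaks_adaptive(signal, window=8, frac=0.6):
    """Adaptive-thresholding peak extraction: keep local maxima that
    exceed frac * (maximum over a window centred on the sample)."""
    peaks = []
    n = len(signal)
    for i in range(1, n - 1):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        threshold = frac * max(signal[lo:hi])  # window-local adaptive threshold
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1] \
                and signal[i] >= threshold:
            peaks.append(i)
    return peaks

# A toy trace with two dominant "R-like" peaks among smaller bumps
ecg = [0, 1, 0, 2, 10, 2, 0, 1, 0, 3, 12, 3, 0, 1, 0]
peaks = find_peaks_adaptive(ecg)
# peaks == [4, 10]: only the two dominant peaks survive the threshold
```

Because the threshold tracks the local window maximum, small bumps near a large peak are rejected while isolated large peaks in quieter regions are still detected.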
A Fast Multipole Algorithm with Virtual Cube Partitioning for 3-D Capacitance Extraction
Institute of Scientific and Technical Information of China (English)
YANG Zhaozhi; WANG Zeyi
2004-01-01
In this paper a fast indirect boundary-element method based on the multipole algorithm for capacitance extraction of three-dimensional (3-D) geometries, the virtual cube multipole algorithm, is described. First, each 2-D boundary element is regarded as a set of charged particles rather than a single particle, so the relations between the positions of the elements themselves are considered instead of the relations between the center points of the elements, and a new strategy for cube partitioning is introduced. This strategy overcomes the inadequacy of methods that associate panels with particles, does not need to break up panels contained in more than one cube, and has higher speed and precision. Next, a new method is proposed to accelerate the potential integration between panels that are near each other. Making good use of the similarity in the 2-D boundary integration, the fast potential integral approach decreases the burden of direct potential computation. Experiments confirm that the algorithm is accurate and has nearly linear computational growth of O(nm), where n is the number of panels and m is the number of conductors. The new algorithm is implemented and its performance is compared with previous algorithms, such as Fastcap2 from MIT, on k×k bus examples.
Institute of Scientific and Technical Information of China (English)
薛凌云; 李欣
2015-01-01
For the serial numbers on the fifth series of RMB banknotes, a tunnel magnetoresistance (TMR) sensor is used to acquire the magnetic signal of the serial numbers; wavelet transform is adopted to reduce signal noise; a method based on energy difference is used to segment the effective signal; several time-domain features are extracted to build a characteristic criterion sample library; and a BP neural network is used to recognize the serial-number magnetic signals and distinguish counterfeit from genuine banknotes. The study shows that the collected serial-number magnetic signals of different denominations are stable and saturated, the wavelet transform reduces signal noise well, complete and effective magnetic signals can be obtained with the energy-difference method, and the BP neural network is fast, with a recognition rate of 100%.
An improved real-time background extraction algorithm based on Jung's method
Institute of Scientific and Technical Information of China (English)
张挺; 赵向东; 李文军; 柴智
2012-01-01
By combining Jung's background extraction algorithm with the difference information between adjacent frames, an improved real-time background extraction method for moving-object detection is proposed in this paper. The difference information between adjacent video frames is used to accelerate the background update process, so the extracted background image can adapt quickly to changes of objects in the background, while preserving the structural simplicity and computational speed of the basic Jung method. The validity of the proposed algorithm is demonstrated on the PETS2001 data sets; the experimental results show that the background image can be extracted accurately in real time.
Seizure detection algorithms based on EMG signals
DEFF Research Database (Denmark)
Conradsen, Isa
Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was based on the amplitude of the signal. The other algorithm was based on information about the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while ...
A fusion algorithm for infrared and visible images based on target extraction
Institute of Scientific and Technical Information of China (English)
聂其贵; 马惠珠
2014-01-01
Considering the shortcomings of infrared and visible light image fusion based on grey system theory in the spatial domain, and utilizing the advantages of the nonsubsampled Contourlet transform (NSCT) in image fusion, a fusion algorithm for infrared and visible light images based on target extraction is proposed. Firstly, the nonsubsampled Contourlet transform is performed on the infrared image and the visible light image respectively. Secondly, grey system theory is applied to the low-frequency component of the infrared image for target extraction, and the low-frequency components are fused using the proposed fusion rule, while a common fusion rule is applied to the high-frequency components. Finally, the inverse nonsubsampled Contourlet transform is performed on the fused low-frequency and high-frequency parts to obtain the fused image. Compared with four commonly used methods, the results show that the fused image has a good visual effect and that some objective evaluation indexes are improved markedly.
Applying a Locally Linear Embedding Algorithm for Feature Extraction and Visualization of MI-EEG
Directory of Open Access Journals (Sweden)
Mingai Li
2016-01-01
Robotic-assisted rehabilitation systems based on Brain-Computer Interfaces (BCI) are an applicable solution for stroke survivors with a poorly functioning hemiparetic arm. The key technique for such a rehabilitation system is feature extraction from Motor Imagery Electroencephalography (MI-EEG), a nonlinear, time-varying and nonstationary signal with remarkable time-frequency characteristics. Though a few studies have explored the nonlinear nature of MI-EEG from the perspective of manifold learning, they hardly take full account of both the time-frequency features and the nonlinear nature. In this paper, a novel feature extraction method is proposed based on the Locally Linear Embedding (LLE) algorithm and the DWT. Multiscale, multiresolution analysis of the MI-EEG is performed with the DWT; LLE is applied to the approximation components to extract the nonlinear features, and the statistics of the detail components are calculated to obtain the time-frequency features. The two feature sets are then combined serially. A backpropagation neural network optimized by a genetic algorithm is employed as a classifier to evaluate the effectiveness of the proposed method. The results of 10-fold cross validation on a public BCI Competition dataset show that the nonlinear features visually display an obvious clustering distribution and that the fused features improve classification accuracy and stability. This paper thus achieves a successful application of manifold learning in BCI.
L-Tree Match: A New Data Extraction Model and Algorithm for Huge Text Stream with Noises
Institute of Scientific and Technical Information of China (English)
Xu-Bin Deng; Yang-Yong Zhu
2005-01-01
In this paper, a new method, named L-tree match, is presented for extracting data from complex data sources. Firstly, based on the data extraction logic presented in this work, a new data extraction model is constructed in which model components are structurally correlated via a generalized template. Secondly, a database-populating mechanism is built, along with some object-manipulating operations needed for flexible database design, to support data extraction from huge text streams. Thirdly, top-down and bottom-up strategies are combined to design a new extraction algorithm that can extract data from data sources with optional, unordered, nested, and/or noisy components. Lastly, this method is applied to extract accurate data from biological documents amounting to 100 GB for the first online integrated biological data warehouse of China.
An Algorithm on Generating Lattice Based on Layered Concept Lattice
Directory of Open Access Journals (Sweden)
Zhang Chang-sheng
2013-08-01
The concept lattice is an effective tool for data analysis and rule extraction; a bottleneck limiting its application is how to generate the lattice efficiently. In this paper, LCLG, a batch algorithm for generating a lattice based on layered concept lattices, is developed. The lattice is generated downward layer by layer through the concept nodes and provisional nodes in the current layer; parent-child relationships between concept nodes are then identified upward layer by layer, and the Hasse diagram of inter-layer connections is generated. During the generation of the lattice nodes in each layer, pruning operations are performed dynamically according to relevant properties, deleting unnecessary nodes, so that the generation speed is improved greatly. Experimental results demonstrate that the proposed algorithm has good performance.
Behavior Based Social Dimensions Extraction for Multi-Label Classification.
Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin
2016-01-01
Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes' behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes' connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions.
Duality based optical flow algorithms with applications
DEFF Research Database (Denmark)
Rakêt, Lars Lau
We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D ...
Institute of Scientific and Technical Information of China (English)
李振辉; 李凯; 姜美雷; 孔祥源
2013-01-01
The genetic algorithm is a parallel global search method developed by simulating biological evolution mechanisms, and it has become an active branch of artificial intelligence. Extracting sine-wave signal parameters (frequency, amplitude and phase) from limited samples is an important estimation problem in signal processing. This paper introduces the definition, operation procedure and parameter selection of the genetic algorithm, elaborates its realization for sine-wave signal parameter extraction in the Matlab environment, and tests the algorithm.
Function Optimization Based on Quantum Genetic Algorithm
Directory of Open Access Journals (Sweden)
Ying Sun
2014-01-01
Full Text Available Optimization methods are important in engineering design and application. The quantum genetic algorithm, which combines a quantum algorithm with a genetic algorithm, has good population diversity, rapid convergence and good global search capability. A novel quantum genetic algorithm is proposed, called the Variable-boundary-coded Quantum Genetic Algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded chromosomes, so that much shorter chromosome strings can be obtained. The method of encoding and decoding chromosomes is first described, and then a new adaptive selection scheme for the angle parameters of the rotation gate is put forward based on the core ideas and principles of quantum computation. Eight typical functions are selected for optimization to evaluate the effectiveness and performance of vbQGA against the standard Genetic Algorithm (sGA) and the Genetic Quantum Algorithm (GQA). The simulation results show that vbQGA is significantly superior to sGA in all aspects and outperforms GQA in robustness and solution speed, especially for multidimensional and complicated functions.
Institute of Scientific and Technical Information of China (English)
陈婧; 张苏
2014-01-01
Based on the characteristics of fingerprints and of fingerprint singular points, multi-scale filtering and complex filtering are used to analyze and improve a fingerprint singularity feature extraction algorithm, raising the efficiency of automatic fingerprint identification.
Edge Crossing Minimization Algorithm for Hierarchical Graphs Based on Genetic Algorithms
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
We present an edge crossing minimization algorithm for hierarchical graphs based on genetic algorithms, and compare it with some heuristic algorithms. The proposed algorithm is more efficient and has the following advantages: the framework of the algorithms is unified, the method is simple, and its implementation and revision are easy.
Solar Cell Parameters Extraction from a Current-Voltage Characteristic Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Sanjaykumar J. Patel
2013-05-01
Full Text Available The determination of solar cell parameters is very important for the evaluation of cell performance as well as for extracting the maximum possible output power from the cell. In this paper, we propose a computational binary-coded genetic algorithm (GA) to extract the parameters (I0, Iph and n) for a single diode model of a solar cell from its current-voltage (I-V) characteristic. The algorithm was implemented using LabVIEW as a programming tool and validated by applying it to an I-V curve synthesized from values reported in the literature. The parameter values obtained by the GA are in good agreement with the reported values for silicon and plastic solar cells. After validation, the program was used to extract parameters from an experimental I-V characteristic of a 4 × 4 cm2 polycrystalline silicon solar cell measured under 900 W/m2. The I-V characteristic obtained using the GA shows an excellent match with the experimental one.
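As a rough illustration of the approach above, the sketch below evaluates candidate parameters (Iph, I0, n) of a single-diode model (without series/shunt resistance, which is an assumption; the paper's exact model may differ) against a synthesized I-V curve. A plain random search stands in for the paper's binary-coded GA and LabVIEW implementation, and all bounds are assumptions:

```python
import math
import random

Q_OVER_KT = 1 / 0.02585  # q/kT at roughly 300 K, in 1/V

def diode_current(v, iph, i0, n):
    # single-diode model, no series or shunt resistance (simplifying assumption)
    return iph - i0 * (math.exp(v * Q_OVER_KT / n) - 1.0)

def rms_error(params, curve):
    iph, i0, n = params
    return math.sqrt(sum((diode_current(v, iph, i0, n) - i) ** 2
                         for v, i in curve) / len(curve))

# synthesize a reference I-V curve from "true" parameters
true = (3.0, 1e-9, 1.5)
curve = [(v / 100.0, diode_current(v / 100.0, *true)) for v in range(0, 70)]

# toy random search in place of the paper's binary-coded GA
rng = random.Random(0)
best, best_err = None, float("inf")
for _ in range(20000):
    cand = (rng.uniform(2.0, 4.0),          # Iph bounds (assumed)
            10 ** rng.uniform(-11, -7),      # I0 searched on a log scale
            rng.uniform(1.0, 2.0))           # ideality factor n
    err = rms_error(cand, curve)
    if err < best_err:
        best, best_err = cand, err
```

The GA of the paper replaces the random proposals with selection, crossover and mutation over binary-coded chromosomes; the fitness function is the same RMS deviation from the measured curve.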
Institute of Scientific and Technical Information of China (English)
李荣; 胡志军; 郑家恒
2012-01-01
In order to further enhance the accuracy and efficiency of Web information extraction, and to address the shortcomings in initial value selection and parameter optimization of the hybrid method combining a genetic algorithm with a first-order hidden Markov model, an improved method embedding a genetic algorithm within a second-order hidden Markov model is presented. In the hierarchical preprocessing phase, the text is segmented into appropriate lines, blocks and words using format information and text features. The embedded genetic algorithm and second-order hidden Markov hybrid model are then used to train the parameters: the optimal and sub-optimal chromosomes are retained to modify the initial parameters of the Baum-Welch algorithm, and the genetic algorithm is applied repeatedly to fine-tune the second-order hidden Markov model. Finally, an improved Viterbi algorithm is used to extract the Web information. Experimental results show that the new method outperforms the hybrid of a genetic algorithm and a first-order hidden Markov model in precision, recall and running time.
A fast onboard star-extraction algorithm optimized for the SVOM Visible Telescope
Institute of Scientific and Technical Information of China (English)
无
2010-01-01
The Space multi-band Variable Object Monitor (SVOM) is a proposed Chinese astronomical satellite, dedicated to the detection, localization and measurement of gamma-ray bursts (GRBs) on a cosmological scale. An efficient algorithm is developed for onboard star extraction from the CCD images obtained with the Visible Telescope (VT) onboard the SVOM. The CCD pixel coordinates of the reference stars will be used to refine the astronomical position of the satellite, which will facilitate triggering rapid ground-based follow-up observations of the GRBs. In this algorithm, the image is divided into a number of grid cells and the "global" pixel-value maximum within each cell is taken as the first-guess position of a "bright" star. The correct center position of a star is then computed using a simple iterative method. Applying two additional strategies, i.e., scanning the image only by even (or odd) lines or in a black-white chessboard mode, we propose to further reduce the time needed to extract the stars. To examine the efficiency of the above algorithms, we applied them to experimental images obtained with a ground-based telescope. We find that the accuracy of the astronomical positioning achieved by our method is comparable to that of the conventional star-extraction method, while the former needs about 25 times less CPU time than the latter. This will significantly improve the performance of the SVOM VT mission.
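The grid-cell first guess and iterative center refinement described above can be illustrated with a toy sketch (the cell size, brightness threshold and centroid window below are assumptions for the example, not the mission's actual values):

```python
import math

def brightest_in_cells(img, cell=8, threshold=50):
    # first guess: the global pixel-value maximum within each grid cell
    h, w = len(img), len(img[0])
    guesses = []
    for cy in range(0, h, cell):
        for cx in range(0, w, cell):
            val, x, y = max((img[y][x], x, y)
                            for y in range(cy, min(cy + cell, h))
                            for x in range(cx, min(cx + cell, w)))
            if val > threshold:  # assumed brightness threshold
                guesses.append((x, y))
    return guesses

def refine_center(img, x, y, r=2, iters=5):
    # simple iterative refinement: intensity-weighted centroid around the guess
    for _ in range(iters):
        sx = sy = sw = 0.0
        for yy in range(int(y) - r, int(y) + r + 1):
            for xx in range(int(x) - r, int(x) + r + 1):
                wgt = img[yy][xx]
                sx += wgt * xx; sy += wgt * yy; sw += wgt
        x, y = sx / sw, sy / sw
    return x, y

# tiny synthetic frame with one Gaussian-like star centered at (10, 6)
img = [[0] * 24 for _ in range(16)]
for dy in range(-2, 3):
    for dx in range(-2, 3):
        img[6 + dy][10 + dx] = int(100 * math.exp(-(dx * dx + dy * dy) / 2.0))

stars = [refine_center(img, gx, gy) for gx, gy in brightest_in_cells(img)]
```

The even/odd-line and chessboard scanning strategies of the paper would simply restrict the pixels visited inside `brightest_in_cells` to cut CPU time.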
AN ADAPTIVE DIGITAL IMAGE WATERMARK ALGORITHM BASED ON GRAY-SCALE MORPHOLOGY
Institute of Scientific and Technical Information of China (English)
Tong Ming; Hu Jia; Ji Hongbing
2009-01-01
An adaptive digital image watermarking algorithm with strong robustness, based on gray-scale morphology, is proposed in this paper. The embedding strategies are as follows: the algorithm adaptively seeks and extracts the strong-texture regions of the image; it maps these regions to wavelet tree structures and adaptively embeds the watermark into the wavelet coefficients corresponding to the image's strong-texture regions; and, according to visual masking features, it adaptively adjusts the watermark embedding intensity. Experimental results show that the algorithm is robust to compression, filtering and noise as well as to strong shear attacks. The algorithm is a blind watermarking scheme. The strong-texture region extraction method based on morphology in this algorithm is simple, effective and adaptive to various images.
Research on palmprint identification method based on quantum algorithms.
Li, Hui; Zhang, Zhanzhan
2014-01-01
Quantum image recognition processes image information with quantum algorithms and can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.
Half-global discretization algorithm based on rough set theory
Institute of Scientific and Technical Information of China (English)
Tan Xu; Chen Yingwu
2009-01-01
It is being widely studied how to extract knowledge from a decision table based on rough set theory, and a key problem is how to discretize a decision table having continuous attributes. In order to obtain more reasonable discretization results, a discretization algorithm is proposed which arranges half-global discretization based on the correlation coefficient of each continuous attribute while considering the uniqueness of rough set theory. When choosing heuristic information, stability is combined with rough entropy. In terms of stability, the possibility of classifying objects belonging to a certain sub-interval of a given attribute into neighboring sub-intervals is minimized; by doing this, rational discrete intervals can be determined. Rough entropy is employed to decide the optimal cut-points while guaranteeing the consistency of the decision table after discretization. The idea of the algorithm is illustrated with the Iris data, and experiments are reported comparing the outcomes of four discretized datasets, calculated by the proposed algorithm and four other typical discretization algorithms respectively. Classification rules are then deduced and summarized through rough-set-based classifiers. Results show that the proposed discretization algorithm generates optimal classification accuracy while minimizing the number of discrete intervals, and it displays superiority especially when dealing with decision tables having a large number of attributes.
Method of stereo matching based on genetic algorithm
Lu, Chaohui; An, Ping; Zhang, Zhaoyang
2003-09-01
A new stereo matching scheme based on image edge and genetic algorithm (GA) is presented to improve the conventional stereo matching method in this paper. In order to extract robust edge feature for stereo matching, infinite symmetric exponential filter (ISEF) is firstly applied to remove the noise of image, and nonlinear Laplace operator together with local variance of intensity are then used to detect edges. Apart from the detected edge, the polarity of edge pixels is also obtained. As an efficient search method, genetic algorithm is applied to find the best matching pair. For this purpose, some new ideas are developed for applying genetic algorithm to stereo matching. Experimental results show that the proposed methods are effective and can obtain good results.
A Reversible Image Steganographic Algorithm Based on Slantlet Transform
Directory of Open Access Journals (Sweden)
Sushil Kumar
2013-07-01
Full Text Available In this paper we present a reversible image steganography technique based on the Slantlet transform (SLT) and the advanced encryption standard (AES) method. The proposed method first encodes the message using two source codes, viz., Huffman codes and a self-synchronizing variable-length code known as a T-code. Next, the encoded binary string is encrypted using an improved AES method. The encrypted data so obtained are embedded in the middle and high frequency sub-bands, obtained by applying a 2-level SLT to the cover image, using a thresholding method. The proposed algorithm is compared with existing techniques based on the wavelet transform. The experimental results show that the proposed algorithm can extract the hidden message and recover the original cover image with low distortion. The proposed algorithm offers acceptable imperceptibility and security (two-layer security), and provides robustness against Gaussian and salt-and-pepper noise attacks.
Compressed domain moving object extraction algorithm for MPEG-2 video stream
Yang, Gaobo; Wang, Xiaojing; Zhang, Zhaoyang
2007-11-01
In this paper, a compressed domain moving object extraction algorithm is proposed for MPEG-2 video stream. It is mainly based on the histogram analysis of motion vectors, which can be easily obtained by partially decoding the MPEG-2 video stream. The whole algorithm framework can be divided into three key steps: motion vector pre-processing, histogram analysis of motion vector and motion vector similarity based region growing for final mask generation. A piecewise cubic hermit interpolation is utilized to form a dense motion field. The outputs of region growing algorithm based on similarity matching are the final segmentation results of moving object. These final segmentation results are further smoothed and interpolated by B-spline curve estimation. Experimental results on several test sequences demonstrate that desirable segmentation results are obtained. The accuracy of segmentation results is improved obviously, nearly to pixel level accuracy because of B-spline curve representation of segmented object. For segmentation efficiency, the processing speed is about 30ms per frame, which can meet the requirements of real time applications.
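The motion-vector histogram step above can be illustrated with a toy sketch; the bin size and the treatment of the dominant bin as background motion are my assumptions for the example, not the paper's exact procedure:

```python
def mv_histogram(motion_vectors, bin_size=4):
    # histogram over quantized motion vectors: the dominant bin is taken as
    # global (background) motion, the remaining vectors as object candidates
    hist = {}
    for vx, vy in motion_vectors:
        key = (round(vx / bin_size), round(vy / bin_size))
        hist[key] = hist.get(key, 0) + 1
    background = max(hist, key=hist.get)
    moving = [(vx, vy) for vx, vy in motion_vectors
              if (round(vx / bin_size), round(vy / bin_size)) != background]
    return background, moving

# mostly static scene plus one moving block
mvs = [(0.5, -0.2)] * 90 + [(8.0, 6.0)] * 10
bg, movers = mv_histogram(mvs)
```

In the full algorithm, the candidate vectors seed the similarity-based region growing that produces the final object mask.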
Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio
2015-01-14
The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the "Get Up and Go Test", which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond,WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits.
A novel blind digital watermark algorithm based on Lab color space
Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao
2010-02-01
A blind digital image watermarking algorithm must extract the watermark information without any extra information except the watermarked image itself. However, most current blind watermarking algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermarking algorithm based on the Lab color space which avoids this disadvantage. The algorithm first marks the size and position of the watermark region by embedding regular blocks, called anchor points, in the image spatial domain, and then embeds the watermark into the image. In this way, the watermark information can be easily extracted even after the image has been cropped or scaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. The algorithm has already been used in a copyright protection project and works very well.
Secure OFDM communications based on hashing algorithms
Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele
2007-10-01
In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer without impairing spectral efficiency, exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.
Graphical model construction based on evolutionary algorithms
Institute of Scientific and Technical Information of China (English)
Youlong YANG; Yan WU; Sanyang LIU
2006-01-01
Using Bayesian networks to model promising solutions from the current population of an evolutionary algorithm can ensure an efficient and intelligent search for the optimum. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem and consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by studying the local metric relationship of networks matching the dataset. The paper presents an algorithm that uses this approach to construct a tree model from a set of potential solutions. The method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.
Dilated contour extraction and component labeling algorithm for object vector representation
Skourikhine, Alexei N.
2005-08-01
Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.
A Practical Propositional Knowledge Base Revision Algorithm
Institute of Scientific and Technical Information of China (English)
陶雪红; 孙伟; 等
1997-01-01
This paper gives an outline of knowledge base revision and some recently presented complexity results about propositional knowledge base revision. Different methods for revising propositional knowledge bases have been proposed recently by several researchers, but all of them are intractable in the general case. For practical application, this paper presents a revision method for a special case and gives its corresponding polynomial algorithm.
Object Extraction Based on Evolutionary Morphological Processing
Institute of Scientific and Technical Information of China (English)
LI Bin; PAN Li
2004-01-01
This paper introduces a novel technique for object detection using genetic algorithms and morphological processing. The method employs a kind of object oriented structure element, which is derived by genetic algorithms. The population of morphological filters is iteratively evaluated according to a statistical performance index corresponding to object extraction ability, and evolves into an optimal structuring element using the evolution principles of genetic search. Experimental results of road extraction from high resolution satellite images are presented to illustrate the merit and feasibility of the proposed method.
Second Attribute Algorithm Based on Tree Expression
Institute of Scientific and Technical Information of China (English)
Su-Qing Han; Jue Wang
2006-01-01
One view of finding a personalized solution of reduct in an information system is grounded on the viewpoint that an attribute order can serve as a kind of semantic representation of user requirements. Thus the problem of finding personalized solutions can be transformed into computing the reduct on an attribute order. The second attribute theorem describes the relationship between the set of attribute orders and the set of reducts, and can be used to transform the problem of searching for solutions that meet user requirements into the problem of modifying a reduct based on a given attribute order. An algorithm based on the second attribute theorem is implied, with computation on the discernibility matrix; its time complexity is O(n² × m), where n is the number of objects and m the number of attributes of the information system. This paper presents another effective second attribute algorithm to facilitate the use of the second attribute theorem, with computation on the tree expression of an information system. The time complexity of the new algorithm is linear in n, and the algorithm is proved to be equivalent to the algorithm on the discernibility matrix.
Structure-Based Algorithms for Microvessel Classification
Smith, Amy F.
2015-02-01
© 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.
Fuzzy logic-based diagnostic algorithm for implantable cardioverter defibrillators.
Bárdossy, András; Blinowska, Aleksandra; Kuzmicz, Wieslaw; Ollitrault, Jacky; Lewandowski, Michał; Przybylski, Andrzej; Jaworski, Zbigniew
2014-02-01
The paper presents a diagnostic algorithm for classifying cardiac tachyarrhythmias for implantable cardioverter defibrillators (ICDs). The main aim was to develop an algorithm that could reduce the rate of occurrence of inappropriate therapies, which are often observed in existing ICDs. To achieve low energy consumption, which is a critical factor for implantable medical devices, very low computational complexity of the algorithm was crucial. The study describes and validates such an algorithm and estimates its clinical value. The algorithm was based on the heart rate variability (HRV) analysis. The input data for our algorithm were: RR-interval (I), as extracted from raw intracardiac electrogram (EGM), and in addition two other features of HRV called here onset (ONS) and instability (INST). 6 diagnostic categories were considered: ventricular fibrillation (VF), ventricular tachycardia (VT), sinus tachycardia (ST), detection artifacts and irregularities (including extrasystoles) (DAI), atrial tachyarrhythmias (ATF) and no tachycardia (i.e. normal sinus rhythm) (NT). The initial set of fuzzy rules based on the distributions of I, ONS and INST in the 6 categories was optimized by means of a software tool for automatic rule assessment using simulated annealing. A training data set with 74 EGM recordings was used during optimization, and the algorithm was validated with a validation data set with 58 EGM recordings. Real life recordings stored in defibrillator memories were used. Additionally the algorithm was tested on 2 sets of recordings from the PhysioBank databases: MIT-BIH Arrhythmia Database and MIT-BIH Supraventricular Arrhythmia Database. A custom CMOS integrated circuit implementing the diagnostic algorithm was designed in order to estimate the power consumption. A dedicated Web site, which provides public online access to the algorithm, has been created and is available for testing it. The total number of events in our training and validation sets was 132. In
Feature Selection for Image Retrieval based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Preeti Kushwaha
2016-12-01
Full Text Available This paper describes the development and implementation of feature selection for content-based image retrieval (CBIR). The system extracts multiple features (colour, texture and shape) using three techniques: colour moments, the gray level co-occurrence matrix and the edge histogram descriptor. To reduce the curse of dimensionality, the best features are selected from the feature set by genetic-algorithm-based feature selection. The images are then grouped into similar classes by k-means clustering for fast retrieval and improved execution time. Experimental results show that feature selection using the GA reduces retrieval time and increases retrieval precision, giving better and faster results than a standard image retrieval system. Precision and recall of the proposed approach are also compared with the previous approach for each image class. The CBIR system is more efficient and performs better with feature selection based on the genetic algorithm.
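A minimal sketch of the colour-moment feature extraction and distance-based retrieval mentioned above: each image yields a 9-dimensional feature (mean, standard deviation and skewness per channel), and retrieval ranks the database by Euclidean distance in that space. The tiny synthetic "images" and the plain Euclidean metric are placeholders, and the GA feature-selection and k-means stages are omitted:

```python
import math

def color_moments(pixels):
    # pixels: list of (r, g, b); returns a 9-d feature vector
    feats = []
    n = float(len(pixels))
    for c in range(3):
        vals = [p[c] for p in pixels]
        mean = sum(vals) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
        third = sum((v - mean) ** 3 for v in vals) / n
        skew = math.copysign(abs(third) ** (1 / 3), third)  # signed cube root
        feats += [mean, std, skew]
    return feats

def retrieve(query, database, k=2):
    # rank database images by Euclidean distance in feature space
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    qf = color_moments(query)
    ranked = sorted(database, key=lambda item: dist(color_moments(item[1]), qf))
    return [name for name, _ in ranked[:k]]

red_img = [(200, 10, 10)] * 50 + [(180, 30, 20)] * 50
blue_img = [(10, 10, 200)] * 50 + [(20, 30, 180)] * 50
query = [(190, 20, 15)] * 100
db = [("red", red_img), ("blue", blue_img)]
```

In the full system, texture and shape features are concatenated to this vector before the GA selects the best subset.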
Ghulam Saber, Md; Arif Shahriar, Kh; Ahmed, Ashik; Hasan Sagor, Rakibul
2016-10-01
Particle swarm optimization (PSO) and invasive weed optimization (IWO) algorithms are used for extracting the modeling parameters of materials useful for optics and photonics research community. These two bio-inspired algorithms are used here for the first time in this particular field to the best of our knowledge. The algorithms are used for modeling graphene oxide and the performances of the two are compared. Two objective functions are used for different boundary values. Root mean square (RMS) deviation is determined and compared.
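The parameter-extraction idea above (minimize RMS deviation between model and measured data) can be sketched with a generic PSO. A simple two-parameter exponential model stands in for the graphene oxide material models, and the swarm coefficients are conventional textbook values, not those of the paper:

```python
import math
import random

def rms(params, data, model):
    return math.sqrt(sum((model(x, params) - y) ** 2 for x, y in data) / len(data))

def pso(data, model, bounds, n_particles=30, iters=200, seed=3):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [rms(p, data, model) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            err = rms(pos[i], data, model)
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], err
                if err < gbest_err:
                    gbest, gbest_err = pos[i][:], err
    return gbest, gbest_err

model = lambda x, p: p[0] * math.exp(-p[1] * x)
data = [(x / 10.0, 2.5 * math.exp(-0.8 * x / 10.0)) for x in range(30)]
params, err = pso(data, model, bounds=[(0.1, 10.0), (0.01, 5.0)])
```

IWO would replace the velocity update with seed dispersal around fitter individuals; the objective (RMS deviation from measured data) is unchanged.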
Model based development of engine control algorithms
Dekker, H.J.; Sturm, W.L.
1996-01-01
Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed b
Verification-Based Interval-Passing Algorithm for Compressed Sensing
Wu, Xiaofu; Yang, Zhen
2013-01-01
We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals, using parity check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm alone. Simulation resul...
Efficient algorithms for extracting biological key pathways with global constraints
DEFF Research Database (Denmark)
Baumbach, Jan; Friedrich, T.; Kötzing, T.
2012-01-01
The integrated analysis of data of different types and with various interdependencies is one of the major challenges in computational biology. Recently, we developed KeyPathwayMiner, a method that combines biological networks modeled as graphs with disease-specific genetic expression data gained ... this strategy GLONE (global node exceptions); the previous problem we call INES (individual node exceptions). Since finding GLONE-components is computationally hard, we developed an Ant Colony Optimization algorithm and implemented it with the KeyPathwayMiner Cytoscape framework as an alternative to the INES ... algorithms. KeyPathwayMiner 3.0 now offers both the INES and the GLONE algorithms. It is available as a plugin from Cytoscape and online at http://keypathwayminer.mpi-inf.mpg.de. © 2012 ACM.
Geometric Algorithm for Received Signal Strength Based Mobile Positioning
Directory of Open Access Journals (Sweden)
J. Duha
2005-06-01
Mobile positioning is one of the fastest growing areas for the development of new technologies, services and applications. This paper describes a simple and efficient geometric algorithm using received-signal-strength measurements extracted from at least three base stations. The method is compared with the standard Least Squares (LS) method. The simulation results show that the geometric algorithm gives more accurate location estimates than the LS algorithm under multipath propagation.
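The geometric idea can be sketched under a few stated assumptions: a log-distance path-loss model with hypothetical parameters P0 and n, and exactly three base stations. RSS is converted to distance, and subtracting the circle equations pairwise yields a 2x2 linear system for the position:

```python
import math

def rss_to_distance(rss_dbm, p0_dbm=-40.0, n=2.0):
    """Log-distance path-loss model: RSS = P0 - 10*n*log10(d)."""
    return 10 ** ((p0_dbm - rss_dbm) / (10 * n))

def trilaterate(stations, dists):
    """Position from 3 stations by subtracting circle equations (2x2 solve)."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    d1, d2, d3 = dists
    # 2x(x2-x1) + 2y(y2-y1) = d1^2 - d2^2 + x2^2 - x1^2 + y2^2 - y1^2, etc.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # Cramer's rule on the 2x2 system
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Synthetic check: generate exact RSS for a known target, then recover it.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
target = (30.0, 40.0)
rss = [-40.0 - 20 * math.log10(math.dist(s, target)) for s in stations]
est = trilaterate(stations, [rss_to_distance(r) for r in rss])
```

With noisy RSS the circles no longer intersect at a point, which is where the LS comparison in the paper becomes relevant.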
Optimal Hops-Based Adaptive Clustering Algorithm
Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong
This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before the cluster forms, so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load, and optimal distance theory is applied to discover the practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs the network lifetime, improves the energy utilization rate and transmits more data because of energy balance.
Numerical Algorithms Based on Biorthogonal Wavelets
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multi resolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
Algorithmic Differentiation for Calculus-based Optimization
Walther, Andrea
2010-10-01
For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
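The working-precision derivative computation that AD provides can be illustrated with a minimal forward-mode sketch using dual numbers. This is a generic textbook construction, not the tool or notation used in the talk:

```python
class Dual:
    """Forward-mode AD: carry a value and its derivative through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        # product rule: (uv)' = u'v + uv'
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed the derivative slot with 1 and read it back after evaluating f."""
    return f(Dual(x, 1.0)).dot

# d/dx (x^3 - 2x) at x = 3 is 3*9 - 2 = 25, exact to working precision.
d = derivative(lambda x: x * x * x - 2 * x, 3.0)
```

Unlike finite differences, no step size is chosen and no truncation error is introduced, which is the "working precision" property the abstract refers to.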
Research of Video Steganalysis Algorithm Based on H265 Protocol
Directory of Open Access Journals (Sweden)
Wu Kaicheng
2015-01-01
This paper studies an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, using 26 original video sequences as the research data. It first extracts classification features from training samples as input to an SVM, trains the SVM to obtain a high-quality classification model, and then tests whether there is suspicious information in a video sample. The experimental results show that the VSA based on LSB matching can practically detect secret information embedded in all frames of a carrier video as well as in local frames. In addition, the VSA adopts a frame-by-frame method with strong robustness against attacks in the corresponding time domain.
Directory of Open Access Journals (Sweden)
Daniel H Rapoport
Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters.
Lane Detection Based on Machine Learning Algorithm
Directory of Open Access Journals (Sweden)
Chao Fan
2013-09-01
In order to improve the accuracy and robustness of lane detection in complex conditions, such as shadows and changing illumination, a novel detection algorithm based on machine learning was proposed. After pretreatment, a set of Haar-like filters were used to calculate the eigenvalues in the gray image f(x,y) and edge image e(x,y). These features were then trained using an improved boosting algorithm to obtain the final classification function g(x), which was used to judge whether a point x belongs to the lane or not. To avoid the overfitting of traditional boosting, Fisher discriminant analysis was used to initialize the weights of the samples. Testing on many roads in all conditions showed that this algorithm has good robustness and real-time performance in recognizing the lane in all challenging conditions.
Enhancement of Twins Fetal ECG Signal Extraction Based on Hybrid Blind Extraction Techniques
Directory of Open Access Journals (Sweden)
Ahmed Kareem Abdullah
2017-07-01
ECG machines are noninvasive systems used to measure the heartbeat signal. It is very important to monitor fetal ECG signals during pregnancy to check heart activity and to detect any problem early, before birth; the monitoring of ECG signals therefore has clinical significance and importance. For multi-fetal pregnancies, classical filtering algorithms are not sufficient to separate the ECG signals of the mother and the fetuses. In this paper the mixture consists of three ECG signals: the mother ECG (M-ECG), the Fetal-1 ECG (F1-ECG), and the Fetal-2 ECG (F2-ECG); these signals are extracted using modified blind source extraction (BSE) techniques. The proposed work is based on a hybridization of two BSE techniques to ensure that the extracted signals are well separated. The results demonstrate that the proposed method extracts the useful ECG signals very efficiently.
FACT. New image parameters based on the watershed-algorithm
Energy Technology Data Exchange (ETDEWEB)
Linhoff, Lena; Bruegge, Kai Arno; Buss, Jens [TU Dortmund (Germany). Experimentelle Physik 5b; Collaboration: FACT-Collaboration
2016-07-01
FACT, the First G-APD Cherenkov Telescope, is the first imaging atmospheric Cherenkov telescope that uses Geiger-mode avalanche photodiodes (G-APDs) as photo sensors. The raw data produced by this telescope are processed in an analysis chain, which leads to a classification of the primary particle that induces a shower and to an estimation of its energy. One important step in this analysis chain is parameter extraction from shower images. New parameters are computed by applying a watershed algorithm to the camera image. Perceiving the brightness of a pixel as height, a set of pixels can be seen as a 'landscape' with hills and valleys; a watershed algorithm groups into one cluster all pixels belonging to the same hill. From the resulting segmented image, one can derive new parameters for later analysis steps, e.g. the number of clusters, their shapes and their contained photon charge. For FACT data, the FellWalker algorithm was chosen from the class of watershed algorithms because it was designed to work on discrete distributions, in this case the pixels of a camera image. The FellWalker algorithm is implemented in FACT-tools, which provides the low-level analysis framework for FACT. This talk focuses on the computation of new, FellWalker-based image parameters which can be used for gamma-hadron separation. Additionally, their distributions for real and Monte Carlo data are compared.
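The hill-climbing idea behind FellWalker-style watershed segmentation can be sketched as follows. This toy version (a steepest-ascent walk on an 8-connected pixel grid, with no noise thresholds or minimum cluster sizes) is an illustrative simplification, not the FACT-tools implementation:

```python
def fellwalker(image):
    """Toy hill-climb segmentation: each pixel follows its steepest-ascent
    neighbour until a local maximum; pixels sharing a peak share a cluster."""
    h, w = len(image), len(image[0])

    def highest_neighbour(p):
        y, x = p
        best, best_val = p, image[y][x]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and image[ny][nx] > best_val:
                    best, best_val = (ny, nx), image[ny][nx]
        return best

    labels = {}
    for y in range(h):
        for x in range(w):
            p = (y, x)
            path = [p]
            while True:                      # walk uphill to a local maximum
                q = highest_neighbour(p)
                if q == p:
                    break
                p = q
                path.append(p)
            for r in path:
                labels[r] = p                # peak position acts as cluster id
    return labels

# Two "hills" separated by a valley of zeros -> two clusters.
img = [
    [1, 2, 1, 0, 1, 3, 1],
    [2, 5, 2, 0, 2, 7, 2],
    [1, 2, 1, 0, 1, 3, 1],
]
labels = fellwalker(img)
n_clusters = len(set(labels.values()))
```

Per-cluster parameters such as pixel count and summed charge then follow by grouping `labels` by cluster id.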
Energy Technology Data Exchange (ETDEWEB)
Lee, Hong Kyu; Seo, Won Chan; Park, Chan [Pukyong National University, Busan (Korea, Republic of); Lee, Jong O; Son, Young Ho [KIMM, Daejeon (Korea, Republic of)
2009-10-15
An ultrasonic signal processing algorithm was developed for extracting information about cracks generated around the keyway of a turbine rotor disk. B-scan images were obtained using keyway specimens and an ultrasonic scan system with an x-y position controller. The B-scan images were used as input images for two-dimensional signal processing, and the algorithm was constructed with four processing stages: pre-processing, crack candidate region detection, crack region classification and crack information extraction. It is confirmed by experiments that the developed algorithm is effective for the quantitative evaluation of cracks generated around the keyway of a turbine rotor disk.
AN OPTIMIZATION ALGORITHM BASED ON BACTERIA BEHAVIOR
Directory of Open Access Journals (Sweden)
Ricardo Contreras
2014-09-01
Paradigms based on competition have been shown to be useful for solving difficult problems. In this paper we present a new approach for solving hard problems using a collaborative philosophy, which can produce paradigms as interesting as those found in algorithms based on a competitive philosophy. Furthermore, we show that its performance on problems with combinatorial explosion is comparable to the performance obtained using a classic evolutionary approach.
Directory of Open Access Journals (Sweden)
Mohammad Subhi Al-batah
2014-01-01
To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods to screen for cervical cancer (i.e., the Pap smear and liquid-based cytology (LBC)) are time-consuming and dependent on the skill of the cytopathologist, and are thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic feature extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in a parallel combination to produce a model with a multi-input multi-output structure. The system is capable of classifying a cervical cell image into three groups, namely normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove that the AFE algorithm is as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.
Perceptual Object Extraction Based on Saliency and Clustering
Directory of Open Access Journals (Sweden)
Qiaorong Zhang
2010-08-01
Object-based visual attention has received increasing interest in recent years. The perceptual object is the basic attention unit of object-based visual attention, and its definition and extraction is one of the key technologies in object-based visual attention computation models. A novel perceptual object definition and extraction method is proposed in this paper. Based on Gestalt theory and visual feature integration theory, a perceptual object is defined using homogeneous regions, salient regions and edges. An improved saliency map generation algorithm is employed first, and salient edges are extracted from the saliency map. Then a graph-based clustering algorithm is introduced to obtain homogeneous regions in the image. Finally, an integration strategy is adopted to combine salient edges and homogeneous regions to extract perceptual objects. The proposed perceptual object extraction method has been tested on many natural images; the experimental results and analysis presented in this paper show that the proposed method is reasonable and valid.
Web-Based Information Extraction Technology
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Information extraction techniques on the Web are a current research hotspot. Many information extraction techniques based on different principles have appeared, with different capabilities. We classify the existing information extraction techniques by their underlying principles and analyze the methods these approaches use for adding semantic information, defining schemas, expressing rules, and locating semantic items and objects. Based on this survey and analysis, several open problems are discussed.
Web entity extraction based on entity attribute classification
Li, Chuan-Xi; Chen, Peng; Wang, Ru-Jing; Su, Ya-Ru
2011-12-01
Large amounts of entity data are continuously published on web pages, and extracting these entities automatically for further application is very significant. Rule-based entity extraction methods yield promising results; however, they are labor-intensive and hard to scale. This paper proposes a web entity extraction method based on entity attribute classification, which avoids the manual annotation of samples. First, web pages are segmented into different blocks by the Vision-based Page Segmentation (VIPS) algorithm, and a binary LibSVM classifier is trained to retrieve the candidate blocks which contain the entity contents. Second, the candidate blocks are partitioned into candidate items, LibSVM classifiers annotate the attributes of the items, and the annotation results are aggregated into an entity. Results show that the proposed method performs well in extracting agricultural supply-and-demand entities from web pages.
Institute of Scientific and Technical Information of China (English)
郝旺; 杨润泽; 尹晓春
2011-01-01
In machine-vision recognition of the riveted points of cannon-shell fuses, the key is to effectively eliminate the various kinds of interference in the image and correctly extract the riveted-point features; accurate feature extraction is the precondition for safely removing the riveted points. Based on the structural characteristics of the riveted points, the causes and characteristics of the various kinds of interference are analyzed, and on top of basic operations such as Gaussian filtering, threshold segmentation and morphological operations, a riveted-point feature extraction method based on line scanning is proposed. Test results show that the proposed method is feasible, with a correct recognition rate above 90%, and that its efficient and robust feature extraction makes the accurate removal of the cannon-shell riveted points possible.
Biomedical Image Processing Using FCM Algorithm Based on the Wavelet Transform
Institute of Scientific and Technical Information of China (English)
YAN Yu-hua; WANG Hui-min; LI Shi-pu
2004-01-01
An effective processing method for biomedical images, the fuzzy C-means (FCM) algorithm based on the wavelet transform, is investigated. Using hierarchical wavelet decomposition, an original image can be decomposed into one low-resolution image and several detail images. The segmentation starts at the lowest resolution with the FCM clustering algorithm and the texture features extracted from the various sub-bands. With the improvement of the FCM algorithm, the number of FCM iterations was reduced and the accuracy of segmentation was improved.
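The alternating membership/centroid updates at the heart of FCM can be sketched in one dimension as follows. The data, the fuzzifier m = 2 and the two-cluster initialization are illustrative assumptions; the paper applies FCM to wavelet sub-band texture features rather than raw scalars:

```python
def fcm(data, c=2, m=2.0, iters=50):
    """Fuzzy C-means on 1-D data: alternate membership and centroid updates."""
    centers = [min(data), max(data)]        # simple initialization for c = 2
    u = []
    for _ in range(iters):
        # membership u[i][j] of point i in cluster j:
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in data:
            dists = [abs(x - v) or 1e-12 for v in centers]  # avoid div by zero
            row = []
            for j in range(c):
                s = sum((dists[j] / dists[k]) ** (2 / (m - 1)) for k in range(c))
                row.append(1.0 / s)
            u.append(row)
        # centroid update: membership-weighted mean with weights u^m
        centers = [
            sum(u[i][j] ** m * data[i] for i in range(len(data)))
            / sum(u[i][j] ** m for i in range(len(data)))
            for j in range(c)
        ]
    return centers, u

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
centers, u = fcm(data)
```

For two well-separated groups the centers converge near the crisp cluster means (about 1.0 and 5.0 here), with memberships close to 0/1.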
Cloud-based Evolutionary Algorithms: An algorithmic study
Merelo, Juan-J; Mora, Antonio M; Castillo, Pedro; Romero, Gustavo; Laredo, JLJ
2011-01-01
After a proof of concept using Dropbox(tm), a free storage and synchronization service, showed that an evolutionary algorithm using several dissimilar computers connected via WiFi or Ethernet had a good scaling behavior in terms of evaluations per second, it remains to be proved whether that effect also translates to the algorithmic performance of the algorithm. In this paper we will check several different, and difficult, problems, and see what effects the automatic load-balancing and asynchrony have on the speed of resolution of problems.
A novel tree structure based watermarking algorithm
Lin, Qiwei; Feng, Gui
2008-03-01
In this paper, we propose a new blind watermarking algorithm for images based on a tree structure. The algorithm embeds the watermark in the wavelet transform domain, and the embedding positions are determined by the significant coefficient wavelet tree (SCWT) structure, which follows the same idea as the embedded zero-tree wavelet (EZW) compression technique. According to EZW concepts, we obtain coefficients that are related to each other by a tree structure. This relationship among the wavelet coefficients allows our technique to embed more watermark data. If the watermarked image is attacked such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronization. The algorithm also uses a visually adaptive scheme to insert the watermark so as to minimize watermark perceptibility. In addition to the watermark, a template is inserted into the watermarked image at the same time. The template contains synchronization information, allowing the detector to determine the type of geometric transformation applied to the watermarked image. Experimental results show that the proposed watermarking algorithm is robust against most signal processing attacks, such as JPEG compression, median filtering, sharpening and rotation. It is also an adaptive method which performs well in finding the best areas to insert a stronger watermark.
Fast Algorithms for Model-Based Diagnosis
Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan
2005-01-01
Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail the amounts of computation that grow exponentially with the number of components of the system.
A unified self-stabilizing neural network algorithm for principal and minor components extraction.
Kong, Xiangyu; Hu, Changhua; Ma, Hongguang; Han, Chongzhao
2012-02-01
Recently, many unified learning algorithms have been developed for principal component analysis and minor component analysis. These unified algorithms can be used to extract principal components and, if altered simply by changing a sign, can also serve as minor component extractors, which is of practical significance in implementations. This paper proposes a unified self-stabilizing neural network learning algorithm for principal and minor component extraction, and studies the stability of the proposed unified algorithm via the fixed-point analysis method. The proposed unified self-stabilizing algorithm is then extended to tracking the principal subspace (PS) and minor subspace (MS). The averaging differential equation and the energy function associated with the unified algorithm for tracking the PS and MS are given. It is shown that the averaging differential equation globally asymptotically converges to an invariance set, and that the corresponding energy function exhibits a unique global minimum, attained if and only if its state matrices span the PS or MS of the autocorrelation matrix of the vector data stream. It is concluded that the proposed unified algorithm can efficiently track an orthonormal basis of the PS or MS. Simulations are carried out to further illustrate the theoretical results achieved.
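The principal-component half of such neural learning rules can be illustrated with Oja's classic self-normalizing update (a standard textbook construction, not the paper's specific self-stabilizing algorithm). Flipping the sign of the learning term is the kind of alteration that turns a PCA rule into an MCA rule, although a plain sign flip is not stable on its own, which is precisely why self-stabilizing variants like the paper's are needed:

```python
import math, random

def oja_pca(samples, dim, eta=0.02, epochs=50, seed=2):
    """Oja's rule: w += eta * (y*x - y^2 * w) with y = w.x;
    w converges to the principal eigenvector of the data covariance."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(dim)]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + eta * (y * xi - y * y * wi) for wi, xi in zip(w, x)]
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [wi / norm for wi in w]            # return a unit vector

# Synthetic data stretched along the direction (1, 1)/sqrt(2).
rng = random.Random(3)
samples = []
for _ in range(200):
    t, s = rng.gauss(0, 1), rng.gauss(0, 0.1)
    samples.append(((t + s) / math.sqrt(2), (t - s) / math.sqrt(2)))

w = oja_pca(samples, dim=2)
```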
Virtual speed sensors based algorithm for expressway traffic state estimation
Institute of Scientific and Technical Information of China (English)
XU DongWei; DONG HongHui; JIA LiMin; QIN Yong
2012-01-01
The accurate estimation of the expressway traffic state can support decision-making for both travelers and traffic managers. Speed is one of the most representative parameters of the traffic state, so the spatial distribution of speed along the expressway can be taken as an equivalent of the expressway traffic state. In this paper, an algorithm based on virtual speed sensors (VSS) is presented to estimate the expressway traffic state (the speed spatial distribution). To obtain this spatial distribution, virtual speed sensors are defined between adjacent traffic flow sensors: the speed data extracted from the traffic flow sensors in time series are mapped to a space series to design the virtual speed sensors, and the speed at each virtual speed sensor is then calculated with a weight matrix relating it to the speed data extracted from the traffic flow sensors. Finally, the expressway traffic state (the speed spatial distribution) is obtained. The acquisition of the average travel speed of the expressway is taken as an application of this traffic state estimation algorithm. One typical expressway in Beijing is adopted for the experimental analysis, and the results prove that this VSS-based traffic state estimation approach is feasible and can achieve high accuracy.
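The mapping from real sensors to virtual ones can be sketched under the simplifying assumption that the weight matrix reduces to inverse-distance weighting of the two neighbouring real sensors; the paper's actual weight matrix is not reproduced here:

```python
def virtual_speeds(sensor_pos, sensor_speed, n_virtual):
    """Place virtual sensors evenly between adjacent real sensors and estimate
    each one's speed by inverse-distance weighting of its two neighbours."""
    out = []
    for (p1, v1), (p2, v2) in zip(zip(sensor_pos, sensor_speed),
                                  zip(sensor_pos[1:], sensor_speed[1:])):
        for k in range(1, n_virtual + 1):
            p = p1 + (p2 - p1) * k / (n_virtual + 1)
            w1, w2 = p2 - p, p - p1          # nearer neighbour gets more weight
            out.append((p, (w1 * v1 + w2 * v2) / (w1 + w2)))
    return out

# Real sensors at km 0, 2, 4 measuring 80, 60, 90 km/h; 1 virtual sensor per gap.
vs = virtual_speeds([0.0, 2.0, 4.0], [80.0, 60.0, 90.0], n_virtual=1)
```

The resulting (position, speed) pairs form the dense speed spatial distribution from which an average travel speed can be integrated.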
Research of the Kernel Operator Library Based on Cryptographic Algorithm
Institute of Scientific and Technical Information of China (English)
王以刚; 钱力; 黄素梅
2001-01-01
The variety of encryption mechanisms and algorithms in conventional use have some limitations. A kernel operator library based on cryptographic algorithms is put forward. Owing to the impenetrability of its algorithms, a data transfer system using the cryptographic algorithm library has many remarkable advantages over traditional algorithms: algorithms can be rebuilt and optimized, easily added and deleted, and the security is improved. The user can choose any of the algorithms to counter any attack, because the cryptographic algorithm library is extensible.
Network-based recommendation algorithms: A review
Yu, Fei; Gillard, Sebastien; Medo, Matus
2015-01-01
Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use - such as the possible influence of recommendation on the evolution of systems that use it - and finally discuss open research directions and challenges.
Network-based recommendation algorithms: A review
Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš
2016-06-01
Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of systems that use it, and finally discuss open research directions and challenges.
Directory of Open Access Journals (Sweden)
Nikky Suryawanshi Rai
2014-01-01
Association rule mining is a very efficient technique for finding strong relations between correlated data, and this correlation makes the extraction process meaningful. For mining positive and negative rules, a variety of algorithms are used, such as the Apriori algorithm and tree-based algorithms. A number of algorithms perform well but produce large numbers of negative association rules and also suffer from the multi-scan problem. The idea of this paper is to eliminate these problems and reduce the large number of negative rules. Hence we propose an improved approach to mine interesting positive and negative rules based on a genetic algorithm and the MLMS (multi-level multiple support) algorithm. In this method a multi-level multiple-support data table encoded as 0 and 1 is used; this division reduces the scanning time of the database. The proposed algorithm, MIPNAR_GA, combines MLMS and a genetic algorithm to mine interesting positive and negative rules from frequent and infrequent pattern sets. The algorithm is accomplished in three phases: (a) extract frequent and infrequent pattern sets using the Apriori method; (b) efficiently generate positive and negative rules; (c) prune redundant rules by applying interestingness measures. Rule optimization is performed by the genetic algorithm, and the algorithm is evaluated on real-world datasets such as heart disease data and standard datasets from the UCI machine learning repository.
Automatic Rotation Recovery Algorithm for Accurate Digital Image and Video Watermarks Extraction
Directory of Open Access Journals (Sweden)
Nasr addin Ahmed Salem Al-maweri
2016-11-01
Research in digital watermarking has evolved rapidly in the current decade. This evolution has brought various methods and algorithms for watermarking digital images and videos. The methods introduced in the field vary from weak to robust, according to how well the method keeps the watermark present in the face of attacks. Rotation attacks applied to the watermarked media are among the serious attacks which many, if not most, algorithms cannot survive. In this paper, a new automatic rotation recovery algorithm is proposed. This algorithm can be plugged into the extraction component of any image or video watermarking algorithm. Its main job is to detect the geometric distortion applied to the watermarked image or image sequence, recover the distorted scene to its original state in a blind and automatic way, and then pass it to the extraction procedure. The work is currently limited to recovering zero-padded rotations; images cropped after rotation are left as future work. The proposed algorithm was tested on top of an extraction component, and both the recovery accuracy and the accuracy of the extracted watermarks showed a high performance level.
A Single Pattern Matching Algorithm Based on Character Frequency
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
Based on the study of single-pattern matching, the MBF algorithm is proposed, imitating the way humans search strings. The algorithm preprocesses the pattern using the idea of the Quick Search algorithm together with information about the already-matched pattern prefix and suffix. In the searching phase, the algorithm makes use of character usage frequencies and the continue-skip idea. Experiments show that the MBF algorithm is more efficient than other algorithms.
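The Quick Search component that MBF builds on can be sketched as follows (Sunday's bad-character shift rule); MBF's frequency-ordering and continue-skip refinements are not reproduced here:

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: on each attempt, shift by the bad-character rule
    applied to the text character just past the current window."""
    m, n = len(pattern), len(text)
    # shift[c] = distance from the rightmost occurrence of c to past the end
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, i = [], 0
    while i + m <= n:
        if text[i:i + m] == pattern:
            hits.append(i)
        nxt = text[i + m] if i + m < n else None
        i += shift.get(nxt, m + 1)   # char absent from pattern: skip the window
    return hits

hits = quick_search("abracadabra", "abra")
```

On the example above the window is advanced past characters that cannot start a match, which is what gives Quick Search its sublinear average behaviour.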
FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm
Directory of Open Access Journals (Sweden)
Tomyslav Sledevič
2013-05-01
The paper describes an FPGA-based implementation of a Lithuanian isolated-word recognition algorithm. An FPGA is selected so that parallel processing can be implemented in VHDL, ensuring fast signal processing at a low clock rate. Cepstrum analysis was applied to extract features from the voice signal, and the dynamic time warping (DTW) algorithm was used to compare the vectors of cepstrum coefficients. A library of features for 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent records demonstrated a recognition rate of 94%; a recognition rate of 58% was achieved for speaker-independent records. Calculation of the cepstrum coefficients took 8.52 ms at a 50 MHz clock, while 100 DTW comparisons took 66.56 ms at a 25 MHz clock. Article in Lithuanian.
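The DTW comparison of cepstral coefficient sequences can be sketched with the classic dynamic-programming recurrence; scalar sequences stand in for the per-frame cepstrum vectors here:

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    feature sequences (e.g. per-frame cepstral coefficients)."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],       # a[i-1] repeats against b
                D[i][j - 1],       # b[j-1] repeats against a
                D[i - 1][j - 1])   # one-to-one match
    return D[n][m]

# A time-stretched copy of a sequence is much closer under DTW than a
# genuinely different sequence of similar length.
ref     = [0, 1, 2, 3, 2, 1, 0]
stretch = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other   = [3, 3, 0, 0, 3, 3, 0]
```

This tolerance to non-uniform time stretching is what makes DTW suitable for isolated-word matching, where the same word is spoken at varying speeds.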
Schaap, Michiel; Metz, Coert T.; van Walsum, Theo; van der Giessen, Alina G.; Weustink, Annick C.; Mollet, Nico R.; Bauer, Christian; Bogunović, Hrvoje; Castro, Carlos; Deng, Xiang; Dikici, Engin; O’Donnell, Thomas; Frenay, Michel; Friman, Ola; Hoyos, Marcela Hernández; Kitslaar, Pieter H.; Krissian, Karl; Kühnel, Caroline; Luengo-Oroz, Miguel A.; Orkisz, Maciej; Smedby, Örjan; Styner, Martin; Szymczak, Andrzej; Tek, Hüseyin; Wang, Chunliang; Warfield, Simon K.; Zambal, Sebastian; Zhang, Yong; Krestin, Gabriel P.; Niessen, Wiro J.
2013-01-01
Efficiently obtaining a reliable coronary artery centerline from computed tomography angiography data is relevant in clinical practice. Whereas numerous methods have been presented for this purpose, up to now no standardized evaluation methodology has been published to reliably evaluate and compare the performance of the existing or newly developed coronary artery centerline extraction algorithms. This paper describes a standardized evaluation methodology and reference database for the quantitative evaluation of coronary artery centerline extraction algorithms. The contribution of this work is fourfold: 1) a method is described to create a consensus centerline with multiple observers, 2) well-defined measures are presented for the evaluation of coronary artery centerline extraction algorithms, 3) a database containing thirty-two cardiac CTA datasets with corresponding reference standard is described and made available, and 4) thirteen coronary artery centerline extraction algorithms, implemented by different research groups, are quantitatively evaluated and compared. The presented evaluation framework is made available to the medical imaging community for benchmarking existing or newly developed coronary centerline extraction algorithms. PMID:19632885
Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong
2015-12-01
In the distributed optical fiber sensing system based on Brillouin scattering, strain and temperature are the main measured parameters, which can be obtained by analyzing the Brillouin center frequency shift. A novel algorithm combining the cuckoo search algorithm (CS) with an improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features under different linear weight ratios, linewidth combinations and SNRs. Moreover, a BOTDR temperature measuring system based on electro-optical frequency shift is set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show that there is a good linear relationship between the Brillouin center frequency shift and temperature changes.
Chen, Limin; Liang, Yin; Wan, Guojin
2012-04-01
A regularization approach is introduced into the online identification of an inverse model for predistortion. It is based on a modified backpropagation Levenberg-Marquardt algorithm with a sliding window. Adaptive predistorters with feedback were identified based on both the direct learning and indirect learning architectures, and the length of the sliding window is discussed. The algorithm is tested by identification of an infinite impulse response Wiener predistorter and compared with the Recursive Prediction Error Method (RPEM) algorithm and the Nonlinear Filtered Least-Mean-Square (NFxLMS) algorithm. It is found that the proposed algorithm is much more efficient than either of the other techniques. The values of the parameters are also smaller than those extracted by the ordinary least-squares algorithm, since the proposed algorithm constrains the L2-norm of the parameters.
Extraction of Textual Causal Relationships based on Natural Language Processing
Directory of Open Access Journals (Sweden)
Sepideh Jamshidi-Nejad
2015-11-01
Full Text Available Natural language processing is a highly important subcategory in the wide area of artificial intelligence. The aim of natural language processing is to apply appropriate computational algorithms to sophisticated linguistic operations in order to extract and create computational theories from languages. To achieve this goal, the knowledge of linguists is needed in addition to computer science. In the field of linguistics, the syntactic and semantic relations of words and phrases and the extraction of causation are very significant, the latter being an information retrieval challenge. Recently, there has been increased attention towards the automatic extraction of causation from textual data sets. Previous research extracted causal relations from unstructured data sets using knowledge-based inference technologies and manual coding; finding comprehensive approaches for the detection and extraction of causal arguments is now a research area in the field of natural language processing. In this paper, a three-step approach is established through which the position of words in syntax trees is obtained, extracting causation from causal and non-causal sentences of Web text. The arguments of events were extracted according to the dependency trees of phrases, implemented with Python packages. Then potential causal relations were extracted from specific nodes of the tree. In the final step, a statistical model is introduced for measuring the potential causal relations. Experimental results and evaluations with Recall, Precision and F-measure metrics show the accuracy and efficiency of the suggested model.
Chandratilleke, Dinusha; Silvestrini, Roger; Culican, Sue; Campbell, David; Byth-Wilson, Karen; Swaminathan, Sanjay; Lin, Ming-Wei
2016-08-01
Extractable nuclear antigen (ENA) antibody testing is often requested in patients with suspected connective tissue diseases. Most laboratories in Australia use a two-step process involving a high-sensitivity screening assay followed by a high-specificity confirmation test. Multiplexing technology with the Addressable Laser Bead Immunoassay (e.g., FIDIS) offers simultaneous detection of multiple antibody specificities, allowing single-step screening and confirmation. We compared our current diagnostic laboratory testing algorithm [Organtec ELISA screen / Euroimmun line immunoassay (LIA) confirmation] and the FIDIS Connective Profile. A total of 529 samples (443 consecutive + 86 with known autoantibody positivity) were run through both algorithms, and 479 samples (90.5%) were concordant. The same autoantibody profile was detected in 100 samples (18.9%) and 379 were concordant negative samples (71.6%). The 50 discordant samples (9.5%) were subdivided into 'likely FIDIS or current method correct' or 'unresolved' based on ancillary data. 'Unresolved' samples (n = 25) were subclassified into 'potentially' versus 'potentially not' clinically significant based on the change to clinical interpretation. Only nine samples (1.7%) were deemed to be 'potentially clinically significant'. Overall, we found that the FIDIS Connective Profile ENA kit is non-inferior to the current ELISA screen/LIA characterisation. Reagent and capital costs may be limiting factors in using the FIDIS, but potential benefits include a single-step analysis and simultaneous detection of dsDNA antibodies.
A Hybrid Algorithm for Satellite Data Transmission Schedule Based on Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
LI Yun-feng; WU Xiao-yue
2008-01-01
A hybrid scheduling algorithm based on a genetic algorithm is proposed in this paper for reconnaissance satellite data transmission. First, based on a description of satellite data transmission requests, the satellite data transmission task model and the satellite data transmission scheduling problem model are established. Second, the conflicts in scheduling are discussed; according to the meaning of possible conflict, a method to divide the possible-conflict task set is given. Third, a hybrid algorithm which consists of a genetic algorithm and heuristic information is presented. The heuristic information comes from two concepts, conflict degree and conflict number. Finally, an example demonstrates the algorithm's feasibility and its better performance compared with traditional algorithms.
Generalized Rule Induction Based on Immune Algorithm
Institute of Scientific and Technical Information of China (English)
郑建国; 刘芳; 焦李成
2002-01-01
A generalized rule induction mechanism for knowledge bases, based on an immune algorithm, builds an inheritance hierarchy of classes from the content of their knowledge objects. This hierarchy facilitates group-related processing tasks such as answering set queries, discriminating between objects, finding similarities among objects, etc. Building this hierarchy is a difficult task for knowledge engineers. Conceptual induction may be used to automate or assist engineers in the creation of such a classification structure. This paper introduces a new conceptual rule induction method, which addresses the problem of clustering large amounts of structured objects. The conditions under which the method is applicable are discussed.
Continuous Attributes Discretization Algorithm based on FPGA
Directory of Open Access Journals (Sweden)
Guoqiang Sun
2013-07-01
Full Text Available The paper addresses the problem of discretization of continuous attributes in rough set theory. Discretization of continuous attributes is an important part of rough set theory because most of the data we usually obtain are continuous. In order to improve the processing speed of discretization, we propose an FPGA-based discretization algorithm for continuous attributes, making use of the speed advantage of FPGA. Combined with the attribute dependency degree of rough set theory, the discretization system was divided into eight modules following a block design. This method can save much pretreatment time in rough set analysis and improve operation efficiency. Extensive experiments on fault diagnosis data of a certain fighter aircraft validate the effectiveness of the algorithm.
Sparsity-based algorithm for detecting faults in rotating machines
He, Wangpeng; Ding, Yin; Zi, Yanyang; Selesnick, Ivan W.
2016-05-01
This paper addresses the detection of periodic transients in vibration signals so as to detect faults in rotating machines. For this purpose, we present a method to estimate periodic-group-sparse signals in noise. The method is based on the formulation of a convex optimization problem. A fast iterative algorithm is given for its solution. A simulated signal is formulated to verify the performance of the proposed approach for periodic feature extraction. The detection performance of comparative methods is compared with that of the proposed approach via RMSE values and receiver operating characteristic (ROC) curves. Finally, the proposed approach is applied to single fault diagnosis of a locomotive bearing and compound faults diagnosis of motor bearings. The processed results show that the proposed approach can effectively detect and extract the useful features of bearing outer race and inner race defect.
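The convex formulation above can be illustrated with a deliberately simplified stand-in: plain l1-regularized denoising solved by iterative soft thresholding. The paper's actual model adds periodicity and group structure, which are omitted here; the signal values and the penalty weight are illustrative.

```python
def soft(x, t):
    """Soft-threshold operator, the proximal step behind many sparse
    convex denoising formulations."""
    return [max(abs(v) - t, 0.0) * (1 if v >= 0 else -1) for v in x]

def sparse_denoise(y, lam, iters=50):
    """Minimise 0.5*||y - x||^2 + lam*||x||_1 by iterated
    gradient + proximal steps (ISTA). A simplified stand-in for the
    periodic-group-sparse model: grouping and periodicity terms are
    omitted, so only isolated transients are favoured."""
    x = [0.0] * len(y)
    for _ in range(iters):
        # Gradient step on the quadratic term, then the proximal step.
        x = soft([xi - (xi - yi) for xi, yi in zip(x, y)], lam)
    return x

y = [0.1, -0.2, 5.0, 0.05, -4.0, 0.0]   # two transients in small noise
print(sparse_denoise(y, lam=0.5))        # small values zeroed, transients kept
```

The small "noise" samples are driven to zero while the two large transients survive (shrunk by the penalty), which is the fault-feature extraction effect the abstract describes.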
An Algorithm to Extract Rules from Artificial Neural Networks for Medical Diagnosis Problems
Kamruzzaman, S M
2010-01-01
Artificial neural networks (ANNs) have been successfully applied to solve a variety of classification and function approximation problems. Although ANNs can generally predict better than decision trees for pattern classification problems, ANNs are often regarded as black boxes since their predictions cannot be explained clearly like those of decision trees. This paper presents a new algorithm, called rule extraction from ANNs (REANN), to extract rules from trained ANNs for medical diagnosis problems. A standard three-layer feedforward ANN with four-phase training is the basis of the proposed algorithm. In the first phase, the number of hidden nodes in ANNs is determined automatically by a constructive algorithm. In the second phase, irrelevant connections and input nodes are removed from trained ANNs without sacrificing the predictive accuracy of ANNs. The continuous activation values of the hidden nodes are discretized by using an efficient heuristic clustering algorithm in the third phase. Finally, rules ar...
Multi-Agent Reinforcement Learning Algorithm Based on Action Prediction
Institute of Scientific and Technical Information of China (English)
TONG Liang; LU Ji-lian
2006-01-01
Multi-agent reinforcement learning algorithms are studied. A prediction-based multi-agent reinforcement learning algorithm is presented for a multi-robot cooperation task. A multi-robot cooperation experiment based on the multi-agent inverted pendulum was conducted to test the efficiency of the new algorithm, and the experimental results show that the new algorithm can achieve the cooperation strategy much faster than the primitive multi-agent reinforcement learning algorithm.
Pattern Based Term Extraction Using ACABIT System
Takeuchi, Koichi; Koyama, Teruo; Daille, Béatrice; Romary, Laurent
2009-01-01
In this paper, we propose a pattern-based term extraction approach for Japanese, applying ACABIT system originally developed for French. The proposed approach evaluates termhood using morphological patterns of basic terms and term variants. After extracting term candidates, ACABIT system filters out non-terms from the candidates based on log-likelihood. This approach is suitable for Japanese term extraction because most of Japanese terms are compound nouns or simple phrasal patterns.
A Novel Short-term Event Extraction Algorithm for Biomedical Signals.
Yazdani, Sasan; Fallet, Sibylle; Vesin, Jean-Marc
2017-06-21
In this paper we propose a fast novel non-linear filtering method named Relative-Energy (Rel-En) for robust short-term event extraction from biomedical signals. We developed an algorithm that extracts short- and long-term energies in a signal and provides a coefficient vector with which the signal is multiplied, heightening events of interest. This algorithm is thoroughly assessed on benchmark datasets in three different biomedical applications, namely ECG QRS-complex detection, EEG K-complex detection, and imaging photoplethysmography (iPPG) peak detection. Rel-En successfully identified the events in these settings. Compared to the state-of-the-art, better or comparable results were obtained on QRS-complex and K-complex detection. For iPPG peak detection, the proposed method was used as a preprocessing step to a fixed-threshold algorithm, which led to a significant improvement in overall results. While easily defined and computed, Rel-En robustly extracted short-term events of interest. The proposed algorithm can be implemented by two filters and its parameters can be selected easily and intuitively. Furthermore, the Rel-En algorithm can be used in other biomedical signal processing applications where short-term event extraction is needed.
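The two-filter structure can be sketched directly from the description: a short-term and a long-term energy estimate, whose ratio forms the coefficient vector that multiplies the signal. The window widths below are illustrative assumptions, not the paper's parameter values.

```python
def moving_energy(x, w):
    """Mean of squared samples in a centred window of width w (odd);
    one of the two simple filters Rel-En is built from."""
    h = w // 2
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)
        out.append(sum(v * v for v in x[lo:hi]) / (hi - lo))
    return out

def rel_en(x, short_w=3, long_w=11, eps=1e-12):
    """Relative-Energy sketch: the short-to-long-term energy ratio is
    used as a coefficient vector that multiplies the input, heightening
    brief events. Window widths are illustrative only."""
    s = moving_energy(x, short_w)
    l = moving_energy(x, long_w)
    return [xi * si / (li + eps) for xi, si, li in zip(x, s, l)]

x = [0.0] * 10 + [3.0] + [0.0] * 10      # one short spike in silence
y = rel_en(x)
print(max(y), y.index(max(y)))            # spike amplified, still at index 10
```

The spike's short-term energy dominates its long-term surroundings, so the coefficient exceeds one exactly there, which is what makes a subsequent fixed threshold reliable.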
Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity
Directory of Open Access Journals (Sweden)
Xue Shan
2015-01-01
Full Text Available Although commercial recommendation systems have made certain achievements in travelling route development, the recommendation system is facing a series of challenges because of people's increasing interest in travelling. The core content of a recommendation system is clearly its recommendation algorithm, whose strengths directly benefit the system as a whole. On this basis, this paper analyzes the traditional collaborative filtering algorithm. After illustrating the deficiencies of that algorithm, such as rating unicity and rating matrix sparsity, this paper proposes an improved algorithm combining the user-based multi-similarity algorithm and the user-based element similarity algorithm, so as to compensate for the deficiencies of the traditional algorithm within a controllable range. Experimental results have shown that the improved algorithm has obvious advantages in comparison with the traditional one, with a clear effect on remedying rating matrix sparsity and rating unicity.
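The traditional user-based collaborative filtering baseline the paper starts from can be sketched as follows: a similarity measure over co-rated items, then a similarity-weighted average of neighbours' ratings. The route-rating data is hypothetical; the improved algorithm would blend several such similarity measures rather than cosine alone.

```python
def cosine(u, v):
    """Cosine similarity between two users over co-rated items only."""
    common = [k for k in u if k in v]
    if not common:
        return 0.0
    num = sum(u[k] * v[k] for k in common)
    du = sum(u[k] ** 2 for k in common) ** 0.5
    dv = sum(v[k] ** 2 for k in common) ** 0.5
    return num / (du * dv) if du and dv else 0.0

def predict(ratings, user, item):
    """Similarity-weighted average of neighbours' ratings for `item`:
    the core prediction step of user-based collaborative filtering."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

# Toy route-rating matrix, users x routes (hypothetical data).
ratings = {
    "ann": {"beach": 5, "museum": 1},
    "bob": {"beach": 4, "museum": 1, "hike": 5},
    "eve": {"beach": 1, "museum": 5, "hike": 1},
}
print(predict(ratings, "ann", "hike"))  # close to bob's rating: ann resembles bob
```

With a sparse matrix, `cosine` often finds no co-rated items and `predict` returns `None`; that failure mode is the rating-matrix-sparsity deficiency the improved algorithm targets.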
An improved SIFT algorithm based on KFDA in image registration
Chen, Peng; Yang, Lijuan; Huo, Jinfeng
2016-03-01
As a kind of stable feature matching algorithm, SIFT has been widely used in many fields. In order to further improve the robustness of the SIFT algorithm, an improved SIFT algorithm with Kernel Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm uses KFDA to SIFT descriptors for feature extraction matrix, and uses the new descriptors to conduct the feature matching, finally chooses RANSAC to deal with the matches for further purification. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and tiny pose with higher matching accuracy.
Application of the Algorithm of Fractional Dimension to Extraction Image Edge
Institute of Scientific and Technical Information of China (English)
Qiang Luo; Qingli Ren
2006-01-01
The idea of fractional dimension is first stated in brief. Then, adopting the fractional statistical similarity principle, the least-square minimum-error method is applied to evaluate the fractional dimension of each image pixel, depending on the fractional property of the image, and the image edge is extracted by the magnitude of the fractional dimension of the image pixels. We present the algorithm of the local fractional dimension, which sets the rule for the window size and for judging the fractional dimension of an edge. Although this algorithm is time-consuming, it is better than the classical ones in edge extraction and anti-jamming.
Spatial mask filtering algorithm for partial discharge pulse extraction of large generators
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
A spatial mask filter algorithm (SMF) for partial discharge (PD) pulse extraction is proposed. Direct multiplication of wavelet coefficients at two adjacent scales is used to detect singularity points of the signal and obtain the final spatial mask filter. By multiplication of the wavelet coefficients with the final mask filter, followed by the wavelet reconstruction process, partial discharge pulses are extracted. The results of digital simulation and practical experiment show that this method is superior to the traditional wavelet shrinkage method (TWS). This algorithm can not only increase the signal-to-noise ratio (SNR), but also preserve the energy and pulse amplitude.
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
2012-04-01
Although the clinical pathway (CP) predefines a predictable standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear in nature, so it is hard to develop a comprehensive mathematical model. In this paper, a rule extraction approach based on combining a hybrid genetic double multi-group cooperative particle swarm optimization (PSO) algorithm and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationship between key parameters and the variance-handling measures of a CP. These extracted rules can then provide abnormal-variance warnings for medical professionals. Three numerical experiments, on the Iris and Wisconsin breast cancer UCI data sets and on CP variance data of osteosarcoma preoperative chemotherapy, are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains high prediction accuracy, less computing time and more stability, and its rules are easily comprehended by users; thus it is an effective knowledge extraction tool for CP variance handling.
Asian Option Pricing Based on Genetic Algorithms
Institute of Scientific and Technical Information of China (English)
YunzhongLiu; HuiyuXuan
2004-01-01
The cross-fertilization between artificial intelligence and computational finance has resulted in some of the most active research areas in financial engineering. One direction is the application of machine learning techniques to pricing financial products, which is certainly one of the most complex issues in finance. In the literature, when the interest rate, the mean rate of return and the volatility of the underlying asset follow general stochastic processes, the exact solution is usually not available. In this paper, we illustrate how genetic algorithms (GAs), as a numerical approach, can be potentially helpful in dealing with pricing. In particular, we test the performance of basic genetic algorithms by applying them to the determination of prices of Asian options, whose exact solutions are known from Black-Scholes option pricing theory. The solutions found by basic genetic algorithms are compared with the exact solution, and the performance of GAs is evaluated accordingly. Based on these evaluations, some limitations of GAs in option pricing are examined and possible extensions to future work are also proposed.
An Algorithm of Extracting I-Frame in Compressed Video
Directory of Open Access Journals (Sweden)
Zhu Yaling
2015-01-01
Full Text Available MPEG video data includes three types of frames, namely I-frames, P-frames and B-frames. The I-frame records the main information of the video data; the P-frame and the B-frame are just regarded as motion compensations of the I-frame. This paper presents an approach which analyzes the MPEG video stream in the compressed domain and finds the key frames of the MPEG video stream by extracting the I-frames. Experiments indicated that this method can be performed automatically on compressed MPEG video, and it lays the foundation for future video processing.
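Working in the compressed domain means the stream never has to be decoded: I-pictures can be located by scanning for MPEG-1/2 picture start codes. The sketch below (my illustration, not the paper's implementation) relies on the MPEG-1 picture header layout: after the 4-byte start code come 10 bits of temporal reference, then the 3-bit picture_coding_type (1 = I, 2 = P, 3 = B). The byte strings are synthetic, not a real stream.

```python
def find_i_frames(data):
    """Scan an MPEG-1/2 elementary video stream for picture start
    codes (00 00 01 00) and return the byte offsets of I-pictures."""
    offsets = []
    i = 0
    while True:
        i = data.find(b"\x00\x00\x01\x00", i)
        if i < 0 or i + 6 > len(data):
            break
        # Skip 10 bits of temporal reference: picture_coding_type is
        # bits 5..3 of the 6th header byte.
        coding_type = (data[i + 5] >> 3) & 0x07
        if coding_type == 1:
            offsets.append(i)
        i += 4
    return offsets

# Two hand-built 6-byte picture headers (synthetic; temporal_ref = 0).
i_pic = b"\x00\x00\x01\x00" + bytes([0x00, 1 << 3])
p_pic = b"\x00\x00\x01\x00" + bytes([0x00, 2 << 3])
stream = i_pic + p_pic + i_pic
print(find_i_frames(stream))  # [0, 12]
```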
Feature extraction for deep neural networks based on decision boundaries
Woo, Seongyoun; Lee, Chulhee
2017-05-01
Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.
Lee, K. J.; Stovall, K.; Jenet, F. A.; Martinez, J.; Dartez, L. P.; Mata, A.; Lunsford, G.; Cohen, S.; Biwer, C. M.; Rohr, M.; Flanigan, J.; Walker, A.; Banaszak, S.; Allen, B.; Barr, E. D.; Bhat, N. D. R.; Bogdanov, S.; Brazier, A.; Camilo, F.; Champion, D. J.; Chatterjee, S.; Cordes, J.; Crawford, F.; Deneva, J.; Desvignes, G.; Ferdman, R. D.; Freire, P.; Hessels, J. W. T.; Karuppusamy, R.; Kaspi, V. M.; Knispel, B.; Kramer, M.; Lazarus, P.; Lynch, R.; Lyne, A.; McLaughlin, M.; Ransom, S.; Scholz, P.; Siemens, X.; Spitler, L.; Stairs, I.; Tan, M.; van Leeuwen, J.; Zhu, W. W.
2013-07-01
Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labour intensive. In this paper, we introduce an algorithm called Pulsar Evaluation Algorithm for Candidate Extraction (PEACE) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning-based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68 per cent of the student-identified pulsars within the top 0.17 per cent of sorted candidates, 95 per cent within the top 0.34 per cent and 100 per cent within the top 3.7 per cent. This clearly demonstrates that PEACE significantly increases the pulsar identification rate by a factor of about 50 to 1000. To date, PEACE has been directly responsible for the discovery of 47 new pulsars, 5 of which are millisecond pulsars that may be useful for pulsar timing based gravitational-wave detection projects.
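The training-free ranking idea reduces to evaluating a fixed score function over candidate quality features and sorting. The feature names and weights below are illustrative assumptions, not the actual terms of PEACE's score function.

```python
def rank_candidates(candidates, weights):
    """Rank pulsar candidates by a weighted sum of quality scores,
    mimicking a training-free score function: no labelled data is
    needed, only the hand-designed scores themselves."""
    def score(c):
        return sum(weights[k] * c[k] for k in weights)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical candidates with two made-up quality features.
candidates = [
    {"id": "c1", "snr": 9.0, "persistence": 0.2},
    {"id": "c2", "snr": 25.0, "persistence": 0.9},   # pulsar-like
    {"id": "c3", "snr": 12.0, "persistence": 0.1},
]
ranked = rank_candidates(candidates, {"snr": 1.0, "persistence": 10.0})
print([c["id"] for c in ranked])  # ['c2', 'c3', 'c1']
```

Human inspectors then only need to look at the top fraction of the sorted list, which is how PEACE's factor-of-50-to-1000 efficiency gain arises.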
Improved pulse laser ranging algorithm based on high speed sampling
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. First, theoretical simulation models were built and analyzed, covering the laser emission and the pulse laser ranging algorithm, and an improved pulse ranging algorithm was developed which combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system comprising a laser diode, a laser detector and a high-sample-rate data logging circuit was set up to implement the improved algorithm. Subsequently, using the Verilog HDL language, the improved algorithm was implemented in an FPGA chip as a fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment was carried out with the hardware system to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm alone. The test analysis demonstrates that the hardware system realized high speed processing and high speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, consistent with the theoretical simulation.
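The two components being fused can each be sketched in a few lines: cross-correlation with the emitted pulse shape to find the coarse echo delay, and constant fraction discrimination to time the pulse independently of its amplitude. The pulse shapes and the 50% fraction are illustrative assumptions, and this software sketch ignores the FPGA implementation details.

```python
def matched_filter(signal, template):
    """Cross-correlate the received waveform with the emitted pulse
    shape; the lag of the correlation peak estimates the delay."""
    n, m = len(signal), len(template)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n - m + 1):
        v = sum(signal[lag + i] * template[i] for i in range(m))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

def cfd_crossing(signal, fraction=0.5):
    """Constant fraction discrimination: the time where the pulse first
    crosses a fixed fraction of its own peak, linearly interpolated.
    Unlike a fixed threshold, the timing is amplitude-independent."""
    thr = fraction * max(signal)
    for i in range(1, len(signal)):
        if signal[i - 1] < thr <= signal[i]:
            return i - 1 + (thr - signal[i - 1]) / (signal[i] - signal[i - 1])
    return None

template = [0.0, 0.5, 1.0, 0.5, 0.0]  # idealised emitted pulse
echo = [0.0] * 7 + [0.0, 0.25, 0.5, 0.25, 0.0] + [0.0] * 5  # half-amplitude echo
print(matched_filter(echo, template), cfd_crossing(echo))
```

The matched filter is robust to noise, while the CFD refines the timing to sub-sample resolution; combining the two is the essence of the improved algorithm described above.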
Algorithm for Extracting Digital Terrain Models under Forest Canopy from Airborne LiDAR Data
Directory of Open Access Journals (Sweden)
Almasi S. Maguya
2014-07-01
Full Text Available Extracting digital terrain models (DTMs) from LiDAR data under forest canopy is a challenging task. This is because the forest canopy tends to block a portion of the LiDAR pulses from reaching the ground, hence introducing gaps in the data. This paper presents an algorithm for DTM extraction from LiDAR data under forest canopy. The algorithm copes with the challenge of low data density by generating a series of coarse DTMs using the few ground points available and by using trend surfaces to interpolate missing elevation values in the vicinity of the available points. This process generates a cloud of ground points from which the final DTM is generated. The algorithm has been compared to two other algorithms proposed in the literature on three test sites with varying degrees of difficulty. Results show that the algorithm presented in this paper is more tolerant of low data density than the other two algorithms. The results further show that with decreasing point density, the differences between the three algorithms dramatically increase from about 0.5 m to over 10 m.
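The trend-surface interpolation step can be sketched as a least-squares fit of a low-order surface to the sparse ground returns, which is then evaluated where the canopy blocked the pulses. The first-order (planar) surface and the synthetic points below are illustrative; the paper's algorithm applies this locally and iteratively.

```python
import numpy as np

def trend_surface(points):
    """Fit a first-order trend surface z = a + b*x + c*y to sparse
    ground points by least squares."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    coeff, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeff

def interpolate(coeff, x, y):
    """Evaluate the fitted surface at a gap location."""
    a, b, c = coeff
    return a + b * x + c * y

# Sparse ground returns lying on the plane z = 100 + 0.1x - 0.05y (synthetic).
ground = [(0, 0, 100.0), (10, 0, 101.0), (0, 10, 99.5), (10, 10, 100.5)]
coeff = trend_surface(ground)
print(interpolate(coeff, 5, 5))  # elevation filled in for a canopy gap
```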
A new optimization algorithm based on chaos
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
In this article, some methods are proposed for enhancing the convergence velocity of the chaos optimization algorithm (COA) based on using the carrier wave twice: the speed and efficiency of the first carrier wave's search for the optimal point are greatly increased, and the refined search implemented during the second carrier wave is faster and more accurate. In addition, the concept of using the carrier wave three times is proposed and put into practice to tackle multi-variable optimization problems, where the search for the optimal point over the last several variables is frequently worse than over the first several.
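The two-carrier-wave scheme can be sketched as follows: a chaotic (logistic-map) sequence sweeps the whole interval, then a second, finer chaotic sweep searches a shrunken neighbourhood of the best point found. All parameters (iteration counts, shrink factor, seed) are illustrative assumptions, not the article's values.

```python
def chaos_optimize(f, lo, hi, n1=2000, n2=500, shrink=0.05):
    """Two-stage chaos optimization sketch: a logistic-map carrier wave
    explores [lo, hi], then a second carrier wave refines the search in
    a small interval around the incumbent best point."""
    z, best_x, best_f = 0.345, None, float("inf")
    for _ in range(n1):
        z = 4.0 * z * (1.0 - z)           # logistic map, chaotic at r = 4
        x = lo + (hi - lo) * z
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    # Second carrier wave: same chaotic generator, shrunken interval.
    a = max(lo, best_x - shrink * (hi - lo))
    b = min(hi, best_x + shrink * (hi - lo))
    z = 0.345
    for _ in range(n2):
        z = 4.0 * z * (1.0 - z)
        x = a + (b - a) * z
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

x, fx = chaos_optimize(lambda x: (x - 1.7) ** 2, 0.0, 5.0)
print(x)  # close to the minimiser 1.7
```

The ergodicity of the chaotic sequence is what lets the first sweep cover the interval without gradients; the second sweep supplies the accuracy that a single coarse sweep lacks.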
Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.
2016-03-01
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Vision-Based Faint Vibration Extraction Using Singular Value Decomposition
Directory of Open Access Journals (Sweden)
Xiujun Lei
2015-01-01
Full Text Available Vibration measurement is important for understanding the behavior of engineering structures. Unlike conventional contact-type measurements, vision-based methodologies have attracted a great deal of attention because of the advantages of remote measurement, their nonintrusive characteristic, and introducing no added mass. They form a new type of displacement sensor which is convenient and reliable. This study introduces singular value decomposition (SVD) methods for video image processing and presents a vibration-extraction algorithm. The algorithms can successfully realize noncontact displacement measurements without undesirable influence on the structure's behavior. The SVD-based algorithm decomposes a matrix assembled from the video frames to obtain a set of orthonormal image bases, while the projections of all video frames on the bases describe the vibration information. By means of simulation, the parameter selection of the SVD-based algorithm is discussed in detail. To validate the algorithm's performance in practice, sinusoidal motion tests were performed; results indicate that the proposed technique can provide fairly accurate displacement measurement. Moreover, a sound barrier experiment showing how high-speed rail trains affect a nearby sound barrier was carried out, which, owing to the challenging measurement environment, the authors report as a first at home and abroad.
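The decomposition step can be sketched directly: each frame is flattened into a column of a matrix, the SVD yields an orthonormal image basis (the left singular vectors), and the projections of the frames onto that basis carry the vibration time history. The synthetic 4x4 "video" below is an assumption for illustration.

```python
import numpy as np

def frame_projections(frames):
    """SVD-based sketch: flatten each video frame into a column of X,
    take the SVD, and return the projections of the frames onto the
    orthonormal image basis (the rows of diag(s) @ Vt)."""
    X = np.column_stack([f.ravel() for f in frames])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s[:, None] * Vt

# Synthetic 4x4 video: a static scene plus a faint sinusoidal intensity
# modulation standing in for the structure's vibration.
t = np.arange(32)
base = np.outer(np.ones(4), np.arange(4.0))
frames = [base + 0.1 * np.sin(2 * np.pi * 0.1 * k) for k in t]
proj = frame_projections(frames)
print(proj.shape)  # one projection time-series per basis image
```

Only a couple of singular values are significant here (static scene + one vibration mode), so the faint motion is isolated in a low-index projection row; that separation is what makes faint vibrations recoverable.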
Function Optimization Based on Quantum Genetic Algorithm
Ying Sun; Hegen Xiong
2014-01-01
Optimization methods are important in engineering design and application. The quantum genetic algorithm has the characteristics of good population diversity, rapid convergence and good global search capability; it combines a quantum algorithm with a genetic algorithm. A novel quantum genetic algorithm is proposed, called the Variable-boundary-coded Quantum Genetic Algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded c...
Function Optimization Based on Quantum Genetic Algorithm
Ying Sun; Yuesheng Gu; Hegen Xiong
2013-01-01
The quantum genetic algorithm has the characteristics of good population diversity, rapid convergence and good global search capability; it combines a quantum algorithm with a genetic algorithm. A novel quantum genetic algorithm is proposed, called the variable-boundary-coded quantum genetic algorithm (vbQGA), in which qubit chromosomes are collapsed into variable-boundary-coded chromosomes instead of binary-coded chromosomes. Therefore much shorter chromosome strings can be obtained. The m...
Extracting gene networks for low-dose radiation using graph theoretical algorithms.
Directory of Open Access Journals (Sweden)
Brynn H Voy
2006-07-01
Full Text Available Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., "guilt-by-association"). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of the clique analysis to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response.
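The pipeline described above, threshold the correlation matrix, build a graph, enumerate cliques, can be sketched with a small Bron-Kerbosch clique enumerator. This is a minimal illustration with toy "genes", not the authors' code or data.

```python
import itertools
import numpy as np

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques.

    adj: dict mapping node -> set of neighbour nodes (no self-loops)."""
    out = []
    def bk(r, p, x):
        if not p and not x:
            out.append(sorted(r))
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(adj), set())
    return out

def coexpression_cliques(expr, threshold=0.9):
    """Threshold the gene-gene correlation matrix (genes are rows of expr)
    and return the maximal cliques of the resulting co-expression graph."""
    n = expr.shape[0]
    corr = np.corrcoef(expr)
    adj = {i: set() for i in range(n)}
    for i, j in itertools.combinations(range(n), 2):
        if abs(corr[i, j]) >= threshold:
            adj[i].add(j)
            adj[j].add(i)
    return maximal_cliques(adj)

# Toy data: genes 0-2 follow one signal, gene 3 an orthogonal one.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
expr = np.vstack([np.sin(t), 2 * np.sin(t) + 1, np.sin(t) + 0.5, np.cos(t)])
cliques = coexpression_cliques(expr, threshold=0.9)
```

Genes 0-2 form one clique; the uncorrelated gene 3 remains a singleton.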
Efficient sparse kernel feature extraction based on partial least squares.
Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John
2009-08-01
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.
Directory of Open Access Journals (Sweden)
Peidong Liang
2016-01-01
Full Text Available We have developed a new discrete-time algorithm for stiffness extraction from muscle surface electromyography (sEMG) collected from a human operator's arms and have applied it to anti-disturbance control in robot teleoperation. The variation of arm stiffness is estimated from sEMG signals and transferred to a telerobot under variable impedance control to imitate human motor control behaviours, particularly for disturbance attenuation. In comparison with previous stiffness estimation from sEMG, the proposed algorithm reduces the nonlinear residual error effect, enhances robustness and simplifies stiffness calibration. In order to extract a smooth stiffness envelope from sEMG signals, two enveloping methods are employed in this paper: fast linear enveloping based on low-pass filtering and moving averaging, and the amplitude-monocomponent frequency-modulating (AM-FM) method. Both methods have been incorporated into the proposed stiffness variation estimation algorithm and extensively tested. The test results show that stiffness variation extraction based on the two methods is sensitive and robust to disturbance. It could potentially be applied to teleoperation in hazardous surroundings or in human-robot physical cooperation scenarios.
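The first of the two enveloping methods mentioned, fast linear enveloping by rectification plus moving averaging, can be sketched as follows. The amplitude-modulated noise standing in for sEMG is an assumption of the demo, not the paper's data.

```python
import numpy as np

def linear_envelope(emg, win=64):
    """Fast linear enveloping: full-wave rectification followed by a
    moving-average low-pass filter (one of the two methods in the paper)."""
    rect = np.abs(emg - emg.mean())        # remove offset, rectify
    kernel = np.ones(win) / win
    return np.convolve(rect, kernel, mode='same')

# Demo: amplitude-modulated noise; the envelope should track the modulation.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
amp = 1.0 + 0.8 * np.sin(2 * np.pi * 2 * t)   # slow "stiffness" variation
emg = amp * rng.standard_normal(t.size)       # sEMG-like carrier
env = linear_envelope(emg, win=200)
```

Away from the edges the envelope correlates strongly with the hidden modulation.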
Dynamic route guidance algorithm based on artificial immune system
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
To improve the performance of K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Thanks to the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without repetition, which indicates the superiority of the algorithm over conventional ones. Not only does the algorithm achieve better parallelism, it also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the algorithm.
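For reference, the problem the immune algorithm above solves heuristically, K shortest loop-free paths, has a simple exact baseline: enumerate partial paths with a priority queue. This is a generic textbook baseline, not the paper's method, and only practical on small networks.

```python
import heapq

def k_shortest_loopless_paths(graph, s, t, k):
    """Exact baseline: pop partial paths by cost; the first k complete
    s-t paths are the k shortest simple paths (non-negative weights).

    graph: dict {u: {v: weight}} adjacency with edge weights."""
    heap = [(0.0, [s])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        u = path[-1]
        if u == t:
            found.append((cost, path))
            continue
        for v, w in graph[u].items():
            if v not in path:               # keep paths loop-free
                heapq.heappush(heap, (cost + w, path + [v]))
    return found

# Tiny diamond network: two routes from node 0 to node 3.
graph = {0: {1: 1.0, 2: 2.0}, 1: {3: 1.0}, 2: {3: 1.0}, 3: {}}
found = k_shortest_loopless_paths(graph, 0, 3, 2)
```

Because every edge weight is non-negative, any extension of a popped path costs at least as much, so completed paths come off the queue in sorted order.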
An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.
Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D
2016-05-01
Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
Institute of Scientific and Technical Information of China (English)
Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang
2010-01-01
A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang
2010-11-01
A coupled chaotic genetic algorithm for cognitive radio resource allocation which is based on genetic algorithm and coupled Logistic map is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations are conducted for cognitive radio resource allocation by using the coupled chaotic genetic algorithm, simple genetic algorithm and dynamic allocation algorithm respectively. The simulation results show that, compared with simple genetic and dynamic allocation algorithm, coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in cognitive radio system, and has faster convergence speed.
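A key ingredient of chaotic genetic algorithms like the one above is replacing pseudo-random population generation with a chaotic orbit. The sketch below uses a single logistic map for initialization only; the paper's coupled Logistic map and its cognitive-radio fitness function are not reproduced here.

```python
import numpy as np

def logistic_population(pop_size, n_genes, x0=0.3141, mu=4.0):
    """Seed a GA population from a logistic-map orbit x <- mu*x*(1-x):
    deterministic, yet ergodically filling (0, 1)."""
    n = pop_size * n_genes
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs.reshape(pop_size, n_genes)

# 20 individuals with 8 real-coded genes each, all drawn from the orbit.
pop = logistic_population(20, 8)
```

The same orbit can also drive chaotic mutation during the run, which is where coupled-map variants differ from a plain GA.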
An assembly sequence planning method based on composite algorithm
Directory of Open Access Journals (Sweden)
Enfu LIU
2016-02-01
Full Text Available To solve the combinatorial explosion problem and the blind-search problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalized reasoning algorithm as the initial population of a genetic algorithm. Fuzzy assembly knowledge is then integrated into the planning process of the genetic algorithm and ant algorithm to obtain the accurate solution. Finally, an example is presented to verify the feasibility of the composite algorithm.
Directory of Open Access Journals (Sweden)
D. Pylarinos
2013-12-01
Full Text Available A set of 387 waveforms portraying discharges, recorded on 18 different 150 kV post insulators installed at two substations in Crete, Greece, is considered in this paper. Twenty different features are extracted from each waveform and two feature selection algorithms (t-test and mRMR) are employed. Genetic algorithms are used to classify waveforms into two classes related to the portrayed discharges. Five different data sets are employed: (1) the original feature vector, (2) time-domain features, (3) frequency-domain features, (4) t-test-selected features, and (5) mRMR-selected features. Results are discussed and compared with previous classification implementations on this particular data group.
Directory of Open Access Journals (Sweden)
Usman Babawuro
2012-07-01
Full Text Available Satellite images are used for feature extraction among other functions, for example to extract linear features such as roads. Linear feature extraction is an important operation in computer vision, which has varied applications in photogrammetric, hydrographic, cartographic and remote sensing tasks. The extraction of linear features or boundaries defining the extents of land and land cover is equally important in cadastral surveying, the cornerstone of any cadastral system. A two-dimensional cadastral plan is a model which represents both the cadastral and geometrical information of a two-dimensional labelled image. This paper aims at using high-resolution satellite imagery to extract representations of cadastral boundaries with image processing algorithms, thereby minimizing human intervention. The satellite imagery is first rectified, establishing the correct orientation and spatial location for further analysis. We then extract the relevant cadastral features using computer vision and image processing algorithms, and evaluate the potential of high-resolution satellite imagery to achieve the cadastral goals of boundary detection and extraction of farmlands. The method proves effective as it minimizes the human shortcomings associated with traditional cadastral surveying, providing another perspective on achieving the cadastral goals emphasized by the UN cadastral vision. Finally, as cadastral science continues to look to the future, this research analyses and provides insight into the characteristics and potential role of computer vision algorithms on high-resolution satellite imagery for a better digital cadastre that would support improved socio-economic development.
A Survey of Grid Based Clustering Algorithms
Directory of Open Access Journals (Sweden)
MR ILANGO
2010-08-01
Full Text Available Cluster analysis, an automatic process to find similar objects in a database, is a fundamental operation in data mining. A cluster is a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters. Clustering techniques have been discussed extensively in similarity search, segmentation, statistics, machine learning, trend analysis, pattern recognition and classification [1]. Clustering methods can be classified into (i) partitioning methods, (ii) hierarchical methods, (iii) density-based methods, (iv) grid-based methods and (v) model-based methods. Grid-based methods quantize the object space into a finite number of cells (hyper-rectangles) and then perform the required operations on the quantized space. The main advantage of grid-based methods is their fast processing time, which depends on the number of cells in each dimension of the quantized space. In this research paper, we present some of the grid-based methods such as CLIQUE (CLustering In QUEst) [2], STING (STatistical INformation Grid) [3], MAFIA (Merging of Adaptive Intervals Approach to Spatial Data Mining) [4], WaveCluster [5] and O-CLUSTER (Orthogonal partitioning CLUSTERing) [6] as a survey, and also compare their effectiveness in clustering data objects. We also present some of the latest developments in grid-based methods, such as the Axis-Shifted Grid Clustering Algorithm [7] and Adaptive Mesh Refinement [Wei-Keng Liao et al.] [8], which improve the processing time of objects.
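The common skeleton of the grid-based methods surveyed above, quantize space into cells, keep dense cells, merge adjacent dense cells, can be sketched in a few lines. This is a generic toy in the STING/CLIQUE spirit, not any one of the surveyed algorithms; the cell size and density threshold are assumptions.

```python
import numpy as np
from collections import defaultdict

def grid_cluster(points, cell=1.0, min_pts=3):
    """Toy grid-based clustering: quantize 2-D points into square cells,
    keep cells with >= min_pts points, then merge edge/corner-adjacent
    dense cells into clusters via flood fill. Returns per-point labels
    (-1 for points in sparse cells)."""
    cells = defaultdict(list)
    for idx, p in enumerate(points):
        cells[tuple((p // cell).astype(int))].append(idx)
    dense = {c for c, members in cells.items() if len(members) >= min_pts}
    labels, cluster = {}, 0
    for c in dense:
        if c in labels:
            continue
        stack = [c]                 # flood fill over adjacent dense cells
        while stack:
            cur = stack.pop()
            if cur in labels:
                continue
            labels[cur] = cluster
            cx, cy = cur
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in dense and nb not in labels:
                        stack.append(nb)
        cluster += 1
    point_labels = np.full(len(points), -1)
    for c, members in cells.items():
        if c in labels:
            point_labels[members] = labels[c]
    return point_labels

# Two dense blobs far apart plus one isolated outlier point.
pts = np.array([[0.1, 0.1], [0.2, 0.2], [0.3, 0.1], [0.4, 0.3],
                [10.1, 10.1], [10.2, 10.3], [10.3, 10.2],
                [5.5, 5.5]])
lab = grid_cluster(pts, cell=1.0, min_pts=3)
```

The processing cost is driven by the number of occupied cells rather than pairwise point distances, which is exactly the speed advantage the survey highlights.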
VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter
Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.
2012-01-01
A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an open-
VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter
Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.
2010-01-01
The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line data-proc
Pasquazi, Alessia; Azana, Jose; Moss, David J; Morandotti, Roberto
2014-01-01
We present a novel extraction algorithm for spectral phase interferometry for direct field reconstruction (SPIDER) for the so-called X-SPIDER configuration. Our approach largely extends the measurable time windows of pulses without requiring any modification to the experimental X-SPIDER set-up.
ALGORITHM FOR GENERATING DEM BASED ON CONE
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Digital elevation models (DEM) have a variety of applications in GIS and CAD; a DEM is the basic model for generating three-dimensional terrain features. Generally speaking, there are two methods for building a DEM. One is based upon a digital terrain model of discrete points and is characterized by fast speed and low precision. The other is based upon a triangular digital terrain model, with slow speed and high precision. Combining the advantages of the two methods, an algorithm for generating a DEM from discrete points is presented in this paper. When interpolating elevation, the method creates a triangle which contains the interpolation point, and the elevation of the point is obtained from that triangle. The method has the advantages of fast speed, high precision and low memory use.
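The last step of the scheme above, reading an elevation off the containing triangle, is plain barycentric interpolation. A minimal sketch (the triangle-location step of the paper's algorithm is not shown):

```python
import numpy as np

def interpolate_elevation(tri_xy, tri_z, p):
    """Interpolate the elevation at point p inside a triangle with plan
    coordinates tri_xy and vertex elevations tri_z, using barycentric
    weights, as in triangle-based DEM interpolation."""
    a, b, c = (np.asarray(v, float) for v in tri_xy)
    p = np.asarray(p, float)
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom    # weight of vertex b
    w2 = (d00 * d21 - d01 * d20) / denom    # weight of vertex c
    w0 = 1.0 - w1 - w2                      # weight of vertex a
    return w0 * tri_z[0] + w1 * tri_z[1] + w2 * tri_z[2]

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
z = (0.0, 1.0, 2.0)                          # plane z = x + 2y
z_centroid = interpolate_elevation(tri, z, (1 / 3, 1 / 3))
z_interior = interpolate_elevation(tri, z, (0.25, 0.25))
```

For vertices sampled from a plane, the interpolation reproduces the plane exactly.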
Institute of Scientific and Technical Information of China (English)
王军
2002-01-01
This paper presents an all-parametric model of a radar target in the optical region, in which each localized scattering center's frequency- and aspect-angle-dependent scattering level, range and azimuth locations are modeled as feature vectors, and the traditional TLS-Prony algorithm is modified to extract these feature vectors. Analysis of the Cramér-Rao bound shows that the modified algorithm not only relaxes the high signal-to-noise ratio (SNR) threshold of the traditional TLS-Prony algorithm, but is also suitable for the extraction of large damping coefficients and high-resolution estimation of closely spaced poles. Finally, an illustrative example is presented to verify its practicability in applications. The experimental results show that the method can not only recognize two airplane-like targets with similar shapes at low SNR, but also compress the original radar data with high fidelity.
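For orientation, the classical least-squares Prony method that the paper's TLS variant builds on can be sketched as follows; the total-least-squares refinement and the radar-specific feature model are not reproduced.

```python
import numpy as np

def prony_poles(x, p):
    """Classical least-squares Prony: fit an order-p linear prediction
    x[k] = c1*x[k-1] + ... + cp*x[k-p] and return the characteristic
    roots, i.e. the damped-exponential poles of the signal."""
    n = len(x)
    A = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])
    c, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return np.roots(np.r_[1.0, -c])

# Demo: one damped cosine = conjugate pole pair 0.9 * exp(+/- 0.5j).
n = np.arange(60)
x = (0.9 ** n) * np.cos(0.5 * n)
poles = prony_poles(x, 2)
```

On noiseless data the fit is exact; the TLS modification exists precisely because this plain version degrades quickly at low SNR.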
Blind Extraction of Chaotic Signals by Using the Fast Independent Component Analysis Algorithm
Institute of Scientific and Technical Information of China (English)
CHEN Hong-Bin; FENG Jiu-Chao; FANG Yong
2008-01-01
We report the results of using the fast independent component analysis (FastICA) algorithm to realize blind extraction of chaotic signals. Two cases are taken into consideration: namely, the mixture is either noiseless or contaminated by noise. Pre-whitening is employed to reduce the effect of noise before using the FastICA algorithm. The correlation coefficient criterion is adopted to evaluate the performance, and the success rate is defined as a new criterion to indicate the performance with respect to noise or different mixing matrices. Simulation results show that the FastICA algorithm can extract the chaotic signals effectively. The impact of noise, the length of a signal frame, the number of sources and the number of observed mixtures on the performance is investigated in detail. It is also shown that regarding a noise as an independent source is not always correct.
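A self-contained one-unit FastICA sketch in the spirit of the abstract: whiten the mixtures, then run the tanh fixed-point iteration to pull out one component. The logistic-map source, sinusoidal interferer and 2x2 mixing matrix are assumptions of the demo, not the paper's setup.

```python
import numpy as np

def fastica_one_unit(X, iters=200, seed=1):
    """One-unit FastICA with a tanh nonlinearity.

    X: mixtures, shape (n_channels, n_samples). Whitens X, then iterates
    w <- E[z*g(w.z)] - E[g'(w.z)]*w until the direction stabilises."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))       # pre-whitening (as in the paper)
    Z = (E / np.sqrt(d)).T @ X             # whitening matrix D^-1/2 E^T
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wz = w @ Z
        w_new = (Z * np.tanh(wz)).mean(axis=1) - (1 - np.tanh(wz) ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-10:   # direction converged
            w = w_new
            break
        w = w_new
    return w @ Z                              # the extracted component

# Blindly pull a chaotic logistic-map signal out of a 2x2 mixture.
N = 5000
s1 = np.empty(N)
s1[0] = 0.345
for i in range(1, N):
    s1[i] = 4 * s1[i - 1] * (1 - s1[i - 1])   # chaotic source
s2 = np.sin(0.07 * np.arange(N))              # interfering source
S = np.vstack([s1 - s1.mean(), s2 - s2.mean()])
X = np.array([[0.7, 0.3], [0.4, 0.6]]) @ S    # observed mixtures
y = fastica_one_unit(X)
```

The extracted component matches one of the sources up to sign and scale, which is why the abstract evaluates with a correlation-coefficient criterion.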
Flat-relative optimal extraction. A quick and efficient algorithm for stabilised spectrographs
Zechmeister, M; Reiners, A
2013-01-01
Optimal extraction is a key step in processing raw images of spectra, as registered by two-dimensional detector arrays, into a one-dimensional format. Previously reported algorithms reconstruct models of a mean one-dimensional spatial profile to assist a properly weighted extraction. We outline a simple optimal extraction algorithm, including error propagation, which is very suitable for stabilised, fibre-fed spectrographs and does not model the spatial profile shape. A high signal-to-noise master-flat image serves as the reference image and is directly used as an extraction profile mask. Each extracted spectral value is the scaling factor relative to the cross-section of the unnormalised master flat, which contains all information about the spatial profile as well as pixel-to-pixel variations, fringing, and blaze. The extracted spectrum is measured relative to the flat spectrum. Using echelle spectra from the HARPS spectrograph we demonstrate a competitive extraction performance in terms of signal-to-noise and sho...
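The central idea, each extracted value is the least-squares scaling factor of the flat cross-section against the data column, reduces to one line. The sketch below omits the paper's noise weighting and error propagation and uses a synthetic Gaussian order profile as an assumption.

```python
import numpy as np

def flat_relative_extract(data, flat):
    """Flat-relative extraction: per detector column, the extracted value
    is the unweighted least-squares scale of the flat cross-section that
    best fits the data, i.e. sum(f*d)/sum(f*f)."""
    return (flat * data).sum(axis=0) / (flat * flat).sum(axis=0)

# Synthetic order: Gaussian spatial profile, known input spectrum.
y = np.arange(15)[:, None]
profile = np.exp(-0.5 * ((y - 7.0) / 2.0) ** 2)
spectrum = 1.0 + 0.5 * np.sin(np.linspace(0, 3, 40))
flat = 50.0 * profile * np.ones(40)            # flat-lamp exposure
data = profile * spectrum                      # noiseless science exposure
rel = flat_relative_extract(data, flat)        # spectrum relative to the flat
```

As the abstract states, the result is the spectrum measured relative to the flat spectrum (here a constant 50), so multiplying back recovers the input.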
Chinese Term Extraction Based on PAT Tree
Institute of Scientific and Technical Information of China (English)
ZHANG Feng; FAN Xiao-zhong; XU Yun
2006-01-01
A new method of automatic Chinese term extraction is proposed based on the Patricia (PAT) tree. Mutual information is calculated, based on prefix searching in a PAT tree of the domain corpus, to estimate the internal associative strength between Chinese characters in a string. This greatly improves the speed of term-candidate extraction compared with methods based on the domain corpus directly. Common collocation suffix and prefix banks are constructed, and term part-of-speech (POS) composition rules are summarized, to improve the precision of term extraction. Experimental results show an F-measure of 74.97%.
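The association score behind such term-candidate filtering is pointwise mutual information between adjacent characters. The PAT tree only accelerates the counting; the score itself can be sketched with plain counters (the tiny Latin-letter corpus is a stand-in for Chinese text):

```python
import math
from collections import Counter

def bigram_mutual_information(corpus):
    """Return a scorer for the pointwise mutual information of an adjacent
    character pair: log2(p(xy) / (p(x) * p(y))). Positive values mean the
    pair co-occurs more than chance, the cue for a term candidate."""
    chars = Counter(corpus)
    bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
    n, nb = sum(chars.values()), sum(bigrams.values())
    def pmi(bg):
        pxy = bigrams[bg] / nb
        return math.log2(pxy / ((chars[bg[0]] / n) * (chars[bg[1]] / n)))
    return pmi

corpus = "abbaabbaabba"
pmi = bigram_mutual_information(corpus)
```

In this corpus "ab" co-occurs more than chance predicts (positive PMI) while "aa" co-occurs less (negative PMI).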
A Trust-region-based Sequential Quadratic Programming Algorithm
DEFF Research Database (Denmark)
Henriksen, Lars Christian; Poulsen, Niels Kjølstad
This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
A chaos-based robust wavelet-domain watermarking algorithm
Energy Technology Data Exchange (ETDEWEB)
Zhao Dawei E-mail: davidzhaodw@hotmail.com; Chen Guanrong; Liu Wenbo
2004-10-01
In this paper, a chaos-based watermarking algorithm is developed in the wavelet domain for still images. The wavelet transform is commonly applied for watermarking, where the whole image is transformed in the frequency domain. In contrast to this conventional approach, we apply the wavelet transform only locally. We transform the subimage, which is extracted from the original image, in the frequency domain by using DWT and then embed the chaotic watermark into part of the subband coefficients. As usual, the watermark is detected by computing the correlation between the watermarked coefficients and the watermarking signal, where the watermarking threshold is chosen according to the Neyman-Pearson criterion based on some statistical assumptions. Watermark detection is accomplished without using the original image. Simulation results show that we can gain high fidelity and high robustness, especially under the typical attack of geometric operations.
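The embed/detect loop described above can be sketched without the wavelet step: generate a chaotic +/-1 watermark from a logistic map, embed it additively into coefficients, and detect by correlation. The DWT, the threshold derivation, and the parameter values are omitted or assumed; the standard-normal "coefficients" stand in for subband coefficients.

```python
import numpy as np

def logistic_watermark(n, x0=0.7, mu=3.99):
    """Generate a +/-1 chaotic watermark by thresholding a logistic-map orbit."""
    w = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        w[i] = 1.0 if x > 0.5 else -1.0
    return w

def embed(coeffs, w, alpha=0.2):
    """Additively embed the watermark into (e.g. wavelet subband) coefficients,
    scaled by the coefficient magnitude."""
    return coeffs + alpha * np.abs(coeffs) * w

def detect(coeffs, w):
    """Correlation detector; in the paper the decision threshold is chosen
    from the Neyman-Pearson criterion, here we just return the statistic."""
    return (coeffs * w).mean()

rng = np.random.default_rng(42)
c = rng.standard_normal(4096) * 10          # stand-in subband coefficients
w = logistic_watermark(c.size)
cw = embed(c, w)
```

The correct key (initial condition x0) yields a large correlation; a wrong key's orbit decorrelates within a few iterations, so its statistic stays near zero, which is what makes the chaotic sequence act as a key.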
Speech Enhancement based on Compressive Sensing Algorithm
Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel
2013-12-01
Various methods for speech enhancement have been proposed over the years, with design focused mainly on quality and intelligibility. A novel speech enhancement method using compressive sensing (CS) is proposed. CS is a new paradigm for acquiring signals, fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models; it therefore provides a way of reconstructing a compressed version of the speech in the original signal from only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated on speech quality using an informal listening test and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well on a wide range of speech tests and gives significantly good performance for speech enhancement, with better noise-suppression ability than conventional approaches and without obvious degradation of speech quality.
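The CS recovery step, reconstructing a sparse signal from a few linear, non-adaptive measurements, can be illustrated with orthogonal matching pursuit. The abstract does not name a solver, so OMP and the Gaussian sensing matrix here are illustrative choices only.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from
    y = A @ x by picking the atom best correlated with the residual and
    re-fitting by least squares on the selected support."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Exact recovery of a 3-sparse "spectrum" from 40 random measurements.
rng = np.random.default_rng(7)
A = rng.standard_normal((40, 50)) / np.sqrt(40)      # sensing matrix
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.8]
y = A @ x_true
x_hat = omp(A, y, 3)
```

With 40 measurements of a 50-dimensional, 3-sparse vector the greedy recovery is exact, the "fewer measurements than samples" property the abstract relies on.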
An algorithm for target airplane segmentation & extraction
Institute of Scientific and Technical Information of China (English)
谷东格
2016-01-01
In this paper, an algorithm for target airplane segmentation and extraction is proposed. In order to segment and extract the target airplane quickly and accurately, the algorithm adopts the GrabCut algorithm, improved with a pyramid segmentation strategy and based on a colour Gaussian Mixture Model (GMM) and iterative energy minimization. Test results show that in the majority of situations this algorithm can accurately segment and extract the target airplane with no interaction beyond drawing a box around it, while its processing speed is almost five times that of the original algorithm; in the few cases where the target is not extracted precisely, only a small amount of additional interaction is needed.
Enhancement of Fingerprint Extraction: Algorithms and Performance Evaluation
Directory of Open Access Journals (Sweden)
Priti Jain H. K. Sawant
2012-04-01
Full Text Available The fingerprint recognition problem can be divided into two sub-domains: fingerprint verification and fingerprint identification. In addition, different from the manual approach to fingerprint recognition by experts, fingerprint recognition here refers to a program-based fingerprint recognition system. Fingerprint verification is to verify the authenticity of one person by his fingerprint. The user provides his fingerprint together with his identity information, such as an ID number. The fingerprint verification system retrieves the fingerprint template according to the ID number and matches the template against the fingerprint acquired from the user in real time. Usually this is the underlying design principle of a fingerprint authentication system.
POWER OPTIMIZATION ALGORITHM BASED ON XNOR/OR LOGIC
Institute of Scientific and Technical Information of China (English)
Wang Pengjun; Lu Jingang; Xu Jian; Dai Jing
2009-01-01
Based on an investigation of the XNOR/OR logical expression and the propagation algorithm for signal probability, a low-power synthesis algorithm based on XNOR/OR logic is proposed in this paper. The proposed algorithm has been implemented in C. Fourteen Microelectronics Center of North Carolina (MCNC) benchmarks are tested, and the results show that the proposed algorithm significantly reduces average power consumption, by up to 27%, without area or delay penalties, while also shortening the runtime.
Unit Template Synchronous Reference Frame Theory Based Control Algorithm for DSTATCOM
Bangarraju, J.; Rajagopal, V.; Jayalaxmi, A.
2014-04-01
This article proposes new, simplified unit templates instead of a standard phase-locked loop (PLL) for the synchronous reference frame theory control algorithm (SRFT). Extracting the synchronizing components (sinθ and cosθ) for Park's and inverse Park's transformations using a standard PLL takes more execution time, which delays the generation of the reference source currents; the standard PLL also increases the reactive power burden on the distribution static compensator (DSTATCOM). This work proposes a unit-template-based SRFT control algorithm for a four-leg insulated-gate-bipolar-transistor-based voltage source converter for DSTATCOM in distribution systems, which reduces the execution time and the reactive power burden on the DSTATCOM. The proposed DSTATCOM suppresses harmonics and regulates the terminal voltage along with neutral current compensation. The DSTATCOM with the proposed control algorithm is modeled and simulated in MATLAB using the Simulink and SimPowerSystems toolboxes.
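A common way to form in-phase unit templates without a PLL is to normalise the instantaneous phase voltages by the terminal voltage amplitude. The sketch below uses that textbook formulation as an assumption; the paper's exact template derivation may differ, and the quadrature set needed for the full transformation is not shown.

```python
import numpy as np

def unit_templates(va, vb, vc):
    """In-phase unit templates from the three-phase PCC voltages:
    each phase voltage divided by the terminal amplitude
    Vt = sqrt(2*(va^2 + vb^2 + vc^2)/3), with no PLL involved."""
    vt = np.sqrt(2.0 * (va ** 2 + vb ** 2 + vc ** 2) / 3.0)
    return va / vt, vb / vt, vc / vt

# Balanced 50 Hz three-phase set, 325 V peak (230 V rms), two cycles.
t = np.linspace(0, 0.04, 1000)
w = 2 * np.pi * 50
va = 325 * np.sin(w * t)
vb = 325 * np.sin(w * t - 2 * np.pi / 3)
vc = 325 * np.sin(w * t + 2 * np.pi / 3)
ua, ub, uc = unit_templates(va, vb, vc)
```

For a balanced set the templates are exact unit sinusoids, obtained with a handful of arithmetic operations per sample, which is the execution-time advantage over a PLL.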
A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images
Directory of Open Access Journals (Sweden)
Stelios K. Mylonas
2015-03-01
Full Text Available This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS) algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS, where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the forthcoming GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are posed as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high-level classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent ability of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on an urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands compared to the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good segmentation maps compared to a series of existing algorithms.
Performance evaluation of sensor allocation algorithm based on covariance control
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
The covariance control capability of sensor allocation algorithms based on a covariance control strategy is an important index for evaluating the performance of these algorithms. Owing to the lack of standard performance metrics for evaluating covariance control capability, sensor allocation ratio, etc., there are no guides to follow in the design of sensor allocation algorithms for practical applications. To meet these demands, three quantified performance metrics are presented: average covariance misadjustment quantity (ACMQ), average sensor allocation ratio (ASAR) and matrix metric influence factor (MMIF), which quantify the covariance control capability, the usage of sensor resources and the robustness of the sensor allocation algorithm, respectively. Meanwhile, a covariance-adaptive sensor allocation algorithm based on a new objective function is proposed to improve the covariance control capability relative to the algorithm based on information gain. The experimental results show that the proposed algorithm has the advantage over the preceding sensor allocation algorithm in covariance control capability and robustness.
Review: Image Encryption Using Chaos Based algorithms
Directory of Open Access Journals (Sweden)
Er. Ankita Gaur
2014-03-01
Due to developments in network technology and multimedia applications, every minute thousands of messages, which can be text, images, audio or video, are created and transmitted over wireless networks. Improper delivery of a message may lead to the leakage of important information, so encryption is used to provide security. In the last few years, a variety of image encryption algorithms based on chaotic systems have been proposed to protect images from unauthorized access. A 1-D chaotic system using logistic maps has weak security and a small key space, and because of the floating-point representation of pixel values, some data loss occurs and proper decryption of the image becomes impossible. In this paper, different chaotic maps such as the Arnold cat map, sine map, logistic map and tent map are studied.
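As a minimal sketch of the stream-cipher idea behind logistic-map image encryption (the key parameters `x0` and `r` below are illustrative choices, not values from any of the surveyed papers): a keystream is generated by iterating x <- r*x*(1-x) and quantizing each iterate to a byte, and the pixels are XORed with it; XORing again with the same key decrypts.

```python
def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize each iterate to one byte
    return out

def xor_cipher(pixels, x0=0.3141, r=3.99):
    """Encrypt (or decrypt) a flat list of 8-bit pixel values by XOR."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [12, 200, 77, 0, 255, 31]
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain  # XOR with the same keystream inverts
```

Because the map is extremely sensitive to `x0` and `r`, decrypting with a slightly different key yields noise, which is the property these schemes rely on; the weaknesses noted above (small key space, floating-point quantization) apply to exactly this kind of construction.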
An intersection algorithm based on transformation
Institute of Scientific and Technical Information of China (English)
CHEN Xiao-xia; YONG Jun-hai; CHEN Yu-jian
2006-01-01
How to obtain the intersection of curves and surfaces is a fundamental problem in many areas such as computer graphics, CAD/CAM, computer animation, and robotics. In particular, how to deal with singular cases, such as tangency or superposition, is a key problem in obtaining intersection results. A method for solving the intersection problem based on coordinate transformation is presented. With the Lagrange multiplier method, the minimum distance between the center of a circle and a quadric surface is also given. Experience shows that the coordinate transformation can significantly simplify the method for calculating the intersection under the tangency condition. It improves the stability of intersecting given curves and surfaces in singular cases. The new algorithm is applied in a three-dimensional CAD software package (GEMS) produced by Tsinghua University.
A Novel Image Fusion Algorithm for Visible and PMMW Images based on Clustering and NSCT
Xiong Jintao; Xie Weichao; Yang Jianyu; Fu Yanlong; Hu Kuan; Zhong Zhibin
2016-01-01
Aiming at the fusion of visible and Passive Millimeter Wave (PMMW) images, a novel algorithm based on clustering and the NSCT (Nonsubsampled Contourlet Transform) is proposed. It takes advantage of the particular ability of PMMW imaging to present metal targets and uses a clustering algorithm on the PMMW image to extract potential target regions. In the process of fusion, the NSCT is applied to both input images, and then the decomposition coefficients at different scales are combined using differ...
A “Tuned” Mask Learnt Approach Based on Gravitational Search Algorithm
Youchuan Wan; Mingwei Wang; Zhiwei Ye; Xudong Lai
2016-01-01
Texture image classification is an important topic in many applications of machine vision and image analysis. Texture features extracted from the original texture image using a "Tuned" mask constitute one of the simplest and most effective methods. However, hill-climbing-based training methods cannot acquire a satisfactory mask in a single run; on the other hand, some commonly used evolutionary algorithms, like the genetic algorithm (GA) and particle swarm optimization (PSO), easily fall into local optimu...
Approach to extracting hot topics based on network traffic content
Institute of Scientific and Technical Information of China (English)
Yadong ZHOU; Xiaohong GUAN; Qindong SUN; Wei LI; Jing TAO
2009-01-01
This article presents a formal definition and description of popular topics on the Internet, analyzes the relationship between popular words and topics, and finally introduces a method that uses statistics and correlation of popular words in traffic content and network flow characteristics as input for extracting popular topics on the Internet. Based on this, the article adapts a clustering algorithm to extract popular topics and gives formalized results. The test results show that this method has an accuracy of 16.7% in extracting popular topics on the Internet. Compared with web mining and topic detection and tracking (TDT), it can provide a more suitable data source for effective recovery of Internet public opinion.
An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique
Institute of Scientific and Technical Information of China (English)
SHI Yan; HUANG Cong-ming
2006-01-01
An improved particle swarm optimization (PSO) algorithm based on ensemble technique is presented. The algorithm combines some previous best positions (pbest) of the particles to get an ensemble position (Epbest), which is used to replace the global best position (gbest). It is compared with the standard PSO algorithm invented by Kennedy and Eberhart and some improved PSO algorithms based on three different benchmark functions. The simulation results show that the improved PSO based on ensemble technique can get better solutions than the standard PSO and some other improved algorithms under all test cases.
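The idea of replacing gbest with an ensemble of personal bests can be sketched as follows (a hedged toy implementation on the sphere benchmark; the coefficients and the choice of averaging the k best pbests as the ensemble position are illustrative assumptions, not the paper's exact Epbest construction):

```python
import random

def sphere(x):  # benchmark function to minimize
    return sum(v * v for v in x)

def pso_ensemble(dim=5, swarm=20, iters=100, k=5, seed=1):
    """PSO variant: the social attractor is an ensemble position (mean of
    the k best personal bests) instead of the single global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [sphere(p) for p in pos]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        # ensemble position (Epbest): mean of the k best personal bests
        order = sorted(range(swarm), key=lambda i: pbest_val[i])[:k]
        ep = [sum(pbest[i][d] for i in order) / k for d in range(dim)]
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (ep[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = sphere(pos[i])
            if v < pbest_val[i]:
                pbest_val[i], pbest[i] = v, pos[i][:]
    return min(pbest_val)

best = pso_ensemble()
```

By construction the best personal-best value never increases, so with a fixed seed a longer run can only improve the returned value.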
[A new algorithm for NIR modeling based on manifold learning].
Hong, Ming-Jian; Wen, Zhi-Yu; Zhang, Xiao-Hong; Wen, Quan
2009-07-01
Manifold learning is a new kind of algorithm originating from the field of machine learning to find the intrinsic dimensionality of numerous and complex data and to extract most important information from the raw data to develop a regression or classification model. The basic assumption of the manifold learning is that the high-dimensional data measured from the same object using some devices must reside on a manifold with much lower dimensions determined by a few properties of the object. While NIR spectra are characterized by their high dimensions and complicated band assignment, the authors may assume that the NIR spectra of the same kind of substances with different chemical concentrations should reside on a manifold with much lower dimensions determined by the concentrations, according to the above assumption. As one of the best known algorithms of manifold learning, locally linear embedding (LLE) further assumes that the underlying manifold is locally linear. So, every data point in the manifold should be a linear combination of its neighbors. Based on the above assumptions, the present paper proposes a new algorithm named least square locally weighted regression (LS-LWR), which is a kind of LWR with weights determined by the least squares instead of a predefined function. Then, the NIR spectra of glucose solutions with various concentrations are measured using a NIR spectrometer and LS-LWR is verified by predicting the concentrations of glucose solutions quantitatively. Compared with the existing algorithms such as principal component regression (PCR) and partial least squares regression (PLSR), the LS-LWR has better predictability measured by the standard error of prediction (SEP) and generates an elegant model with good stability and efficiency.
A New Aloha Anti-Collision Algorithm Based on CDMA
Bai, Enjian; Feng, Zhu
Tag collision is a common problem in RFID (radio frequency identification) systems. It affects the integrity of data transmission during communication in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the grouped dynamic framed slotted Aloha algorithm with code division multiple access (CDMA) technology. The algorithm can effectively reduce the collision probability between tags. For the same number of tags, the algorithm is effective in reducing reader recognition time and improving the overall system throughput rate.
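The framed slotted Aloha component of such schemes can be sketched as follows (a hedged simulation; the CDMA spreading-code grouping the paper adds on top is omitted, and the frame-size policy is an illustrative assumption):

```python
import random

def framed_slotted_aloha(n_tags, seed=7):
    """Simulate dynamic framed slotted Aloha: each round, unresolved tags
    pick a random slot; slots holding exactly one tag are read successfully.
    The frame size is adapted to the number of remaining tags each round."""
    rng = random.Random(seed)
    unresolved = list(range(n_tags))
    rounds = 0
    while unresolved:
        frame = max(1, len(unresolved))  # simple frame-size policy
        slots = {}
        for tag in unresolved:
            slots.setdefault(rng.randrange(frame), []).append(tag)
        # collided tags (slots with more than one tag) retry next round
        unresolved = [t for g in slots.values() if len(g) > 1 for t in g]
        rounds += 1
    return rounds

rounds = framed_slotted_aloha(50)
```

Adding CDMA lets several tags in a group share one slot via orthogonal codes, which is how the combined scheme lowers the collision probability further than the simulation above.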
A research on fast FCM algorithm based on weighted sample
Institute of Scientific and Technical Information of China (English)
KUANG Ping; ZHU Qing-xin; WANG Ming-wen; CHEN Xu-dong; QING Li
2006-01-01
To improve the computational performance of the fuzzy C-means (FCM) algorithm used in clustering large datasets, the concepts of equivalent samples and weighted samples, based on the eigenvalue distribution of the samples in the feature space, were introduced, and a novel fast clustering algorithm named weighted fuzzy C-means (WFCM), derived from the traditional FCM algorithm, was put forward. It was proved that the cluster results on a dataset were equivalent for the two different clustering algorithms, WFCM and FCM. Furthermore, the WFCM algorithm had better computational performance than the ordinary FCM algorithm. An experiment on gray image segmentation showed that the WFCM algorithm is a fast and effective clustering algorithm.
A modified algorithm used in tree image extraction based on MRF model
Institute of Scientific and Technical Information of China (English)
王晓松; 黄心渊
2012-01-01
Tree image extraction provides fundamental data and technical support for studies of trees. It is, however, a difficult problem: tree photos shot in natural scenes contain a wealth of information and are subject to interference from the environment, light, weather and noise, and the diversity of trees and their surrounding scenery makes tree image extraction in natural scenes complex and challenging. This paper adopts natural image matting to extract tree images, which better solves the problem of the large number of internal holes and transparent regions in tree images. It proposes the concept of regions of interest and introduces a region-growing method to improve matting based on Markov Random Fields (MRF) in three respects: simplifying the division of the trimap, determining as many foreground pixels as possible, and reducing the number of pixels in the unknown region awaiting computation. The experiments show that the improved MRF-based matting algorithm can effectively extract tree images, simplify the human-machine interaction process, enhance color accuracy, and at the same time greatly increase computing speed.
Using PSO-Based Hierarchical Feature Selection Algorithm
Directory of Open Access Journals (Sweden)
Zhiwei Ji
2014-01-01
Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, so the best therapeutic opportunities are often missed. Traditional Chinese Medicine (TCM) plays an active role in the diagnosis and treatment of HCC. In this paper, we propose a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for the diagnosis of HCC. Firstly, the hierarchical feature representation is developed as a three-layer tree. The clinical symptoms and the positive score of a patient are the leaf nodes and root of the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Secondly, an improved PSO-based algorithm is applied in a new reduced feature space to search for an optimal syndrome subset. Based on the result of feature selection, the causal relationships of symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes which clearly improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships at both the symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC.
An improved localization algorithm based on genetic algorithm in wireless sensor networks.
Peng, Bo; Li, Lei
2015-04-01
Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a wireless decentralized network comprised of nodes which autonomously set up the network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms have hardware requirements and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free localization is being pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have higher localization error than range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves the localization accuracy compared with previous algorithms.
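The hop-distance estimation step at the core of DV-Hop can be sketched as follows (a hedged toy version; the function and data names are hypothetical, and the GA-based position refinement the paper proposes is omitted):

```python
def dv_hop_estimate(anchors, hops_between, node_hops):
    """DV-Hop distance estimation: an average per-hop distance is computed
    from known anchor-to-anchor distances and hop counts, and an unknown
    node estimates its range to each anchor as hop count x hop size."""
    total_dist = total_hops = 0.0
    for (i, j), h in hops_between.items():
        (xi, yi), (xj, yj) = anchors[i], anchors[j]
        total_dist += ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
        total_hops += h
    hop_size = total_dist / total_hops  # global average hop size
    return {a: node_hops[a] * hop_size for a in node_hops}

# two anchors 9 units apart with 3 hops between them -> hop size 3
anchors = {"A": (0.0, 0.0), "B": (9.0, 0.0)}
dists = dv_hop_estimate(anchors, {("A", "B"): 3}, {"A": 1, "B": 2})
```

The estimated ranges (here 3 to A and 6 to B) would then feed a multilateration or, in the improved algorithm, a GA search for the node position.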
A Frequent Pattern Mining Algorithm for Feature Extraction of Customer Reviews
Directory of Open Access Journals (Sweden)
Seyed Hamid Ghorashi
2012-07-01
Online shoppers often have different ideas about the same product. They look for the product features that are consistent with their goal. Sometimes a feature might be interesting to one shopper while making no such impression on someone else. Unfortunately, identifying a target product with particular features is a tough task that is not achievable with the existing functionality provided by common websites. In this paper, we present a frequent pattern mining algorithm to mine a collection of reviews and extract product features. Our experimental results indicate that the algorithm outperforms the older pattern mining techniques used by previous researchers.
Uzawa Type Algorithm Based on Dual Mixed Variational Formulation
Institute of Scientific and Technical Information of China (English)
王光辉; 王烈衡
2002-01-01
Based on the dual mixed variational formulation with three variants (stress, displacement, displacement on the contact boundary) and the finite element discretization of the unilateral beam problem, an Uzawa-type iterative algorithm is presented. The convergence of this iterative algorithm is proved, and the efficiency of the algorithm is then tested by a numerical example.
Replication-based Inference Algorithms for Hard Computational Problems
Alamino, Roberto C.; Neirotti, Juan P.; Saad, David
2013-01-01
Inference algorithms based on evolving interactions between replicated solutions are introduced and analyzed on a prototypical NP-hard problem - the capacity of the binary Ising perceptron. The efficiency of the algorithm is examined numerically against that of the parallel tempering algorithm, showing improved performance in terms of the results obtained, computing requirements and simplicity of implementation.
Network Intrusion Detection based on GMKL Algorithm
Directory of Open Access Journals (Sweden)
Li Yuxiang
2013-06-01
According to the 31st statistical report of the China Internet Network Information Center (CNNIC), by the end of December 2012 the number of Chinese netizens had reached 564 million, and the number of mobile Internet users had reached 420 million. But while the network brings great convenience to people's lives, it also brings huge threats. By collecting and analyzing information in a computer system or network, we can detect behaviors that may damage the availability, integrity and confidentiality of computer resources and treat them in a timely manner, which has important research significance for improving the operation environment of networks and network services. At present, neural networks, support vector machines (SVM), hidden Markov models, fuzzy inference and genetic algorithms have been introduced into research on network intrusion detection, trying to build a healthy and secure network operation environment. But most of these algorithms are based on the full sample and hypothesize that the number of samples approaches infinity. In the field of network intrusion, the collected data often cannot meet these requirements; it typically exhibits high dimensionality, variability and small-sample characteristics. For such data it is hard to get ideal results with traditional machine learning methods. In view of this, this paper proposes a Generalized Multi-Kernel Learning (GMKL) method applied to network intrusion detection. Generalized Multi-Kernel Learning can be applied well to large-scale sample data of complex dimensionality containing a large amount of heterogeneous information. The experimental results show that applying GMKL to network attack detection achieves high classification precision and a low false alarm rate.
Cheng, Beato T.
2010-04-01
With advances in focal plane, electronics and memory storage technologies, wide-area and persistent surveillance capabilities have become a reality in airborne ISR. A WAS system offers many benefits in comparison with traditional airborne image capturing systems, which provide little data overlap in both space and time. Unlike a fixed-mount surveillance camera, a persistent WAS system can be deployed anywhere as desired, although the platform typically has to be in motion, say circling above an area of interest. Therefore, WAS is a perfect choice for surveillance that can provide near real-time capabilities such as change detection and target tracking. However, the performance of a WAS system is still limited by the available technologies: the optics that control the field of view, the electronic and mechanical subsystems that control the scanning, the focal plane data throughput, and the dynamics of the platform all play key roles in the success of the system. It is therefore beneficial to develop a simulated version that can capture the essence of the system, in order to help provide insights into the design of an optimized system. We describe an approach to the simulation of a generic WAS system that allows focal plane layouts, scanning patterns, flight paths and platform dynamics to be defined by a user. The system generates simulated image data of the ground coverage area from reference databases (e.g. aerial imagery and elevation data), based on the sensor model. The simulated data provide a basis for further algorithm development, such as image stitching/mosaicking, registration, and geolocation. We also discuss an algorithm to extract terrain elevation from the simulated data and compare it with the original DEM data.
Fast Non-Local Means Algorithm Based on Krawtchouk Moments
Institute of Scientific and Technical Information of China (English)
吴一全; 戴一冕; 殷骏; 吴健生
2015-01-01
Non-local means (NLM) is a state-of-the-art denoising algorithm, which replaces each pixel with a weighted average of all the pixels in the image. However, its huge computational complexity makes it impractical for real applications. Thus, a fast non-local means algorithm based on Krawtchouk moments is proposed to improve the denoising performance and reduce the computing time. Krawtchouk moments of each image patch are calculated and used in the subsequent similarity measure in order to perform a weighted averaging. Instead of computing the Euclidean distance of two image patches, the similarity measure is obtained from low-order Krawtchouk moments, which removes a lot of computational complexity. Since Krawtchouk moments can extract local features and have good anti-noise ability, they can separate useful information from noise and provide an accurate similarity measure. Detailed experiments demonstrate that the proposed method outperforms the original NLM method and other moment-based methods according to a comprehensive consideration of subjective visual quality, method noise, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index and computing time. Most importantly, the proposed method is around 35 times faster than the original NLM method.
Improvements on EMG-based handwriting recognition with DTW algorithm.
Li, Chengzhang; Ma, Zheren; Yao, Lin; Zhang, Dingguo
2013-01-01
Previous works have shown that Dynamic Time Warping (DTW) algorithm is a proper method of feature extraction for electromyography (EMG)-based handwriting recognition. In this paper, several modifications are proposed to improve the classification process and enhance recognition accuracy. A two-phase template making approach has been introduced to generate templates with more salient features, and modified Mahalanobis Distance (mMD) approach is used to replace Euclidean Distance (ED) in order to minimize the interclass variance. To validate the effectiveness of such modifications, experiments were conducted, in which four subjects wrote lowercase letters at a normal speed and four-channel EMG signals from forearms were recorded. Results of offline analysis show that the improvements increased the average recognition accuracy by 9.20%.
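The DTW distance used as the template-matching core in such EMG recognition pipelines can be sketched as follows (a hedged, textbook dynamic-programming version with absolute difference as the local cost, not the paper's mMD-modified variant):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    using |x - y| as the local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

assert dtw([1, 2, 3], [1, 2, 3]) == 0.0
assert dtw([1, 2, 3], [1, 1, 2, 3]) == 0.0  # warping absorbs the repeat
```

In a recognition setting, the feature sequence of an unknown letter is compared against each stored template with this distance and the nearest template wins; the paper's modifications change the template construction and the distance metric, not this DP recursion.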
An incremental clustering algorithm based on Mahalanobis distance
Aik, Lim Eng; Choon, Tan Wee
2014-12-01
The classical fuzzy c-means clustering algorithm is insufficient for clustering non-spherical or elliptically distributed datasets. This paper replaces the Euclidean distance in classical fuzzy c-means clustering with the Mahalanobis distance, applying the Mahalanobis distance to incremental learning for its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm is an effective remedy for the defect in the fuzzy c-means algorithm and also increases training accuracy.
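The substitution of the Mahalanobis distance for the Euclidean distance can be illustrated with a minimal 2-D sketch (closed-form 2x2 matrix inverse; in the actual clustering algorithm the covariance would be estimated per cluster from the data):

```python
def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance for 2-D points, inverting the
    2x2 covariance matrix in closed form."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    y = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return dx[0] * y[0] + dx[1] * y[1]  # dx^T * inv(cov) * dx

# With identity covariance it reduces to the squared Euclidean distance
assert mahalanobis2((3, 4), (0, 0), [[1, 0], [0, 1]]) == 25.0
# An elongated cluster (large variance along x) discounts x-offsets
assert abs(mahalanobis2((3, 0), (0, 0), [[9, 0], [0, 1]]) - 1.0) < 1e-12
```

The second assertion shows why this metric suits elliptical clusters: a point three units along the cluster's long axis is as "close" as a point one unit along the short axis, which Euclidean distance cannot express.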
Saudi License Plate Recognition Algorithm Based on Support Vector Machine
Institute of Scientific and Technical Information of China (English)
Khaled Suwais; Rana Al-Otaibi; Ali Alshahrani
2013-01-01
License plate recognition (LPR) is an image processing technology that is used to identify vehicles by their license plates. This paper presents a license plate recognition algorithm for Saudi car plates based on the support vector machine (SVM) algorithm. The new algorithm is efficient in recognizing the vehicles from the Arabic part of the plate. The performance of the system has been investigated and analyzed. The recognition accuracy of the algorithm is about 93.3%.
A new classification algorithm based on RGH-tree search
Institute of Scientific and Technical Information of China (English)
[no author listed]
2007-01-01
In this paper, we put forward a new classification algorithm based on RGH-tree search and perform classification analysis and a comparison study. This algorithm can save computing resources and increase classification efficiency. The experiment shows that this algorithm achieves a better effect in dealing with three-dimensional multi-class data. We find that the algorithm has better generalization ability for a small training set and a big testing set.
An Incremental Algorithm of Text Clustering Based on Semantic Sequences
Institute of Scientific and Technical Information of China (English)
FENG Zhonghui; SHEN Junyi; BAO Junpeng
2006-01-01
This paper proposes an incremental text clustering algorithm based on semantic sequences. Using the similarity relation of semantic sequences and calculating the cover of the set of similar semantic sequences, the candidate cluster with the minimum entropy overlap value is selected as a result cluster each time in this algorithm. The comparison of experimental results shows that the precision of the algorithm is higher than that of other algorithms under the same conditions, and this is especially obvious on sets of long documents.
Institute of Scientific and Technical Information of China (English)
YIN Hong; CHEN Zeng-qiang; YUAN Zhu-zhi
2006-01-01
A hyperchaos-based watermarking algorithm is developed in the wavelet domain for images. The algorithm is based on the discrete wavelet transform and combines it with the communication model with side information. We utilize a suitable scale factor to scale the host image, then construct cosets for embedding the digital watermark according to the scaled version of the host image. Our scheme makes a tradeoff between imperceptibility and robustness, and achieves security. The extraction algorithm is a blind detection algorithm which retrieves the watermark without the original host image. In addition, we propose a new method for watermark encryption with a hyperchaotic sequence. This method overcomes the drawback of the small key space of a chaotic sequence and improves watermark security. Simulation results indicate that the algorithm is a well-balanced watermarking method that offers good robustness and imperceptibility.
An enhancement algorithm for low quality fingerprint image based on edge filter and Gabor filter
Xue, Jun-tao; Liu, Jie; Liu, Zheng-guang
2009-07-01
Owing to the restrictions of manual operation and the collection environment, fingerprint images generally have low quality, especially a contaminated background. In this paper, an enhancement algorithm based on an edge filter and a Gabor filter is proposed to handle this kind of fingerprint image. Firstly, a gray-based algorithm is used to enhance the edges and segment the image. Then, a multilevel block size method is used to extract the orientation field from the segmented fingerprint image. Finally, a Gabor filter is used to complete the enhancement of the fingerprint image. The experimental results show that the proposed enhancement algorithm is more effective than the normal Gabor filter algorithm. Fingerprint images enhanced by our algorithm show a better enhancement effect, which is helpful for subsequent work such as classification, feature extraction and identification.
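A Gabor filter kernel of the kind used for the final ridge enhancement can be generated as follows (a hedged sketch; the parameter values are illustrative, and in fingerprint enhancement `theta` and `lam` would be set from the locally extracted orientation field and ridge frequency):

```python
import math

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor filter kernel: a Gaussian envelope
    multiplied by a cosine carrier oriented along theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xp * xp + gamma * gamma * yp * yp)
                           / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xp / lam))
        kernel.append(row)
    return kernel

k = gabor_kernel()
assert k[3][3] == 1.0  # center value: exp(0) * cos(0)
```

Convolving each image block with a kernel tuned to that block's ridge orientation and frequency amplifies the ridges while suppressing noise, which is the role the Gabor stage plays in the pipeline above.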
Phase shifts extraction based on time-domain orthogonal character of phase-shifting interferograms
Shou, Junwei; Zhong, Liyun; Zhou, Yunfei; Tian, Jindong; Lu, Xiaoxu
2017-01-01
Based on the time-domain orthogonality of the intensity variations of different pixels across phase-shifting interferograms, a novel non-iterative algorithm is proposed to retrieve the phase shifts in random phase-shifting interferometry. Since there is no requirement on the fringe number of the phase-shifting interferograms, the proposed algorithm works well even when the fringe number of an interferogram is less than one, which is a difficult case in interferometry. Moreover, only two one-dimensional vectors, obtained from the average intensity of several pixels of the interferogram, are enough to perform the phase shift extraction, so the proposed algorithm offers rapid processing. In particular, compared with conventional phase shift extraction algorithms, the proposed algorithm does not need pixel-by-pixel calculation or iterative calculation, so its processing speed is greatly improved. Both simulation and experiment demonstrate the outstanding performance of the proposed algorithm.
Data Clustering Analysis Based on Wavelet Feature Extraction
Institute of Scientific and Technical Information of China (English)
QIANYuntao; TANGYuanyan
2003-01-01
A novel wavelet-based data clustering method is presented in this paper, which includes wavelet feature extraction and a cluster growing algorithm. The wavelet transform can provide rich and diversified information for representing the global and local inherent structures of a dataset; therefore, it is a very powerful tool for clustering feature extraction. As clustering is an unsupervised classification, the target of clustering analysis depends on the specific clustering criteria. Several criteria that should be considered for a general-purpose clustering algorithm are proposed, and the cluster growing algorithm is constructed to connect the clustering criteria with the wavelet features. Compared with other popular clustering methods, our clustering approach provides multi-resolution clustering results, needs few prior parameters, correctly deals with irregularly shaped clusters, and is insensitive to noise and outliers. As this wavelet-based clustering method is aimed at solving two-dimensional data clustering problems, for high-dimensional datasets the self-organizing map and U-matrix method are applied to transform them into a two-dimensional Euclidean space, enabling high-dimensional data clustering analysis. Results on some simulated data and standard test data are reported to illustrate the power of our method.
Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut
2005-04-01
Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.
Content based Zero-Watermarking Algorithm for Authentication of Text Documents
Jalil, Zunera; Sabir, Maria
2010-01-01
Copyright protection and authentication of digital content have become significant issues in the current digital epoch, with efficient communication media such as the internet. Plain text is the most widely used medium over the internet for information exchange, and it is crucial to verify the authenticity of this information. Very limited techniques are available for plain text watermarking and authentication. This paper presents a novel zero-watermarking algorithm for authentication of plain text. The algorithm generates a watermark based on the text content, and this watermark can later be extracted using the extraction algorithm to prove the authenticity of the text document. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks, reporting watermark accuracy and distortion rate on 10 text samples of varying length.
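The zero-watermarking idea, deriving the watermark from the text content instead of embedding anything into it, can be sketched as follows (a hedged simplification using a letter-frequency profile plus a SHA-256 digest; the paper's actual watermark generation rule differs in detail):

```python
import hashlib

def generate_watermark(text):
    """Content-based zero-watermark: derive a digest from the text's
    letter-frequency profile plus a hash, leaving the text unmodified."""
    freq = [0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            freq[ord(ch) - ord("a")] += 1
    profile = ",".join(map(str, freq)).encode()
    return hashlib.sha256(profile + text.encode()).hexdigest()

def verify(text, watermark):
    """Extraction side: regenerate the watermark and compare."""
    return generate_watermark(text) == watermark

wm = generate_watermark("The agreement is signed on Monday.")
assert verify("The agreement is signed on Monday.", wm)
assert not verify("The agreement is signed on Friday.", wm)  # tampering detected
```

The watermark would be registered with a trusted third party at creation time; since nothing is embedded, the text suffers zero distortion, which is the defining property of zero-watermarking.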
A generalized GPU-based connected component labeling algorithm
Komura, Yukihiro
2016-01-01
We propose a generalized GPU-based connected component labeling (CCL) algorithm that can be applied to both various lattices and to non-lattice environments in a uniform fashion. We extend our recent GPU-based CCL algorithm without the use of conventional iteration to the generalized method. As an application of this algorithm, we deal with the bond percolation problem. We investigate bond percolation on the honeycomb and triangle lattices to confirm the correctness of this algorithm. Moreover, we deal with bond percolation on the Bethe lattice as a substitute for a network structure, and demonstrate the performance of this algorithm on those lattices.
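The merging logic at the heart of CCL can be illustrated with a minimal sequential sketch (union-find with path halving on a binary grid with 4-connectivity; the paper's GPU-specific parallel label propagation is not reproduced here):

```python
def label_components(grid):
    """Count connected components of nonzero cells in a binary grid
    (4-connectivity) using two-pass labeling with union-find."""
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                parent[(r, c)] = (r, c)
                if r > 0 and grid[r - 1][c]:
                    union((r - 1, c), (r, c))  # merge with cell above
                if c > 0 and grid[r][c - 1]:
                    union((r, c - 1), (r, c))  # merge with cell to the left
    return len({find(p) for p in parent})

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
assert label_components(grid) == 2
```

For the bond percolation application, the same union-find merging is applied to lattice sites joined by occupied bonds, so the cluster count (and the emergence of a spanning cluster) falls out of exactly this bookkeeping.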
Fixed-point blind source separation algorithm based on ICA
Institute of Scientific and Technical Information of China (English)
Hongyan LI; Jianfen MA; Deng'ao LI; Huakui WANG
2008-01-01
This paper introduces a fixed-point learning algorithm based on independent component analysis (ICA); the model and process of this algorithm and simulation results are presented. Kurtosis was adopted as the measure of independence. The experimental results show that, compared with the traditional ICA algorithm based on stochastic gradients, this algorithm has advantages such as fast convergence and no need for a dynamic step-size parameter. The algorithm is a highly efficient and reliable method for blind signal separation.
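A kurtosis-driven fixed-point ICA update of the kind described above can be sketched on synthetic data (a FastICA-style one-unit rule on whitened signals; the sources, mixing matrix, and iteration count are illustrative assumptions, not the paper's setup):

```python
# Hedged sketch of a kurtosis-based fixed-point ICA update on whitened
# data; the paper's exact formulation may differ.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n)
s1 = np.sin(0.07 * t)                   # sinusoidal source (sub-Gaussian)
s2 = np.sign(np.sin(0.11 * t))          # square-wave source
S = np.vstack([s1, s2])
A = np.array([[0.8, 0.3], [0.4, 0.9]])  # assumed mixing matrix
X = A @ S

# Whitening: z = V x so that E[z z^T] = I
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Fixed-point iteration driven by kurtosis: w <- E[z (w^T z)^3] - 3w
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    w = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

y = w @ Z   # one recovered source, up to sign and scale
```

No step-size parameter appears anywhere in the loop, which is the property the abstract highlights over gradient-based ICA.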
Adaptive Central Force Optimization Algorithm Based on the Stability Analysis
Directory of Open Access Journals (Sweden)
Weiyi Qian
2015-01-01
In order to enhance the convergence capability of the central force optimization (CFO) algorithm, an adaptive central force optimization (ACFO) algorithm is presented by introducing an adaptive weight and defining an adaptive gravitational constant. The adaptive weight and gravitational constant are selected based on the stability theory of discrete time-varying dynamic systems. The convergence capability of ACFO is compared with that of other improved CFO algorithms and an evolutionary algorithm on 23 unimodal and multimodal benchmark functions. Experimental results show that ACFO substantially enhances the performance of CFO in terms of global optimality and solution accuracy.
Clonal Selection Based Memetic Algorithm for Job Shop Scheduling Problems
Institute of Scientific and Technical Information of China (English)
Jin-hui Yang; Liang Sun; Heow Pueh Lee; Yun Qian; Yan-chun Liang
2008-01-01
A clonal selection based memetic algorithm is proposed for solving job shop scheduling problems in this paper. In the proposed algorithm, the clonal selection and the local search mechanism are designed to enhance exploration and exploitation. In the clonal selection mechanism, clonal selection, hypermutation and receptor edit theories are presented to construct an evolutionary searching mechanism which is used for exploration. In the local search mechanism, a simulated annealing local search algorithm based on Nowicki and Smutnicki's neighborhood is presented to exploit local optima. The proposed algorithm is examined using some well-known benchmark problems. Numerical results validate the effectiveness of the proposed algorithm.
Adaptive RED algorithm based on minority game
Wei, Jiaolong; Lei, Ling; Qian, Jingjing
2007-11-01
As Internet applications multiply and the technology develops, relying on end systems alone cannot satisfy the complicated QoS demands of the network; router mechanisms must participate in protecting responsive flows from non-responsive ones. Routers mainly use active queue management (AQM) to avoid congestion. Focusing on the interaction between routers, the paper applies minority game theory to describe the interaction of users and observes its effect on the average queue length. Since the parameters α and β of ARED are hard to determine, adaptive RED based on the minority game can model the interactions of the agents and tune the ARED parameters α and β toward their best values; it thus optimizes ARED and smooths the average queue length. In addition, this paper extends the network simulator platform NS by adding new elements. Simulations have been implemented and the results show that the new algorithm can reach the anticipated objectives.
Class Dependent LDA Optimization Using Genetic Algorithm for Robust MFCC Extraction
Abbasian, Houman; Nasersharif, Babak; Akbari, Ahmad
Linear discriminant analysis (LDA) finds transformations that maximize the between-class scatter and minimize the within-class scatter. In this paper, we propose a method that uses class-dependent LDA for speech recognition and MFCC extraction. To this end, we first take the logarithm of the clean-speech Mel filter bank energies (LMFE) of each class, then obtain a class-dependent LDA transformation matrix using a multidimensional genetic algorithm (MGA) and use this matrix in place of the DCT in MFCC feature extraction. The experimental results show that the proposed recognition and optimization methods using class-dependent LDA achieve a significant isolated-word recognition rate on the Aurora2 database.
DYNAMIC LABELING BASED FPGA DELAY OPTIMIZATION ALGORITHM
Institute of Scientific and Technical Information of China (English)
吕宗伟; 林争辉; 张镭
2001-01-01
DAG-MAP is an FPGA technology mapping algorithm for delay optimization, and the labeling phase is the algorithm's kernel. This paper studies the labeling phase and presents an improved labeling method. Experimental results on the MCNC benchmarks show that the improved method is more effective than the original method while the computation time is almost the same.
ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
SONG Kaichen; NIE Xili
2006-01-01
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm in which the relationship between the weight coefficients and the measurement noise is established is proposed, taking the correlation of the measurement noise into account. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm that adjusts the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements themselves is presented. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of systems based on other algorithms.
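The uncorrelated-noise case mentioned above reduces to classic inverse-variance weighting, which can be sketched directly (the sensor readings and variances below are illustrative, not from the paper):

```python
# Sketch of weighted-least-squares fusion for uncorrelated sensor noise:
# each measurement is weighted by the inverse of its noise variance.

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent sensor readings.
    Returns the fused estimate and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * x for w, x in zip(weights, measurements)) / total
    fused_variance = 1.0 / total   # always below the smallest input variance
    return estimate, fused_variance

est, var = fuse([10.2, 9.8], [1.0, 1.0])   # equal noise -> plain average
# est == 10.0, var == 0.5
```

Adjusting the weights online, as the abstract describes, amounts to replacing the fixed `variances` with running noise estimates computed from the measurements.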
Gradient-based Taxis Algorithms for Network Robotics
Blum, Christian; Hafner, Verena V.
2014-01-01
Finding the physical location of a specific network node is a prototypical task for navigation inside a wireless network. In this paper, we consider in depth the implications of wireless communication as a measurement input of gradient-based taxis algorithms. We discuss how gradients can be measured and determine the errors of this estimation. We then introduce a gradient-based taxis algorithm as an example of a family of gradient-based, convergent algorithms and discuss its convergence in th...
LEACH Algorithm Based on Load Balancing
Directory of Open Access Journals (Sweden)
Wangang Wang
2013-09-01
This paper discusses the advantages of the LEACH algorithm and existing improved models, taking the well-known hierarchical clustering routing protocol LEACH as the research object. The paper then points out that the algorithm does not take the capacity of the cluster-head node into account, which leads to unreasonable cluster structures. Building on the LEACH-based improved algorithms LEACH-C, CEFL, and DCHS, this research presents an energy-uniform cluster and cluster-head selection mechanism in which a “pseudo cluster head” concept is introduced to coordinate with a “load monitor” mechanism and a “load leisure” mechanism, maintaining load balancing among cluster heads and the stability of the network topology. The NS2 simulation tool is applied to analyze the improved algorithm. Simulation results show that the LEACH-P protocol effectively increases energy utilization efficiency, lengthens network lifetime, and balances network load.
Motion feature extraction scheme for content-based video retrieval
Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo
2001-12-01
This paper proposes a scheme for extracting global motion and object trajectory in a video shot for content-based video retrieval. Motion is the key feature representing temporal information in videos, and it is more objective and consistent than features such as color and texture, so efficient motion feature extraction is an important step in content-based video retrieval. Previous approaches have extracted camera motion and motion activity from video sequences, while object-tracking algorithms usually assume that the object region in each frame is already known. In this paper, a complete picture of the motion information in a video shot is obtained by analyzing the motion of background and foreground separately and automatically. A 6-parameter affine model is used as the model of background motion, and a fast, robust global motion estimation algorithm is developed to estimate its parameters. The object region is obtained by global motion compensation between two consecutive frames; the center of the object region is then calculated and tracked to obtain the object motion trajectory through the sequence. Global motion and object trajectory are described with the MPEG-7 parametric motion and motion trajectory descriptors, and valid similarity measures are defined for the two descriptors. Experimental results indicate that the proposed scheme is reliable and efficient.
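The 6-parameter affine model mentioned above maps (x, y) to (a·x + b·y + c, d·x + e·y + f), and its parameters can be fit by least squares from point correspondences. The sketch below shows that fit on synthetic points (the paper's estimator is more elaborate and robust; this is only the model itself):

```python
# Hedged sketch: least-squares fit of the 6-parameter affine motion model
# x' = a*x + b*y + c,  y' = d*x + e*y + f, from point correspondences.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) arrays of corresponding points. Returns [a..f]."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])          # N x 3 design matrix
    px, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)    # a, b, c
    py, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)    # d, e, f
    return np.concatenate([px, py])

# Synthetic check: points translated by (2, -1)
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, -1), (3, -1), (2, 0), (3, 0)]
params = fit_affine(src, dst)   # ~ [1, 0, 2, 0, 1, -1]
```

In the paper's setting the correspondences would come from inter-frame matching, with outlier rejection around this core fit.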
Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms
Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.
2016-04-01
String searching is a classic information-processing task. Users either encounter it directly when working with text processors or browsers, through standard built-in tools, or it is solved invisibly by the programs they use. There are many algorithms for the string searching problem, and their main effectiveness criterion is search speed: the larger the shift of the pattern relative to the string on a character mismatch, the faster the algorithm runs. This article offers a combined algorithm developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms rest on two different principles of pattern matching: Knuth-Morris-Pratt is based on forward matching and Boyer-Moore on backward matching. By uniting the two, the combined algorithm obtains the larger shift on a mismatch. The article provides an example that illustrates the work of the Boyer-Moore, Knuth-Morris-Pratt, and combined algorithms and shows the advantage of the latter in solving the string searching problem.
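The two shift rules the hybrid chooses between can be sketched separately: the KMP failure table (forward matching) and a Horspool-style bad-character table (a simplified Boyer-Moore, backward matching). This is not the article's combined algorithm, only its two ingredients, with the Horspool variant doing an actual search:

```python
# Sketch of the two ingredients of a KMP/BM hybrid, shown separately.

def kmp_failure(pattern):
    """fail[i] = length of the longest proper prefix of pattern[:i+1]
    that is also its suffix (the classic KMP table)."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def horspool_search(text, pattern):
    """First occurrence index or -1, using only the bad-character rule."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += shift.get(text[pos + m - 1], m)  # skip by bad-character rule
    return -1

print(kmp_failure("abab"))                                     # [0, 0, 1, 2]
print(horspool_search("here is a simple example", "example"))  # 17
```

The combined algorithm described in the article would, at each mismatch, take the larger of the shifts the two tables permit.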
Fast Matrix Computation Algorithms Based on Rough Attribute Vector Tree Method in RDSS
Institute of Scientific and Technical Information of China (English)
(no author listed)
2005-01-01
The concepts of Rough Decision Support System (RDSS) and equivalence matrix are introduced in this paper. Based on a rough attribute vector tree (RAVT) method, two matrix computation algorithms are proposed - Recursive Matrix Computation (RMC) and Parallel Matrix Computation (PMC) - in which rule extraction, attribute reduction, and data cleaning are performed synchronously. The algorithms emphasize the practicability and efficiency of rule generation. A case study of PMC is analyzed, and a comparison experiment with the RMC algorithm shows that it is feasible and efficient for data mining and knowledge discovery in RDSS.
An Improved Singularity Computing Algorithm Based on Wavelet Transform Modulus Maxima Method
Institute of Scientific and Technical Information of China (English)
ZHAO Jian; XIE Duan; FAN Xun-li
2006-01-01
In order to reduce the hidden danger of noise, which can be characterized by its singularity spectrum, a new algorithm based on the wavelet transform modulus maxima method was proposed. Singularity analysis is one of the most promising approaches for extracting hidden information from noisy time series. Because singularity strength is hard to calculate accurately, the wavelet transform modulus maxima method was used to obtain the singularity spectrum. The singularity spectra of white noise and of aluminium-interconnect electromigration noise were calculated and analyzed. The experimental results show that the new algorithm is more accurate than the traditional estimation algorithm, and the proposed method is feasible and efficient.
An improved PSO algorithm and its application in seismic wavelet extraction
Directory of Open Access Journals (Sweden)
Yongshou Dai
2011-08-01
Seismic wavelet estimation is ultimately a multi-dimensional, multi-extremum, multi-parameter optimization problem. PSO has simple concepts and fast convergence but easily falls into local optima. This paper proposes an improved PSO with adaptive parameters and boundary constraints, ensuring both optimization accuracy and fast convergence. Simulation results show that the method has good applicability and stability for seismic wavelet extraction.
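The kind of adaptive-parameter, boundary-constrained PSO the abstract alludes to can be sketched on a toy 1-D objective rather than wavelet estimation (all parameter values here - swarm size, inertia schedule, acceleration coefficients - are illustrative assumptions, not the paper's):

```python
# Minimal PSO sketch with a linearly decreasing inertia weight and
# position clamping to the search bounds.
import random

def pso(objective, lo, hi, n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # per-particle best positions
    gbest = min(xs, key=objective)      # global best position
    for it in range(iters):
        w = 0.9 - 0.5 * it / iters      # adaptive inertia: 0.9 -> 0.4
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + 2.0 * r1 * (pbest[i] - xs[i])
                     + 2.0 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # boundary constraint
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
                if objective(xs[i]) < objective(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the wavelet-extraction setting, `objective` would instead measure the mismatch between the recorded trace and the trace synthesized from a candidate wavelet.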
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Directory of Open Access Journals (Sweden)
Huimin Lu
2013-01-01
This study proposes an intelligent algorithm that realizes information fusion with reference to research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as the core and information fusion as a knowledge-based innovative thinking process. Its five key parts - information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system - are simulated and modeled. The algorithm fully exploits the innovative-thinking role of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. The influence of each parameter on algorithm performance is analyzed and compared with classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimal solution with fewer objective evaluations, improve optimization effectiveness, and achieve effective fusion of information.
Feature Extraction based Face Recognition, Gender and Age Classification
Directory of Open Access Journals (Sweden)
Venugopal K R
2010-01-01
Face recognition systems with large training sets for personal identification normally attain good accuracy. In this paper, we propose a Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm that requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction, and classification. The geometric features of facial images such as eyes, nose, and mouth are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification are done using posteriori class probability and an artificial neural network, respectively. It is observed that face recognition accuracy is 100%, while gender and age classification accuracies are around 98% and 94%, respectively.
Institute of Scientific and Technical Information of China (English)
张伟; 王晓青; 孙国清; 丁香; 袁小祥; 郭建兴
2013-01-01
In this paper, an improved method based on digital change detection of multi-temporal high-resolution remote sensing images is proposed to extract building damage caused by catastrophic earthquakes. It aims to improve classification accuracy and to reduce the impact of the deformation of corresponding image points caused by low registration precision, differing photographic modes, and differences in surface elevation and building height, by searching for the maximum image correlation coefficient of a point on the post-event image within a certain spatial domain around the registration point on the pre-event image, instead of using the conventional correlation coefficient exactly at the registration point. The improved method is applied to detect changes in buildings in a test area of the city of Dujiangyan after the 2008 Wenchuan earthquake, using pre-earthquake QuickBird remote sensing data acquired on July 22, 2005 and post-earthquake aerial images acquired on May 18, 2008. The results are compared with those of the traditional method and indicate that the improved method extracts seismic damage information more precisely and stably.
Directory of Open Access Journals (Sweden)
Jiang Ting
2010-01-01
We optimize the cluster structure to address problems such as the uneven energy consumption of radar sensor nodes and random cluster-head selection in traditional clustering routing algorithms. Based on a defined cost function for clusters, we present a clustering algorithm built on radio free-space path loss. In addition, we propose energy and distance pheromones based on the residual energy and aggregation of the radar sensor nodes, and, following a bionic heuristic approach, a new ant colony-based clustering algorithm for radar sensor networks. Simulation results show that this algorithm achieves a better balance of energy consumption and remarkably prolongs the lifetime of the radar sensor network.
Institute of Scientific and Technical Information of China (English)
曾宪钊; 成冀; 安欣; 方礼明
2002-01-01
This paper introduces a new Air Combat Intelligence Simulation System (ACISS) for a 32-versus-32 air combat and describes three methods: genetic algorithms (GA) for the multi-targeting decision and for learning the evading-missile rule base, neural networks (NN) for the maneuvering decision, and a time effectiveness algorithm (TEA) for adjudicating the air combat and evaluating missile-evasion effectiveness.
Parallel Implementation of Classification Algorithms Based on Cloud Computing Environment
Directory of Open Access Journals (Sweden)
Wenbo Wang
2012-09-01
As an important task in data mining, classification has received considerable attention in many applications, such as information retrieval and web searching. The growing volume of information produced by technological progress and the increasing individual needs of data mining make classifying very large-scale data a challenging task. To deal with this problem, many researchers have tried to design efficient parallel classification algorithms. This paper briefly introduces classification algorithms and cloud computing, analyzes the shortcomings of existing parallel classification algorithms, and then presents a new model for parallel classification. In particular, it introduces a parallel Naïve Bayes classification algorithm based on MapReduce, a simple yet powerful parallel programming technique. The experimental results demonstrate that the proposed algorithm improves on the original algorithm's performance and can process large datasets efficiently on commodity hardware.
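The MapReduce pattern for Naïve Bayes training can be sketched locally: map emits ((class, feature), 1) pairs per document and reduce sums the counts. Everything below runs in-process and only mirrors the data flow, not a real Hadoop/Spark deployment; the documents are invented:

```python
# Local sketch of the map/reduce data flow for Naive Bayes training.
from collections import Counter
from itertools import chain

def map_doc(labeled_doc):
    """Map phase: emit one count per class occurrence and per feature."""
    label, words = labeled_doc
    yield (("__class__", label), 1)      # class prior count
    for w in words:
        yield ((label, w), 1)            # per-class feature count

def reduce_counts(pairs):
    """Reduce phase: sum counts for each key."""
    counts = Counter()
    for key, v in pairs:
        counts[key] += v
    return counts

docs = [("spam", ["win", "cash"]), ("ham", ["meeting", "notes"]),
        ("spam", ["cash", "now"])]
counts = reduce_counts(chain.from_iterable(map_doc(d) for d in docs))
# counts[("spam", "cash")] == 2, counts[("__class__", "spam")] == 2
```

The classifier itself then only needs these counts (plus smoothing) to score a new document, which is why the training step parallelizes so cleanly.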
Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network
Wang, Xiaojia; Mao, Qirong; Zhan, Yongzhao
2008-11-01
There are many emotion features, and if all of them are employed to recognize emotions, redundant features may exist, recognition results are unsatisfactory, and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features using contribution analysis. Cluster analysis is applied to assess the effectiveness of the selected features, and the feature extraction time is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method improves both the recognition rate and the time of feature extraction.
Voronoi Based Nanocrystalline Generation Algorithm for Atomistic Simulations
2016-12-22
... with implementing randomly dispersed Voronoi tessellation algorithms for nanocrystalline construction ... generate a list of grain centers that are populated with seeds - spherical groups of atoms extracted from a reference file. This method uses a single ... the methods and code used to generate a nanocrystalline structure with a single reference file for seed extraction. Some of the code segments detailed ...
SAR Image Segmentation Based On Hybrid PSOGSA Optimization Algorithm
Directory of Open Access Journals (Sweden)
Amandeep Kaur
2014-09-01
Image segmentation is useful in many applications; it can identify the regions of interest in a scene or annotate the data. Existing segmentation algorithms fall into region-based segmentation, data clustering, and edge-based segmentation, where region-based segmentation includes the seeded and unseeded region growing algorithms, JSEG, and the fast scanning algorithm. Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images remains a challenging problem. We propose a fast SAR image segmentation method based on the hybrid Particle Swarm Optimization-Gravitational Search Algorithm (PSO-GSA). In this method, threshold estimation is treated as a search for an appropriate value in a continuous grayscale interval, and PSO-GSA is employed to find the optimal threshold. Experimental results indicate that our method is superior to GA-based, AFS-based, and ABC-based methods in terms of segmentation accuracy, segmentation time, and thresholding.
A Wire-speed Routing Lookup Algorithm Based on TCAM
Institute of Scientific and Technical Information of China (English)
李小勇; 王志恒; 白英彩; 刘刚
2004-01-01
An internal structure for Ternary Content Addressable Memory (TCAM) is designed and a Sorting Prefix Block (SPB) algorithm is presented, a wire-speed routing lookup algorithm based on TCAM. The SPB algorithm makes full use of the parallelism of TCAM and improves its utilization through optimal partitioning. With the aid of an effective management algorithm and a memory image, SPB separates critical searching from assistant searching and improves search efficiency. A performance test indicates that this algorithm can work with different TCAMs to meet the requirements of wire-speed routing lookup.
A Multi-Scale Gradient Algorithm Based on Morphological Operators
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
Watershed transformation is a powerful morphological tool for image segmentation. However, the performance of the image segmentation methods based on watershed transformation depends largely on the algorithm for computing the gradient of the image to be segmented. In this paper, we present a multi-scale gradient algorithm based on morphological operators for watershed-based image segmentation, with effective handling of both step and blurred edges. We also present an algorithm to eliminate the local minima produced by noise and quantization errors. Experimental results indicate that watershed transformation with the algorithms proposed in this paper produces meaningful segmentations, even without a region-merging step.
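A multi-scale morphological gradient of the flavor described above can be sketched on a 1-D signal with flat structuring elements: average, over several scales, the difference between dilation and erosion. The paper works on 2-D images and handles blurred edges more carefully; the scale set below is an illustrative assumption:

```python
# Sketch of a multi-scale morphological gradient on a 1-D signal.
import numpy as np

def dilate(f, k):
    """Flat grayscale dilation: sliding max over a window of width k."""
    g = np.pad(f, k // 2, mode="edge")
    return np.array([g[i:i + k].max() for i in range(len(f))])

def erode(f, k):
    """Flat grayscale erosion: sliding min over a window of width k."""
    g = np.pad(f, k // 2, mode="edge")
    return np.array([g[i:i + k].min() for i in range(len(f))])

def multiscale_gradient(f, scales=(3, 5, 7)):
    """Average the dilation-minus-erosion gradient over several scales."""
    return np.mean([dilate(f, k) - erode(f, k) for k in scales], axis=0)

step = np.array([0.0] * 8 + [1.0] * 8)   # ideal step edge at index 7/8
g = multiscale_gradient(step)
# g peaks at the step and vanishes far from it
```

Feeding such a gradient (rather than a single-scale one) into the watershed transform is what lets the method respond to both sharp and blurred edges.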
Cycle-Based Algorithm Used to Accelerate VHDL Simulation
Institute of Scientific and Technical Information of China (English)
杨勋; 刘明业
2000-01-01
The cycle-based algorithm has very high performance for the simulation of synchronous designs, but it is confined to synchronous designs and is not as accurate as the event-driven algorithm. In this paper, a revised cycle-based algorithm is proposed and implemented in a VHDL simulator. An event-driven simulation engine and a cycle-based simulation engine are embedded in the same simulation environment and can be applied to asynchronous and synchronous designs respectively. Thus the simulation performance is improved without losing the flexibility and accuracy of the event-driven algorithm.
Itoh, Yoshiaki; Tanaka, Kazuyo
2004-08-01
Word frequency in a document has often been utilized in text searching and summarization. Similarly, identifying frequent words or phrases in a speech data set for searching and summarization would also be meaningful. However, obtaining word frequency in a speech data set is difficult, because frequent words are often special terms in the speech and cannot be recognized by a general speech recognizer. This paper proposes another approach that is effective for automatic extraction of such frequent word sections in a speech data set. The proposed method is applicable to any domain of monologue speech, because no language models or specific terms are required in advance. The extracted sections can be regarded as speech labels of some kind or a digest of the speech presentation. The frequent word sections are determined by detecting similar sections, which are sections of audio data that represent the same word or phrase. The similar sections are detected by an efficient algorithm, called Shift Continuous Dynamic Programming (Shift CDP), which realizes fast matching between arbitrary sections in the reference speech pattern and those in the input speech, and enables frame-synchronous extraction of similar sections. In experiments, the algorithm is applied to extract the repeated sections in oral presentation speeches recorded in academic conferences in Japan. The results show that Shift CDP successfully detects similar sections and identifies the frequent word sections in individual presentation speeches, without prior domain knowledge, such as language models and terms.
QOS-BASED MULTICAST ROUTING OPTIMIZATION ALGORITHMS FOR INTERNET
Institute of Scientific and Technical Information of China (English)
(no author listed)
2006-01-01
Most multimedia applications require strict Quality-of-Service (QoS) guarantees during communication between a single source and multiple destinations. This paper presents a QoS multicast routing algorithm based on a genetic algorithm (QMRGA). Simulation results demonstrate that the algorithm is capable of discovering a set of near-optimized, non-dominated QoS multicast routes within a few iterations, even in network environments with uncertain parameters.
Fast interactive segmentation algorithm of image sequences based on relative fuzzy connectedness
Institute of Scientific and Technical Information of China (English)
Tian Chunna; Gao Xinbo
2005-01-01
A fast interactive segmentation algorithm for image sequences based on relative fuzzy connectedness is presented. Compared with the original algorithm, the proposed one, with the same accuracy, accelerates segmentation three-fold for a single image. The fast algorithm is also extended from a single object to multiple objects and from a single image to image sequences, so that multiple objects can be segmented from complex backgrounds and image sequences can be segmented in batches. In addition, a post-processing scheme is incorporated that extracts a smooth, one-pixel-wide edge for each segmented object. The experimental results illustrate that the proposed algorithm can quickly and reliably extract the object regions of interest from medical images or image sequences as well as man-made images, with only a little interaction.
A voltage resonance-based single-ended online fault location algorithm for DC distribution networks
Institute of Scientific and Technical Information of China (English)
JIA Ke; LI Meng; BI TianShu; YANG QiXun
2016-01-01
A novel single-ended online fault location algorithm is investigated for DC distribution networks. The proposed algorithm calculates the fault distance based on the characteristics of the voltage resonance, and Prony's method is introduced to extract those characteristics. A novel method is proposed to solve the pseudo dual-root problem in the calculation process, and multiple data windows are adopted to enhance the robustness of the algorithm, with an index proposed to evaluate the accuracy and validity of the results derived from the various data windows. The performance of the proposed algorithm in different fault scenarios was evaluated using PSCAD/EMTDC simulations. The results show that the algorithm can locate faults with transient resistance using 1.6 ms of DC-side voltage data after fault inception, and offers good precision.
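Prony's method, which the algorithm above uses to extract the resonance characteristics, can be sketched numerically for a single damped mode: fit a linear prediction model to noise-free samples and recover damping and frequency from the roots of the characteristic polynomial. The signal parameters below are illustrative, not from the paper:

```python
# Hedged numerical sketch of Prony's method for one damped sinusoid.
import numpy as np

n = np.arange(60)
x = np.exp(-0.1 * n) * np.cos(0.5 * n)   # damping -0.1, frequency 0.5 rad

# Linear prediction: x[k] = a1*x[k-1] + a2*x[k-2] (model order 2)
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Poles are the roots of z^2 - a1*z - a2; their logarithms give the
# continuous parameters: real part = damping, imaginary part = frequency.
poles = np.roots([1.0, -a1, -a2])
logs = np.log(poles)
# logs.real ~ -0.1, |logs.imag| ~ 0.5
```

In the fault-location setting, the recovered pole characteristics (rather than this toy signal's) feed the distance calculation, and multiple data windows guard against poorly conditioned fits.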
Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm
Abbas, Ahmed
2013-01-07
A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice for determining how many peaks among thousands of candidates should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem: given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to other prediction selection problems in bioinformatics. The source code, documentation, and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx.
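The Benjamini-Hochberg step the paper builds on is fully specified: given p-values, select the largest rank k with p_(k) ≤ (k/m)·α and keep the k smallest. A minimal sketch (the p-values and α below are illustrative, not from the paper's benchmark):

```python
# Sketch of Benjamini-Hochberg selection over a list of p-values.

def bh_select(pvalues, alpha=0.05):
    """Return indices (into the original list) of the selected tests."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank               # remember the largest passing rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
selected = bh_select(pvals)        # -> [0, 1]: only the two smallest pass
```

Applied to peak picking, the candidate peaks' confidence scores are first converted to p-values, and `selected` determines how many peaks to keep.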
Automatic peak selection by a Benjamini-Hochberg-based algorithm.
Directory of Open Access Journals (Sweden)
Ahmed Abbas
Full Text Available A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into [Formula: see text]-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. 
The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
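The step-up selection rule at the heart of the approach above is the standard Benjamini-Hochberg procedure and can be sketched in a few lines; the candidate p-values below are invented for illustration, not taken from the paper.

```python
def bh_select(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up: return how many top-ranked candidates
    to keep. Finds the largest k with p_(k) <= alpha * k / m."""
    m = len(pvalues)
    k = 0
    for i, p in enumerate(sorted(pvalues), start=1):
        if p <= alpha * i / m:
            k = i
    return k

# Candidate peaks already converted to p-values and sorted by confidence.
pvals = [0.001, 0.008, 0.012, 0.03, 0.20, 0.45, 0.70, 0.91]
print(bh_select(pvals, alpha=0.05))  # → 3
```

Everything ranked above the returned cutoff is kept; the rest of the sorted candidate list is discarded.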
Improved FCLSD algorithm based on LTE/LTE-A system
Directory of Open Access Journals (Sweden)
Kewen Liu
2011-08-01
Full Text Available In order to meet the requirements of high data rate, large capacity and low latency in LTE, advanced MIMO technology has been introduced into the LTE system, where it has become one of the core technologies of the physical layer. Among the various MIMO detection algorithms, the ZF and MMSE linear detection algorithms are the simplest, but their performance is poor. The MLD algorithm achieves optimal detection performance, but its complexity is too high for practical use. The CLSD algorithm offers detection performance similar to MLD at lower complexity, but its nondeterministic complexity creates difficulties for hardware implementation. The FCLSD algorithm retains the advantages of CLSD while resolving these practical problems. Based on the FCLSD algorithm and combined with practical LTE/LTE-A system applications, this article designs two improved algorithms. The two improved algorithms can be used flexibly and adaptively across the various antenna configurations and modulation scenarios of the LTE/LTE-A spatial multiplexing MIMO system. Simulation results show that the improved algorithms achieve performance approximating that of the original FCLSD algorithm; in addition, they have fixed complexity and can be implemented with parallel processing.
A design of space robot multi-target capture algorithm based on DSP
Institute of Scientific and Technical Information of China (English)
MA Xiao-na; HU Bing-liang; LIU Xue-bin; LIU Hui; LIU Qian-wen
2009-01-01
To correctly capture spatial targets against a cluttered and moving celestial background, a new multi-target capture algorithm was proposed, which is a comparative difference algorithm based on the combination of centroid extraction and despun registration of efficient points. Moreover, this algorithm was applied in a DSP-based image processing system featuring high speed and high performance. The procedure is as follows: first, label the efficient points in the frame and extract their centroids; second, perform despun registration according to the reference rotation angles provided by the space robot position system; third, translate and register the centroid coordinates of efficient points in the reference frames, obtaining the registration points from the principle that the number of matching centroid coordinates is greatest when the frames are completely registered; finally, eliminate identical background points using the comparative difference method. The results show that this image processing system can meet the needs of the whole system.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
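A minimal sketch of the list-based cooling idea under stated assumptions: 2-opt neighbourhood moves, an illustrative list length and iteration budget, and one plausible reading of the list-update rule (an accepted uphill move replaces the current maximum temperature with the temperature that move implies); the paper's exact update may differ in detail.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, i, j):
    # Reverse the segment tour[i:j] (a classic 2-opt neighbour).
    return tour[:i] + tour[i:j][::-1] + tour[j:]

def lbsa_tsp(dist, iters=2000, list_len=40, seed=1):
    """Sketch of list-based SA: the maximum temperature in a fixed-length
    list drives the Metropolis test, and accepted uphill moves replace that
    maximum, so the schedule adapts to the problem."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    # Seed the temperature list with costs of random candidate moves.
    temps = []
    while len(temps) < list_len:
        i, j = sorted(rng.sample(range(n), 2))
        temps.append(abs(tour_length(two_opt(tour, i, j), dist) - cur_len) + 1e-6)
    temps.sort()
    for _ in range(iters):
        t_max = temps[-1]
        i, j = sorted(rng.sample(range(n), 2))
        cand = two_opt(tour, i, j)
        delta = tour_length(cand, dist) - cur_len
        if delta <= 0:
            tour, cur_len = cand, cur_len + delta
        else:
            r = max(rng.random(), 1e-12)
            if r < math.exp(-delta / t_max):
                # The accepted uphill move implies temperature -delta/ln(r),
                # which is always below t_max, so the list slowly cools.
                temps[-1] = -delta / math.log(r)
                temps.sort()
                tour, cur_len = cand, cur_len + delta
        if cur_len < best_len:
            best, best_len = tour[:], cur_len
    return best, best_len
```

Note the only user-facing knob is the list length, which is the point of the method: the temperatures themselves are learned from the moves the search actually encounters.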
Adaptive local backlight dimming algorithm based on local histogram and image characteristics
Nadernejad, Ehsan; Burini, Nino; Korhonen, Jari; Forchhammer, Søren; Mantel, Claire
2013-02-01
Liquid Crystal Displays (LCDs) with Light Emitting Diode (LED) backlight are a very popular display technology, used for instance in television sets, monitors and mobile phones. This paper presents a new backlight dimming algorithm that exploits the characteristics of the target image, such as the local histograms and the average pixel intensity of each backlight segment, to reduce the power consumption of the backlight and enhance image quality. The local histogram of the pixels within each backlight segment is calculated and, based on this average, an adaptive quantile value is extracted. A classification into three classes based on the average luminance value is performed and, depending on the image luminance class, the extracted information on the local histogram determines the corresponding backlight value. The proposed method has been applied on two modeled screens: one with a high resolution direct-lit backlight, and the other with 16 edge-lit backlight segments placed in two columns and eight rows. We have compared the proposed algorithm against several known backlight dimming algorithms by simulation, and the results show that the proposed algorithm provides a better trade-off between power consumption and image quality preservation than the other algorithms representing the state of the art among feature based backlight algorithms.
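A rough sketch of the per-segment dimming step as described above: a local histogram quantile per backlight segment, with the quantile adapted to a three-way luminance class. The class thresholds (85, 170) and the three quantile values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def backlight_levels(img, grid=(8, 2), quantiles=(0.98, 0.95, 0.90)):
    """Per-segment backlight level in [0, 1] from an adaptive histogram
    quantile of an 8-bit grayscale image."""
    h, w = img.shape
    rows, cols = grid
    q_dark, q_mid, q_bright = quantiles
    levels = np.zeros(grid)
    for r in range(rows):
        for c in range(cols):
            seg = img[r * h // rows:(r + 1) * h // rows,
                      c * w // cols:(c + 1) * w // cols]
            mean = seg.mean()
            # Darker segments use a higher quantile so bright details survive.
            q = q_dark if mean < 85 else (q_mid if mean < 170 else q_bright)
            levels[r, c] = np.quantile(seg, q) / 255.0
    return levels
```

The 8-by-2 default grid mirrors the paper's edge-lit screen with segments in two columns and eight rows.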
Extracting quantum dynamics from genetic learning algorithms through principal component analysis
White, J L; Bucksbaum, P H
2004-01-01
Genetic learning algorithms are widely used to control ultrafast optical pulse shapes for photo-induced quantum control of atoms and molecules. An outstanding issue is how to use the solutions found by these algorithms to learn about the system's quantum dynamics. We propose a simple method based on principal component analysis of the control space, which can reveal the degrees of freedom responsible for control, and aid in the construction of an effective Hamiltonian for the dynamics.
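The proposed analysis reduces to ordinary PCA over the population of pulse-shape parameter vectors found by the learning algorithm; a sketch, assuming one solution vector per row:

```python
import numpy as np

def control_space_pca(solutions):
    """PCA of a GA population of pulse-shape parameter vectors (one row per
    solution). The leading components point along the directions the search
    actually exercised, i.e. the candidate control degrees of freedom."""
    X = np.asarray(solutions, dtype=float)
    Xc = X - X.mean(axis=0)
    # SVD of the centred population: rows of Vt are principal directions.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / max(len(X) - 1, 1)
    return Vt, var / var.sum()
```

If a small number of components explain most of the variance, the effective Hamiltonian the authors describe only needs to couple to those few degrees of freedom.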
Institute of Scientific and Technical Information of China (English)
宋犇; 洪伟
2002-01-01
There exist complicated waveguide modes as well as surface waves in the electromagnetic field induced by a horizontal electric dipole in layered lossless dielectrics between two ground planes. In the spectral domain, all these modes can be characterized by the rational parts with the real poles of the vector and scalar potentials. The accurate extraction of these modes plays an important role in the evaluation of the Green's function in the spatial domain. In this paper, a new algorithm based on rational approximation is presented, which can accurately extract all the real poles and the residues of each pole simultaneously. Thus, we can get all the surface wave modes and waveguide modes, which is of great help in the calculation of the spatial domain Green's function. The numerical results demonstrate the accuracy and efficiency of the proposed method.
Clonal Selection Algorithm Based Iterative Learning Control with Random Disturbance
Directory of Open Access Journals (Sweden)
Yuanyuan Ju
2013-01-01
Full Text Available The clonal selection algorithm is improved and proposed as a method to solve optimization problems in iterative learning control, and a clonal selection algorithm based optimal iterative learning control algorithm with random disturbance is presented. In the algorithm, the size of the search space is decreased and the convergence speed of the algorithm is increased. In addition, a model modifying device is used in the algorithm to cope with the uncertainty in the plant model. Simulations on nonlinear plants show that the convergence speed is satisfactory regardless of whether or not the plant model is precise. The simulations also verify that the controlled system with random disturbance can reach stability by using the improved iterative learning control law but not the traditional control law.
An optimal scheduling algorithm based on task duplication
Institute of Scientific and Technical Information of China (English)
Ruan Youlin; Liu Gan; Zhu Guangxi; Lu Xiaofeng
2005-01-01
When the communication time is relatively shorter than the computation time for every task, the task duplication based scheduling (TDS) algorithm proposed by Darbha and Agrawal generates an optimal schedule. Park and Choe also proposed an extended TDS algorithm whose optimality condition is less restricted than that of the TDS algorithm, but the condition is very complex and is difficult to satisfy when the number of tasks is large. An efficient algorithm is proposed whose optimality condition is less restricted and simpler than those of both algorithms, and whose schedule length is also shorter. The time complexity of the proposed algorithm is O(v^2), where v represents the number of tasks.
OPTIMIZATION BASED ON IMPROVED REAL-CODED GENETIC ALGORITHM
Institute of Scientific and Technical Information of China (English)
ShiYu; YuShenglin
2002-01-01
An improved real-coded genetic algorithm is proposed for the global optimization of functions. The new algorithm is based on an assessment of the searching performance of the basic real-coded genetic algorithm. The operations of the basic real-coded genetic algorithm are briefly discussed and selected. A kind of chaos sequence is described in detail and added to the new algorithm as a disturbance factor. The strategy of field partition is also used to improve the structure of the new algorithm. Numerical experiments show that the new genetic algorithm can find the global optimum of complex functions with satisfying precision.
Robust adaptive beamforming algorithm based on Bayesian approach
Institute of Scientific and Technical Information of China (English)
Xin SONG; Jinkuan WANG; Yinghua HAN; Han WANG
2008-01-01
The performance of adaptive array beamforming algorithms substantially degrades in practice because of a slight mismatch between actual and presumed array responses to the desired signal. A novel robust adaptive beamforming algorithm based on a Bayesian approach is therefore proposed. The algorithm responds to the current environment by estimating the direction of arrival (DOA) of the actual signal from observations. Computational complexity of the proposed algorithm can thus be reduced compared with other algorithms, since a recursive method is used to obtain the inverse matrix. In addition, it has strong robustness to the uncertainty of the actual signal DOA and makes the mean output array signal-to-interference-plus-noise ratio (SINR) consistently approach the optimum. Simulation results show that the proposed algorithm is better in performance than conventional adaptive beamforming algorithms.
New Gradient-Based Variable Step Size LMS Algorithms
Directory of Open Access Journals (Sweden)
Yanling Hao
2008-03-01
Full Text Available Two new gradient-based variable step size least-mean-square (VSSLMS) algorithms are proposed on the basis of a concise assessment of the weaknesses of previous VSSLMS algorithms in high-measurement-noise environments. The first algorithm is designed for applications where the measurement noise signal is statistically stationary and the second for statistically nonstationary noise. Steady-state performance analyses are provided for both algorithms and verified by simulations. The proposed algorithms are also confirmed by simulations to obtain both a fast convergence rate and a small steady-state excess mean square error (EMSE), and to outperform existing VSSLMS algorithms. To facilitate practical application, parameter choice guidelines are provided for the new algorithms.
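A hedged sketch of a gradient-based variable step size LMS in the spirit of the abstract: the step size is nudged along the correlation of successive gradient estimates. This specific update rule and the parameter values are illustrative, not the rules derived in the paper.

```python
import numpy as np

def vss_lms(x, d, taps=4, mu0=0.02, rho=1e-5, mu_min=1e-3, mu_max=0.05):
    """Adaptive FIR identification with a variable step size: mu grows while
    successive gradient estimates agree (far from convergence) and freezes
    as the error dies out. Returns final weights and the error signal."""
    w = np.zeros(taps)
    mu = mu0
    g_prev = np.zeros(taps)
    err = np.zeros(len(d))
    for n in range(taps - 1, len(d)):
        u = x[n - taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-taps+1]
        e = d[n] - w @ u
        g = e * u                        # instantaneous gradient estimate
        mu = float(np.clip(mu + rho * (g @ g_prev), mu_min, mu_max))
        w = w + mu * g
        g_prev = g
        err[n] = e
    return w, err
```

With a noise-free desired signal generated by a short FIR filter, the weights converge to that filter, which is the usual sanity check for any LMS variant.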
Generating Decision Trees Method Based on Improved ID3 Algorithm
Institute of Scientific and Technical Information of China (English)
Yang Ming; Guo Shuxu; Wang Jun
2011-01-01
The ID3 algorithm is a classical decision tree learning algorithm in data mining. The algorithm tends to choose attributes with more values, which affects the efficiency of classification and prediction when building a decision tree. This article proposes a new approach based on an improved ID3 algorithm. The new algorithm introduces an importance factor λ when calculating the information entropy. It can strengthen the weight of important attributes of a tree and reduce that of non-important attributes. The algorithm overcomes the flaw of the traditional ID3 algorithm, which tends to choose the attributes with more values, and also improves the efficiency and flexibility of the process of generating decision trees.
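One plausible reading of the importance factor λ is to scale each attribute's information gain by a user-assigned weight; the paper's exact way of injecting λ into the entropy is not reproduced here, so the sketch below is an assumption.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def weighted_gain(rows, labels, attr, importance=1.0):
    """ID3 information gain of attribute index `attr`, scaled by a
    user-assigned importance factor (lambda in the abstract)."""
    n = len(rows)
    remainder = 0.0
    for v in set(r[attr] for r in rows):
        sub = [labels[i] for i, r in enumerate(rows) if r[attr] == v]
        remainder += len(sub) / n * entropy(sub)
    return importance * (entropy(labels) - remainder)
```

At each node the tree builder would split on the attribute maximizing `weighted_gain`, so a large λ can promote a domain-important attribute over a many-valued but uninformative one.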
Fuzzy Rules for Ant Based Clustering Algorithm
Directory of Open Access Journals (Sweden)
Amira Hamdi
2016-01-01
Full Text Available This paper provides a new intelligent technique for the semisupervised data clustering problem that combines the Ant System (AS) algorithm with the fuzzy c-means (FCM) clustering algorithm. Our proposed approach, called the F-ASClass algorithm, is a distributed algorithm inspired by the foraging behavior observed in ant colonies. The ability of ants to find the shortest path forms the basis of our proposed approach. In the first step, several colonies of cooperating entities, called artificial ants, are used to find shortest paths in a complete graph that we call the graph-data. The number of colonies used in F-ASClass is equal to the number of clusters in the dataset. The partition matrix of the dataset found by the artificial ants is then given, in the second step, to the fuzzy c-means technique in order to assign the unclassified objects generated in the first step. The proposed approach is tested on artificial and real datasets, and its performance is compared with those of the K-means, K-medoid, and FCM algorithms. The experimental section shows that F-ASClass performs better according to error rate, classification accuracy, and separation index.
Fourier transform profilometry based on mean envelope extraction
Zhang, Xiaoxuan; Huang, Shujun; Gao, Nan; Zhang, Zonghua
2017-02-01
Based on an image pre-processing algorithm, a three-dimensional (3D) object measurement method is proposed by combining time domain and frequency domain analysis. Firstly, extreme points of sinusoidal fringes under the disturbance of noise are accurately extracted. Secondly, mean envelope of the fringe is obtained through appropriate interpolation method and then removed. Thirdly, phase information is extracted by using specific filtering in Fourier spectrum of the pre-processed fringe pattern. Finally, simulated and experimental results show a good property of the proposed method in accuracy and measurement range. The proposed method can achieve 3D shape of objects having large slopes and/or discontinuous surfaces from one-shot acquisition by using color fringe projection technique and will have wide applications in the fields of fast measurement.
Ground extraction from airborne laser data based on wavelet analysis
Xu, Liang; Yang, Yan; Jiang, Bowen; Li, Jia
2007-11-01
With the advantages of high resolution and accuracy, airborne laser scanning data are widely used in topographic mapping. In order to generate a DTM, measurements from object features such as buildings, vehicles and vegetation have to be classified and removed. However, the automatic extraction of bare earth from point clouds acquired by airborne laser scanning equipment remains a problem in LIDAR data filtering nowadays. In this paper, a filter algorithm based on wavelet analysis is proposed. Relying on the capability of detecting discontinuities of continuous wavelet transform and the feature of multi-resolution analysis, the object points can be removed, while ground data are preserved. In order to evaluate the performance of this approach, we applied it to the data set used in the ISPRS filter test in 2003. 15 samples have been tested by the proposed approach. Results showed that it filtered most of the objects like vegetation and buildings, and extracted a well defined ground model.
Selective electromembrane extraction based on isoelectric point
DEFF Research Database (Denmark)
Huang, Chuixiu; Gjelstad, Astrid; Pedersen-Bjergaard, Stig
2015-01-01
For the first time, selective isolation of a target peptide based on the isoelectric point (pI) was achieved using a two-step electromembrane extraction (EME) approach with a thin flat membrane-based EME device. In this approach, step #1 was an extraction process, where both the target peptide...... angiotensin II antipeptide (AT2 AP, pI=5.13) and the matrix peptides (pI>5.13) angiotensin II (AT2), neurotensin (NT), angiotensin I (AT1) and leu-enkephalin (L-Enke) were all extracted as net positive species from the sample (pH 3.50), through a supported liquid membrane (SLM) of 1-nonanol diluted with 2......, and the target remained in the acceptor solution. The acceptor solution pH, the SLM composition, the extraction voltage, and the extraction time during the clean-up process (step #2) were important factors influencing the separation performance. An acceptor solution pH of 5.25 for the clean-up process slightly...
A POCS-Based Algorithm for Blocking Artifacts Reduction
Institute of Scientific and Technical Information of China (English)
ZHAO Yi-hong; CHENG Guo-hua; YU Song-yu
2006-01-01
An algorithm for blocking artifacts reduction in the DCT domain for block-based image coding was developed. The algorithm is based on the projection onto convex sets (POCS) theory. Due to the fact that the DCT characteristics of shifted blocks differ because of the blocking artifacts, a novel smoothness constraint set and the corresponding projection operator were proposed to reduce the blocking artifacts by discarding the undesired high frequency coefficients in the shifted DCT blocks. The experimental results show that the proposed algorithm outperforms conventional algorithms in terms of objective quality, subjective quality, and convergence property.
A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map
Directory of Open Access Journals (Sweden)
Xizhong Wang
2013-01-01
Full Text Available We introduce a parallel chaos-based encryption algorithm that takes advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model using the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for single-processor architectures. The experimental results show that the chaos-based cryptosystem possesses good statistical properties. The parallel algorithm provides much better performance than the serial one and would be useful for encrypting/decrypting large files or multimedia.
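The PWLCM itself is standard and easy to sketch. The byte quantisation and the toy XOR cipher around it below are illustrative assumptions; the paper's actual keystream construction and its parallel MPI design are not reproduced.

```python
def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map on (0, 1) with
    control parameter p in (0, 0.5); the map is symmetric about x = 0.5."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)

def keystream(x0, p, n):
    """Derive n pseudo-random bytes by iterating the map from seed x0.
    The quantisation to bytes is an assumption, not the paper's scheme."""
    out, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return out

def xor_crypt(data, x0, p):
    """Toy stream cipher around the map; applying it twice restores the data."""
    return bytes(b ^ k for b, k in zip(data, keystream(x0, p, len(data))))
```

Since XOR with the same keystream is an involution, encryption and decryption are the same call; (x0, p) plays the role of the secret key.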
Heuristic Reduction Algorithm Based on Pairwise Positive Region
Institute of Scientific and Technical Information of China (English)
QI Li; LIU Yu-shu
2007-01-01
To guarantee the optimal reduct set, a heuristic reduction algorithm is proposed, which considers the distinguishing information between the members of each pair of decision classes. Firstly, the pairwise positive region is defined, based on which the pairwise significance measure is calculated between the members of each pair of classes. Finally, the weighted pairwise significance of each attribute is used as the attribute reduction criterion, which indicates the necessity of attributes very well. By introducing a noise tolerance factor, the new algorithm can tolerate noise to some extent. Experimental results show the advantages of our novel heuristic reduction algorithm over the traditional attribute dependency based algorithm.
Survey of gene splicing algorithms based on reads.
Si, Xiuhua; Wang, Qian; Zhang, Lei; Wu, Ruo; Ma, Jiquan
2017-09-05
Gene splicing is the process of assembling a large number of unordered short sequence fragments to the original genome sequence as accurately as possible. Several popular splicing algorithms based on reads are reviewed in this article, including reference genome algorithms and de novo splicing algorithms (Greedy-extension, Overlap-Layout-Consensus graph, De Bruijn graph). We also discuss a new splicing method based on the MapReduce strategy and Hadoop. By comparing these algorithms, some conclusions are drawn and some suggestions on gene splicing research are made.
PHC: A Fast Partition and Hierarchy-Based Clustering Algorithm
Institute of Scientific and Technical Information of China (English)
ZHOU HaoFeng(周皓峰); YUAN QingQing(袁晴晴); CHENG ZunPing(程尊平); SHI BaiLe(施伯乐)
2003-01-01
Cluster analysis is a process to classify data in a specified data set. In this field, much attention is paid to high-efficiency clustering algorithms. In this paper, the features of current partition-based and hierarchy-based algorithms are reviewed, and a new hierarchy-based algorithm PHC is proposed by combining the advantages of both kinds of algorithms; it uses cohesion and closeness to amalgamate the clusters. Compared with similar algorithms, the performance of PHC is improved, and the quality of clustering is guaranteed. Both features were proved by the theoretical and experimental analyses in the paper.
Algorithm for heart rate extraction in a novel wearable acoustic sensor.
Chen, Guangwei; Imtiaz, Syed Anas; Aguilar-Pelaez, Eduardo; Rodriguez-Villegas, Esther
2015-02-01
Phonocardiography is a widely used method of listening to the heart sounds and indicating the presence of cardiac abnormalities. Each heart cycle consists of two major sounds - S1 and S2 - that can be used to determine the heart rate. The conventional method of acoustic signal acquisition involves placing the sound sensor at the chest where this sound is most audible. Presented is a novel algorithm for the detection of S1 and S2 heart sounds and the use of them to extract the heart rate from signals acquired by a small sensor placed at the neck. This algorithm achieves an accuracy of 90.73 and 90.69%, with respect to heart rate value provided by two commercial devices, evaluated on more than 38 h of data acquired from ten different subjects during sleep in a pilot clinical study. This is the largest dataset for acoustic heart sound classification and heart rate extraction in the literature to date. The algorithm in this study used signals from a sensor designed to monitor breathing. This shows that the same sensor and signal can be used to monitor both breathing and heart rate, making it highly useful for long-term wearable vital signs monitoring.
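Once S1 onsets have been detected, the heart-rate step reduces to interval averaging; the sketch below assumes detection and S1/S2 discrimination have already been done upstream, which is the hard part of the algorithm described above.

```python
def heart_rate_bpm(s1_times):
    """Heart rate in beats per minute from successive S1 onset times
    (seconds). One S1-to-S1 interval spans exactly one cardiac cycle."""
    if len(s1_times) < 2:
        raise ValueError("need at least two S1 events")
    intervals = [b - a for a, b in zip(s1_times, s1_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

For example, S1 sounds spaced 0.8 s apart correspond to 75 beats per minute.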
Brain MR image segmentation improved algorithm based on probability
Liao, Hengxu; Liu, Gang; Guo, Xiantang
2017-08-01
The local weight voting algorithm is a mainstream segmentation algorithm. It takes full account of the influence of the image likelihood and the prior probabilities of labels on the segmentation results. But this method can still be improved, since its essence is to pick the label with the maximum probability. If the probability of a label is 70%, that may be acceptable in mathematics, but in the actual segmentation it may be wrong. So we use a matrix completion algorithm as a supplement. When the probability from the former is larger, the result of the former algorithm is adopted; when the probability from the latter is larger, the result of the latter algorithm is adopted. This is equivalent to adding an automatic algorithm selection switch that can theoretically ensure that the accuracy of the proposed algorithm is superior to the local weight voting algorithm. At the same time, we propose an improved matrix completion algorithm based on the enumeration method. In addition, this paper also uses a multi-parameter registration model to reduce the influence of registration on the segmentation. The experimental results show that the accuracy of the algorithm is better than that of common segmentation algorithms.
Adaptive image contrast enhancement algorithm for point-based rendering
Xu, Shaoping; Liu, Xiaoping P.
2015-03-01
Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.
Research on Algorithms for Mining Distance-Based Outliers
Institute of Scientific and Technical Information of China (English)
WANG Lizhen; ZOU Likun
2005-01-01
Outlier detection is an important and valuable research topic in KDD (knowledge discovery in databases). The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even weather forecasting. Among existing methods for finding outliers, the notion of DB- (distance-based) outliers is not restricted computationally to small values of the number of dimensions k and goes beyond the data space. Here, we study algorithms for mining DB-outliers, focusing on algorithms unlimited by k. First, we present a partition-based algorithm (the PBA). The key idea is to gain efficiency by divide-and-conquer. Second, we present an optimized algorithm called the object-class-based algorithm (the OCBA). The computation of this algorithm is independent of k, and its efficiency is as good as the cell-based algorithm. We provide experimental results showing that the two new algorithms have better execution efficiency.
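The DB(p, D)-outlier definition of Knorr and Ng can be checked directly with the simple nested-loop baseline that algorithms like the PBA and OCBA above aim to improve on:

```python
def db_outliers(points, p, D):
    """Nested-loop DB(p, D)-outlier mining: a point is an outlier if at
    least fraction p of the other points lie farther than D from it.
    Returns the indices of outlying points."""
    n = len(points)
    out = []
    for i, a in enumerate(points):
        far = sum(1 for j, b in enumerate(points)
                  if j != i and sum((x - y) ** 2 for x, y in zip(a, b)) > D * D)
        if far / (n - 1) >= p:
            out.append(i)
    return out
```

This baseline is O(n^2) but works in any dimension k, which is exactly the regime the abstract targets.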
Multi-Scale Analysis Based Curve Feature Extraction in Reverse Engineering
Institute of Scientific and Technical Information of China (English)
YANG Hongjuan; ZHOU Yiqi; CHEN Chengjun; ZHAO Zhengxu
2006-01-01
A sectional curve feature extraction algorithm based on multi-scale analysis is proposed for reverse engineering. The algorithm consists of two parts: feature segmentation and feature classification. In the first part, curvature scale space is applied to multi-scale analysis and original feature detection. To obtain the primary and secondary curve primitives, feature fusion is realized by multi-scale feature detection information transmission. In the second part, a projection height function based on the area of a quadrilateral is presented, which improves the criteria for sectional curve feature classification. Results on synthetic curves and practically scanned sectional curves are given to illustrate the efficiency of the proposed algorithm for feature extraction. The consistency between feature extraction based on multi-scale curvature analysis and curve primitives is verified.
Institute of Scientific and Technical Information of China (English)
潘赛虎; 李文杰; 张义
2014-01-01
Classification of electroencephalogram (EEG) signals is an important issue in brain-computer interfaces (BCI). For the classification of EEG signals, in this paper we collect left-right hand motor imagery EEG data from 7 subjects, recorded with EGI-64 scalp electrodes placed according to the international 10/20 system. Firstly, the EEG data are denoised with extended Infomax independent component analysis (ICA); secondly, features of the C3 and C4 electrodes are extracted using common spatial patterns (CSP); finally, the average classification rates of Fisher linear discriminant analysis (FLDA), Bayesian, radial basis function (RBF) neural network and BP neural network methods are compared. The classification results show that the average classification rate of the neural networks is higher than that of the other two methods, and that the average classification rate of the BP neural network reaches 95.36%, although the other three methods run noticeably faster than the BP neural network. The results provide a basis for real-time BCI system implementation.
Directory of Open Access Journals (Sweden)
Dandan Wang
2015-03-01
Full Text Available The key problem for picking robots is to locate the picking points of fruit. A method based on the moment of inertia and symmetry of apples is proposed in this paper to locate the picking points of apples. Image pre-processing procedures, which are crucial to improving the accuracy of the location, were carried out to remove noise and smooth the edges of apples. The moment of inertia method has the disadvantage of high computational complexity, which should be solved, so convex hull was used to improve this problem. To verify the validity of this algorithm, a test was conducted using four types of apple images containing 107 apple targets. These images were single and unblocked apple images, single and blocked apple images, images containing adjacent apples, and apples in panoramas. The root mean square error values of these four types of apple images were 6.3, 15.0, 21.6 and 18.4, respectively, and the average location errors were 4.9°, 10.2°, 16.3° and 13.8°, respectively. Furthermore, the improved algorithm was effective in terms of average runtime, with 3.7 ms and 9.2 ms for single and unblocked and single and blocked apple images, respectively. For the other two types of apple images, the runtime was determined by the number of apples and blocked apples contained in the images. The results showed that the improved algorithm could extract symmetry axes and locate the picking points of apples more efficiently. In conclusion, the improved algorithm is feasible for extracting symmetry axes and locating the picking points of apples.
Energy Technology Data Exchange (ETDEWEB)
Wang, D.; Song, H.; Yu, X.; Zhang, W.; Qu, W.; Xu, Y.
2015-07-01
The key problem for picking robots is to locate the picking points of fruit. A method based on the moment of inertia and symmetry of apples is proposed in this paper to locate the picking points of apples. Image pre-processing procedures, which are crucial to improving the accuracy of the location, were carried out to remove noise and smooth the edges of apples. The moment of inertia method has the disadvantage of high computational complexity, which should be solved, so convex hull was used to improve this problem. To verify the validity of this algorithm, a test was conducted using four types of apple images containing 107 apple targets. These images were single and unblocked apple images, single and blocked apple images, images containing adjacent apples, and apples in panoramas. The root mean square error values of these four types of apple images were 6.3, 15.0, 21.6 and 18.4, respectively, and the average location errors were 4.9°, 10.2°, 16.3° and 13.8°, respectively. Furthermore, the improved algorithm was effective in terms of average runtime, with 3.7 ms and 9.2 ms for single and unblocked and single and blocked apple images, respectively. For the other two types of apple images, the runtime was determined by the number of apples and blocked apples contained in the images. The results showed that the improved algorithm could extract symmetry axes and locate the picking points of apples more efficiently. In conclusion, the improved algorithm is feasible for extracting symmetry axes and locating the picking points of apples. (Author)
Grover quantum searching algorithm based on weighted targets
Institute of Scientific and Technical Information of China (English)
Li Panchi; Li Shiyong
2008-01-01
The standard Grover quantum search algorithm cannot reflect differences in the importance of search targets when applied to an unsorted quantum database: the probability of finding each target is equal. To solve this problem, a Grover search algorithm based on weighted targets is proposed. First, each target is assigned a weight coefficient according to its importance. Using these weight coefficients, the targets are represented as quantum superposition states. Second, a novel Grover search algorithm based on the quantum superposition of the weighted targets is constructed. With this algorithm, the probability of obtaining each target approximates its weight coefficient, which shows the flexibility of the approach. Finally, the validity of the algorithm is demonstrated with a simple search example.
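The weighting idea can be illustrated classically: encoding amplitudes proportional to the square roots of the normalized weights makes each measurement probability equal its weight coefficient. This is a classical simulation of the state preparation only, not a quantum implementation; `weighted_superposition` is an illustrative name.

```python
import math

def weighted_superposition(weights):
    """Amplitudes proportional to sqrt(w_i / sum(w)): measuring the
    resulting state returns target i with probability w_i / sum(w)."""
    total = sum(weights)
    return [math.sqrt(w / total) for w in weights]

# Three targets with importance weights 1, 2 and 5.
amps = weighted_superposition([1, 2, 5])
probs = [a * a for a in amps]  # measurement probabilities
```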
Lazy learner text categorization algorithm based on embedded feature selection
Institute of Scientific and Technical Information of China (English)
Yan Peng; Zheng Xuefeng; Zhu Jianyong; Xiao Yunhong
2009-01-01
To avoid the curse of dimensionality, text categorization (TC) algorithms based on machine learning (ML) have to use a feature selection (FS) method to reduce the dimensionality of the feature space. Although widely used, the FS process generally causes information loss and degrades the overall performance of TC algorithms. On the basis of the sparsity of text vectors, a new TC algorithm based on lazy feature selection (LFS) is presented. As a new type of embedded feature selection approach, the LFS method can greatly reduce the dimension of the feature space without any information loss, improving both the efficiency and the performance of the algorithm. Experiments show that the new algorithm achieves both higher performance and higher efficiency than several classical TC algorithms.
Institute of Scientific and Technical Information of China (English)
(none)
2006-01-01
A novel method of global optimal path planning for a mobile robot is proposed, based on an improved Dijkstra algorithm and the ant system algorithm. The method has three steps: first, MAKLINK graph theory is adopted to establish the free-space model of the mobile robot; second, the improved Dijkstra algorithm finds a sub-optimal collision-free path; and third, the ant system algorithm adjusts and optimizes the location of the sub-optimal path to generate the global optimal path. Computer simulation experiments show that the method is correct and effective, and a comparison of the results confirms that the proposed method outperforms a hybrid genetic algorithm in global optimal path planning.
Genetic Algorithm-based Affine Parameter Estimation for Shape Recognition
Directory of Open Access Journals (Sweden)
Yuxing Mao
2014-06-01
Shape recognition is a classically difficult problem because of the affine transformation between two shapes. The current study proposes an affine parameter estimation method for shape recognition based on a genetic algorithm (GA. The contributions of this study are focused on the extraction of affine-invariant features, the individual encoding scheme, and the fitness function construction policy for a GA. First, the affine-invariant characteristics of the centroid distance ratios (CDRs of any two opposite contour points to the barycentre are analysed. Using different intervals along the azimuth angle, the different numbers of CDRs of two candidate shapes are computed as representations of the shapes, respectively. Then, the CDRs are selected based on predesigned affine parameters to construct the fitness function. After that, a GA is used to search for the affine parameters with optimal matching between candidate shapes, which serve as actual descriptions of the affine transformation between the shapes. Finally, the CDRs are resampled based on the estimated parameters to evaluate the similarity of the shapes for classification. The experimental results demonstrate the robust performance of the proposed method in shape recognition with translation, scaling, rotation and distortion.
A new algorithm for extracting a small representative subgraph from a very large graph
Sethu, Harish
2012-01-01
Many real-world networks are prohibitively large for the retrieval, storage and analysis of all of their nodes and links. Understanding the structure and dynamics of these networks entails creating a smaller representative sample of the full graph while preserving its relevant topological properties. In this report, we show that graph sampling algorithms currently proposed in the literature are unable to preserve network properties even with sample sizes containing as many as 20% of the nodes of the original graph. We present a new sampling algorithm, called Tiny Sample Extractor, with the new goal of a sample size smaller than 5% of the original graph while preserving two key properties of a network: the degree distribution and the clustering coefficient. Our approach is based on a new empirical method of estimating measurement biases in crawling algorithms and compensating for them accordingly. We present a detailed comparison of the best-known graph sampling algorithms, focusing in particular on how the prop...
Phase-unwrapping algorithm for translation extraction from spherical navigator echoes.
Liu, Junmin; Drangova, Maria
2010-02-01
Spherical navigator echoes have been shown to determine rigid-body rotation and translation simultaneously. Following the determination of rotation, translations are determined from the phase change between the baseline and transformed spherical navigator echoes. Because the measured phase change is limited to the interval (−π, π), a phase-unwrapping algorithm is required to recover the true phase change in absolute values. The unwrapping algorithm presented in this article is based on a priori information about the true translation-induced phase-change function. The algorithm is verified using simulation and in vivo experiments, and the accuracy and precision of translation determination are evaluated. Specifically, the effects of background and off-resonance-induced phase noise are explored. When the proposed phase-unwrapping algorithm was used, translations up to 15 mm were measured with accuracy better than 5%; for translations up to 40 mm, an error of approximately 10% was observed.
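The general idea of phase unwrapping (without the paper's a-priori translation model) can be sketched as a 1-D correction by the nearest multiple of 2π whenever a step in the wrapped signal would otherwise exceed π. A minimal sketch under that assumption:

```python
import math

def unwrap(phases):
    """Recover absolute phase from values wrapped into (-pi, pi]:
    correct each increment by the nearest multiple of 2*pi."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

# A linearly growing true phase (e.g. translation-induced),
# wrapped into (-pi, pi] and then recovered.
true = [0.5 * i for i in range(20)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true]
recovered = unwrap(wrapped)
```

The step of 0.5 rad is below π, so the recovered curve matches the true phase; steps larger than π per sample are ambiguous for any unwrapping scheme.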
Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A
2016-08-01
The influence of experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis -PCA- and Cluster Analysis -CA-), as well as a new algorithm based on linear regression, was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed phase liquid chromatography/UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extraction in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were applied directly to mass spectra and chromatograms, involving a strictly holistic comparison of shapes, without assigning any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by linear regression analysis applied to pairs of very large experimental data series successfully retain information resulting from high-frequency instrumental acquisition rates, better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between the compared data. The instrumental intermediate precision had the greatest effect on the discrimination power of the multivariate data analysis methods. Mass spectra produced through ionization from the liquid state under atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origin provided an excellent data
Spatially reduced image extraction from MPEG-2 video: fast algorithms and applications
Song, Junehwa; Yeo, Boon-Lock
1997-12-01
The MPEG-2 video standards are targeted for high-quality video broadcast and distribution, and are optimized for efficient storage and transmission. However, it is difficult to process MPEG-2 for video browsing and database applications without first decompressing the video. Yeo and Liu have proposed fast algorithms for the direct extraction of spatially reduced images from MPEG-1 video. Reduced images have been demonstrated to be effective for shot detection, shot browsing and editing, and temporal processing of video for video presentation and content annotation. In this paper, we develop new tools to handle the extra complexity in MPEG-2 video for extracting spatially reduced images. In particular, we propose new classes of discrete cosine transform (DCT) domain and DCT inverse motion compensation operations for handling the interlaced modes in the different frame types of MPEG-2, and design new and efficient algorithms for generating spatially reduced images of an MPEG-2 video. We also describe key video applications on the extracted reduced images.
A real time vehicles detection algorithm for vision based sensors
Płaczek, Bartłomiej
2011-01-01
Vehicle detection plays an important role in traffic control at signalised intersections. This paper introduces a vision-based algorithm for recognising the presence of vehicles in detection zones. The algorithm uses linguistic variables to evaluate local attributes of an input image, categorising image attributes as vehicle, background or unknown features. Experimental results on complex traffic scenes show that the proposed algorithm is effective for real-time vehicle detection.
Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity
Xue Shan; Liu Song
2015-01-01
Although commercial recommendation systems have achieved some success in travelling route development, they face a series of challenges because of people's increasing interest in travelling. The core of a recommendation system is its recommendation algorithm, and the strengths of that algorithm largely determine the system's effectiveness. On this basis, this paper applies the traditional collaborative filtering algorithm for analy...
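Collaborative filtering of the kind named above can be sketched as similarity-weighted voting: predict an unknown rating from the ratings of similar users. The user names, ratings and `predict` helper below are invented for illustration; the paper's exact similarity measure is not stated in the abstract.

```python
import math

# Rows: users; columns: ratings of five travelling routes (0 = unrated).
ratings = {
    "alice": [5, 3, 0, 1, 4],
    "bob":   [4, 3, 0, 1, 5],
    "carol": [1, 0, 5, 4, 0],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or r[item] == 0:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else 0.0

p = predict("alice", 2)  # alice has not rated route 2
```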
A New RWA Algorithm Based on Multi-Objective
Institute of Scientific and Technical Information of China (English)
(none)
2003-01-01
In this article, we study the research problems and challenges associated with routing and wavelength assignment (RWA) in WDM (wavelength division multiplexing) networks. Various RWA approaches are examined and compared, and a new multi-objective RWA algorithm is proposed. In this new algorithm, we consider multiple network optimization objectives to set up a lightpath with maximized profit and the shortest path under limited resources. Comparison and analysis show that the proposed algorithm is much better ...
Variable Neighborhood Search Based Algorithm for University Course Timetabling Problem
Kralev, Velin; Kraleva, Radoslava
2016-01-01
In this paper a variable neighborhood search approach is presented as a method for solving combinatorial optimization problems. A variable neighborhood search based algorithm has been developed for the university course timetabling problem. This algorithm is used to solve a real instance of university course timetable design and is compared with other algorithms tested on the same sets of input data. The object and the methodology of study are p...
TOA estimation algorithm based on multi-search
Institute of Scientific and Technical Information of China (English)
(none)
2005-01-01
A new time-of-arrival (TOA) estimation algorithm is proposed. The algorithm computes the optimal sub-correlation length based on SNR theory, so the robustness of the TOA estimate is well guaranteed. Then, according to the actual transmission environment and network system, a multi-search method is given. Simulation results show that the algorithm has high application value for the realization of a wireless location system (WLS).
Hindi Parser-based on CKY algorithm
Nitin Hambir; Ambrish Srivastav
2012-01-01
A Hindi parser is a tool that takes a Hindi sentence and verifies whether or not the sentence is correct according to Hindi grammar. Parsing is important for natural language processing tools. The Hindi parser uses the CKY (Cocke–Kasami–Younger) parsing algorithm for parsing the Hindi language. It parses the whole sentence and generates a parse matrix.
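The CKY algorithm named in the abstract fills a triangular chart bottom-up over a grammar in Chomsky Normal Form. Since the Hindi grammar is not given, the sketch below uses a toy English grammar; `cky` returns whether the start symbol S spans the whole sentence, i.e. whether the sentence is grammatical.

```python
# Toy grammar in Chomsky Normal Form: (B, C) -> possible parents, word -> tags.
binary = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}}
lexical = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}

def cky(words):
    """CKY chart parsing: table[i][j] holds nonterminals covering words[i:j+1]."""
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):          # split point
                for b in table[i][k]:
                    for c in table[k + 1][j]:
                        table[i][j] |= binary.get((b, c), set())
    return "S" in table[0][n - 1]

ok = cky("the dog saw the cat".split())
```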
Driver Distraction Using Visual-Based Sensors and Algorithms.
Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén
2016-10-28
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.
Tiwari, Saumya; Bhargava, Rohit
2015-06-01
Fourier transform infrared (FTIR) spectroscopic imaging is an emerging microscopy modality for clinical histopathologic diagnoses as well as for biomedical research. Spectral data recorded in this modality are indicative of the underlying, spatially resolved biochemical composition but need computerized algorithms to digitally recognize and transform this information to a diagnostic tool to identify cancer or other physiologic conditions. Statistical pattern recognition forms the backbone of these recognition protocols and can be used for highly accurate results. Aided by biochemical correlations with normal and diseased states and the power of modern computer-aided pattern recognition, this approach is capable of combating many standing questions of traditional histology-based diagnosis models. For example, a simple diagnostic test can be developed to determine cell types in tissue. As a more advanced application, IR spectral data can be integrated with patient information to predict risk of cancer, providing a potential road to precision medicine and personalized care in cancer treatment. The IR imaging approach can be implemented to complement conventional diagnoses, as the samples remain unperturbed and are not destroyed. Despite high potential and utility of this approach, clinical implementation has not yet been achieved due to practical hurdles like speed of data acquisition and lack of optimized computational procedures for extracting clinically actionable information rapidly. The latter problem has been addressed by developing highly efficient ways to process IR imaging data but remains one that has considerable scope for progress. Here, we summarize the major issues and provide practical considerations in implementing a modified Bayesian classification protocol for digital molecular pathology. We hope to familiarize readers with analysis methods in IR imaging data and enable researchers to develop methods that can lead to the use of this promising
VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter
Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration
2012-02-01
A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16-bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in an FPGA enables the construction of an almost dead-time-free data acquisition system, which is successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggering, timing performance, and event correlations.
Model and Algorithm of BP Neural Network Based on Expanded Multichain Quantum Optimization
Directory of Open Access Journals (Sweden)
Baoyu Xu
2015-01-01
A model and algorithm for a BP neural network optimized by an expanded multichain quantum optimization algorithm, with high parallelism and speed, are proposed to overcome the defects of BP neural networks: overfitting, sensitivity to random initial weights, and oscillation of the fitting and generalization ability with subtle changes of the network parameters. The method optimizes the structure of the neural network effectively and overcomes a series of problems present in BP neural networks optimized by the basic genetic algorithm, such as slow convergence, premature convergence, and poor computational stability. The performance of the BP neural network controller is further improved. Simulation results show that the model has good stability, high precision of the extracted parameters, and good real-time performance and adaptability in actual parameter extraction.
Flach, Milan; Gans, Fabian; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus; Rodner, Erik; Bathiany, Sebastian; Bodesheim, Paul; Guanche, Yanira; Sippel, Sebastian; Mahecha, Miguel D.
2017-08-01
Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to automatically detect anomalies
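One of the detection algorithms the study identifies as a top performer, the k-nearest-neighbors mean distance, can be sketched directly: score each observation by its mean distance to its k nearest neighbors, so isolated points score high. The feature-extraction step (e.g. subtracting seasonal cycles), which the study finds even more important, is omitted here; `knn_mean_distance` is an illustrative name.

```python
import math

def knn_mean_distance(data, k=3):
    """Anomaly score = mean Euclidean distance to the k nearest neighbours."""
    scores = []
    for i, p in enumerate(data):
        d = sorted(math.dist(p, q) for j, q in enumerate(data) if j != i)
        scores.append(sum(d[:k]) / k)
    return scores

# A tight cluster plus one obvious outlier: the outlier scores highest.
points = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (5, 5)]
scores = knn_mean_distance(points, k=3)
```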
A. AL-Salhi, Yahya E.; Lu, Songfeng
2016-08-01
Quantum steganography can solve some problems that are inefficient in classical image information concealing, and research on quantum image information concealing has expanded widely in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. First, based on the novel enhanced quantum representation (NEQR) and the clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Second, a clustering algorithm is proposed to optimize the concealment of important data. Finally, the Con-Steg algorithm is used to conceal the clustered image blocks. Because information concealing in the Fourier domain of an image can improve the security of the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the colour of the quantum cover image, and the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
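The classical LSB embedding that the quantum scheme generalizes is easy to sketch: overwrite the least significant bit of each cover pixel with one secret bit, changing each pixel value by at most 1. This is the classical analogue only, not the NEQR-based quantum construction; the function names are illustrative.

```python
def embed_lsb(pixels, bits):
    """Hide one bit in the least significant bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_lsb(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]   # 8-bit grey values
secret = [1, 0, 1, 1, 0, 1]
stego = embed_lsb(cover, secret)
recovered = extract_lsb(stego, len(secret))
```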
Zhang, Xiaowen; Chen, Bingfeng
2017-08-01
Based on a frequent sub-tree mining algorithm, this paper proposes a web page comment information extraction system built on frequent subtree mining, referred to as the FSM system. The overall system architecture and its modules are briefly introduced, the core of the system is then described in detail, and finally a system prototype is given.
Intelligent Hybrid Cluster Based Classification Algorithm for Social Network Analysis
Directory of Open Access Journals (Sweden)
S. Muthurajkumar
2014-05-01
In this paper, we propose a hybrid clustering-based classification algorithm, based on a mean approach, to mine ordered sequences (paths) from weblog data in order to perform social network analysis. In the proposed system for social pattern analysis, sequences of human activities are typically analyzed through switching behaviors, which are likely to produce overlapping clusters. A robust modified boosting algorithm is therefore proposed for hybrid clustering-based classification of the data. This work helps connect the aggregated features from the network data with traditional indices used in social network analysis. Experimental results show that the proposed clustering algorithm improves the decision results when combined with the proposed classification algorithm, providing better classification accuracy on a weblog dataset. In addition, the algorithm improves predictive performance, especially for multiclass datasets.
A Vehicle Detection Algorithm Based on Deep Belief Network
Directory of Open Access Journals (Sweden)
Hai Wang
2014-01-01
Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. The proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection to retain discriminative information, determining the size of the deep architecture and enhancing the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the test data sets.
A PRESSURE-BASED ALGORITHM FOR CAVITATING FLOW COMPUTATIONS
Institute of Scientific and Technical Information of China (English)
ZHANG Ling-xin; ZHAO Wei-guo; SHAO Xue-ming
2011-01-01
A pressure-based algorithm for the prediction of cavitating flows is presented. The algorithm employs a set of equations including the Navier-Stokes equations and a cavitation model describing the phase change between liquid and vapor. A pressure-based method is used to construct the algorithm, with the coupling between pressure and velocity taken into account. The pressure correction equation is derived from a new continuity equation which employs a source term related to the phase change rate instead of the material derivative of density, Dρ/Dt. This pressure-based algorithm allows the computation of steady or unsteady, 2-D or 3-D cavitating flows. Two 2-D cases, flow around a flat-nose cylinder and flow around a NACA0015 hydrofoil, are simulated, and the periodic cavitation behaviors associated with the re-entrant jets are captured. The algorithm shows good capability for computing time-dependent cavitating flows.
Thermal Parameter Extraction of a Multilayered System by a Genetic Algorithm
Kuriakose, M.; Depriester, M.; Mascot, M.; Longuemart, S.; Fasquelle, D.; Carru, J. C.; Sahraoui, A. Hadj
2013-09-01
Submicron multilayer systems are nowadays used in many common applications such as electronic systems, fuel cells, etc. Knowledge of the layers' thermal properties is of main interest for thermal management in such systems. The aim of this study is therefore to investigate the thermal parameters of a commercially available multilayered system (Pt-Ti-SiO-Si) using the photothermal radiometry technique. A genetic algorithm is used to extract the thermal parameters of this typical four-layer wafer system. The obtained results can be used as a reference for thermal studies of thin layers coated on top of such wafers, where the wafers act as a deposition substrate.
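Genetic-algorithm parameter extraction of this kind searches for the parameter vector that minimizes the misfit between a model and measured data. The toy sketch below (tournament-free: elitism, blend crossover, Gaussian mutation) recovers two invented "thermal parameters" from a quadratic misfit; the photothermal radiometry forward model, the loss, and all names here are assumptions, not the paper's setup.

```python
import random

random.seed(7)  # deterministic run for the demonstration

def ga_minimize(loss, bounds, pop=30, gens=60, mut=0.1):
    """Toy real-coded genetic algorithm: keep the best half, breed the
    rest by blend crossover plus Gaussian mutation, clip to bounds."""
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=loss)
        elite = P[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 + random.gauss(0, mut) for x, y in zip(a, b)]
            child = [min(max(v, lo), hi) for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = elite + children
    return min(P, key=loss)

# Invented target parameters and a quadratic misfit standing in for the
# photothermal model fit.
target = [2.0, 0.5]
loss = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
best = ga_minimize(loss, [(0, 5), (0, 5)])
```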
New Iterated Decoding Algorithm Based on Differential Frequency Hopping System
Institute of Scientific and Technical Information of China (English)
LIANG Fu-lin; LUO Wei-xiong
2005-01-01
A new iterative decoding algorithm is proposed for a differential frequency hopping (DFH) encoder concatenated with a multi-frequency shift-keying (MFSK) modulator. According to the structure of the frequency hopping (FH) pattern trellis produced by the DFH function, maximum a posteriori (MAP) probability theory is applied to realize iterative decoding. Further, the initial conditions of the new MAP-based iterative algorithm are modified for better performance. Finally, simulation results compared with those of traditional algorithms show good anti-interference performance.
Topology control based on quantum genetic algorithm in sensor networks
Institute of Scientific and Technical Information of China (English)
SUN Lijuan; GUO Jian; LU Kai; WANG Ruchuan
2007-01-01
Nowadays, two trends appear in the application of sensor networks: support for multiple services and support for quality of service (QoS). Given the goals of low energy consumption and high connectivity, topology control is crucial. An algorithm for topology control based on a quantum genetic algorithm in sensor networks is proposed. Simulation experiments demonstrate an advantage of the quantum genetic algorithm over the conventional genetic algorithm, and the goals of high connectivity and low energy consumption are reached.
Surname Inherited Algorithm Research Based on Artificial Immune System
Directory of Open Access Journals (Sweden)
Jing Xie
2013-06-01
To keep the diversity of antibodies in the evolution of an artificial immune system, this paper puts forward a surname-inheritance algorithm based on the clonal selection algorithm, and applies it to identify and forecast the vibration data of a CA6140 horizontal lathe machining slender shaft workpieces, which are prone to vibration. The results show that the algorithm is flexible in application and strongly adaptive, offers an effective approach to improving algorithmic efficiency, performs well in global search, and has broad application prospects.
Agent-based Algorithm for Spatial Distribution of Objects
Collier, Nathan
2012-06-02
In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.
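The bubble dynamics described above can be sketched as a pairwise force law (repel when closer than a rest distance, attract when farther) integrated with explicit Euler steps; the paper solves ODEs for each bubble, while this toy version uses a simple linear spring-like force and invented parameters.

```python
import math

def step(positions, dt=0.05, rest=1.0):
    """One explicit-Euler step: bubbles repel when closer than `rest`
    and attract when farther, mimicking the inter-bubble force."""
    new = []
    for i, (x, y) in enumerate(positions):
        fx = fy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            dist = math.hypot(dx, dy) or 1e-9
            f = rest - dist          # >0 pushes apart, <0 pulls together
            fx += f * dx / dist
            fy += f * dy / dist
        new.append((x + dt * fx, y + dt * fy))
    return new

# Two bubbles placed too close relax towards the rest distance.
pts = [(0.0, 0.0), (0.2, 0.0)]
for _ in range(200):
    pts = step(pts)
```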
Support vector classification algorithm based on variable parameter linear programming
Institute of Scientific and Technical Information of China (English)
Xiao Jianhua; Lin Jian
2007-01-01
To solve the problems of SVM in dealing with large sample sizes and asymmetrically distributed samples, a support vector classification algorithm based on variable-parameter linear programming is proposed. In the proposed algorithm, linear programming is employed to solve the classification optimization problem, decreasing computation time and reducing complexity compared with the original model. The adjustable punishment parameter greatly reduces the classification error caused by asymmetrically distributed samples, and the detailed procedure of the proposed algorithm is given. An experiment is conducted to verify that the proposed algorithm is suitable for asymmetrically distributed samples.
A multicast dynamic wavelength assignment algorithm based on matching degree
Institute of Scientific and Technical Information of China (English)
WU Qi-wu; ZHOU Xian-wei; WANG Jian-ping; YIN Zhi-hong; ZHANG Long
2009-01-01
The wavelength assignment problem with multiple multicast requests in a fixed-routing WDM network is studied. A new multicast dynamic wavelength assignment algorithm based on matching degree is presented. First, the wavelength matching degree between available wavelengths and multicast routing trees is introduced. Then, wavelength assignment is translated into a maximum-weight matching in a bipartite graph, and this matching problem is solved using an extended Kuhn-Munkres algorithm. The simulation results show that the overall optimal wavelength assignment scheme is obtained in polynomial time, and that the proposed algorithm reduces the connection blocking probability and improves system resource utilization.
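The maximum-weight matching step can be illustrated with a small stand-in. The sketch below brute-forces the optimal assignment of wavelengths to multicast trees over a toy matching-degree matrix; the paper's extended Kuhn-Munkres algorithm achieves this in polynomial time, whereas this exhaustive version is only practical for tiny instances (the function name and matrix values are illustrative, not from the paper).

```python
from itertools import permutations

def best_assignment(degree):
    # Exhaustive maximum-weight bipartite matching: degree[t][w] is the
    # matching degree between multicast tree t and wavelength w.
    n_trees, n_wl = len(degree), len(degree[0])
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n_wl), n_trees):
        weight = sum(degree[t][perm[t]] for t in range(n_trees))
        if weight > best:
            best, best_perm = weight, list(perm)
    return best, best_perm
```

For real instances the exhaustive loop would be replaced by a Hungarian-method solver, which finds the same optimum without enumerating all permutations.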
A new parallel algorithm for image matching based on entropy
Institute of Scientific and Technical Information of China (English)
董开坤; 胡铭曾
2001-01-01
Presents a new parallel image matching algorithm based on the concept of an entropy feature vector, suitable for SIMD computers. In comparison with other algorithms it has the following advantages: (1) the spatial information of an image is appropriately introduced into the definition of image entropy; (2) a large number of multiplication operations are eliminated, speeding up the algorithm; (3) the shortcoming of having to perform a global calculation first is overcome. The algorithm has very good locality and is well suited to parallel processing.
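The entropy feature idea can be sketched as follows, assuming a grey-scale image stored as a list of rows. Block size and function names are illustrative, and the spatial weighting that the paper adds to its entropy definition is omitted here.

```python
import math
from collections import Counter

def entropy(block):
    # Shannon entropy (bits) of the grey-level histogram of one block
    counts = Counter(v for row in block for v in row)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_vector(img, bs):
    # Feature vector: entropy of each non-overlapping bs x bs block.
    # Blocks can be compared by a cheap distance on these vectors,
    # avoiding per-pixel multiplications during matching.
    h, w = len(img), len(img[0])
    return [entropy([row[x:x + bs] for row in img[y:y + bs]])
            for y in range(0, h, bs) for x in range(0, w, bs)]
```

Each block's entropy is computed independently, which is what makes the feature extraction embarrassingly parallel on SIMD hardware.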
The RSA Cryptoprocessor Hardware Implementation Based on Modified Montgomery Algorithm
Institute of Scientific and Technical Information of China (English)
CHEN Bo; WANG Xu; RONG Meng-tian
2005-01-01
RSA (Rivest-Shamir-Adleman) public-key cryptosystems are widely used in information security areas such as encryption and digital signatures. Based on a modified Montgomery modular multiplication algorithm, a new architecture using a CSA (carry save adder) was presented to implement modular multiplication. Compared with popular modular multiplication algorithms using two CSAs, the presented algorithm uses only one CSA, improving the time efficiency of the RSA cryptoprocessor and saving about half of the hardware resources for modular multiplication. As the encryption data size n increases, the clock cycles for the encryption procedure are reduced in T(n²) compared with modular multiplication algorithms using two CSAs.
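For reference, the Montgomery reduction that underlies such hardware multipliers can be sketched in software. This is the textbook radix-2^k REDC for odd moduli, not the paper's modified one-CSA variant.

```python
def montgomery_mul(a, b, n):
    # Montgomery modular multiplication a*b mod n for odd n (sketch).
    a %= n
    b %= n
    k = n.bit_length()
    R = 1 << k                      # R = 2^k > n, gcd(R, n) = 1
    n_prime = (-pow(n, -1, R)) % R  # n' = -n^{-1} mod R

    def redc(T):
        # REDC: returns T * R^{-1} mod n, valid for T < n*R
        m = (T * n_prime) % R
        t = (T + m * n) >> k        # division by R is a shift
        return t - n if t >= n else t

    R2 = (R * R) % n
    a_bar = redc(a * R2)            # enter Montgomery form: a*R mod n
    b_bar = redc(b * R2)
    # redc(a_bar*b_bar) = a*b*R mod n; one more redc leaves Montgomery form
    return redc(redc(a_bar * b_bar))
```

In hardware, the `(T + m*n) >> k` step is exactly where carry-save adders are used, which is why reducing from two CSAs to one halves that part of the datapath.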
An Incremental Rule Acquisition Algorithm Based on Rough Set
Institute of Scientific and Technical Information of China (English)
YU Hong; YANG Da-chun
2005-01-01
Rough set theory is a valid mathematical theory developed in recent years, with the ability to deal with imprecise, uncertain, and vague information. This paper presents a new incremental rule acquisition algorithm based on rough set theory. First, the relation of new instances to the original rule set is discussed. Then the change laws of attribute reduction and value reduction when a new instance is added are studied. Next, a new incremental learning algorithm for decision tables is presented within the framework of rough sets. Finally, the new algorithm and the classical algorithm are analyzed and compared theoretically and experimentally.
CUDT: a CUDA based decision tree algorithm.
Lo, Win-Tsung; Chang, Yue-Shan; Sheu, Ruey-Kai; Chiu, Chun-Chieh; Yuan, Shyan-Ming
2014-01-01
Decision trees are among the best-known classification methods in data mining, and many studies have focused on improving their performance. However, those algorithms were developed to run on traditional distributed systems, whose processing latency cannot keep up with the huge volumes of data generated by ubiquitous sensing nodes without new technology. To improve data processing latency in large-scale data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the performance of CUDT and compared it with a traditional CPU version. The results show that CUDT is 5-55 times faster than Weka-j48 and achieves an 18-fold speedup over SPRINT for large data sets.
Novel Adaptive Beamforming Algorithm Based on Wavelet Packet Transform
Institute of Scientific and Technical Information of China (English)
Zhang Xiaofei; Xu Dazhuan
2005-01-01
An analysis of the signal received by array antennas shows that it has multi-resolution characteristics, so wavelet packet theory can be used to detect the signal. By applying wavelet packet theory to adaptive beamforming, a wavelet packet transform-based adaptive beamforming algorithm (WP-ABF) is proposed. This algorithm uses the wavelet packet transform as preprocessing, and the transformed signal is fed to a least-mean-square algorithm to implement the adaptive beamforming. White noise can be removed under the wavelet packet transform according to the different characteristics of signal and noise in that domain. Theoretical analysis and simulations demonstrate that the proposed WP-ABF algorithm converges faster than both the conventional adaptive beamforming algorithm and the wavelet transform-based beamforming algorithm. Simulation results also reveal that convergence relates closely to the wavelet base and series: convergence improves as the series increases, and for the same series of a wavelet base, convergence improves with increasing regularity.
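The least-mean-square adaptation at the core of WP-ABF can be sketched in one dimension; the wavelet packet preprocessing stage is omitted, and the data below form a hypothetical system-identification example, not results from the paper.

```python
import random

def lms(x, d, taps=2, mu=0.1):
    # Least-mean-squares adaptive FIR filter: adapts weights w so that
    # the filter output w . [x[i], x[i-1], ...] tracks the desired d[i].
    w = [0.0] * taps
    for i in range(len(x)):
        xi = [x[i - j] if i - j >= 0 else 0.0 for j in range(taps)]
        y = sum(wj * xj for wj, xj in zip(w, xi))            # output
        e = d[i] - y                                         # error
        w = [wj + 2 * mu * e * xj for wj, xj in zip(w, xi)]  # update
    return w

# Identify a hypothetical 2-tap system d[i] = 0.5*x[i] - 0.3*x[i-1]
rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(500)]
d = [0.5 * x[i] - 0.3 * (x[i - 1] if i else 0.0) for i in range(500)]
weights = lms(x, d)
```

With white input and no measurement noise, the weights converge geometrically to the true system coefficients; the paper's point is that decorrelating the input with a wavelet packet transform first speeds up exactly this convergence.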
New MPPT algorithm based on hybrid dynamical theory
Elmetennani, Shahrazed
2014-11-01
This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as the adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and it has been validated by simulation tests under different working conditions. © 2014 IEEE.
A danger-theory-based immune network optimization algorithm.
Zhang, Ruirui; Li, Tao; Xiao, Xin; Shi, Yuanquan
2013-01-01
Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times.
Analog Circuit Design Optimization Based on Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
Mansour Barari
2014-01-01
Full Text Available This paper investigates an evolutionary designing system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. Combining HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.
A Practical Localization Algorithm Based on Wireless Sensor Networks
Huang, Tao; Xia, Feng; Jin, Cheng; Li, Liang
2010-01-01
Many localization algorithms and systems have been developed by means of wireless sensor networks for both indoor and outdoor environments. To achieve higher localization accuracy, extra hardware equipments are utilized by most of the existing localization algorithms, which increase the cost and greatly limit the range of location-based applications. In this paper we present a method which can effectively meet different localization accuracy requirements of most indoor and outdoor location services in realistic applications. Our algorithm is composed of two phases: partition phase, in which the target region is split into small grids and localization refinement phase in which a higher accuracy location can be generated by applying a trick algorithm. A realistic demo system using our algorithm has been developed to illustrate its feasibility and availability. The results show that our algorithm can improve the localization accuracy.
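The two-phase idea above (coarse partition of the region, then refinement inside the winning grid cell) can be sketched as follows, assuming range estimates to known anchors; the cost function and grid sizes are illustrative, not taken from the paper.

```python
import math

def grid_localize(anchors, dists, size=100.0, n=10):
    # Two-phase grid localization sketch: the target region (a
    # size x size square) is split into an n x n coarse grid, then the
    # best-scoring cell is searched again with an n x n fine grid.
    def err(p):
        x, y = p
        return sum((math.hypot(x - ax, y - ay) - d) ** 2
                   for (ax, ay), d in zip(anchors, dists))
    # phase 1: coarse grid over the whole region
    step = size / n
    centers = [((i + 0.5) * step, (j + 0.5) * step)
               for i in range(n) for j in range(n)]
    cx, cy = min(centers, key=err)
    # phase 2: fine grid inside the winning cell
    fine = step / n
    pts = [(cx - step / 2 + (i + 0.5) * fine,
            cy - step / 2 + (j + 0.5) * fine)
           for i in range(n) for j in range(n)]
    return min(pts, key=err)
```

Accuracy scales with the fine-grid resolution, which is how the method trades computation for precision without extra hardware.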
Teaching learning based optimization algorithm and its engineering applications
Rao, R Venkata
2016-01-01
Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
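A minimal sketch of the TLBO iteration described above: a teacher phase moves the class toward the current best learner, and a learner phase performs pairwise learning between classmates. Population size, iteration count, and the test function are illustrative, not from the book.

```python
import random

def tlbo(f, dim, bounds, pop=20, iters=100, seed=1):
    # Teaching-Learning-Based Optimization (minimization sketch)
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in X]
    for _ in range(iters):
        teacher = X[fit.index(min(fit))]
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        # teacher phase: pull each learner toward the teacher
        for i in range(pop):
            Tf = rng.choice((1, 2))               # teaching factor
            cand = [clip(X[i][d] + rng.random() * (teacher[d] - Tf * mean[d]))
                    for d in range(dim)]
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc           # greedy acceptance
        # learner phase: learn from a random classmate
        for i in range(pop):
            j = rng.randrange(pop)
            if j == i:
                continue
            sign = 1.0 if fit[i] < fit[j] else -1.0
            cand = [clip(X[i][d] + rng.random() * sign * (X[i][d] - X[j][d]))
                    for d in range(dim)]
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
    b = fit.index(min(fit))
    return X[b], fit[b]
```

Note that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count, which is one of its selling points.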
Acceleration of Directional Median Filter Based Deinterlacing Algorithm (DMFD)
Directory of Open Access Journals (Sweden)
Addanki Purna Ramesh
2011-12-01
Full Text Available This paper presents a novel directional median filter based deinterlacing algorithm (DMFD). DMFD is a content-adaptive spatial deinterlacing algorithm that finds the direction of an edge and applies median filtering along it, interpolating the odd lines from 5 pixels of the upper and 5 pixels of the lower even lines of the field. The proposed algorithm gives a significant improvement of 3 dB for the baboon standard test image, which has highly textured content, compared to CADEM, DOI, and MELA, and also gives improved average PSNR compared to previous algorithms. The algorithm was written and tested in C and ported onto Altera's NIOS II embedded soft processor configured in a CYCLONE-II FPGA. The ISA of the Nios-II processor was extended with two additional instructions, for calculating the absolute difference and the minimum of four numbers, accelerating the FPGA implementation of the algorithm by 3.2 times.
Local Community Detection Algorithm Based on Minimal Cluster
Directory of Open Access Journals (Sweden)
Yong Zhou
2016-01-01
Full Text Available In order to discover local community structure more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from a single node, but the agglomeration ability of a single node is weaker than that of multiple nodes. In this paper, community extension therefore begins not from the initial node alone but from a node cluster containing that node, whose members are relatively densely connected with each other. The algorithm includes two phases: first it detects the minimal cluster, and then it finds the local community extended from the minimal cluster. Experimental results show that the quality of the local communities detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.
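The two phases can be sketched on a toy graph, assuming adjacency sets and a simple internal-to-external edge ratio as the quality measure; the paper's exact minimal-cluster construction and fitness function may differ.

```python
def local_modularity(adj, S):
    # Simple local quality measure: internal edges / external edges of S
    internal = sum(1 for a in S for b in adj[a] if b in S) / 2
    external = sum(1 for a in S for b in adj[a] if b not in S)
    return internal / max(external, 1)

def local_community(adj, seed):
    # Phase 1, minimal cluster: the seed plus the neighbour sharing the
    # most common neighbours with it (a densely connected starting pair).
    nb = max(sorted(adj[seed]), key=lambda v: len(adj[v] & adj[seed]))
    C = {seed, nb}
    # Phase 2: greedily absorb frontier nodes while quality improves
    improved = True
    while improved:
        improved = False
        frontier = set().union(*(adj[v] for v in C)) - C
        for v in sorted(frontier):
            if local_modularity(adj, C | {v}) > local_modularity(adj, C):
                C.add(v)
                improved = True
    return C
```

Starting from a cluster rather than a lone node makes the first expansion decisions less sensitive to the seed's own degree.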
Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan
2016-12-01
Sonic IR imaging is an emerging NDE technology. It uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in materials and structures. Sonic energy is coupled to the specimen under inspection through direct contact between the transducer tip and the specimen at some convenient point. This contact region, normally in the field of view of the camera, appears as an intensity peak in the image, which may be misinterpreted as a defect or obscure the detection and/or extraction of defect signals in its proximity. Moreover, certain defects may have a very small heat signature or be buried in noise. In this paper, we present algorithms to improve defect extraction and to suppress undesired heat patterns in sonic IR images. Two approaches are presented, each fitted to a specific category of sonic IR images.
21 CFR 172.585 - Sugar beet extract flavor base.
2010-04-01
21 CFR, Food and Drugs, Flavoring Agents and Related Substances, § 172.585 Sugar beet extract flavor base: Sugar beet extract flavor base is the concentrated residue of soluble sugar beet extractives from...
GriMa: a Grid Mining Algorithm for Bag-of-Grid-Based Classification
Deville, Romain; Fromont, Elisa; Jeudy, Baptiste; Solnon, Christine
2016-01-01
General-purpose exhaustive graph mining algorithms have seldom been used in real life contexts due to the high complexity of the process that is mostly based on costly isomorphism tests and countless expansion possibilities. In this paper, we explain how to exploit grid-based representations of problems to efficiently extract frequent grid subgraphs and create Bag-of-Grids which can be used as new features for classification purposes. We provide an efficient grid minin...
Compressive sensing based algorithms for electronic defence
Mishra, Amit Kumar
2017-01-01
This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.
Image completion algorithm based on texture synthesis
Institute of Scientific and Technical Information of China (English)
Zhang Hongying; Peng Qicong; Wu Yadong
2007-01-01
A new algorithm is proposed for completing, in a visually plausible way, the missing parts caused by the removal of foreground or background elements from an image of natural scenery. The major contributions of the proposed algorithm are: (1) for most natural images there is a strong orientation in the texture or color distribution, so a method is introduced to compute the main direction of the texture and complete the image by limiting the search to one direction, making image completion quite fast; (2) there is a synthesis ordering for image completion: the search order of the patches is defined to ensure that regions with more known information, and structures, are completed before other regions are filled in; (3) to improve the visual effect of texture synthesis, an adaptive scheme is presented to determine the size of the template window so as to capture features at various scales. A number of examples are given to demonstrate the effectiveness of the proposed algorithm.
A Novel Heuristic Algorithm Based on Clark and Wright Algorithm for Green Vehicle Routing Problem
Mehdi Alinaghian; Zahra Kaviani; Siyavash Khaledan
2015-01-01
A significant portion of Gross Domestic Product (GDP) in any country belongs to the transportation system, and transportation equipment is in turn a great consumer of oil products. Many attempts have been made to reduce vehicles' greenhouse gas (GHG) emissions. In this paper a novel heuristic algorithm based on the Clark and Wright algorithm, called Green Clark and Wright (GCW), is presented for the vehicle routing problem with respect to fuel consumption. The objective function ...
Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao
2017-02-01
Many data mining applications adopt Artificial Neural Networks (ANNs) to solve problems, but training an ANN raises many issues, such as the number of labeled samples, training time and performance, the number of hidden layers, and the transfer function. When the compared results do not match expectations, it is hard to know which dimension causes the deviation, mainly because an ANN trains by modifying weights toward the correct output rather than by improving the original feature extraction algorithm for the image. To address these problems, this paper puts forward a method to assist ANN-based image data analysis. Normally a parameter is set as the value used to extract feature vectors when processing an image, which we treat as a weight. The experiment uses the values extracted from Speeded Up Robust Features (SURF) feature points as the basis for training; since SURF extracts different feature points according to the extraction values, we perform initial semi-supervised clustering on these values and use MFKNN (a modified K-nearest-neighbors classifier) for training and classification. Unknown images are not matched by complete one-to-one comparison but only against group centroids, mainly to save time and improve efficiency, and the retrieved results are then observed and analyzed. The method clusters and classifies the image feature points, assigns values to groups with high error rates to produce new feature points, and feeds them into the input layer of the ANN for training; finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) and a Genetic Algorithm-Artificial Neural Network
Collaborative Filtering Algorithms Based on Kendall Correlation in Recommender Systems
Institute of Scientific and Technical Information of China (English)
YAO Yu; ZHU Shanfeng; CHEN Xinmeng
2006-01-01
In this work, Kendall correlation based collaborative filtering algorithms for recommender systems are proposed. The Kendall correlation method measures the correlation among users by considering the relative order of their ratings. The Kendall-based algorithm rests upon a more general model and thus can be more widely applied in e-commerce. Another finding of this work is that considering only positively correlated neighbors in prediction, in both the Pearson and Kendall algorithms, achieves higher accuracy than considering all neighbors, with only a small loss of coverage.
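The Kendall correlation and the positive-neighbors-only prediction rule can be sketched as follows; the tau-a form without tie correction is used here for brevity, and the tiny example data are illustrative.

```python
def kendall_tau(u, v):
    # tau-a: (concordant - discordant) / number of pairs, no tie term.
    # Uses only the relative order of ratings, not their magnitudes.
    n = len(u)
    c = d = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (u[i] - u[j]) * (v[i] - v[j])
            if s > 0:
                c += 1
            elif s < 0:
                d += 1
    return (c - d) / (n * (n - 1) / 2)

def predict(target, neighbors):
    # Deviation-from-mean prediction using only tau > 0 neighbors,
    # mirroring the positive-correlation finding in the abstract.
    # neighbors: list of (ratings over common items, rating on the item)
    mu_t = sum(target) / len(target)
    num = den = 0.0
    for ratings, r in neighbors:
        tau = kendall_tau(target, ratings)
        if tau > 0:
            num += tau * (r - sum(ratings) / len(ratings))
            den += tau
    return mu_t + (num / den if den else 0.0)
```

Because only rank order matters, two users who agree on which items are better will correlate perfectly even if they use different rating scales.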
Video segmentation using multiple features based on EM algorithm
Institute of Scientific and Technical Information of China (English)
张风超; 杨杰; 刘尔琦
2004-01-01
Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We treat video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain maximum-likelihood estimates of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within the maximum-likelihood framework to obtain accurate segmentation results, and use the temporal consistency among video frames to speed up the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method has attractive accuracy and robustness.
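The maximum-likelihood estimation step can be sketched with a one-dimensional, two-component Gaussian mixture; the paper combines several features (motion, color) per pixel, while this sketch uses a single scalar feature, and the data below are synthetic.

```python
import math

def em_gmm(data, iters=40):
    # EM for a 1-D two-component Gaussian mixture: the E-step computes
    # responsibilities, the M-step re-estimates means/variances/weights.
    mu = [min(data), max(data)]          # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:                   # E-step
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        for k in range(2):               # M-step
            Nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / Nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / Nk, 1e-6)
            pi[k] = Nk / len(data)
    return mu, var, pi
```

Segmentation then assigns each pixel to the component with the highest responsibility; warm-starting EM from the previous frame's parameters is what exploits temporal consistency.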
Fast image matching algorithm based on projection characteristics
Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun
2011-06-01
Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; moreover, because normalization is applied, it still matches correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves matching speed while maintaining matching accuracy.
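The projection-plus-normalized-correlation idea can be sketched as follows; thanks to mean removal and normalization, a proportional brightness change leaves the correlation score unchanged, as the abstract notes. Function names are illustrative.

```python
import math

def projections(img):
    # Collapse the 2-D image into 1-D horizontal and vertical
    # projections, shrinking the matching problem from O(H*W) to O(H+W).
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows + cols

def ncc(a, b):
    # Normalized cross-correlation of two 1-D profiles; invariant to a
    # proportional change in brightness / signal amplitude.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0
```

A template is matched by sliding it over candidate positions and keeping the position whose projections maximize `ncc`.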
Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian
2011-05-01
The present paper proposes a novel hyperspectral image classification algorithm based on the LS-SVM (least squares support vector machine). The LS-SVM uses features extracted from subspaces of bands (SOB), with the maximum noise fraction (MNF) method adopted for feature extraction. The spectral correlations of the hyperspectral image are used to divide the feature space into several SOBs, and the MNF then extracts characteristic features of each SOB. The extracted features are combined into the feature vector for classification, so strong band correlation is avoided and spectral redundancy is reduced. The LS-SVM classifier is adopted, which replaces the inequality constraints of the SVM with equality constraints, reducing computation and improving learning performance. The proposed method optimizes spectral information through feature extraction and reduces spectral noise, improving classifier performance. Experimental results show the superiority of the proposed algorithm.
Energy Technology Data Exchange (ETDEWEB)
Zagrouba, M.; Sellami, A.; Bouaicha, M. [Laboratoire de Photovoltaique, des Semi-conducteurs et des Nanostructures, Centre de Recherches et des Technologies de l' Energie, Technopole de Borj-Cedria, Tunis, B.P. 95, 2050 Hammam-Lif (Tunisia); Ksouri, M. [Unite de Recherche RME-Groupe AIA, Institut National des Sciences Appliquees et de Technologie (Tunisia)
2010-05-15
In this paper, we propose a numerical technique based on genetic algorithms (GAs) to identify the electrical parameters (I{sub s}, I{sub ph}, R{sub s}, R{sub sh}, and n) of photovoltaic (PV) solar cells and modules. These parameters were used to determine the corresponding maximum power point (MPP) from the illuminated current-voltage (I-V) characteristic. The one-diode approach is used to model the AM1.5 I-V characteristic of the solar cell. To extract the electrical parameters, the approach is formulated as a non-convex optimization problem, and the GA is used as a numerical technique to overcome the local minima that arise with non-convex optimization criteria. Compared with other methods, we find that the GA is a very efficient technique for estimating the electrical parameters of PV solar cells and modules. Indeed, the run of the algorithm stopped after five generations in the case of PV solar cells and seven generations in the case of PV modules. The identified parameters are then used to extract the maximum power working points for both cell and module. (author)
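The GA-based extraction can be sketched with a generic real-coded GA fitted to a simplified one-diode model in which R_s and R_sh are neglected (the paper identifies all five parameters); the bounds, rates, and synthetic data below are illustrative assumptions, not values from the paper.

```python
import math
import random

def ga_fit(V, I, bounds, gens=80, pop=40, seed=3):
    # Real-coded GA: elitist selection, blend crossover, uniform-reset
    # mutation, minimizing the squared error of a simplified one-diode
    # model I = Iph - Is*(exp(V/(n*Vt)) - 1) with Rs, Rsh neglected.
    rng = random.Random(seed)
    Vt = 0.0257                      # thermal voltage near 25 C

    def model(p, v):
        Iph, Is, n = p
        return Iph - Is * (math.exp(v / (n * Vt)) - 1.0)

    def cost(p):
        return sum((model(p, v) - i) ** 2 for v, i in zip(V, I))

    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)
        elite = P[:pop // 4]
        children = list(elite)       # keep the elite unchanged
        while len(children) < pop:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * xa + (1 - w) * xb for xa, xb in zip(a, b)]
            if rng.random() < 0.3:   # mutate one gene
                d = rng.randrange(len(bounds))
                child[d] = rng.uniform(*bounds[d])
            children.append(child)
        P = children
    return min(P, key=cost)
```

Because the cost surface is non-convex in (Is, n), the population-based search avoids the local minima that trap gradient methods, which is the paper's rationale for using a GA.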
Knowledge-Driven Event Extraction in Russian: Corpus-Based Linguistic Resources.
Solovyev, Valery; Ivanov, Vladimir
2016-01-01
Automatic event extraction from text is an important step in knowledge acquisition and knowledge base population. Manual work in the development of an extraction system is indispensable, either in corpus annotation or in the creation of vocabularies and patterns for a knowledge-based system. Recent works have focused on adapting existing systems (for extraction from English texts) to new domains. Event extraction in other languages has not been studied due to the lack of resources and algorithms necessary for natural language processing. In this paper we define a set of linguistic resources that are necessary for developing a knowledge-based event extraction system in Russian: a vocabulary of subordination models, a vocabulary of event triggers, and a vocabulary of Frame Elements that are the basic building blocks for semantic patterns. We propose a set of methods for creating such vocabularies in Russian and other languages using the Google Books NGram Corpus. The methods are evaluated in the development of an event extraction system for Russian.
Directory of Open Access Journals (Sweden)
Jie-sheng Wang
2014-01-01
Full Text Available To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model, based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm, is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. The isometric mapping method is then used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy.
Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.
Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don
2016-03-09
Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used in smart/wearable image sensor devices, because they are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern, then traces the next contour pixel using the previous pixel's type. It can therefore classify contour pixels as straight line, inner corner, outer corner, and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the locally minimal path from the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using representative points and inner-outer corner points, and it can accurately restore the contour image from these data. To compare the performance of the proposed algorithm with that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance than the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms.
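A much-simplified sketch of the first step, extracting the contour pixels of a binary image (every foreground pixel with at least one background 4-neighbour); the paper's pixel-type classification and path-following logic are not reproduced here.

```python
def contour_pixels(img):
    # A foreground pixel lies on the contour if any 4-neighbour is
    # background or falls outside the image. img is a list of rows of
    # 0 (background) / 1 (foreground) values.
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= j < h and 0 <= i < w and img[j][i])
                   for j, i in nbrs):
                out.append((y, x))
    return out
```

The tracing algorithm in the paper visits only these pixels in order along the boundary, which is why classifying the local pattern at each step is enough to pick the next move.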
Research of Collaborative Filtering Recommendation Algorithm based on Network Structure
Directory of Open Access Journals (Sweden)
Hui PENG
2013-10-01
Full Text Available This paper combines the classic collaborative filtering algorithm with a personalized recommendation algorithm based on network structure. To address the data sparsity and malicious behavior problems of traditional collaborative filtering, the paper introduces a new social-network-based collaborative filtering algorithm. To improve the accuracy of personalized recommendation, we first define an empty state in the state space of multi-dimensional semi-Markov processes, obtaining extended multi-dimensional semi-Markov processes that are combined with social network analysis theory to yield a social network information flow model. The model describes the flow of information between members of the social network. We then propose a collaborative filtering algorithm based on this model. The algorithm uses social network information, combines user trust with user interest, finds the nearest neighbors of the target user, and forms a recommendation, improving recommendation accuracy. Compared with the traditional collaborative filtering algorithm, the proposed algorithm can effectively alleviate the sparsity and malicious behavior problems and significantly improve the quality of the recommender system.
BEaST: brain extraction based on nonlocal segmentation technique.
Eskildsen, S.F.; Coupe, P.; Fonov, V.; Manjon, J.V.; Leung, K.K.; Guizard, N.; Wassef, S.N.; Ostergaard, L.R.; Collins, D.L.; Olde Rikkert, M.
2012-01-01
Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general purpose brain extraction algorithm challenging. To address this issue, we propose a ne
Research on Space Target Recognition Algorithm Based on Empirical Mode Decomposition
Directory of Open Access Journals (Sweden)
Shen Yiying
2013-07-01
A space-target recognition algorithm based on the time series of radar cross section (RCS) is proposed in this paper to address the problem of space-target recognition in active radar systems. In the algorithm, the empirical mode decomposition (EMD) method is applied for the first time to extract features from the RCS time series. The normalized instantaneous frequencies of the high-frequency intrinsic mode functions obtained by EMD are used as the feature values for recognition, and an effective target recognition criterion is established. The effectiveness and stability of the algorithm are verified with both simulated and real data. In addition, the algorithm reduces the estimation bias of RCS caused by inaccurate evaluation, which is of great significance for improving the target recognition ability of narrow-band radar in practice.
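The feature-extraction idea can be sketched without a full EMD implementation: here a moving-average high-pass stands in for the first (high-frequency) intrinsic mode function, and the zero-crossing rate stands in for a normalized instantaneous frequency. Both substitutions are mine; a real EMD sifting loop and Hilbert-transform frequency estimate would replace them.

```python
def highpass(x, w=5):
    """Crude stand-in for the first IMF: subtract a length-w moving average."""
    half = w // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(x[i] - sum(x[lo:hi]) / (hi - lo))
    return out

def zero_crossing_rate(x):
    """Proxy for normalized instantaneous frequency: sign changes per sample."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    return crossings / max(len(x) - 1, 1)
```

For a sinusoid of frequency f sampled at fs, the zero-crossing rate is about 2f/fs, so the proxy tracks the dominant oscillation of the high-frequency component even when a slow trend (here, a drifting RCS baseline) is superimposed.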
Novel algorithm for distributed replicas management based on dynamic programming
Institute of Scientific and Technical Information of China (English)
Wang Tao; Lu Xianliang; Hou Mengshu
2006-01-01
Replicas can improve data reliability in a distributed system. However, traditional algorithms for replica management assume that all replicas have uniform reliability, which is inaccurate in some real systems. To address this problem, a novel algorithm based on dynamic programming is proposed to manage the number and distribution of replicas across different nodes. Using a Markov model, replica management is organized as a multi-phase process, and the recursion equations are provided. The algorithm accounts for the heterogeneity of nodes, the cost of maintaining replicas, and the storage space occupied. Under these constraints, the algorithm achieves high data reliability in a distributed system. The results of a case analysis demonstrate the feasibility of the algorithm.
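The trade-off between replica storage cost and reliability on heterogeneous nodes can be sketched as a knapsack-style dynamic program (an illustrative stand-in for the paper's multi-phase Markov recursion; the independence assumption and all data are mine):

```python
import math

def choose_replica_nodes(nodes, budget):
    """nodes: list of (fail_prob, storage_cost) per candidate node.
    0/1 knapsack DP: maximise sum(-log(fail_prob)) over chosen nodes,
    i.e. minimise the joint failure probability (assuming independent
    node failures), subject to a total storage budget."""
    best = {0: (0.0, [])}                        # cost -> (value, chosen indices)
    for i, (p, cost) in enumerate(nodes):
        val = -math.log(p)
        for c in sorted(best, reverse=True):     # snapshot keys: classic 0/1 order
            v, chosen = best[c]
            nc = c + cost
            if nc <= budget and (nc not in best or best[nc][0] < v + val):
                best[nc] = (v + val, chosen + [i])
    value, chosen = max(best.values())
    fail = 1.0
    for i in chosen:
        fail *= nodes[i][0]
    reliability = 1 - fail if chosen else 0.0
    return chosen, reliability
```

Because -log converts the product of failure probabilities into a sum, heterogeneous node reliabilities drop straight into the standard knapsack value function.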
Heuristic based data scheduling algorithm for OFDMA wireless network
Institute of Scientific and Technical Information of China (English)
(no author listed)
2008-01-01
A system model based on a joint-layer mechanism is formulated for optimal data scheduling over fixed point-to-point links in OFDMA ad-hoc wireless networks. A distributed scheduling algorithm (DSA) for optimizing the system model is proposed; it combines randomized subcarrier selection, weighted by local channel conditions, with link power control to limit the interference caused by subcarrier reuse among links. To improve global fairness, a global power control scheduling algorithm (GPCSA) based on the proposed DSA is presented; it dynamically allocates global power according to the difference between the average carrier-to-noise ratio of the selected local links and the system link-protection ratio. Simulation results demonstrate that the proposed algorithms achieve better efficiency and fairness than existing algorithms.
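The two ingredients of the DSA, channel-weighted random subcarrier choice and power limiting under reuse, can be sketched as follows (a loose illustration only; the paper's actual power-control rule uses carrier-to-noise ratios and a link-protection threshold, and every name here is an assumption):

```python
import random
from collections import Counter

def assign_subcarriers(gains, seed=0):
    """gains[link][sc] = local channel gain. Each link picks a subcarrier
    at random with probability proportional to its own channel gain."""
    rng = random.Random(seed)
    assignment = {}
    for link, g in gains.items():
        scs = list(g)
        assignment[link] = rng.choices(scs, weights=[g[s] for s in scs])[0]
    return assignment

def cap_power(assignment, base_power, cap):
    """Crude interference limit: links reusing the same subcarrier split
    that subcarrier's power budget equally."""
    reuse = Counter(assignment.values())
    return {link: min(base_power, cap / reuse[sc])
            for link, sc in assignment.items()}
```

The randomized choice keeps the algorithm distributed (each link needs only its own gains), while the cap bounds the aggregate power, and hence interference, on any reused subcarrier.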
Drilling Path Optimization Based on Particle Swarm Optimization Algorithm
Institute of Scientific and Technical Information of China (English)
ZHU Guangyu; ZHANG Weibo; DU Yuexiang
2006-01-01
This paper presents a new approach based on the particle swarm optimization (PSO) algorithm for solving the drilling path optimization problem, which belongs to a discrete space. Because the standard PSO algorithm guarantees neither global nor local convergence, the algorithm is improved, based on its mathematical model, by regenerating particles whose evolution has stalled, giving it the ability to converge to the global optimum. The operators are also improved by establishing a duality-transposition method and a handling rule for the operator's elements, so that the improved operator satisfies the integer-coding requirement of drilling path optimization. Experiments with small node counts indicate that the improved algorithm is easy to implement, converges quickly, and has better global convergence characteristics; the new PSO can therefore help solve the drilling path optimization problem in hole drilling.
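A minimal discrete-PSO sketch for the drilling path (an open tour over hole positions) is shown below. It uses random-key encoding (continuous positions decoded to permutations by sorting) rather than the paper's duality-transposition operators, and a simple re-seeding of stagnant particles stands in for the "regenerate stalled particles" fix; all parameters are illustrative.

```python
import random

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def path_length(order, pts):
    """Open drilling path: visit holes in order, no return leg."""
    return sum(dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))

def pso_drill_path(pts, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    n = len(pts)
    decode = lambda x: sorted(range(n), key=lambda i: x[i])  # keys -> permutation
    swarm = [[rng.random() for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [list(p) for p in swarm]
    pbest_len = [path_length(decode(p), pts) for p in swarm]
    g = min(range(n_particles), key=lambda i: pbest_len[i])
    gbest, gbest_len = list(pbest[g]), pbest_len[g]
    for _ in range(iters):
        for i, x in enumerate(swarm):
            for d in range(n):           # standard velocity/position update
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - x[d])
                             + 1.5 * r2 * (gbest[d] - x[d]))
                x[d] += vel[i][d]
            length = path_length(decode(x), pts)
            if length < pbest_len[i]:
                pbest[i], pbest_len[i] = list(x), length
                if length < gbest_len:
                    gbest, gbest_len = list(x), length
        # re-seed particles that have collapsed onto the global best
        for i in range(n_particles):
            if max(abs(swarm[i][d] - gbest[d]) for d in range(n)) < 1e-9:
                swarm[i] = [rng.random() for _ in range(n)]
                vel[i] = [0.0] * n
    return decode(gbest), gbest_len
```

Random keys keep the particle update continuous while the decoded output is always a valid hole permutation, which is one common way to adapt PSO to discrete routing problems.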
Statistical feature extraction based iris recognition system
Indian Academy of Sciences (India)
ATUL BANSAL; RAVINDER AGARWAL; R K SHARMA
2016-05-01
Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on the correlation between adjacent pixels is proposed and implemented, with a Hamming-distance-based metric used for matching. Performance of the proposed iris recognition system (IRS) is measured by recording the false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds of the distance metric. System performance is evaluated by computing statistical features along two directions: the radial direction of the circular iris region and the angular direction extending from pupil to sclera. Experiments were also conducted to study the effect of the number of statistical parameters on FAR and FRR. Results obtained from experiments based on different sets of statistical features of iris images show a significant improvement in the equal error rate (EER) when the number of statistical parameters used for feature extraction is increased from three to six. Further, increasing the radial/angular resolution, with normalization in place, also improves the EER of the proposed iris recognition system.
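The core pipeline, adjacent-pixel correlation binarised into a code, then Hamming-distance matching, can be sketched as follows (a toy version assuming one correlation statistic per image row; the paper's radial/angular sampling and multi-parameter features are omitted):

```python
def adjacent_correlation(row):
    """Pearson correlation between each pixel and its right-hand neighbour."""
    x, y = row[:-1], row[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def iris_code(img, threshold=0.0):
    """Binarise per-row adjacent-pixel correlations into a bit vector."""
    return [1 if adjacent_correlation(r) > threshold else 0 for r in img]

def hamming(a, b):
    """Fraction of disagreeing bits; 0 = identical codes."""
    return sum(x != y for x, y in zip(a, b)) / len(a)
```

Sweeping an accept/reject threshold over this Hamming distance is what produces the FAR/FRR trade-off curve from which the EER is read.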
Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation.
Kang, Suk-Ju
2016-12-27
This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user and therefore cannot be used in a multi-user system; even when they can track the eyes of multiple users, their detection accuracy is low and they cannot identify the users individually. The proposed algorithm solves these problems and enhances detection accuracy. Specifically, it applies a classifier to detect faces in the red, green, and blue (RGB) and depth images. It then computes histogram-of-oriented-gradients (HOG) features for each detected facial region to identify the users, selecting the best-matching template from a pre-determined face database. Finally, it extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F₁ score by up to 0.490 compared with benchmark algorithms.
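The HOG descriptor used for user identification can be illustrated with a single-cell simplification (real HOG divides the patch into cells and blocks with overlapping normalisation; this whole-patch histogram is an assumption-laden sketch):

```python
import math

def hog_histogram(img, bins=9):
    """Unsigned-gradient orientation histogram over one grayscale patch.
    Gradients use central differences; angles are folded into [0, 180)."""
    hist = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            hist[int(ang // (180 / bins)) % bins] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]          # L1-normalised histogram
```

Comparing such histograms (e.g. by Euclidean or chi-squared distance) against per-user templates is the matching step the abstract describes; a vertical intensity edge, for instance, puts nearly all its gradient energy into the 0° bin.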