WorldWideScience

Sample records for extraction algorithm based

  1. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work, a new algorithm is proposed that combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  2. Learning-based meta-algorithm for MRI brain extraction.

    Science.gov (United States)

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    The multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the test subject through exemplars. We further develop a level-set-based fusion method that combines the multiple candidate extractions into a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method produces more accurate and robust brain extraction results, achieving a Jaccard index of 0.956 +/- 0.010 on a total of 340 subjects under 6-fold cross-validation, compared to those of BET and BSE even using their best parameter combinations.

  3. Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm

    OpenAIRE

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain, and whose principles are difficult for traditional algorithms to recognize and describe. To address these problems, estimating the parameters of a super-long situation sequence is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications from the history si...

  4. Feature extraction algorithm for space targets based on fractal theory

    Science.gov (United States)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    To extend the life of satellites and reduce launch and operating costs, satellite servicing, including conducting repairs, upgrading, and refueling spacecraft on-orbit, has become much more frequent. Future space operations can be more economically and reliably executed using machine vision systems, which can meet the real-time and tracking reliability requirements of image tracking in space surveillance systems. Machine vision has been applied to estimating the relative pose of spacecraft, and feature extraction is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used to determine and track the relative pose of an observed satellite during proximity operations in a machine vision system. The method first maps the gray-level image into a fractal-dimension distribution using the Differential Box-Counting (DBC) approach of fractal theory, which suppresses noise. Consecutive edges are then detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is only performed in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the relative pose problem for spacecraft.
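
    The core of this approach is the Differential Box-Counting estimate of local fractal dimension. Below is a minimal, illustrative Python sketch of DBC; the box sizes, the 8-bit gray-level assumption, and the function name are ours, not the paper's exact settings.

    ```python
    import numpy as np

    def dbc_fractal_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a gray-level image via DBC."""
        img = np.asarray(img, dtype=np.float64)
        M = min(img.shape)               # image side length (roughly square image assumed)
        G = 256.0                        # number of gray levels (8-bit assumption)
        log_inv_s, log_N = [], []
        for s in box_sizes:
            h = s * G / M                # box height along the intensity axis
            rows, cols = img.shape[0] // s, img.shape[1] // s
            blocks = img[:rows * s, :cols * s].reshape(rows, s, cols, s)
            bmax = blocks.max(axis=(1, 3))
            bmin = blocks.min(axis=(1, 3))
            # boxes of height h needed to cover the intensity relief of each block
            n = np.ceil(bmax / h) - np.ceil(bmin / h) + 1
            log_N.append(np.log(n.sum()))
            log_inv_s.append(np.log(1.0 / s))
        # slope of log N(s) versus log(1/s) estimates the fractal dimension
        slope, _ = np.polyfit(log_inv_s, log_N, 1)
        return slope
    ```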

  5. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
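
    The augmentation strategy can be illustrated with a deliberately naive sketch: start from a spanning forest (trees are trivially chordal) and repeatedly add back edges that keep the subgraph chordal. This runs a full chordality test per candidate edge with networkx, so it is far slower than the paper's algorithm; it only shows the augment-until-maximal idea.

    ```python
    import networkx as nx

    def maximal_chordal_subgraph(G):
        # start from a spanning forest, which is trivially chordal
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        H.add_edges_from(nx.minimum_spanning_edges(G, data=False))
        # repeatedly augment with edges that preserve chordality
        changed = True
        while changed:
            changed = False
            for u, v in G.edges():
                if H.has_edge(u, v):
                    continue
                H.add_edge(u, v)
                if nx.is_chordal(H):
                    changed = True       # kept this edge; rescan afterwards
                else:
                    H.remove_edge(u, v)  # adding it would create a long chordless cycle
        return H
    ```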

  6. Historical feature pattern extraction based network attack situation sensing algorithm.

    Science.gov (United States)

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain, and whose principles are difficult for traditional algorithms to recognize and describe. To address these problems, estimating the parameters of a super-long situation sequence is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each observed indication and its subsequent effect. It then calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can continuously track and adapt to variation in the situation sequence.

  7. Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm

    Directory of Open Access Journals (Sweden)

    Yong Zeng

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain, and whose principles are difficult for traditional algorithms to recognize and describe. To address these problems, estimating the parameters of a super-long situation sequence is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each observed indication and its subsequent effect. It then calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can continuously track and adapt to variation in the situation sequence.

  8. Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals

    Science.gov (United States)

    Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin

    How to analyze and identify the characteristics of radiation sources and estimate the threat level by means of detection, interception, and localization has been the central issue of electronic support in electronic warfare, and communication signal recognition is one of the keys to solving this issue. To accurately extract the individual characteristics of radiation sources in the increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for the individual characteristics of communication radiation sources, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the environmental noise, fractal dimension features of different complexity are used to depict the subtle characteristics of the signal and establish a feature database, and different broadcasting stations are then identified by grey relational analysis. The simulation results demonstrate that the algorithm can achieve a recognition rate of 94% even at an SNR of -10 dB, and this provides an important theoretical basis for the accurate identification of the subtle features of signals at low SNR in the field of information confrontation.

  9. The Research of Disease Spots Extraction Based on Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Kangshun Li

    2017-01-01

    According to the characteristics of maize disease spots in images, this paper designs a two-dimensional histogram segmentation method based on an evolutionary algorithm. Combining analysis of images of maize diseases and insect pests, and with full consideration of the color and texture characteristics of the lesions, the chroma and gray values of each pixel are paired into two-tuples to build a two-dimensional histogram. This solves the problem that one-dimensional histograms cannot clearly separate target and background when the distribution is not bimodal, and improves on the traditional application of two-dimensional histograms to lesion extraction. A chromosome coding suited to the characteristics of lesion images is designed based on a secondary Otsu segmentation within the genetic algorithm. The initial population is determined from the analysis of the lesion image, and parallel selection, an optimal preservation strategy, and an adaptive mutation operator are used to improve search efficiency. Finally, by setting a fluctuation threshold, the best threshold is searched within the fluctuation range, implementing both global and local search.
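
    For reference, the two-dimensional Otsu criterion underlying this method can be computed directly. The sketch below replaces the paper's genetic-algorithm search with an exhaustive vectorized search over all (s, t) threshold pairs, and the 3x3 neighborhood-mean definition is a common convention we assume, not necessarily the authors'.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def otsu_2d(gray):
        """2-D Otsu threshold on the (gray value, neighborhood mean) histogram."""
        mean = uniform_filter(gray.astype(np.float64), size=3).astype(np.uint8)
        hist, _, _ = np.histogram2d(gray.ravel(), mean.ravel(),
                                    bins=256, range=[[0, 256], [0, 256]])
        p = hist / hist.sum()
        i = np.arange(256, dtype=np.float64)
        w0 = p.cumsum(axis=0).cumsum(axis=1)                  # P(class 0) for every (s, t)
        mu_i0 = (p * i[:, None]).cumsum(axis=0).cumsum(axis=1)
        mu_j0 = (p * i[None, :]).cumsum(axis=0).cumsum(axis=1)
        mu_i, mu_j = (p * i[:, None]).sum(), (p * i[None, :]).sum()
        denom = np.maximum(w0 * (1.0 - w0), 1e-12)
        # trace of the between-class scatter matrix for every threshold pair
        tr = ((mu_i * w0 - mu_i0) ** 2 + (mu_j * w0 - mu_j0) ** 2) / denom
        s, t = np.unravel_index(np.argmax(tr), tr.shape)
        return s, t
    ```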

  10. The optimal extraction of feature algorithm based on KAZE

    Science.gov (United States)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    KAZE is a novel 2D feature extraction algorithm that operates in a nonlinear scale space. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than those of SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. Adjusting the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities for each scale, reducing the computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. Meanwhile, the computation of feature vectors is optimized by a wavelet transform, which avoids the second Gaussian smoothing of the KAZE features and distinctly reduces the complexity of the vector building and description steps. The dominant orientation is obtained, similarly to SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in the circular area of radius 6σ with a sampling step of size σ. Finally, descriptors are extracted from a multidimensional patch at the given scale, centered on each point of interest and rotated to align its dominant orientation to a canonical direction, which simplifies the description by reducing its dimensionality, as in the PCA-SIFT method. Although the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, the contrast experiments reveal a step forward over SURF and previous methods in detection, description, and application performance.
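
    For orientation, this is how stock KAZE features are computed with OpenCV; the wavelet-optimized variant proposed in the paper is not part of OpenCV, and the image path is a placeholder.

    ```python
    import cv2

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder image path
    kaze = cv2.KAZE_create()                   # detector over a nonlinear scale space
    keypoints, descriptors = kaze.detectAndCompute(img, None)
    print(len(keypoints), descriptors.shape)   # N keypoints, each with a 64-D descriptor
    ```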

  11. A feature extraction algorithm based on corner and spots in self-driving vehicles

    Directory of Open Access Journals (Sweden)

    Yupeng FENG

    2017-06-01

    To solve the poor real-time performance of visual odometry on embedded systems with limited computing resources, an image matching method based on Harris and SIFT is proposed, namely the Harris-SIFT algorithm. Following a review of the SIFT algorithm, the principle of the Harris-SIFT algorithm is presented. First, the Harris algorithm is used to extract corners of the image as candidate feature points, and scale invariant feature transform (SIFT) features are extracted from those candidate feature points. Finally, the algorithm is simulated in Matlab through an example, and its complexity and other performance aspects are analyzed. The experimental results show that the proposed method reduces computational complexity and improves the speed of feature extraction. The Harris-SIFT algorithm can be used in real-time visual odometry systems, and should encourage wide application of visual odometry in embedded navigation systems.
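
    A minimal sketch of the Harris-then-SIFT idea with OpenCV; the response threshold, keypoint size, and image path are illustrative assumptions, not the paper's settings.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # placeholder image path
    # 1) Harris corner response gives the candidate feature points
    resp = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(resp > 0.01 * resp.max())
    kps = [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]
    # 2) SIFT descriptors are computed only at the Harris candidates
    sift = cv2.SIFT_create()
    kps, desc = sift.compute(img, kps)
    ```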

  12. Patent Keyword Extraction Algorithm Based on Distributed Representation for Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-02-01

    Many text mining tasks such as text retrieval, text summarization, and text comparison depend on the extraction of representative keywords from the main text. Most existing keyword extraction algorithms are based on discrete bag-of-words representations of the text. In this paper, we propose a patent keyword extraction algorithm (PKEA) based on the distributed Skip-gram model for patent classification. We also develop a set of quantitative performance measures for keyword extraction evaluation based on information gain and on cross-validation with Support Vector Machine (SVM) classification, which are valuable when human-annotated keywords are not available. We used a standard benchmark dataset and a homemade patent dataset to evaluate the performance of PKEA. Our patent dataset includes 2500 patents from five distinct technological fields related to autonomous cars (GPS systems, lidar systems, object recognition systems, radar systems, and vehicle control systems). We compared our method with Frequency, Term Frequency-Inverse Document Frequency (TF-IDF), TextRank, and Rapid Automatic Keyword Extraction (RAKE). The experimental results show that our proposed algorithm provides a promising way to extract keywords from patent texts for patent classification.
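
    As a point of reference, the TF-IDF baseline the paper compares against can be reproduced in a few lines of scikit-learn; the two example documents below are invented for illustration.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["lidar point cloud segmentation for autonomous cars",
            "radar target tracking and vehicle control systems"]
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)                     # documents x vocabulary matrix
    terms = vec.get_feature_names_out()
    top = X[0].toarray().ravel().argsort()[::-1][:5]
    print([terms[i] for i in top])                  # top-5 keywords of document 0
    ```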

  13. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    Science.gov (United States)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.

  14. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and the BPNN (Back Propagation Neural Network) algorithm. First, a candidate-region-based license plate detection and segmentation method is developed. Second, a new feature extraction model is designed that combines three sets of features. Third, a license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method has increased to 95.7% and the processing time has decreased to 51.4 ms.

  15. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    Science.gov (United States)

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
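
    The machinery behind such graph-based projections can be sketched as follows. This is a generic Laplacian-eigenmap-style projection, not the exact GLF objective (which also maximizes a weighted variance), and the neighborhood size is an assumption.

    ```python
    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from scipy.sparse.csgraph import laplacian

    def laplacian_features(spikes, n_features=3, n_neighbors=10):
        """spikes: (n_spikes, n_samples) array of aligned waveforms."""
        W = kneighbors_graph(spikes, n_neighbors, mode="connectivity")
        W = 0.5 * (W + W.T)                   # symmetrize the affinity graph
        L = laplacian(W).toarray()
        # eigenvectors with the smallest nontrivial eigenvalues vary smoothly
        # over the graph and serve as low-dimensional feature coordinates
        vals, vecs = np.linalg.eigh(L)
        return vecs[:, 1:n_features + 1]      # drop the constant eigenvector
    ```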

  16. Comparisons of feature extraction algorithm based on unmanned aerial vehicle image

    Directory of Open Access Journals (Sweden)

    Xi Wenfei

    2017-07-01

    Feature point extraction technology has become a research hotspot in photogrammetry and computer vision. The commonly used point feature extraction operators are the SIFT operator, Forstner operator, Harris operator, and Moravec operator, among others. With its high spatial resolution, unmanned aerial vehicle (UAV) imagery differs from traditional aerial imagery. Based on these characteristics of UAV imagery, this paper uses the operators referred to above to extract feature points from building images, grassland images, shrubbery images, and vegetable greenhouse images. Through practical case analysis, the performance, advantages, disadvantages, and adaptability of each algorithm are compared and analyzed in terms of speed and accuracy. Finally, suggestions on how to adapt the different algorithms to diverse environments are proposed.

  17. The algorithm of fast image stitching based on multi-feature extraction

    Science.gov (United States)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu invariant moment contour information and feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, pixel neighborhoods are used to extract contour information, and the Hu invariant moments serve as a similarity measure to restrict SIFT feature point extraction to similar regions. The Euclidean distance is then replaced with the Hellinger kernel to improve initial matching efficiency and reduce mismatches, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
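
    The Hellinger-kernel substitution is the well-known "RootSIFT" trick: L1-normalize each SIFT descriptor and take its element-wise square root, after which Euclidean distance between transformed descriptors equals the Hellinger distance on the originals. A minimal sketch; the function name is ours.

    ```python
    import numpy as np

    def root_sift(desc, eps=1e-7):
        """Map SIFT descriptors so that L2 distance becomes Hellinger distance."""
        desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)  # L1 normalize
        return np.sqrt(desc)                                           # element-wise root
    ```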

  18. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    Science.gov (United States)

    Luo, Junhai; Fu, Liang

    2017-06-09

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. First, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; second, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; third, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
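
    A sketch of the offline phase with scikit-learn: KPCA compresses the RSS fingerprints, then affinity propagation clusters them into candidate position areas. The array shapes, kernel settings, and random fingerprints are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.cluster import AffinityPropagation

    rss = np.random.rand(500, 20)              # fake data: 500 fingerprints x 20 APs
    kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.5)
    features = kpca.fit_transform(rss)         # nonlinear feature extraction
    clusters = AffinityPropagation(random_state=0).fit_predict(features)
    ```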

  19. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2017-06-01

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. First, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; second, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; third, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.

  20. An improved feature extraction algorithm based on KAZE for multi-spectral image

    Science.gov (United States)

    Yang, Jianping; Li, Jun

    2018-02-01

    Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and the military. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although linear-scale feature matching algorithms such as SIFT and SURF are strongly robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, which is based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using adjusted-cosine vectors. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.

  1. Open critical area model and extraction algorithm based on the net flow-axis

    International Nuclear Information System (INIS)

    Wang Le; Wang Jun-Ping; Gao Yan-Hong; Xu Dan; Li Bo-Bo; Liu Shi-Gang

    2013-01-01

    In the integrated circuit manufacturing process, critical area extraction is a bottleneck for layout optimization and integrated circuit yield estimation. In this paper, we study the problem that missing-material defects may result in open circuit faults. Combining mathematical morphology theory, we present a new computation model and a novel extraction algorithm for the open critical area based on the net flow-axis. First, we find the net flow-axis for the different nets. Then, the net flow-edges based on the net flow-axis are obtained. Finally, we extract the open critical area by mathematical morphology. Compared with existing methods, the nets need not be divided into horizontal and vertical nets, and the experimental results show that our model and algorithm can accurately extract the size of the open critical area and obtain the location of the open circuit critical area. (interdisciplinary physics and related areas of science and technology)

  2. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    Science.gov (United States)

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
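
    The greedy core that single-atom matching pursuit shares with the proposed CD-SaMP can be sketched generically; the composite and modulation dictionaries and the attenuation-coefficient stopping rule of the paper are not reproduced here, and all names and the simple residual-norm tolerance are ours.

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, n_iter=50, tol=1e-3):
        """dictionary: (n_atoms, n_samples) array with unit-norm rows."""
        residual = signal.astype(np.float64).copy()
        coeffs = np.zeros(dictionary.shape[0])
        for _ in range(n_iter):
            corr = dictionary @ residual          # match every atom at once
            k = np.argmax(np.abs(corr))           # best single atom this round
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[k]   # peel the atom off the residual
            if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
                break
        return coeffs, residual
    ```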

  3. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2014-09-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and

  4. Vibration extraction based on fast NCC algorithm and high-speed camera.

    Science.gov (United States)

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture up to 1000 frames per second. To process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel onto the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
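
    A hedged sketch of NCC template tracking with parabolic subpixel refinement: OpenCV's matchTemplate computes the correlation surface, and the integer peak is refined by fitting a parabola through its neighbors. The matching mode and refinement formula are common conventions we assume, not the paper's exact method.

    ```python
    import cv2

    def track_ncc(frame, template):
        """Return the subpixel (x, y) position of template inside frame."""
        ncc = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(ncc)       # integer-pixel peak

        def refine(c_m, c_0, c_p):                 # 1-D parabolic peak offset
            denom = c_m - 2 * c_0 + c_p
            return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

        dx = refine(ncc[y, x - 1], ncc[y, x], ncc[y, x + 1]) if 0 < x < ncc.shape[1] - 1 else 0.0
        dy = refine(ncc[y - 1, x], ncc[y, x], ncc[y + 1, x]) if 0 < y < ncc.shape[0] - 1 else 0.0
        return x + dx, y + dy
    ```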

  5. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    Science.gov (United States)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.

  6. The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2017-07-01

    Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a-priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes a geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.

  7. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    Science.gov (United States)

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    In order to overcome the problem of poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and Ant-miner algorithms are respectively introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.
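
    An ELM itself is compact enough to sketch: a random, untrained hidden layer followed by a closed-form least-squares solve for the output weights. The layer size and activation below are illustrative assumptions; for classification, y would be one-hot encoded and predictions taken by argmax.

    ```python
    import numpy as np

    class ELM:
        def __init__(self, n_hidden=100, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            n_in = X.shape[1]
            self.W = self.rng.normal(size=(n_in, self.n_hidden))  # random, fixed input weights
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                      # hidden-layer activations
            self.beta = np.linalg.pinv(H) @ y                     # closed-form output weights
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta
    ```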

  8. Comparison of analyzer-based imaging computed tomography extraction algorithms and application to bone-cartilage imaging

    International Nuclear Information System (INIS)

    Diemoz, Paul C; Bravin, Alberto; Coan, Paola; Glaser, Christian

    2010-01-01

    In x-ray phase-contrast analyzer-based imaging, the contrast is provided by a combination of absorption, refraction and scattering effects. Several extraction algorithms, which attempt to separate and quantify these different physical contributions, have been proposed and applied. In a previous work, we presented a quantitative comparison of five of the most well-known extraction algorithms based on the geometrical optics approximation applied to planar images: diffraction-enhanced imaging (DEI), extended diffraction-enhanced imaging (E-DEI), generalized diffraction-enhanced imaging (G-DEI), multiple-image radiography (MIR) and Gaussian curve fitting (GCF). In this paper, we compare these algorithms in the case of the computed tomography (CT) modality. The extraction algorithms are applied to analyzer-based CT images of both plastic phantoms and biological samples (cartilage-on-bone cylinders), and absorption, refraction and scattering signals are derived. Results obtained with the different algorithms may vary greatly, especially in the case of large refraction angles. We show that ABI-CT extraction algorithms can provide an excellent tool to enhance the visualization of cartilage internal structures, which may find applications in a clinical context. In addition, using the refraction images, the refractive index decrements for both the cartilage matrix and the cartilage cells have been estimated.

  9. Rules Extraction with an Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Deqin Yan

    2007-12-01

    In this paper, a method of extracting rules from information systems with immune algorithms is proposed. The immune algorithm is designed around a sharing mechanism for rule extraction: the principle of sharing and competing for resources in the sharing mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, a new concept of flexible confidence and rule measurement is introduced. Experiments demonstrate that the proposed method is effective.

  10. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    Directory of Open Access Journals (Sweden)

    Dashan Zhang

    2016-04-01

    The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and brings no additional mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structural vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structural vibration signals by tracking either artificial targets or natural features.

  11. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    Science.gov (United States)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Second, the LiveWire shortest path is calculated using a direction search over the control point set, utilizing the spatial relationship between the two control points users provide in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform with those of the optimal path search based on control point set direction search: the former decomposes and reconstructs the image quickly and is consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. As a result, the algorithm speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively. Together, these methods greatly improve the execution efficiency and robustness of the algorithm.

  12. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    Directory of Open Access Journals (Sweden)

    Yang Li

    In order to overcome the problem of poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and Ant-miner algorithms are respectively introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.

  13. Extracting Vegetation Coverage in Dry-hot Valley Regions Based on Alternating Angle Minimum Algorithm

    Science.gov (United States)

    Y Yang, M.; Wang, J.; Zhang, Q.

    2017-07-01

    Vegetation coverage is one of the most important indicators of ecological environment change, and is also an effective index for the assessment of land degradation and desertification. Dry-hot valley regions have sparse surface vegetation, and the spectral information of the vegetation in such regions is usually weakly represented in remote sensing imagery, so the commonly used vegetation index methods face considerable limitations in calculating vegetation coverage there. Therefore, in this paper, the deterministic Alternating Angle Minimum (AAM) algorithm is adopted for endmember selection and pixel unmixing of MODIS images in order to extract vegetation coverage, and an accuracy test is carried out using Landsat TM images over the same period. The results show that, in dry-hot valley regions with sparse vegetation, the AAM model achieves high unmixing accuracy and the extracted vegetation coverage is close to the actual situation, so applying the AAM model to vegetation coverage extraction in these regions is promising.
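
    As a reference point for the unmixing step, the linear mixing model can be inverted per pixel with non-negative least squares; AAM's alternating angle minimization itself is not reproduced here, and the array shapes and function name are our assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixel, endmembers):
        """pixel: (n_bands,); endmembers: (n_bands, n_endmembers) spectra."""
        abund, _ = nnls(endmembers, pixel)          # non-negative abundances
        return abund / max(abund.sum(), 1e-12)      # normalize to fractions
    ```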

  14. Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm.

    Science.gov (United States)

    Khushaba, Rami N; Kodagoda, Sarath; Lal, Sara; Dissanayake, Gamini

    2011-01-01

    Driver drowsiness and loss of vigilance are a major cause of road accidents. Monitoring physiological signals while driving provides the possibility of detecting and warning of drowsiness and fatigue. The aim of this paper is to maximize the amount of drowsiness-related information extracted from a set of electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals during a simulated driving test. Specifically, we develop an efficient fuzzy mutual-information (MI)-based wavelet packet transform (FMIWPT) feature-extraction method for classifying the driver drowsiness state into one of predefined drowsiness levels. The proposed method estimates the required MI using a novel approach based on fuzzy memberships, providing an accurate information-content estimation measure. The quality of the extracted features was assessed on datasets collected from 31 drivers in a simulated test. The experimental results prove the significance of FMIWPT in extracting features that highly correlate with the different drowsiness levels, achieving a classification accuracy of 95%-97% on average across all subjects.
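
    The wavelet-packet side of such a pipeline is easy to sketch with PyWavelets; the fuzzy-MI feature selection on top of these band energies is the paper's contribution and is not shown, and the wavelet and level choices are assumptions.

    ```python
    import numpy as np
    import pywt

    def wp_energy_features(signal, wavelet="db4", level=4):
        """Normalized sub-band energies from a wavelet packet decomposition."""
        wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="freq")      # terminal sub-bands
        energies = np.array([np.sum(np.square(n.data)) for n in nodes])
        return energies / energies.sum()               # one feature per band
    ```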

  15. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  16. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first important step. Valid feature points depend on the threshold. If the threshold is too low, plenty of feature points will be detected, and they may aggregate in rich-texture regions, which not only affects the speed of feature description, but also aggravates the burden of subsequent processing; if the threshold is set high, feature points in poor-texture areas will be lacking. To solve these problems, this paper proposes a threshold auto-adjustment method for feature extraction based on a grid. By dividing the image into a number of grid cells, a threshold is set in every local cell for extracting feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids the aggregation of feature points and greatly reduces the complexity of subsequent work.
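
    A minimal sketch of the mechanism, with OpenCV's FAST detector standing in for whichever detector the paper uses; the grid size, target count, and threshold-halving rule are illustrative assumptions.

    ```python
    import cv2

    def grid_features(img, rows=4, cols=4, target=50, thresh=40, min_thresh=5):
        """Detect FAST keypoints per grid cell, lowering the threshold where needed."""
        h, w = img.shape
        kps = []
        for r in range(rows):
            for c in range(cols):
                y0, x0 = r * h // rows, c * w // cols
                cell = img[y0:(r + 1) * h // rows, x0:(c + 1) * w // cols]
                t = thresh
                while True:
                    fast = cv2.FastFeatureDetector_create(threshold=t)
                    pts = fast.detect(cell, None)
                    if len(pts) >= target or t <= min_thresh:
                        break
                    t //= 2                    # auto-adjust: halve and retry
                # shift cell-local keypoints back to full-image coordinates
                kps += [cv2.KeyPoint(p.pt[0] + x0, p.pt[1] + y0, p.size) for p in pts]
        return kps
    ```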

  17. An Algorithm of Building Extraction in Urban Area Based on Improved Top-hat Transformations and LBP Elevation Texture

    Directory of Open Access Journals (Sweden)

    HE Manyun

    2017-09-01

    Buildings and vegetation are difficult to classify using LiDAR data alone, and vegetation in shadows cannot be eliminated using aerial images alone. Improved top-hat transformations and local binary pattern (LBP) elevation texture analysis for building extraction are therefore proposed, based on the fusion of aerial images and LiDAR data. First, the LiDAR data are reorganized into grid cells, and the algorithm removes ground points through the top-hat transform. Then, vegetation points are extracted by the normalized difference vegetation index (NDVI). Third, according to the elevation information of the LiDAR points, LBP elevation texture is calculated, achieving precise elimination of vegetation in shadows or surrounding the buildings. Finally, morphological operations are used to fill holes in building roofs, and region growing completes the building edges. The simulation is based on the complex urban area of the Vaihingen benchmark provided by ISPRS; the results show that the algorithm affords higher classification accuracy.
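
    The ground-removal step rests on the morphological top-hat transform (the image minus its opening), which keeps structures narrower than the structuring element that rise above the ground trend. A hedged OpenCV sketch; the file name, kernel size, and 2.5 m height cutoff are assumptions.

    ```python
    import cv2
    import numpy as np

    # placeholder path to a gridded LiDAR elevation raster
    dsm = cv2.imread("elevation_grid.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 25))
    tophat = cv2.morphologyEx(dsm, cv2.MORPH_TOPHAT, kernel)   # DSM minus its opening
    building_mask = (tophat > 2.5).astype(np.uint8)            # keep relief > 2.5 m
    ```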

  18. Pattern Extraction Algorithm for NetFlow-Based Botnet Activities Detection

    Directory of Open Access Journals (Sweden)

    Rafał Kozik

    2017-01-01

    As computer and network technologies evolve, the complexity of cybersecurity has dramatically increased. Advanced cyber threats have rendered current approaches to cyber-attack detection ineffective. Many currently used computer systems and applications have never been deeply tested from a cybersecurity point of view and are an easy target for cyber criminals. The paradigm of security by design is still more of a wish than a reality, especially in the context of constantly evolving systems. On the other hand, protection technologies have also improved. Recently, Big Data technologies have given network administrators a wide spectrum of tools to combat cyber threats. In this paper, we present an innovative system for network traffic analysis and anomaly detection that utilises these tools. The system's architecture is based on a Big Data processing framework, data mining, and innovative machine learning techniques. So far, the proposed system implements pattern extraction strategies that leverage batch processing methods. As a use case, we consider the problem of botnet detection by means of data in the form of NetFlows. Results are promising and show that the proposed system can be a useful tool for improving cybersecurity.

  19. The analysis and detection of hypernasality based on a formant extraction algorithm

    Science.gov (United States)

    Qian, Jiahui; Fu, Fanglin; Liu, Xinyi; He, Ling; Yin, Heng; Zhang, Han

    2017-08-01

    In clinical practice, the effective assessment of cleft palate speech disorders is important. For hypernasal speech, the resonance between the nasal cavity and the oral cavity produces an additional nasal formant. Thus, the formant frequency is a crucial cue for the judgment of hypernasality in cleft palate speech. Due to the existence of the nasal formant, peak merging occurs more often in the spectrum of nasal speech; however, peak merging cannot be resolved by the classical linear prediction coefficient root extraction method. In this paper, a method is proposed to detect the additional nasal formant in the low-frequency region and obtain its frequency. The experimental results show that the proposed method locates the nasal formant reliably. Moreover, the formants are used as features for the detection of hypernasality. A total of 436 phonemes, collected from the Hospital of Stomatology, are used to carry out the experiment. The detection accuracy of hypernasality in cleft palate speech is 95.2%.
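
    The classical LPC-root formant estimation that the paper builds on can be sketched as follows; the sampling rate, model order, and the use of librosa's LPC routine are our assumptions, not the authors' setup.

    ```python
    import numpy as np
    import librosa

    def lpc_formants(frame, sr=16000, order=12):
        """Estimate formant frequencies from the roots of the LPC polynomial."""
        a = librosa.lpc(frame.astype(np.float64), order=order)
        roots = np.roots(a)
        roots = roots[np.imag(roots) > 0]               # one root per conjugate pair
        freqs = np.angle(roots) * sr / (2 * np.pi)      # root angle -> frequency in Hz
        return np.sort(freqs)
    ```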

  20. Application of a rule extraction algorithm family based on the Re-RX algorithm to financial credit risk assessment from a Pareto optimal perspective

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    2016-01-01

    Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on the use of computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex and lacking in interpretability, and have therefore not been widely adopted. Therefore, in this study, we provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva, from four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly-extended types of the Re-RX algorithm and high performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.

  1. An Efficient Method for Mapping High-Resolution Global River Discharge Based on the Algorithms of Drainage Network Extraction

    Directory of Open Access Journals (Sweden)

    Jiaye Li

    2018-04-01

    Full Text Available River discharge, which represents the accumulation of surface water flowing into rivers and ultimately into the ocean or other water bodies, may have great impacts on water quality and the living organisms in rivers. However, global knowledge of river discharge is still poor and worth exploring. This study proposes an efficient method for mapping high-resolution global river discharge based on the algorithms of drainage network extraction. Using an existing global runoff map and digital elevation model (DEM) data as inputs, the method consists of three steps. First, the pixels of the runoff map and the DEM data are resampled to the same resolution (i.e., 0.01-degree). Second, the flow direction of each pixel of the DEM data (identified by the optimal flow path method used in drainage network extraction) is determined and then applied to the corresponding pixel of the runoff map. Third, the river discharge of each pixel of the runoff map is calculated by summing the runoffs of all the pixels upstream of this pixel, similar to the upslope area accumulation step in drainage network extraction. Finally, a 0.01-degree global map of the mean annual river discharge is obtained. Moreover, a 0.5-degree global map of the mean annual river discharge is produced to display the results with a more intuitive perception. Compared against existing global river discharge databases, the 0.01-degree map is of generally high accuracy for the selected river basins, especially for the Amazon River basin, with the lowest relative error (RE) of 0.3%, and the Yangtze River basin, within the RE range of ±6.0%. However, the results for the Congo and Zambezi River basins are not satisfactory, with RE values over 90%, and it is inferred that there may be accuracy problems with the runoff map in these river basins.
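
    The third step (summing runoff over all upstream pixels) is the classic flow-accumulation operation of drainage network extraction. The sketch below shows one way to do it with single-direction (D8) flow codes and a topological sweep; the direction encoding and the -1 outlet marker are assumptions for illustration, not the authors' implementation.

        import numpy as np

        # D8 neighbour offsets, indexed by a direction code 0..7; -1 marks an outlet
        OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]

        def flow_accumulation(flow_dir, runoff):
            """Accumulate runoff along D8 flow directions (assumed acyclic)."""
            rows, cols = runoff.shape
            acc = runoff.astype(np.float64).copy()
            indeg = np.zeros((rows, cols), dtype=int)
            for i in range(rows):
                for j in range(cols):
                    d = flow_dir[i, j]
                    if d >= 0:
                        ni, nj = i + OFFSETS[d][0], j + OFFSETS[d][1]
                        if 0 <= ni < rows and 0 <= nj < cols:
                            indeg[ni, nj] += 1
            stack = [(i, j) for i in range(rows) for j in range(cols)
                     if indeg[i, j] == 0]
            while stack:                       # topological sweep downstream
                i, j = stack.pop()
                d = flow_dir[i, j]
                if d < 0:
                    continue
                ni, nj = i + OFFSETS[d][0], j + OFFSETS[d][1]
                if 0 <= ni < rows and 0 <= nj < cols:
                    acc[ni, nj] += acc[i, j]
                    indeg[ni, nj] -= 1
                    if indeg[ni, nj] == 0:
                        stack.append((ni, nj))
            return acc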

  2. [An improved algorithm for electrohysterogram envelope extraction].

    Science.gov (United States)

    Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia; Chen, Zhaoxia

    2017-02-01

    Extraction of the uterine contraction signal from the abdominal uterine electromyogram (EMG) signal is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. Firstly, in our experiment, the zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess the performance of the algorithm, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones in eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second comparison algorithm. Thus the new method is reliable and effective.
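
    The traditional moving-window RMS envelope that the paper improves upon can be sketched in a few lines; the window length is an illustrative assumption.

        import numpy as np

        def rms_envelope(emg, win=200):
            """Moving-window RMS of an EMG signal (the classical envelope)."""
            squared = emg.astype(np.float64) ** 2
            kernel = np.ones(win) / win
            return np.sqrt(np.convolve(squared, kernel, mode="same"))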

  3. A novel Gravity-FREAK feature extraction and Gravity-KLT tracking registration algorithm based on iPhone MEMS mobile sensor in mobile environment.

    Science.gov (United States)

    Hong, Zhiling; Lin, Fan; Xiao, Bin

    2017-01-01

    Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm based on a Micro-electromechanical Systems (MEMS) sensor to overcome the limited computing performance and memory resources of mobile devices and to further improve clients' interactive experience of reality through digital information added to the real world by augmented reality technology. The algorithm takes the gravity projection vector corresponding to each feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of calculation and improves the accuracy of feature extraction. For the registration method based on matching and tracking of natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm was used to eliminate abnormal matches, and the Gravity Kanade-Lucas Tracking (KLT) algorithm based on the MEMS sensor was used for tracking registration of targets and for improving the robustness of the tracking registration algorithm in a mobile environment.

  4. A novel Gravity-FREAK feature extraction and Gravity-KLT tracking registration algorithm based on iPhone MEMS mobile sensor in mobile environment.

    Directory of Open Access Journals (Sweden)

    Zhiling Hong

    Full Text Available Based on the traditional Fast Retina Keypoint (FREAK) feature description algorithm, this paper proposes a Gravity-FREAK feature description algorithm based on a Micro-electromechanical Systems (MEMS) sensor to overcome the limited computing performance and memory resources of mobile devices and to further improve clients' interactive experience of reality through digital information added to the real world by augmented reality technology. The algorithm takes the gravity projection vector corresponding to each feature point as its feature orientation, which saves the time of calculating the neighborhood gray gradient of each feature point, reduces the cost of calculation and improves the accuracy of feature extraction. For the registration method based on matching and tracking of natural features, adaptive and generic corner detection based on the Gravity-FREAK matching purification algorithm was used to eliminate abnormal matches, and the Gravity Kanade-Lucas Tracking (KLT) algorithm based on the MEMS sensor was used for tracking registration of targets and for improving the robustness of the tracking registration algorithm in a mobile environment.

  5. Spreadsheet algorithm for stagewise solvent extraction

    International Nuclear Information System (INIS)

    Leonard, R.A.; Regalbuto, M.C.

    1994-01-01

    The material balance and equilibrium equations for solvent extraction processes have been combined with computer spreadsheets in a new way so that models for very complex multicomponent multistage operations can be set up and used easily. Part of the novelty is the way in which the problem is organized in the spreadsheet. In addition, to facilitate spreadsheet setup, a new calculational procedure has been developed. The resulting Spreadsheet Algorithm for Stagewise Solvent Extraction (SASSE) can be used with either IBM or Macintosh personal computers as a simple yet powerful tool for analyzing solvent extraction flowsheets. 22 refs., 4 figs., 2 tabs
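
    As a toy illustration of the kind of stagewise material balance SASSE solves, the sketch below models a single solute in a counter-current cascade with a linear equilibrium y = D*x and solves the stage balances as one linear system. All symbols and the linear-equilibrium assumption are illustrative simplifications, not the SASSE formulation itself, which handles multicomponent, nonlinear flowsheets in a spreadsheet.

        import numpy as np

        def countercurrent_stages(N, A, O, D, x_feed, y_in=0.0):
            """Steady-state aqueous/organic profiles for N counter-current stages.
            A, O: aqueous and organic flow rates; D: distribution coefficient."""
            M = np.zeros((N, N))
            b = np.zeros(N)
            for n in range(N):
                M[n, n] = -(A + O * D)
                if n > 0:
                    M[n, n - 1] = A            # aqueous stream from the previous stage
                else:
                    b[n] -= A * x_feed         # feed enters stage 1
                if n < N - 1:
                    M[n, n + 1] = O * D        # organic stream returning from stage n+2
                else:
                    b[n] -= O * y_in           # fresh solvent enters stage N
            x = np.linalg.solve(M, b)
            return x, D * x                    # aqueous and organic concentrations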

  6. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Tracking point features throughout the image sequence is then performed with an algorithm based on minimizing the distance between two descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
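
    A common way to realize "minimizing the distance between two descriptors" is nearest-neighbour matching with Lowe's ratio test, sketched below; the 0.8 ratio is a conventional value, and whether the paper uses a ratio test at all is an assumption.

        import numpy as np

        def match_descriptors(desc1, desc2, ratio=0.8):
            """Match each descriptor in desc1 to its nearest neighbour in desc2,
            keeping only matches that pass Lowe's ratio test."""
            matches = []
            for i, d in enumerate(desc1):
                dist = np.linalg.norm(desc2 - d, axis=1)
                j0, j1 = np.argsort(dist)[:2]
                if dist[j0] < ratio * dist[j1]:
                    matches.append((i, j0))
            return matches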

  7. A comparative study of image low level feature extraction algorithms

    Directory of Open Access Journals (Sweden)

    M.M. El-gayar

    2013-07-01

    Full Text Available Feature extraction and matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods for assessing the performance of popular image matching algorithms are presented and rely on costly descriptors for detection and matching. Specifically, the method assesses the type of images under which each of the algorithms reviewed herein performs at its maximum or highest efficiency. The efficiency is measured in terms of the number of matches found by the algorithm and the number of type I and type II errors encountered when the algorithm is tested against a specific pair of images. Current comparative studies assess the performance of the algorithms based on the results obtained in different criteria such as speed, sensitivity, occlusion, and others. This study addresses the limitations of the existing comparative tools and delivers a generalized criterion to determine beforehand the level of efficiency expected from a matching algorithm given the type of images evaluated. The algorithms and the respective images used within this work are divided into two groups: feature-based and texture-based. From this broad classification, only the most widely used algorithms are assessed: color histogram, FAST (Features from Accelerated Segment Test), SIFT (Scale Invariant Feature Transform), PCA-SIFT (Principal Component Analysis-SIFT), F-SIFT (fast-SIFT) and SURF (speeded up robust features). The performance of the Fast-SIFT (F-SIFT) feature detection method is compared for scale changes, rotation, blur, illumination changes and affine transformations. All the experiments use repeatability measurement and the number of correct matches for the evaluation measurements. SIFT shows its stability in most situations, although it is slow. F-SIFT is the fastest one with good performance similar to SURF; SIFT and PCA-SIFT show their advantages in rotation and illumination changes.

  8. An algorithm for centerline extraction using natural neighbour interpolation

    DEFF Research Database (Denmark)

    Mioc, Darka; Antón Castro, Francesc/François; Dharmaraj, Girija

    2004-01-01

    , especially due to the lack of explicit topology in commercial GIS systems. Indeed, each map update might require the batch processing of the whole map. Currently, commercial GIS do not offer completely automatic raster/vector conversion even for simple scanned black and white maps. Various commercial raster...... they need user-defined tolerance settings, which causes difficulties in the extraction of complex spatial features, for example: road junctions, curved or irregular lines and complex intersections of linear features. The approach we use here is based on image processing filtering techniques to extract...... to the improvement of data caption and conversion in GIS and to develop a software toolkit for automated raster/vector conversion. The approach is based on computing the skeleton from Voronoi diagrams using natural neighbour interpolation. In this paper we present the algorithm for skeleton extraction from scanned...

  9. Phase Grouping Line Extraction Algorithm Using Overlapped Partition

    Directory of Open Access Journals (Sweden)

    WANG Jingxue

    2015-07-01

    Full Text Available Aiming at solving the problem of fractures in discontinuity areas and the challenges of line fitting in each partition, an innovative line extraction algorithm is proposed based on phase grouping using overlapped partitions. The proposed algorithm adopts dual partition steps, which generate eight overlapped partitions. Between the two steps, the middle axis in the first step coincides with the border lines in the other step. Firstly, the connected edge points that share the same phase gradients are merged into line candidates and fitted into line segments. Then, to remedy broken lines at the border areas, the broken segments in the second partition step are refitted. The proposed algorithm is robust and does not need any parameter tuning. Experiments with various datasets have confirmed that the method is not only capable of handling linear features, but is also powerful enough to handle curve features.

  10. A novel automated spike sorting algorithm with adaptable feature extraction.

    Science.gov (United States)

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or Wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, Wavelet and principal component-based features and (ii) automatically derives a feature subset most suitable for sorting an individual set of spike signals. The new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described algorithm approach. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Ion trace detection algorithm to extract pure ion chromatograms to improve untargeted peak detection quality for liquid chromatography/time-of-flight mass spectrometry-based metabolomics data.

    Science.gov (United States)

    Wang, San-Yuan; Kuo, Ching-Hua; Tseng, Yufeng J

    2015-03-03

    Able to detect known and unknown metabolites, untargeted metabolomics has shown great potential in identifying novel biomarkers. However, elucidating all possible liquid chromatography/time-of-flight mass spectrometry (LC/TOF-MS) ion signals in a complex biological sample remains challenging, since many ions are not the products of metabolites. Methods for reducing ions not related to metabolites, or for directly detecting metabolite-related (pure) ions, are therefore important. In this work, we describe PITracer, a novel algorithm that accurately detects the pure ions of an LC/TOF-MS profile to extract pure ion chromatograms and detect chromatographic peaks. PITracer estimates the relative mass difference tolerance of ions and calibrates the mass over charge (m/z) values for peak detection algorithms, with an additional option for further mass correction with respect to a user-specified metabolite. PITracer was evaluated using two data sets containing 373 human metabolite standards, including 5 saturated standards considered to be split peaks resulting from large m/z fluctuations, and 12 urine samples spiked with 50 forensic drugs of varying concentrations. Analysis of these data sets shows that PITracer outperformed an existing state-of-the-art algorithm, extracted the pure ion chromatograms of the 5 saturated standards without generating split peaks, and detected the forensic drugs with high recall, precision, and F-score and small mass error.

  12. Improved discrete swarm intelligence algorithms for endmember extraction from hyperspectral remote sensing images

    Science.gov (United States)

    Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing

    2016-10-01

    Endmember extraction is a key step in hyperspectral unmixing. A new endmember extraction framework is proposed for hyperspectral endmember extraction. The proposed approach is based on swarm intelligence (SI) algorithms, where the SI algorithm is discretized because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that generally belong to the same classes. Three endmember extraction methods are proposed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.

  13. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    Full Text Available With the rapid growth of Internet-based recruiting, there are a great number of personal resumes among recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font sizes, font colours, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they rely strongly on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this goal, we design a novel feature, Writing Style, to model sentence syntax information. Besides the word index and punctuation index, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

  14. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    Science.gov (United States)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction methods, and design algorithms for the evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
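
    The Voigt profile chosen here is the convolution of a Gaussian (width sigma) and a Lorentzian (width gamma); a standard way to evaluate it is via the Faddeeva function, as sketched below. The use of scipy is an assumption, not part of the original IUE code.

        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            """Voigt profile: Gaussian (sigma) convolved with Lorentzian (gamma)."""
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))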

  15. A Mining Algorithm for Extracting Decision Process Data Models

    Directory of Open Access Journals (Sweden)

    Cristina-Claudia DOLEAN

    2011-01-01

    Full Text Available The paper introduces an algorithm that mines logs of user interaction with simulation software. It outputs a model that explicitly shows the data perspective of the decision process, namely the Decision Data Model (DDM. In the first part of the paper we focus on how the DDM is extracted by our mining algorithm. We introduce it as pseudo-code and, then, provide explanations and examples of how it actually works. In the second part of the paper, we use a series of small case studies to prove the robustness of the mining algorithm and how it deals with the most common patterns we found in real logs.

  16. A FAST AND ROBUST ALGORITHM FOR ROAD EDGES EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    K. Qiu

    2016-06-01

    Full Text Available Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most road edges exhibit an elevation difference, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes containing only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, a serious problem is that the rough and refined planes are usually extracted poorly due to rough road surfaces and varying point cloud density. To eliminate the influence of rough roads, a technique similar to the difference of DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method that adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g. urban roads, highways, and some rural roads). We use the same parameters throughout the experiments, and our algorithm can achieve real-time processing speeds.
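
    The first step, extracting a rough plane with RANSAC, can be sketched as below; the iteration count and inlier tolerance are illustrative parameters, not the paper's settings.

        import numpy as np

        def ransac_plane(points, n_iter=500, tol=0.05, seed=0):
            """Fit a plane to Nx3 points with RANSAC; returns (normal, d), inliers."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(points), dtype=bool)
            best_model = None
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(normal)
                if norm < 1e-12:               # degenerate (collinear) sample
                    continue
                normal /= norm
                d = -normal @ sample[0]
                inliers = np.abs(points @ normal + d) < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_model = inliers, (normal, d)
            return best_model, best_inliers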

  17. Data Extraction Based on Page Structure Analysis

    Directory of Open Access Journals (Sweden)

    Ren Yichao

    2017-01-01

    Full Text Available The information we need is often dispersed and organized in differing structures. In addition, because of the existence of unstructured data such as natural language and images, extracting local content from pages is extremely difficult. In light of the problems above, this article applies a method combining a page structure analysis algorithm with a page data extraction algorithm to accomplish the gathering of network data. In this way, the problem that traditional complex extraction models behave poorly when dealing with large-scale data is addressed, and page data extraction efficiency is boosted to a new level. In the meantime, the article also compares, for pages and content of different types, the page-based DOM structure method with HTML regularities of distribution. From this, we may find a more efficient extraction method.

  18. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon

  19. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.
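
    The scoring idea (rank candidate terms by TF-IDF values) can be sketched with off-the-shelf tools; the toy English documents below stand in for the segmented Chinese course documents, and this plain TF-IDF ranking omits the paper's VSM-based weight optimization.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = ["pointers and arrays in C",
                "for loops and while loops in C",
                "function arguments and return values"]   # toy course documents

        vec = TfidfVectorizer()
        tfidf = vec.fit_transform(docs).toarray()
        scores = tfidf.max(axis=0)                # best TF-IDF score of each term
        terms = vec.get_feature_names_out()
        top_terms = terms[np.argsort(scores)[::-1][:5]]   # knowledge-point candidates
        print(top_terms)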

  20. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique that hides a secret message in quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB, which uses the message bits to substitute for the pixels' LSBs directly. The other is block LSB, which embeds a message bit into a number of pixels that belong to one image block. The extraction circuits can recover the secret message using only the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between capacity and robustness can be adjusted according to the needs of applications.
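
    For intuition, the classical analogue of plain-LSB embedding and blind extraction is sketched below; the paper's algorithms are quantum circuits over NEQR image states, which this classical sketch does not capture.

        import numpy as np

        def embed_lsb(pixels, bits):
            """Plain LSB: overwrite each pixel's least significant bit."""
            out = pixels.copy().ravel()
            out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
            return out.reshape(pixels.shape)

        def extract_lsb(stego, n_bits):
            """Blind extraction: read LSBs back from the stego image alone."""
            return stego.ravel()[:n_bits] & 1

        cover = np.arange(16, dtype=np.uint8).reshape(4, 4)
        message = np.array([1, 0, 1, 1], dtype=np.uint8)
        stego = embed_lsb(cover, message)
        assert np.array_equal(extract_lsb(stego, 4), message)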

  1. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    OpenAIRE

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S.; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality i...

  2. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is identified by minimum values. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective in automatic fault extraction and noise suppression. The application to field data further illustrates its validity and superiority. (paper)

  3. Cryptographic Protocols Based on Root Extracting

    DEFF Research Database (Denmark)

    Koprowski, Maciej

    In this thesis we design new cryptographic protocols, whose security is based on the hardness of root extracting or, more specifically, the RSA problem. First we study the problem of root extraction in finite Abelian groups, where the group order is unknown. This is a natural generalization of the...... complexity of root extraction, even if the algorithm can choose the "public exponent'' itself. In other words, both the standard and the strong RSA assumption are provably true w.r.t. generic algorithms. The results hold for arbitrary groups, so security w.r.t. generic attacks follows for any cryptographic...... groups. In all cases, security follows from a well defined complexity assumption (the strong root assumption), without relying on random oracles. A smooth natural number has no big prime factors. The probability that a random natural number not greater than x has all prime factors smaller than x^(1/u...

  4. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  5. Human resource recommendation algorithm based on ensemble learning and Spark

    Science.gov (United States)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

    Aiming at the problem of “information overload” in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in real business circumstances. Firstly, the algorithm uses two ensemble learning methods, Bagging and Boosting. The outputs from both learning methods are then merged to form a user interest model. Based on the user interest model, job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments have been carried out and analysed. The proposed algorithm achieves significant improvement in accuracy, recall rate and coverage, compared with recommendation algorithms such as UserCF and ItemCF.
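
    Since the abstract does not spell out the merge rule, one plausible reading, averaging the class probabilities of a bagged and a boosted learner, is sketched below with stand-in data; the specific estimators and the averaging rule are assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier

        X, y = make_classification(n_samples=500, random_state=0)  # stand-in data
        bag = BaggingClassifier(random_state=0).fit(X, y)
        boost = AdaBoostClassifier(random_state=0).fit(X, y)
        # merge the two ensemble outputs by averaging class probabilities
        merged = (bag.predict_proba(X) + boost.predict_proba(X)) / 2.0
        predictions = merged.argmax(axis=1)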

  6. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  7. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł; Marszał-Paszek, Barbara; Moshkov, Mikhail; Paszek, Piotr; Skowron, Andrzej; Suraj, Zbigniew

    2010-01-01

    The considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory

  8. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results show that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
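
    Opposition-based learning itself is a one-line transform: a candidate x in [lo, hi] is reflected to lo + hi - x, and the fitter of the pair is kept. A minimal sketch follows, with a sphere function as a stand-in fitness; the bounds and dimensionality are illustrative.

        import numpy as np

        def opposite(x, lo, hi):
            """Opposition-based learning: reflect a candidate within its bounds."""
            return lo + hi - x

        rng = np.random.default_rng(0)
        lo, hi = np.full(5, -10.0), np.full(5, 10.0)
        x = rng.uniform(lo, hi)
        x_opp = opposite(x, lo, hi)
        fitness = lambda v: np.sum(v ** 2)        # stand-in objective
        best = x if fitness(x) < fitness(x_opp) else x_opp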

  9. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...

  10. A novel line segment detection algorithm based on graph search

    Science.gov (United States)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To overcome the problem of extracting line segments from an image, a line segment detection method is proposed based on a graph search algorithm. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. For the candidate straight line segments, their adjacency relationships are depicted by a graph model, based on which the depth-first search algorithm is employed to determine how many adjacent line segments need to be merged. Finally, we use the least squares method to fit the detected straight lines. The comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).

  11. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraints and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, and the initial feature pairs are obtained. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors from the approximate matching process. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of image matching while ensuring the real-time performance of the algorithm.
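
    The NCC score used in the approximate-matching step is standard and can be sketched directly; patch extraction around Harris corners is omitted here.

        import numpy as np

        def ncc(patch_a, patch_b):
            """Normalized cross-correlation between two equal-sized patches."""
            a = patch_a.astype(np.float64) - patch_a.mean()
            b = patch_b.astype(np.float64) - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0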

  12. Simple sorting algorithm test based on CUDA

    OpenAIRE

    Meng, Hongyu; Guo, Fangjin

    2015-01-01

    With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithms are widely used. There are many simple sorting algorithms such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithms based on CUDA and draw some useful conclusions.

  13. RESEARCH ON FOREST FLAME RECOGNITION ALGORITHM BASED ON IMAGE FEATURE

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2017-09-01

    Full Text Available In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. Based on this, this paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. Firstly, the color characteristics of a large number of forest fire image samples are collected and analyzed. Using the K-means clustering algorithm, the forest flame model is obtained by comparing two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model; the method can be applied to forest fire identification in different scenes and is feasible in practice.
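
    A minimal sketch of the colour-space conversion and clustering steps is given below; picking the cluster whose centre maximises Y + Cr as the suspected flame region is an illustrative heuristic, not the paper's trained flame model, and the input file name is hypothetical.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        img = cv2.imread("forest_fire_sample.jpg")      # hypothetical input image
        ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # BGR -> YCrCb colour space
        pixels = ycrcb.reshape(-1, 3).astype(np.float64)
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
        # flame pixels tend to be bright (high Y) and red-shifted (high Cr)
        flame_cluster = np.argmax(km.cluster_centers_[:, 0] + km.cluster_centers_[:, 1])
        flame_mask = (km.labels_ == flame_cluster).reshape(ycrcb.shape[:2])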

  14. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

    K-means is an effective clustering technique used to separate similar data into groups based on initial centroids of clusters. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means clustering algorithm applies normalization to the available data prior to clustering, and calculates initial centroids based on weights. Experimental results prove the improvement of the proposed N-K means clustering algorithm over existing...

  15. Efficient algorithms for extracting biological key pathways with global constraints

    DEFF Research Database (Denmark)

    Baumbach, Jan; Friedrich, T.; Kötzing, T.

    2012-01-01

    The integrated analysis of data of different types and with various interdependencies is one of the major challenges in computational biology. Recently, we developed KeyPathwayMiner, a method that combines biological networks modeled as graphs with disease-specific genetic expression data gained....... Here we present an alternative approach that avoids a certain bias towards hub nodes: We now aim for extracting all maximal connected sub-networks where all but at most K nodes are expressed in all cases but in total (!) at most L, i.e. accumulated over all cases and all nodes in a solution. We call...... this strategy GLONE (global node exceptions); the previous problem we call INES (individual node exceptions). Since finding GLONE-components is computationally hard, we developed an Ant Colony Optimization algorithm and implemented it with the KeyPathwayMiner Cytoscape framework as an alternative to the INES...

  16. ID card number detection algorithm based on convolutional neural network

    Science.gov (United States)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a Convolutional Neural Network is presented in order to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device equipped with the Android operating system to locate and extract the ID number. The special color distribution of the ID card is used to select the appropriate channel component; image thresholding, noise processing and morphological processing are applied to binarize the image; at the same time, image rotation and the projection method are used for horizontal correction when the image is tilted. Finally, single characters are extracted by the projection method and recognized using a Convolutional Neural Network. Tests show that processing a single ID number image, from extraction to recognition, takes about 80 ms, with an accuracy rate of about 99%. It can be applied in actual production and living environments.

  17. A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing

    Directory of Open Access Journals (Sweden)

    Seo Kwang-deok

    2006-01-01

    Full Text Available We propose a robust formant extraction algorithm that combines spectral peak picking, formant location examination for peak-merger checking, and root extraction methods. The spectral peak picking method is employed to locate the formant candidates, and root extraction is used to solve the peak merger problem. The locations of and distances between the extracted formants are also utilized to efficiently find suspected peak mergers. The proposed algorithm does not require much computation, and is shown to be superior to previous formant extraction algorithms through extensive tests using the TIMIT speech database.

  18. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descending direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descending direction. The convergence of the algorithm is proven and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
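
    The modification described here is direct to implement: eigendecompose the Hessian, take absolute values of the eigenvalues, and solve for the search direction. A minimal sketch follows; the small floor on the eigenvalues is an added safeguard against singularity, not part of the abstract.

        import numpy as np

        def modified_newton_direction(hessian, grad, floor=1e-10):
            """Newton direction with negative eigenvalues replaced by their
            absolute values, so the result is always a descent direction."""
            w, V = np.linalg.eigh(hessian)      # symmetric eigendecomposition
            w = np.maximum(np.abs(w), floor)    # |eigenvalues|, floored
            H_mod = V @ np.diag(w) @ V.T
            return -np.linalg.solve(H_mod, grad)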

  19. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Chunhua Li

    2017-01-01

    Full Text Available Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  20. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    Science.gov (United States)

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  1. Texture orientation-based algorithm for detecting infrared maritime targets.

    Science.gov (United States)

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter, such as ocean waves, clouds or sea fog, usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection for infrared maritime targets. Besides, in order to further guarantee accurate target extraction, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  2. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

    Background: The currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective...... on the amplitude of the signal. The other algorithm was based on information about the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizure-detection system. However, different patients might require different types of algorithms/approaches.

  3. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, to avoid amplifying noise in subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, the exact matching of the images is achieved with these points. Compared with existing methods, it not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.

  4. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total...... variation regularization, and related formulations of the optical flow problem that may be used with a duality based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X......-ray images of different hands, taken using different imaging devices are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D...
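
    For reference, the TV-L1 energy referred to here is, in its standard scalar-image form (the thesis extends it to vector-valued images), commonly written as

        E(u) = \int_\Omega \big( |\nabla u_1| + |\nabla u_2| \big)\, dx
               + \lambda \int_\Omega \big| I_1(x + u(x)) - I_0(x) \big|\, dx

    where u = (u_1, u_2) is the flow field, I_0 and I_1 are the two frames, and \lambda weighs the L1 data term against the total variation regularizer.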

  5. NETWORKS OF NANOPARTICLES IN ORGANIC – INORGANIC COMPOSITES: ALGORITHMIC EXTRACTION AND STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Ralf Thiedmann

    2012-03-01

    Full Text Available The rising global demand for energy and the limited resources of fossil fuels require new technologies in renewable energy such as solar cells. Silicon solar cells offer good efficiency but suffer from high production costs. A promising alternative is polymer solar cells, due to potentially low production costs and the high flexibility of the panels. In this paper, the nanostructure of organic–inorganic composites is investigated, which can be used as photoactive layers in hybrid–polymer solar cells. These materials consist of a polymeric (OC1C10-PPV) phase with CdSe nanoparticles embedded therein. On the basis of 3D image data with high spatial resolution, gained by electron tomography, an algorithm is developed to automatically extract the CdSe nanoparticles from grayscale images, where we model them as spheres. The algorithm is based on a modified version of the Hough transform, where a watershed algorithm is used to separate the image data into basins such that each basin contains exactly one nanoparticle. After their extraction, neighboring nanoparticles are connected to form a 3D network that is related to the transport of electrons in polymer solar cells. A detailed statistical analysis of the CdSe network morphology is accomplished, which allows deeper insight into the hopping percolation pathways of electrons.

  6. A Robust Level-Set Algorithm for Centerline Extraction

    NARCIS (Netherlands)

    Telea, Alexandru; Vilanova, Anna

    2003-01-01

    We present a robust method for extracting 3D centerlines from volumetric datasets. We start from a 2D skeletonization method to locate voxels centered with respect to three orthogonal slicing directions. Next, we introduce a new detection criterion to extract the centerline voxels from the above

  7. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Full Text Available Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  8. A Signature Comparing Android Mobile Application Utilizing Feature Extracting Algorithms

    Directory of Open Access Journals (Sweden)

    Paul Grafilon

    2017-08-01

    Full Text Available This paper presents one of the applications made possible by the smartphone camera. Forgery is nowadays one of the most frequently undetected crimes. With the forensic technology used today, it is still difficult for authorities to compare signatures and determine which are genuine and which are forged. A signature is a legal representation of a person, and all transactions are based on it. Forgers may use a signature to sign illegal contracts and withdraw from bank accounts undetected. A signature can also be forged during election periods for repeated voting. Given these issues, a signature should always be secure. Signature verification is a reduced problem that still poses a real challenge for researchers. The literature on signature verification is quite extensive and shows two main areas of research: off-line and on-line systems. Off-line systems deal with a static image of the signature, i.e. the result of the action of signing, while on-line systems work on the dynamic process of generating the signature, i.e. the action of signing itself. The researchers have found a way to address these concerns: a mobile application that uses the camera to take a picture of a signature, analyzes it, and compares it to other signatures for verification. It exists to help citizens be more cautious and aware of issues regarding signatures. It may also help organizations and institutions, such as banks and insurance companies, verify signatures and thus avoid unwanted transactions and identity theft. Furthermore, it may assist the authorities in the never-ending battle against crime, especially against forgers and thieves. The project aimed to design and develop a mobile application that integrates the smartphone camera for verifying and comparing signatures, using the best algorithm possible. As a result of the development, the said smartphone camera application is functional and reliable.

  9. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    Science.gov (United States)

    Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio

    2015-01-01

    The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits. PMID:25594588

  10. Kinect as a Tool for Gait Analysis: Validation of a Real-Time Joint Extraction Algorithm Working in Side View

    Directory of Open Access Journals (Sweden)

    Enea Cippitelli

    2015-01-01

    Full Text Available The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits.

  11. Adaboost-based algorithm for human action recognition

    KAUST Repository

    Zerrouki, Nabil

    2017-11-28

    This paper presents a computer vision-based methodology for human action recognition. First, shape-based pose features are constructed based on area ratios to identify the human silhouette in images. The proposed features are invariant to translation and scaling. Once the human body features are extracted from videos, different human actions are learned individually on the training frames of each class. Then, we apply the Adaboost algorithm for the classification process. We assessed the proposed approach using the UR Fall Detection dataset. In this study, six classes of activities are considered, namely: walking, standing, bending, lying, squatting, and sitting. Results demonstrate the efficiency of the proposed methodology.
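
    A hedged sketch of the classification stage described above: silhouette area-ratio features feeding scikit-learn's AdaBoost classifier over the six listed activity classes. The feature dimension, the toy data and the label rule are stand-ins, since the abstract does not specify the exact area ratios.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split

        CLASSES = ["walking", "standing", "bending", "lying", "squatting", "sitting"]
        rng = np.random.default_rng(0)

        # Stand-in for the area-ratio pose features extracted from each frame:
        # rows = frames, columns = silhouette area ratios (dimension assumed to be 8).
        X = rng.random((600, 8))
        y = X[:, :6].argmax(axis=1)               # toy labels, one per activity class

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", round(clf.score(X_te, y_te), 2))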

  12. Adaboost-based algorithm for human action recognition

    KAUST Repository

    Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane

    2017-01-01

    This paper presents a computer vision-based methodology for human action recognition. First, shape-based pose features are constructed based on area ratios to identify the human silhouette in images. The proposed features are invariant to translation and scaling. Once the human body features are extracted from videos, different human actions are learned individually on the training frames of each class. Then, we apply the Adaboost algorithm for the classification process. We assessed the proposed approach using the UR Fall Detection dataset. In this study, six classes of activities are considered, namely: walking, standing, bending, lying, squatting, and sitting. Results demonstrate the efficiency of the proposed methodology.

  13. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  14. Tolerance based algorithms for the ATSP

    NARCIS (Netherlands)

    Goldengorin, B; Sierksma, G; Turkensteen, M; Hromkovic, J; Nagl, M; Westfechtel, B

    2004-01-01

    In this paper we use arc tolerances, instead of arc costs, to improve Branch-and-Bound type algorithms for the Asymmetric Traveling Salesman Problem (ATSP). We derive new tighter lower bounds based on exact and approximate bottleneck upper tolerance values of the Assignment Problem (AP). It is shown

  15. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  16. NLSE: Parameter-Based Inversion Algorithm

    Science.gov (United States)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
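
    To make the Gauss-Newton iteration at the heart of NLSE concrete, here is a minimal numpy sketch fitting a toy exponential model by nonlinear least squares; the model, data and starting point are invented for illustration and are not taken from the chapter.

        import numpy as np

        def model(p, t):
            a, b = p
            return a * np.exp(b * t)

        def jacobian(p, t):
            a, b = p
            return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

        # Synthetic measurements for the toy problem.
        t = np.linspace(0.0, 1.0, 20)
        y = model([2.0, -1.5], t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)

        p = np.array([1.0, 0.0])                    # initial parameter guess
        for _ in range(20):
            r = y - model(p, t)                     # residual vector
            J = jacobian(p, t)                      # sensitivity of model to parameters
            step, *_ = np.linalg.lstsq(J, r, rcond=None)   # solve J @ step ~= r
            p = p + step
            if np.linalg.norm(step) < 1e-10:
                break

        print("estimated parameters:", p)           # ~ [2.0, -1.5]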

  17. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters

  18. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. In order to solve these problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
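
    For context, the classic remove-the-longest-edges MST baseline that this letter improves upon can be sketched in a few lines with scipy: build the Euclidean minimum spanning tree, cut the k-1 heaviest edges, and read off connected components. The two-blob data set is a toy example.

        import numpy as np
        from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        def mst_clusters(X, k):
            """Classic baseline: cut the k-1 heaviest MST edges, return labels."""
            mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
            edges = np.argwhere(mst > 0)             # (i, j) pairs, row-major order
            weights = mst[mst > 0]                   # same order as `edges`
            for i, j in edges[np.argsort(weights)[-(k - 1):]]:
                mst[i, j] = 0.0                      # cut a heaviest edge
            return connected_components(mst, directed=False)[1]

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in (0.0, 3.0)])
        labels = mst_clusters(X, k=2)
        print("cluster sizes:", np.bincount(labels))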

  19. A Voxel-Based Filtering Algorithm for Mobile LiDAR Data

    Science.gov (United States)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.

  20. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm alone. Simulation resul...

  1. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    Science.gov (United States)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
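
    A hedged sketch of the low attenuation area (LAA) step described above: threshold CT voxels in Hounsfield units inside a lung mask and report the LAA ratio. The -950 HU cutoff is a commonly used emphysema threshold assumed here, since the abstract does not state the value, and the volume is synthetic.

        import numpy as np

        def laa_ratio(ct_hu, lung_mask, threshold=-950.0):
            """Fraction of lung voxels below the LAA threshold (values in HU)."""
            lung = ct_hu[lung_mask]
            return np.count_nonzero(lung < threshold) / lung.size

        # Synthetic stand-ins for a 3-D CT volume (in HU) and its lung mask.
        rng = np.random.default_rng(0)
        ct = rng.normal(-850.0, 60.0, (64, 64, 64))   # parenchyma around -850 HU
        mask = np.ones(ct.shape, dtype=bool)
        print(f"LAA%: {100.0 * laa_ratio(ct, mask):.1f}")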

  2. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    Science.gov (United States)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  3. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper studies a video steganalysis algorithm (VSA) for LSB matching based on the H.265 protocol, with 26 original video sequences as the research background. It first extracts classification features from training samples as input to an SVM, trains the SVM to obtain a high-quality classification model, and then tests whether suspicious information is present in a video sample. The experimental results show that the VSA based on LSB matching is practical for detecting secret information embedded across all frames of a carrier video as well as in local frames. In addition, the VSA works frame by frame, giving it strong robustness against attacks in the corresponding time domain.

  4. DE and NLP Based QPLS Algorithm

    Science.gov (United States)

    Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo

    As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. The simulation results, based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit, demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational cost.
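
    As a hedged illustration of the optimizer in this scheme, the snippet below applies SciPy's differential evolution to a small nonlinear least-squares fit standing in for the inner-relationship parameter estimation; the quadratic objective is a toy function, not the QPLS formulation from the paper.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(0)
        x = rng.uniform(-1.0, 1.0, 100)
        y = 1.5 * x**2 - 0.7 * x + 0.2 + 0.05 * rng.standard_normal(100)

        def sse(params):
            """Sum of squared errors of a quadratic inner relation (toy objective)."""
            a, b, c = params
            return np.sum((y - (a * x**2 + b * x + c)) ** 2)

        result = differential_evolution(sse, bounds=[(-5.0, 5.0)] * 3, seed=0)
        print("DE estimate:", result.x.round(2))      # ~ [1.5, -0.7, 0.2]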

  5. FACT. New image parameters based on the watershed-algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Linhoff, Lena; Bruegge, Kai Arno; Buss, Jens [TU Dortmund (Germany). Experimentelle Physik 5b; Collaboration: FACT-Collaboration

    2016-07-01

    FACT, the First G-APD Cherenkov Telescope, is the first imaging atmospheric Cherenkov telescope that uses Geiger-mode avalanche photodiodes (G-APDs) as photo sensors. The raw data produced by this telescope are processed in an analysis chain, which leads to a classification of the primary particle that induces a shower and to an estimation of its energy. One important step in this analysis chain is the parameter extraction from shower images. By applying a watershed algorithm to the camera image, new parameters are computed. Perceiving the brightness of a pixel as height, a set of pixels can be seen as a 'landscape' with hills and valleys. A watershed algorithm groups into one cluster all pixels that belong to the same hill. From the emerging segmented image, one can derive new parameters for later analysis steps, e.g. the number of clusters, their shapes and the photon charge they contain. For FACT data, the FellWalker algorithm was chosen from the class of watershed algorithms because it was designed to work on discrete distributions, in this case the pixels of a camera image. The FellWalker algorithm is implemented in FACT-tools, which provides the low-level analysis framework for FACT. This talk focuses on the computation of new, FellWalker-based image parameters, which can be used for the gamma-hadron separation. Additionally, their distributions for real and Monte Carlo data are compared.

  6. A Plagiarism Detection Algorithm based on Extended Winnowing

    Directory of Open Access Journals (Sweden)

    Duan Xuliang

    2017-01-01

    Full Text Available Plagiarism is a common problem faced by academia and education. Mature commercial plagiarism detection systems are comprehensive and highly accurate, but their detection costs make them unsuitable for real-time, lightweight application environments such as checking student assignments. This paper introduces a method for extending the classic Winnowing plagiarism detection algorithm, expanding its functionality. The extended algorithm retains text location and length information from the original document while extracting a document's fingerprints, so that locating and marking plagiarized text fragments becomes much easier. The experimental results and several years of running practice show that the extension has little effect on performance; an ordinary PC is able to meet the requirements of small and medium-sized applications. Building on Winnowing's light weight, efficiency, reliability and flexibility, the extended algorithm further enhances its adaptability and broadens its application areas.
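
    A minimal sketch of the classic Winnowing fingerprinting step that this paper extends: hash all k-grams, slide a window over the hashes and keep each window's minimum, recording its position as the extension requires. The parameters k and w and the built-in hash are arbitrary choices for the demo.

        def winnow(text, k=5, w=4):
            """Return {(position, hash)} fingerprints of text via classic winnowing."""
            grams = [text[i:i + k] for i in range(len(text) - k + 1)]
            hashes = [hash(g) & 0xFFFFFFFF for g in grams]   # stand-in rolling hash
            fingerprints = set()
            for start in range(len(hashes) - w + 1):
                window = hashes[start:start + w]
                pos = start + window.index(min(window))      # leftmost minimum per window
                fingerprints.add((pos, hashes[pos]))         # keep position for marking
            return fingerprints

        a = winnow("the quick brown fox jumps over the lazy dog")
        b = winnow("a quick brown fox leaps over a lazy dog")
        shared = {h for _, h in a} & {h for _, h in b}
        print("shared fingerprints:", len(shared))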

  7. A research of road centerline extraction algorithm from high resolution remote sensing images

    Science.gov (United States)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

    Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, due to advantages such as short revisit periods, large coverage and rich information. Road extraction is an important field in the application of high-resolution remote sensing images, and an intelligent, automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, and it is shown to be effective for noisy image segmentation. First, the image is segmented using the SFCM. Second, the segmentation result is processed by mathematical morphology to remove spuriously joined regions. Third, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
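
    A hedged numpy sketch of plain fuzzy c-means, the core of the SFCM segmentation step; the spatial-information term of SFCM is omitted for brevity, and the one-dimensional 'pixel intensities' are random stand-ins.

        import numpy as np

        def fcm(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
            """Plain fuzzy c-means on the rows of X; returns (memberships, centers)."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)         # memberships sum to 1 per point
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                inv = d ** (-2.0 / (m - 1.0))         # standard FCM membership update
                U_new = inv / inv.sum(axis=1, keepdims=True)
                if np.abs(U_new - U).max() < tol:
                    return U_new, centers
                U = U_new
            return U, centers

        rng = np.random.default_rng(1)
        pixels = np.concatenate([rng.normal(mu, 0.05, (200, 1))
                                 for mu in (0.2, 0.8)])   # dark background vs bright road
        U, centers = fcm(pixels, c=2)
        print("cluster centers:", centers.ravel())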

  8. GPU implementation of discrete particle swarm optimization algorithm for endmember extraction from hyperspectral image

    Science.gov (United States)

    Yu, Chaoyin; Yuan, Zhengwu; Wu, Yuanfeng

    2017-10-01

    Hyperspectral image unmixing is an important part of hyperspectral data analysis. The mixed pixel decomposition consists of two steps: endmember extraction (the unique signatures of pure ground components) and abundance estimation (the proportion of each endmember in each pixel). Recently, a Discrete Particle Swarm Optimization (DPSO) algorithm was proposed to accurately extract endmembers with high optimization performance. However, the DPSO algorithm has very high computational complexity, which makes the endmember extraction procedure very time consuming for hyperspectral image unmixing. Thus, in this paper, the DPSO endmember extraction algorithm was parallelized, implemented on the CUDA (GPU K20) platform, and evaluated with real hyperspectral remote sensing data. The experimental results show that, as the number of particles increases, the parallelized version obtains much higher computational efficiency while maintaining the same endmember extraction accuracy.

  9. NInFEA: an embedded framework for the real-time evaluation of fetal ECG extraction algorithms.

    Science.gov (United States)

    Pani, Danilo; Barabino, Gianluca; Raffo, Luigi

    2013-02-01

    Fetal electrocardiogram (ECG) extraction from non-invasive biopotential recordings is a long-standing research topic. Despite the significant number of algorithms presented in the scientific literature, it is difficult to find information about embedded hardware implementations able to provide real-time support for the required features, bridging the gap between theory and practice. This article presents the NInFEA (non-invasive fetal ECG analysis) tool, an embedded hardware/software framework based on the hybrid dual-core OMAP-L137 low-power processor for the real-time evaluation of fetal ECG extraction algorithms. The hybrid platform, including a digital signal processor (DSP) and a general-purpose processor (GPP), allows achieving the best performance compared with single-core architectures. The GPP provides a portable graphical user interface, whereas the DSP is extensively used for advanced signal processing tasks. As a case study, three state-of-the-art fetal ECG extraction algorithms have been ported onto NInFEA, along with some support routines needed to provide the additional information required by the clinicians and supported by the user interface. NInFEA can be regarded both as a reference design for similar applications and as a common embedded low-power testbed for real-time fetal ECG extraction algorithms.

  10. Automated Vectorization of Decision-Based Algorithms

    Science.gov (United States)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  11. Network-based recommendation algorithms: A review

    Science.gov (United States)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of systems that use it, and finally discuss open research directions and challenges.
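
    One representative method from this network-based literature is probabilistic spreading (ProbS, also called mass diffusion) on the user-item bipartite graph; a compact numpy sketch on an invented toy rating matrix follows.

        import numpy as np

        # Toy binary user-item matrix: A[u, i] = 1 if user u collected item i.
        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=float)

        k_user = A.sum(axis=1)            # user degrees
        k_item = A.sum(axis=0)            # item degrees

        # ProbS / mass diffusion: each collected item spreads unit resource to
        # its users, and each user redistributes it over the items they collected.
        W = A.T @ (A / (k_user[:, None] * k_item[None, :]))  # mass item j sends to item i
        scores = A @ W.T                                     # resource per item, per user

        scores[A > 0] = -np.inf           # never re-recommend already collected items
        print("top recommendation per user:", scores.argmax(axis=1))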

  12. Enhancement of Twins Fetal ECG Signal Extraction Based on Hybrid Blind Extraction Techniques

    Directory of Open Access Journals (Sweden)

    Ahmed Kareem Abdullah

    2017-07-01

    Full Text Available ECG machines are noninvasive systems used to measure the heartbeat signal. It is very important to monitor fetal ECG signals during pregnancy to check heart activity and to detect any problem early, before birth; the monitoring of ECG signals therefore has clinical significance and importance. For multi-fetal pregnancies, classical filtering algorithms are not sufficient to separate the ECG signals of mother and fetuses. In this paper the mixture consists of three ECG signals: the first is the mother ECG (M-ECG), the second is the Fetal-1 ECG (F1-ECG), and the third is the Fetal-2 ECG (F2-ECG). These signals are extracted using modified blind source extraction (BSE) techniques. The proposed work is based on the hybridization of two BSE techniques to ensure that the extracted signals are well separated. The results demonstrate that the proposed approach is very efficient in extracting the useful ECG signals.
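
    As a hedged stand-in for the hybrid BSE step (the paper's exact pair of extraction techniques is not reproduced here), the sketch below separates three synthetic quasi-periodic sources, standing in for M-ECG, F1-ECG and F2-ECG, from three-channel mixtures with scikit-learn's FastICA.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 10, 4000)

        # Synthetic stand-ins for the maternal and two fetal heartbeats.
        sources = np.column_stack([
            np.sin(2 * np.pi * 1.2 * t),          # "M-ECG", ~72 beats per minute
            np.sin(2 * np.pi * 2.3 * t),          # "F1-ECG", ~138 beats per minute
            np.sin(2 * np.pi * 2.6 * t),          # "F2-ECG", ~156 beats per minute
        ])
        mixing = rng.random((3, 3)) + 0.5         # unknown electrode mixing matrix
        observed = sources @ mixing.T + 0.01 * rng.standard_normal((t.size, 3))

        ica = FastICA(n_components=3, random_state=0)
        recovered = ica.fit_transform(observed)   # sources up to order and scale
        print("recovered shape:", recovered.shape)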

  13. A genetic algorithm applied to a PWR turbine extraction optimization to increase cycle efficiency

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Schirru, Roberto

    2002-01-01

    In nuclear power plants, feedwater heaters are used to heat feedwater from its temperature leaving the condenser to the final feedwater temperature, using steam extracted from various stages of the turbines. The purpose of this process is to increase cycle efficiency. The determination of the optimal fraction of mass flow rate to be extracted from each stage of the turbines is a complex optimization problem. This kind of problem has been efficiently solved by means of evolutionary computation techniques, such as Genetic Algorithms (GAs). GAs, which are systems based upon principles from biological genetics, have been successfully applied to several combinatorial optimization problems in nuclear engineering, such as the nuclear fuel reload optimization problem. We introduce the use of GAs in cycle efficiency optimization by finding an optimal combination of turbine extractions. In order to demonstrate the effectiveness of our approach, we have chosen a typical PWR as a case study. The secondary side of the PWR was simulated using PEPSE, a modeling tool used to perform integrated heat balances for power plants. The results indicate that the GA is a quite promising tool for cycle efficiency optimization. (author)
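
    A hedged toy sketch of the GA loop described above: each individual encodes the fraction of steam mass flow extracted at each turbine stage, and a made-up smooth function stands in for the PEPSE heat-balance evaluation of cycle efficiency; the stage count, bounds and GA settings are all assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        STAGES, POP, GENS = 6, 40, 80          # stages, population size, generations

        def efficiency(x):
            """Made-up smooth stand-in for the PEPSE heat-balance evaluation."""
            target = np.linspace(0.02, 0.07, STAGES)    # fictitious optimal fractions
            return 0.34 - np.sum((x - target) ** 2)

        pop = rng.uniform(0.0, 0.1, (POP, STAGES))      # extraction fraction per stage
        for _ in range(GENS):
            fitness = np.array([efficiency(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[-POP // 2:]]      # truncation selection
            cuts = rng.integers(1, STAGES, POP // 2)            # one-point crossover
            kids = np.array([np.concatenate([a[:c], b[c:]])
                             for a, b, c in zip(parents, parents[::-1], cuts)])
            kids += rng.normal(0.0, 0.005, kids.shape)          # Gaussian mutation
            pop = np.vstack([parents, np.clip(kids, 0.0, 0.1)])

        best = pop[np.argmax([efficiency(ind) for ind in pop])]
        print("best extraction fractions:", best.round(3))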

  14. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on a support vector machine (SVM) and the gradient evolution (GE) algorithm. The SVM algorithm has been widely used in classification; however, its results are significantly influenced by its parameters. This paper therefore proposes an improvement of the SVM algorithm that can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters; the GE algorithm acts as a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified on several benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.

  15. Characters Feature Extraction Based on Neat Oracle Bone Rubbings

    OpenAIRE

    Lei Guo

    2013-01-01

    In order to recognize characters on neat oracle bone rubbings, a new mesh point feature extraction algorithm is put forward in this paper by researching and improving the existing coarse mesh feature extraction algorithm and the point feature extraction algorithm. Some improvements of this algorithm are as follows: the point feature is introduced into the coarse mesh feature, the absolute address is converted to a relative address, and point features have been changed grid and positio...

  16. FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes an FPGA-based implementation of a Lithuanian isolated-word recognition algorithm. An FPGA is selected for parallel process implementation using VHDL to ensure fast signal processing at a low-rate clock signal. Cepstrum analysis was applied to feature extraction from voice. The dynamic time warping (DTW) algorithm was used to compare the vectors of cepstrum coefficients. A library of features for 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent records demonstrated a recognition rate of 94%; a recognition rate of 58% was achieved for speaker-independent records. The calculation of cepstrum coefficients took 8.52 ms at a 50 MHz clock, while 100 DTW comparisons took 66.56 ms at a 25 MHz clock. Article in Lithuanian.
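
    A minimal numpy sketch of the dynamic time warping comparison used to match cepstral feature vectors; the two toy sequences stand in for cepstrum frames of a spoken word and a library template.

        import numpy as np

        def dtw_distance(a, b):
            """DTW cost between two feature sequences (frames x coefficients)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])    # frame distance
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        rng = np.random.default_rng(0)
        word = rng.random((60, 12))          # 60 frames of 12 cepstrum coefficients
        template = word[::2] + 0.01          # same word spoken twice as fast (toy)
        other = rng.random((55, 12))
        print("same word :", round(dtw_distance(word, template), 2))
        print("other word:", round(dtw_distance(word, other), 2))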

  17. Application of genetic algorithm for extraction of the parameters from powder EPR spectra

    International Nuclear Information System (INIS)

    Spalek, T.; Pietrzyk, P.; Sojka, Z.

    2005-01-01

    The application of a stochastic genetic algorithm in tandem with the deterministic Powell method to the automated extraction of magnetic parameters from powder EPR spectra is described. The efficiency and robustness of this hybrid approach were investigated as a function of the uncertainty range of the parameters, using simulated data sets. The discussed results demonstrate the superior performance of the hybrid genetic algorithm in fitting complex spectra in comparison to the common Monte Carlo method combined with Powell refinement. (author)

  18. A hash-based image encryption algorithm

    Science.gov (United States)

    Cheddad, Abbas; Condell, Joan; Curran, Kevin; McKevitt, Paul

    2010-03-01

    There exist several algorithms that deal with text encryption. However, little research has been carried out to date on encrypting digital images or video files. This paper describes a novel way of encrypting digital images with password protection using the 1D SHA-2 algorithm coupled with a compound forward transform. A spatial mask is generated from the frequency domain by taking advantage of the conjugate symmetry of the complex imaginary part of the Fourier transform. This mask is then XORed with the bit stream of the original image. Exclusive OR (XOR) is a symmetric logical operation that yields 0 if both binary pixels are zeros or if both are ones, and 1 otherwise; equivalently, it is (pixel1 + pixel2) mod 2. Finally, confusion is applied based on the displacement of the cipher's pixels in accordance with a reference mask. Both security and performance aspects of the proposed method are analyzed, which proves that the method is efficient and secure from a cryptographic point of view. One of the merits of such an algorithm is that it forces a continuous-tone payload, a steganographic term, to map onto a balanced bit-distribution sequence. This bit balance is needed in certain applications, such as steganography and watermarking, since it is likely to have a balanced perceptibility effect on the cover image when embedding.

  19. Head pose estimation algorithm based on deep learning

    Science.gov (United States)

    Cao, Yuanming; Liu, Yijun

    2017-05-01

    Head pose estimation has been widely used in the fields of artificial intelligence, pattern recognition and intelligent human-computer interaction. A good head pose estimation algorithm should deal robustly with light, noise, identity, occlusion and other factors, but so far how to improve the accuracy and robustness of pose estimation remains a major challenge in the field of computer vision. A method based on deep learning for pose estimation is presented. Deep learning has a strong learning ability: it can extract high-level image features from the input image through a series of non-linear operations and then classify the input image using the extracted features. Such features differ strongly across poses while remaining robust to light, identity, occlusion and other factors. The proposed head pose estimation is evaluated on the CAS-PEAL data set. Experimental results show that this method is effective in improving the accuracy of pose estimation.

  20. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    Science.gov (United States)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

    The number of emphysema patients tends to increase due to aging and smoking. Emphysema destroys alveoli, which cannot be repaired, so early detection is essential. The CT value of lung tissue decreases as the lung structure is destroyed, falling below that of normal lung and producing a low-density absorption region referred to as a Low Attenuation Area (LAA). So far, the conventional way of extracting LAA by simple thresholding has been proposed. However, the CT values of a CT image fluctuate with the measurement conditions, with various bias components such as inspiration, expiration and congestion. It is therefore necessary to consider these bias components in the extraction of LAA. We removed these bias components and propose an LAA extraction algorithm, which has been applied to a phantom image. Then, using low-dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using the lung structure.

  1. Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity

    Directory of Open Access Journals (Sweden)

    Xue Shan

    2015-01-01

    Full Text Available Although commercial recommendation systems have made certain achievements in travelling route development, these systems face a series of challenges because of people's increasing interest in travelling. The core content of a recommendation system is clearly its recommendation algorithm, and the strengths of the algorithm carry over directly to the system as a whole. On this basis, this paper analyzes the traditional collaborative filtering algorithm. After illustrating the deficiencies of that algorithm, such as rating unicity and rating matrix sparsity, the paper proposes an improved algorithm combining a user-based multi-similarity algorithm and a user-based element-similarity algorithm, so as to compensate for the deficiencies of the traditional algorithm within a controllable range. Experimental results show that the improved algorithm has obvious advantages in comparison with the traditional one, with clear effect in remedying rating matrix sparsity and rating unicity.

  2. Research on AHP decision algorithms based on BP algorithm

    Science.gov (United States)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the thinking activity by which people choose or judge, and scientific decision-making has always been a hot issue in research. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express and calculate subjective judgments in numerical form. In decision analysis with the AHP method, the rationality of the pairwise judgment matrix has a great influence on the decision result. However, in dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirements. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning abilities; it can refine the data by constantly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to address the consistency of the pairwise judgment matrix of the AHP.

  3. Algorithm for Extracting Digital Terrain Models under Forest Canopy from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Almasi S. Maguya

    2014-07-01

    Full Text Available Extracting digital terrain models (DTMs) from LiDAR data under forest canopy is a challenging task. This is because the forest canopy tends to block a portion of the LiDAR pulses from reaching the ground, hence introducing gaps in the data. This paper presents an algorithm for DTM extraction from LiDAR data under forest canopy. The algorithm copes with the challenge of low data density by generating a series of coarse DTMs using the few available ground points and by using trend surfaces to interpolate missing elevation values in the vicinity of the available points. This process generates a cloud of ground points from which the final DTM is generated. The algorithm has been compared to two other algorithms proposed in the literature on three different test sites with varying degrees of difficulty. Results show that the algorithm presented in this paper is more tolerant to low data density than the other two algorithms. The results further show that, with decreasing point density, the differences between the three algorithms increase dramatically from about 0.5 m to over 10 m.

  4. Extracting Gene Networks for Low-Dose Radiation Using Graph Theoretical Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Voy, Brynn H [ORNL; Scharff, Jon [University of Tennessee, Knoxville (UTK); Perkins, Andy [University of Tennessee, Knoxville (UTK); Saxton, Arnold [University of Tennessee, Knoxville (UTK); Borate, Bhavesh [University of Tennessee, Knoxville (UTK); Chesler, Elissa J [ORNL; Branstetter, Lisa R [ORNL; Langston, Michael A [University of Tennessee, Knoxville (UTK)

    2006-01-01

    Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., "guilt-by-association"). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response.
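
    A hedged sketch of the described pipeline: threshold a gene co-expression correlation matrix into a graph and enumerate maximal cliques with networkx. The expression matrix, the planted co-expressed set and the 0.85 threshold are invented for the demo.

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(0)
        expr = rng.standard_normal((20, 50))           # 20 genes x 50 arrays (toy data)
        expr[5:9] += 3.0 * rng.standard_normal(50)     # plant one co-expressed gene set

        corr = np.corrcoef(expr)                       # gene-gene correlation matrix
        adj = (np.abs(corr) > 0.85) & ~np.eye(20, dtype=bool)   # threshold to a graph

        G = nx.from_numpy_array(adj.astype(int))
        cliques = [c for c in nx.find_cliques(G) if len(c) >= 3]
        print("candidate co-expression cliques:", cliques)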

  5. Extracting gene networks for low-dose radiation using graph theoretical algorithms.

    Directory of Open Access Journals (Sweden)

    Brynn H Voy

    2006-07-01

    Full Text Available Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., "guilt-by-association"). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response.

  6. PEACE: pulsar evaluation algorithm for candidate extraction - a software package for post-analysis processing of pulsar survey candidates

    Science.gov (United States)

    Lee, K. J.; Stovall, K.; Jenet, F. A.; Martinez, J.; Dartez, L. P.; Mata, A.; Lunsford, G.; Cohen, S.; Biwer, C. M.; Rohr, M.; Flanigan, J.; Walker, A.; Banaszak, S.; Allen, B.; Barr, E. D.; Bhat, N. D. R.; Bogdanov, S.; Brazier, A.; Camilo, F.; Champion, D. J.; Chatterjee, S.; Cordes, J.; Crawford, F.; Deneva, J.; Desvignes, G.; Ferdman, R. D.; Freire, P.; Hessels, J. W. T.; Karuppusamy, R.; Kaspi, V. M.; Knispel, B.; Kramer, M.; Lazarus, P.; Lynch, R.; Lyne, A.; McLaughlin, M.; Ransom, S.; Scholz, P.; Siemens, X.; Spitler, L.; Stairs, I.; Tan, M.; van Leeuwen, J.; Zhu, W. W.

    2013-07-01

    Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labour intensive. In this paper, we introduce an algorithm called Pulsar Evaluation Algorithm for Candidate Extraction (PEACE) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning-based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68 per cent of the student-identified pulsars within the top 0.17 per cent of sorted candidates, 95 per cent within the top 0.34 per cent and 100 per cent within the top 3.7 per cent. This clearly demonstrates that PEACE significantly increases the pulsar identification rate by a factor of about 50 to 1000. To date, PEACE has been directly responsible for the discovery of 47 new pulsars, 5 of which are millisecond pulsars that may be useful for pulsar timing based gravitational-wave detection projects.

  7. Performance evaluation of PCA-based spike sorting algorithms.

    Science.gov (United States)

    Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George

    2008-09-01

    Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is being evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two or most commonly three principal components is sufficient. We estimate the optimum PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric to define clustering error considering over-clustering more favorable than under-clustering as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that the employment of more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts.
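
    A hedged sketch of the evaluated pipeline: project detected spike waveforms onto their leading principal components and cluster the scores; the synthetic two-unit waveforms are invented, and the four-component choice echoes the paper's finding.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 48)

        # Two synthetic spike shapes plus noise, 100 spikes from each unit.
        unit_a = np.exp(-((t - 0.3) ** 2) / 0.005)
        unit_b = 0.7 * np.exp(-((t - 0.5) ** 2) / 0.02)
        spikes = np.vstack([u + 0.05 * rng.standard_normal((100, t.size))
                            for u in (unit_a, unit_b)])

        features = PCA(n_components=4).fit_transform(spikes)   # spike features
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
        print("cluster sizes:", np.bincount(labels))            # ~ [100, 100]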

  8. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    Science.gov (United States)

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-03-24

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  9. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can expediently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without repetition, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it offer better parallelism; the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.
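
    For reference, the K-shortest-paths primitive that this guidance system accelerates can be expressed with networkx's shortest_simple_paths (a Yen-style enumeration) on a toy road graph; the graph, weights and K are invented for the demo.

        import itertools
        import networkx as nx

        # Toy road network: nodes are intersections, weights are travel times (minutes).
        G = nx.Graph()
        G.add_weighted_edges_from([
            ("A", "B", 4), ("A", "C", 2), ("B", "C", 1),
            ("B", "D", 5), ("C", "D", 8), ("B", "E", 10), ("D", "E", 2),
        ])

        K = 3
        paths = itertools.islice(nx.shortest_simple_paths(G, "A", "E", weight="weight"), K)
        for path in paths:
            cost = nx.path_weight(G, path, weight="weight")
            print(cost, path)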

  10. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    Science.gov (United States)

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be

  11. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line

  12. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an

  13. Kernel-based discriminant feature extraction using a representative dataset

    Science.gov (United States)

    Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.

    2002-07-01

    Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified by both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points that are generated from data-editing techniques and centroid points that are determined by using the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets, and it also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.

  14. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  15. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergent speed and searching precision. Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  16. A Discrete-Time Algorithm for Stiffness Extraction from sEMG and Its Application in Antidisturbance Teleoperation

    Directory of Open Access Journals (Sweden)

    Peidong Liang

    2016-01-01

    Full Text Available We have developed a new discrete-time algorithm for stiffness extraction from muscle surface electromyography (sEMG) collected from a human operator's arms, and have applied it for antidisturbance control in robot teleoperation. The variation of arm stiffness is estimated from the sEMG signals and transferred to a telerobot under variable impedance control to imitate human motor control behaviours, particularly for disturbance attenuation. In comparison to previous sEMG-based stiffness estimation, the proposed algorithm is able to reduce the nonlinear residual error effect, enhance robustness, and simplify stiffness calibration. In order to extract a smooth stiffness envelope from the sEMG signals, two enveloping methods are employed in this paper: fast linear enveloping based on low-pass filtering and moving averaging, and the amplitude monocomponent and frequency modulating (AM-FM) method. Both methods have been incorporated into the proposed stiffness variation estimation algorithm and are extensively tested. The test results show that stiffness variation extraction based on the two methods is sensitive and robust for disturbance attenuation. It could potentially be applied for teleoperation in the presence of hazardous surroundings or in human-robot physical cooperation scenarios.
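
    The fast linear enveloping route can be sketched in a few lines: full-wave rectification, a zero-phase low-pass filter, then a moving average. The cutoff frequency and window length below are illustrative choices, not the calibrated values of the paper, and the synthetic burst merely stands in for recorded sEMG.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def linear_envelope(semg, fs, cutoff_hz=4.0, ma_window_s=0.1):
            """Fast linear enveloping: rectify -> low-pass -> moving average."""
            rectified = np.abs(semg)                   # full-wave rectification
            b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
            smooth = filtfilt(b, a, rectified)         # zero-phase low-pass
            n = max(1, int(ma_window_s * fs))
            return np.convolve(smooth, np.ones(n) / n, mode="same")

        # Example: envelope of a synthetic sEMG-like activation burst.
        fs = 1000
        t = np.arange(0, 2, 1 / fs)
        burst = np.random.randn(len(t)) * (0.1 + np.exp(-((t - 1.0) ** 2) / 0.02))
        envelope = linear_envelope(burst, fs)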

  17. An assembly sequence planning method based on composite algorithm

    Directory of Open Access Journals (Sweden)

    Enfu LIU

    2016-02-01

    Full Text Available To solve the combination explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalization reasoning algorithm as the initial population of a genetic algorithm. Fuzzy assembly knowledge is then integrated into the planning process of the genetic algorithm and the ant algorithm to obtain the accurate solution. Finally, an example is presented to verify the feasibility of the composite algorithm.

  18. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and the coupled logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in the cognitive radio system, and has a faster convergence speed

  19. Blind Extraction of Chaotic Signals by Using the Fast Independent Component Analysis Algorithm

    International Nuclear Information System (INIS)

    Hong-Bin, Chen; Jiu-Chao, Feng; Yong, Fang

    2008-01-01

    We report the results of using the fast independent component analysis (FastICA) algorithm to realize blind extraction of chaotic signals. Two cases are taken into consideration: namely, the mixture is noiseless or contaminated by noise. Pre-whitening is employed to reduce the effect of noise before using the FastICA algorithm. The correlation coefficient criterion is adopted to evaluate the performance, and the success rate is defined as a new criterion to indicate the performance with respect to noise or different mixing matrices. Simulation results show that the FastICA algorithm can extract the chaotic signals effectively. The impact of noise, the length of a signal frame, the number of sources and the number of observed mixtures on the performance is investigated in detail. It is also shown that regarding noise as an independent source is not always correct
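
    A minimal sketch of this experiment: mix a chaotic logistic-map sequence with noise, run scikit-learn's FastICA (its built-in whitening plays the role of the pre-whitening step), and score the extraction with the correlation coefficient criterion. The mixing matrix, map parameter and signal length are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)

        # Source 1: a chaotic logistic-map sequence; source 2: Gaussian noise.
        n = 5000
        chaos = np.empty(n)
        chaos[0] = 0.3
        for i in range(1, n):
            chaos[i] = 3.99 * chaos[i - 1] * (1.0 - chaos[i - 1])
        sources = np.column_stack([chaos - chaos.mean(), rng.standard_normal(n)])

        # Observed mixtures of the sources (mixing matrix chosen arbitrarily).
        A = np.array([[1.0, 0.6],
                      [0.5, 1.0]])
        observations = sources @ A.T

        # FastICA (with whitening) recovers the sources up to order/scale/sign.
        ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
        recovered = ica.fit_transform(observations)

        # Correlation coefficient criterion to judge a successful extraction.
        cc = max(abs(np.corrcoef(recovered[:, k], sources[:, 0])[0, 1])
                 for k in range(2))
        print(f"best |correlation| with the chaotic source: {cc:.3f}")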

  20. Use of the recursive-rule extraction algorithm with continuous attributes to improve diagnostic accuracy in thyroid disease

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    Full Text Available Thyroid diseases, which often lead to thyroid dysfunction involving either hypo- or hyperthyroidism, affect hundreds of millions of people worldwide, many of whom remain undiagnosed; however, diagnosis is difficult because symptoms are similar to those seen in a number of other conditions. The objective of this study was to assess the effectiveness of the Recursive-Rule Extraction (Re-RX) algorithm with continuous attributes (Continuous Re-RX) in extracting highly accurate, concise, and interpretable classification rules for the diagnosis of thyroid disease. We used the 7200-sample Thyroid dataset from the University of California Irvine Machine Learning Repository, a large and highly imbalanced dataset that comprises both discrete and continuous attributes. We trained the dataset using Continuous Re-RX, and after obtaining the maximum training and test accuracies, the number of extracted rules, and the average number of antecedents, we compared the results with those of other extraction methods. Our results suggested that Continuous Re-RX not only achieved the highest accuracy for diagnosing thyroid disease compared with the other methods, but also provided simple, concise, and interpretable rules. Based on these results, we believe that the use of Continuous Re-RX in machine learning may assist healthcare professionals in the diagnosis of thyroid disease.
    Keywords: Thyroid disease diagnosis, Re-RX algorithm, Rule extraction, Decision tree

  1. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform-domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction does not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
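
    The Arnold-map scrambling step can be illustrated on its own. The sketch below permutes a square coefficient array with the cat map (x, y) -> (x + y, x + 2y) mod N and inverts it exactly; the iteration count would act as part of the key. This is only the scrambling ingredient, not the full CS-based encryption and watermarking scheme.

        import numpy as np

        def arnold_scramble(block, iterations):
            """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N."""
            n = block.shape[0]
            assert block.shape[0] == block.shape[1], "square array required"
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            out = block
            for _ in range(iterations):
                nxt = np.empty_like(out)
                nxt[(x + y) % n, (x + 2 * y) % n] = out
                out = nxt
            return out

        def arnold_unscramble(block, iterations):
            """Invert the map with (x, y) -> (2x - y, y - x) mod N."""
            n = block.shape[0]
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            out = block
            for _ in range(iterations):
                nxt = np.empty_like(out)
                nxt[(2 * x - y) % n, (y - x) % n] = out
                out = nxt
            return out

        coeffs = np.arange(64 * 64, dtype=float).reshape(64, 64)
        assert np.array_equal(arnold_unscramble(arnold_scramble(coeffs, 7), 7),
                              coeffs)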

  2. Threshold quantum cryptograph based on Grover's algorithm

    International Nuclear Information System (INIS)

    Du Jianzhong; Qin Sujuan; Wen Qiaoyan; Zhu Fuchen

    2007-01-01

    We propose a threshold quantum protocol based on Grover's operator and permutation operator on one two-qubit signal. The protocol is secure because the dishonest parties can only extract 2 bits from 3 bits information of operation on one two-qubit signal while they have to introduce error probability 3/8. The protocol includes a detection scheme to resist Trojan horse attack. With probability 1/2, the detection scheme can detect a multi-qubit signal that is used to replace a single-qubit signal, while it makes every legitimate qubit invariant

  3. New calibration algorithms for dielectric-based microwave moisture sensors

    Science.gov (United States)

    New calibration algorithms for determining the moisture content in granular and particulate materials from measurement of the dielectric properties at a single microwave frequency are proposed. The algorithms are based on empirically identifying correlations between the dielectric properties and the par...

  4. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI

    DEFF Research Database (Denmark)

    Bron, Esther E.; Smits, Marion; van der Flier, Wiesje M.

    2015-01-01

    Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform ... algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease ... of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume ...

  5. Performance comparison of six independent components analysis algorithms for fetal signal extraction from real fMCG data

    International Nuclear Information System (INIS)

    Hild, Kenneth E; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia

    2007-01-01

    In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction

  6. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.

  7. A Penalized Semialgebraic Deflation ICA Algorithm for the Efficient Extraction of Interictal Epileptic Signals.

    Science.gov (United States)

    Becker, Hanna; Albera, Laurent; Comon, Pierre; Kachenoura, Amar; Merlet, Isabelle

    2017-01-01

    As a noninvasive technique, electroencephalography (EEG) is commonly used to monitor the brain signals of patients with epilepsy such as the interictal epileptic spikes. However, the recorded data are often corrupted by artifacts originating, for example, from muscle activities, which may have much higher amplitudes than the interictal epileptic signals of interest. To remove these artifacts, a number of independent component analysis (ICA) techniques were successfully applied. In this paper, we propose a new deflation ICA algorithm, called penalized semialgebraic unitary deflation (P-SAUD) algorithm, that improves upon classical ICA methods by leading to a considerably reduced computational complexity at equivalent performance. This is achieved by employing a penalized semialgebraic extraction scheme, which permits us to identify the epileptic components of interest (interictal spikes) first and obviates the need of extracting subsequent components. The proposed method is evaluated on physiologically plausible simulated EEG data and actual measurements of three patients. The results are compared to those of several popular ICA algorithms as well as second-order blind source separation methods, demonstrating that P-SAUD extracts the epileptic spikes with the same accuracy as the best ICA methods, but reduces the computational complexity by a factor of 10 for 32-channel recordings. This superior computational efficiency is of particular interest considering the increasing use of high-resolution EEG recordings, whose analysis requires algorithms with low computational cost.

  8. Genetic algorithm based separation cascade optimization

    International Nuclear Information System (INIS)

    Mahendra, A.K.; Sanyal, A.; Gouthaman, G.; Bera, T.K.

    2008-01-01

    The conventional separation cascade design procedure does not give an optimum design because of squaring-off and the variation of the flow rates and separation factor of the element with respect to stage location. Multi-component isotope separation further complicates the design procedure. Cascade design can be stated as a constrained multi-objective optimization. The cascade's expectation from the separating element is multi-objective, i.e. overall separation factor, cut, optimum feed and separative power. The decision maker may aspire to more comprehensive multi-objective goals where the optimization of the cascade is coupled with the exploration of the separating element optimization vector space. In real life there are many issues which make it important to understand the decision maker's perception of the cost-quality-speed trade-off and the consistency of preferences. The genetic algorithm (GA) is one such evolutionary technique that can be used for cascade design optimization. This paper addresses various issues involved in the GA-based multi-objective optimization of the separation cascade. A reference-point-based optimization methodology with a GA-based Pareto optimality concept for the separation cascade was found pragmatic and promising. This method should be explored, tested, examined and further developed for binary as well as multi-component separations. (author)

  9. Aircraft target detection algorithm based on high resolution spaceborne SAR imagery

    Science.gov (United States)

    Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing

    2018-03-01

    In this paper, an image classification algorithm for airport areas is proposed, which is based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model is used to obtain an initial classification result, which is then optimized via the MRF technique using the spatial correlation between pixels. Additionally, morphology methods are employed to extract the airport region of interest (ROI), in which suspected aircraft target samples are identified, to reduce the false alarm rate and increase the detection performance. Finally, the paper presents the aircraft target detection results, which have been verified by simulation tests.

  10. A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Stelios K. Mylonas

    2015-03-01

    Full Text Available This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm, a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the forthcoming GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are posed as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high level classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent attribute of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on an urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands compared to the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good segmentation maps compared to a series of existing algorithms.

  11. Hand and goods judgment algorithm based on depth information

    Science.gov (United States)

    Li, Mingzhu; Zhang, Jinsong; Yan, Dan; Wang, Qin; Zhang, Ruiqi; Han, Jing

    2016-03-01

    A tablet computer with a depth camera and a color camera is mounted on a traditional shopping cart, and the interior of the cart is observed by the two cameras. In the field of shopping cart monitoring, it is very important to determine whether the customer's hand is holding goods when it moves into or out of the shopping cart. This paper establishes a basic framework for judging an empty hand. It includes a hand extraction process based on the depth information, the building of a skin color model based on WPCA (Weighted Principal Component Analysis), an algorithm for judging handheld products based on motion and skin color information, and a statistical process. The first step ensures the integrity of the hand information and effectively avoids the influence of sleeves and other debris; the second step accurately extracts the skin color and eliminates interference from similar colors, is little affected by lighting, and has the advantages of fast computation and high efficiency; and the third step greatly reduces noise interference and improves the accuracy.

  12. MVDR Algorithm Based on Estimated Diagonal Loading for Beamforming

    Directory of Open Access Journals (Sweden)

    Yuteng Xiao

    2017-01-01

    Full Text Available Beamforming algorithms are widely used in many signal processing fields. At present, the typical beamforming algorithm is MVDR (Minimum Variance Distortionless Response). However, the performance of the MVDR algorithm relies on an accurate covariance matrix, and it degrades dramatically when the covariance matrix is inaccurate. To solve this problem, after studying the beamforming array signal model and the MVDR algorithm, we improve the MVDR algorithm based on estimated diagonal loading. An MVDR optimization model based on diagonal loading compensation is established, and the interval of the diagonal loading compensation value is deduced on the basis of matrix theory. The optimal diagonal loading value within the interval is then determined experimentally. The experimental results show that, compared with existing algorithms, the improved algorithm is practical and effective.
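
    A minimal numpy sketch of MVDR with diagonal loading follows. The array geometry, the interference scenario, and the fixed loading level (tied to the trace of the sample covariance) are illustrative assumptions standing in for the paper's estimated optimum within the deduced interval.

        import numpy as np

        def mvdr_weights(R, a, loading=0.0):
            """w = R_dl^{-1} a / (a^H R_dl^{-1} a), with R_dl = R + loading*I."""
            R_dl = R + loading * np.eye(R.shape[0])
            Ri_a = np.linalg.solve(R_dl, a)
            return Ri_a / (a.conj() @ Ri_a)

        # Uniform linear array with half-wavelength spacing (illustrative).
        n_sensors, n_snap = 8, 200
        def steering(theta_deg):
            k = np.arange(n_sensors)
            return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

        rng = np.random.default_rng(1)
        a_sig, a_int = steering(0.0), steering(40.0)
        snaps = (a_sig[:, None] * rng.standard_normal(n_snap)
                 + 5.0 * a_int[:, None] * rng.standard_normal(n_snap)
                 + 0.1 * (rng.standard_normal((n_sensors, n_snap))
                          + 1j * rng.standard_normal((n_sensors, n_snap))))
        R_hat = snaps @ snaps.conj().T / n_snap   # inaccurate sample covariance

        # Loading level: a fixed fraction of the average sensor power.
        w = mvdr_weights(R_hat, a_sig,
                         loading=0.1 * np.trace(R_hat).real / n_sensors)
        print(abs(w.conj() @ a_sig))              # distortionless response: ~1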

  13. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
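
    The conventional error reduction algorithm mentioned above is compact enough to sketch: alternate between enforcing the measured Fourier magnitude and the object-domain support/positivity constraint. The sketch below gives this classical iteration only; the MEM-type iteration with generalized information measures is not reproduced, and the object, support and iteration count are illustrative.

        import numpy as np

        def error_reduction(magnitude, support, n_iter=200, seed=0):
            """Classical error-reduction phase retrieval iteration."""
            rng = np.random.default_rng(seed)
            g = rng.random(magnitude.shape) * support     # random start
            for _ in range(n_iter):
                G = np.fft.fft2(g)
                G = magnitude * np.exp(1j * np.angle(G))  # magnitude constraint
                g = np.real(np.fft.ifft2(G))
                g = np.where((g > 0) & support, g, 0.0)   # support + positivity
            return g

        # Toy object: a bright rectangle inside a known support region.
        n = 64
        obj = np.zeros((n, n))
        obj[20:30, 24:36] = 1.0
        support = np.zeros((n, n), dtype=bool)
        support[10:40, 10:46] = True
        recon = error_reduction(np.abs(np.fft.fft2(obj)), support)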

  14. An automatic extraction algorithm of three dimensional shape of brain parenchyma from MR images

    International Nuclear Information System (INIS)

    Matozaki, Takeshi

    2000-01-01

    For the simulation of surgical operations, the extraction of a selected region from MR images is useful. However, this segmentation requires a high level of skill and experience from the technicians. We have developed a unique automatic extraction algorithm for extracting three-dimensional brain parenchyma from MR head images. It is named the ''three dimensional gray scale clumsy painter method''. In this method, a template having the shape of a pseudo-circle, the so-called clumsy painter (CP), moves along the contour of the selected region and extracts the region surrounded by the contour. This method has advantages over morphological filtering and the region growing method. Previously, this method was applied to binary images, but the extraction results varied with the value of the threshold level. We therefore introduced the gray-level information of the images to decide the threshold, depending on the change of image density between the brain parenchyma and the CSF. We decided the threshold level from the vector of a map of templates, and changed the map according to the change of image density. As a result, the over-extraction ratio was improved by 36%, and the under-extraction ratio was improved by 20%. (author)

  15. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of a data association algorithm for simultaneous localization and mapping (SLAM) in determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but they are mainly controlled by a remote operator. An urgent task is to develop a control system that allows for autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which allows the prediction of the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. Data association for the SLAM algorithm is meant to establish a matching set between the observed landmarks and the landmarks in the state vector. The ant algorithm is one of the widely used optimization algorithms with positive feedback and the ability to search in parallel, so the algorithm is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Adding random perturbations in the process of updating the global pheromone helps avoid local optima, and setting limits on the pheromone along a route can increase the search space with a reasonable amount of computation for finding the optimal route. The paper proposes an algorithm for local data association for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks with the possibility of association according to the individual compatibility (IC) criterion. The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  16. MIDAS. An algorithm for the extraction of modal information from experimentally determined transfer functions

    International Nuclear Information System (INIS)

    Durrans, R.F.

    1978-12-01

    In order to design reactor structures to withstand the large flow and acoustic forces present, it is necessary to know something of their dynamic properties. In many cases these properties cannot be predicted theoretically, and it is necessary to determine them experimentally. The algorithm MIDAS (Modal Identification for the Dynamic Analysis of Structures), which has been developed at B.N.L. for extracting these structural properties from experimental data, is described. (author)

  17. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    Science.gov (United States)

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification method for pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step, the scanned images of the pipe are analyzed and crack features are extracted. In the classification step, a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation of the feature values, and the backpropagation network, with its learning ability, will show good classification efficiency.

  18. Answer Extraction Based on Merging Score Strategy of Hot Terms

    Institute of Scientific and Technical Information of China (English)

    LE Juan; ZHANG Chunxia; NIU Zhendong

    2016-01-01

    Answer extraction (AE) is one of the key technologies in developing the open domain Question&answer (Q&A) system. Its task is to yield the highest score to the expected answer based on an effective answer score strategy. We introduce an answer extraction method by Merging score strategy (MSS) based on hot terms. The hot terms are defined according to their lexical and syntactic features to highlight the role of the question terms. To cope with the syntactic diversities of the corpus, we propose four improved candidate answer score algorithms. Each of them is based on the lexical function of hot terms and their syntactic relationships with the candidate answers. Two independent corpus score algorithms are proposed to tap the role of the corpus in ranking the candidate answers. Six algorithms are adopted in MSS to tap the complementary action among the corpus, the candidate answers and the questions. Experiments demonstrate the effectiveness of the proposed strategy.

  19. Filtered backprojection algorithm in RPCs based PET

    International Nuclear Information System (INIS)

    Cruceru, Ilie; Manea Ioana; Nicorescu, Carmen; Constantin Florin

    2003-01-01

    The basis of PET consists in the administration of a radioactive isotope attached to a tracer that permits revealing its molecular pathways in the human body. A 3-D whole-body scan is necessary in order to minimize the radiation exposure of the patient and to significantly increase the axial field of view (FOV). A major candidate for gamma pair detection in a 3-D whole-body scan appears to be the RPCs (Resistive Plate Counters). They consist of a longitudinal microstrip grid 15 mm thick, spaced at 1 mm; the grid is placed between a large resistive glass anode (ρ = 10^12 Ω·cm) and an aluminium cathode; the gap of around 300 μm is filled with a special gas and is polarized at around 6 kV. Several detecting structures based on Resistive Plate Counters (RPCs) are evaluated for use in a positron emission 3-D whole-body-scan tomograph. The coincidence matrix is built for each specific detecting structure by means of random gamma pair generation, and then the filtered backprojection algorithm is used to reconstruct the original picture. The accuracy of image reconstruction is examined for the four different detecting structures. (authors)
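
    Filtered backprojection itself can be demonstrated compactly with scikit-image, treating a phantom's projections as a stand-in for coincidence data rebinned into sinograms; the RPC detector geometry and the 3-D whole-body setting of the paper are not modelled here.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import iradon, radon, rescale

        # Phantom stands in for the activity distribution seen by the scanner.
        image = rescale(shepp_logan_phantom(), 0.5)        # 200 x 200

        # Forward projection into a sinogram (line integrals over many angles),
        # roughly what binning coincidence lines-of-response produces.
        theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
        sinogram = radon(image, theta=theta)

        # Filtered backprojection with the standard ramp filter.
        reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
        print(f"RMS error: {np.sqrt(np.mean((reconstruction - image) ** 2)):.3f}")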

  20. A triangle voting algorithm based on double feature constraints for star sensors

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang

    2018-02-01

    A novel autonomous star identification algorithm is presented in this study. In the proposed algorithm, each sensor star constructs multiple triangles with its bright neighboring stars and obtains its candidates by a triangle voting process, in which the triangle is considered the basic voting element. In order to accelerate the algorithm and reduce the memory required for the star database, feature extraction is carried out to reduce the dimension of the triangles, and each triangle is described by its base and height. During the identification period, a voting scheme based on double feature constraints is proposed to implement the triangle voting. This scheme guarantees that only catalog stars satisfying both features can vote for the sensor star, which improves the robustness towards false stars. The simulation and real star image tests demonstrate that, compared with the other two algorithms, the proposed algorithm is more robust towards position noise, magnitude noise and false stars.

  1. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.; Secomb, Timothy W.; Pries, Axel R.; Smith, Nicolas P.; Shipley, Rebecca J.

    2015-01-01

    algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules

  2. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Huimin Lu

    2013-01-01

    Full Text Available This study proposes an intelligent algorithm that can realize information fusion with reference to the research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as the core, and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, namely information sense and perception, memory storage, divergent thinking, convergent thinking, and the evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. The test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information.

  3. Algorithmic strategies for FPGA-based vision

    OpenAIRE

    Lim, Yoong Kang

    2016-01-01

    As demands for real-time computer vision applications increase, implementations on alternative architectures have been explored. These architectures include Field-Programmable Gate Arrays (FPGAs), which offer a high degree of flexibility and parallelism. A problem with this is that many computer vision algorithms have been optimized for serial processing, and this often does not map well to FPGA implementation. This thesis introduces the concept of FPGA-tailored computer vision algorithms...

  4. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  5. Algorithm for Stabilizing a POD-Based Dynamical System

    Science.gov (United States)

    Kalb, Virginia L.

    2010-01-01

    This algorithm provides a new way to improve the accuracy and asymptotic behavior of a low-dimensional system based on the proper orthogonal decomposition (POD). Given a data set representing the evolution of a system of partial differential equations (PDEs), such as the Navier-Stokes equations for incompressible flow, one may obtain a low-dimensional model in the form of ordinary differential equations (ODEs) that should model the dynamics of the flow. Temporal sampling of the direct numerical simulation of the PDEs produces a spatial time series. The POD extracts the temporal and spatial eigenfunctions of this data set. Truncated to retain only the most energetic modes followed by Galerkin projection of these modes onto the PDEs obtains a dynamical system of ordinary differential equations for the time-dependent behavior of the flow. In practice, the steps leading to this system of ODEs entail numerically computing first-order derivatives of the mean data field and the eigenfunctions, and the computation of many inner products. This is far from a perfect process, and often results in the lack of long-term stability of the system and incorrect asymptotic behavior of the model. This algorithm describes a new stabilization method that utilizes the temporal eigenfunctions to derive correction terms for the coefficients of the dynamical system to significantly reduce these errors.
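
    In the snapshot formulation, the POD step described above reduces to a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below extracts spatial modes, temporal coefficients, and modal energies from a synthetic field; the Galerkin projection onto the PDEs and the stabilization corrections that are the subject of this record are beyond the sketch.

        import numpy as np

        def pod_modes(snapshots, n_modes):
            """POD of a (n_space, n_time) snapshot matrix (mean removed)."""
            U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
            energy = s**2 / np.sum(s**2)               # energy per mode
            spatial = U[:, :n_modes]                   # spatial eigenfunctions
            temporal = np.diag(s[:n_modes]) @ Vt[:n_modes]  # temporal coefficients
            return spatial, temporal, energy[:n_modes]

        # Synthetic "flow": two travelling-wave structures plus noise.
        x = np.linspace(0, 2 * np.pi, 256)[:, None]
        t = np.linspace(0, 10, 400)[None, :]
        field = (np.sin(x - t) + 0.3 * np.sin(3 * (x + 0.5 * t))
                 + 0.01 * np.random.randn(256, 400))
        field -= field.mean(axis=1, keepdims=True)     # subtract the mean field

        modes, coeffs, energy = pod_modes(field, n_modes=4)
        print("energy captured by 4 modes:", energy.sum())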

  6. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    Science.gov (United States)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of it, the SAT problem is translated equivalently into an optimization problem on the minimum of an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their respective disadvantages, improves the efficiency of the algorithm, and avoids the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.

  7. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in improving the star tracker's measurement accuracy. A star map photographed by an APS detector includes several kinds of noise which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecast is presented in this paper. The experiment proves the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not need calibration data memory. This algorithm has been applied successfully in a certain star tracker.
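
    As a rough illustration of the idea, the sketch below forecasts the local background of a star window from its border pixels, subtracts it, and takes the intensity-weighted centroid of what remains. The border-mean forecast is a deliberately simple stand-in for the paper's background-prediction model.

        import numpy as np

        def centroid_with_background(window):
            """Weighted centroid after subtracting a forecast background level."""
            border = np.concatenate([window[0], window[-1],
                                     window[1:-1, 0], window[1:-1, -1]])
            net = np.clip(window - border.mean(), 0.0, None)
            total = net.sum()
            if total == 0.0:
                return None                    # no star signal above background
            rows, cols = np.indices(window.shape)
            return np.sum(rows * net) / total, np.sum(cols * net) / total

        # Synthetic star spot on a sloped, noisy background.
        size = 15
        r, c = np.indices((size, size))
        star = 200.0 * np.exp(-((r - 7.3) ** 2 + (c - 6.8) ** 2) / 2.5)
        frame = (star + 50.0 + 0.5 * c
                 + np.random.default_rng(2).normal(0.0, 1.0, (size, size)))
        print(centroid_with_background(frame))  # close to (7.3, 6.8)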

  8. Fuzzy Rules for Ant Based Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Amira Hamdi

    2016-01-01

    Full Text Available This paper provides a new intelligent technique for the semisupervised data clustering problem that combines the Ant System (AS) algorithm with the fuzzy c-means (FCM) clustering algorithm. Our proposed approach, called the F-ASClass algorithm, is a distributed algorithm inspired by the foraging behavior observed in ant colonies. The ability of ants to find the shortest path forms the basis of our proposed approach. In the first step, several colonies of cooperating entities, called artificial ants, are used to find shortest paths in a complete graph that we call the graph-data. The number of colonies used in F-ASClass is equal to the number of clusters in the dataset. The partition matrix of the dataset found by the artificial ants is then given, in the second step, to the fuzzy c-means technique in order to assign the unclassified objects generated in the first step. The proposed approach is tested on artificial and real datasets, and its performance is compared with those of the K-means, K-medoid, and FCM algorithms. The experimental section shows that F-ASClass performs better according to the error rate of classification, accuracy, and separation index.

  9. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms have been proposed for image encryption. Nevertheless, none of them works efficiently in a parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms

  10. A Semiautomatic Segmentation Algorithm for Extracting the Complete Structure of Acini from Synchrotron Micro-CT Images

    Directory of Open Access Journals (Sweden)

    Luosha Xiao

    2013-01-01

    Full Text Available The pulmonary acinus is the largest airway unit provided with alveoli, where blood/gas exchange takes place. Understanding the complete structure of the acinus is necessary to measure the pathway of gas exchange and to simulate various mechanical phenomena in the lungs. The usual manual segmentation of a complete acinus structure from experimentally obtained images is difficult and extremely time-consuming, which hampers statistical analysis. In this study, we develop a semiautomatic segmentation algorithm for extracting the complete structure of an acinus from synchrotron micro-CT images of the closed chest of mouse lungs. The algorithm uses a combination of conventional binary image processing techniques based on the multiscale and hierarchical nature of lung structures. Specifically, larger structures are removed, while smaller structures are isolated from the image, by repeatedly applying erosion and dilation operators in order, adjusting the parameters with reference to previously obtained morphometric data. A cluster of isolated acini belonging to the same terminal bronchiole is obtained without floating voxels. The extracted acinar models agree above 98% with those extracted manually. The run time is drastically shortened compared with manual methods. These findings suggest that our method may be useful for taking samples used in the statistical analysis of the acinus.
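
    The remove-larger/keep-smaller step can be sketched with scipy.ndimage in 2-D (the real data are 3-D volumes): an erosion with a structuring element deletes structures thinner than the element, and dilating what survives marks the large structures for removal. The element size and the toy mask are illustrative, not the morphometry-derived parameters of the paper.

        import numpy as np
        from scipy import ndimage

        def remove_large_structures(mask, radius):
            """Keep only structures that an erosion of ~radius wipes out."""
            elem = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
            eroded = ndimage.binary_erosion(mask, structure=elem)
            large = ndimage.binary_dilation(eroded, structure=elem, iterations=2)
            return mask & ~large

        # 2-D stand-in: one wide airway-like block and small acinar-like blobs.
        mask = np.zeros((128, 128), dtype=bool)
        mask[40:90, 40:90] = True                 # large structure
        mask[10:16, 10:16] = True                 # small structures
        mask[100:107, 20:27] = True

        small_only = remove_large_structures(mask, radius=5)
        labels, n = ndimage.label(small_only)     # isolated small clusters
        print(n, "small structures kept")         # -> 2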

  11. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
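
    The B-H selection step itself is short: with the p-values sorted in increasing order, keep the first k of them, where k is the largest index with p_(k) <= alpha * k / m. In the sketch below, the conversion of peak scores to p-values is stubbed with an assumed standard-normal null; the paper instead derives p-values from the volume- or intensity-sorted candidate list.

        import numpy as np
        from scipy.stats import norm

        def bh_select(p_values, alpha=0.05):
            """Benjamini-Hochberg: number of sorted p-values to accept."""
            p = np.sort(np.asarray(p_values))
            m = len(p)
            below = np.nonzero(p <= alpha * np.arange(1, m + 1) / m)[0]
            return 0 if below.size == 0 else below[-1] + 1

        # Candidate peak scores: a block of true peaks over a noise background.
        rng = np.random.default_rng(3)
        scores = np.concatenate([rng.normal(5.0, 1.0, 50),    # real peaks
                                 rng.normal(0.0, 1.0, 950)])  # spurious ones
        p_values = norm.sf(scores)        # assumed one-sided normal null

        print(bh_select(p_values), "peaks selected out of", len(scores))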

  13. Variations of the bacterial foraging algorithm for the extraction of PV module parameters from nameplate data

    International Nuclear Information System (INIS)

    Awadallah, Mohamed A.

    2016-01-01

    Highlights:
    • Bacterial foraging algorithm is used to extract PV model parameters from nameplate data.
    • Five variations of the bacterial foraging algorithm are compared on a simple objective function.
    • Best results are obtained when swarming is neglected, the step size is varied, and the global best is preserved.
    • The technique is successfully applied to single- and double-diode models.
    • Matching between computation and measurements validates the obtained set of parameters.
    Abstract: The paper introduces the task of parameter extraction of photovoltaic (PV) modules as a nonlinear optimization problem. The concerned parameters are the series resistance, shunt resistance, diode ideality factor, and diode reverse saturation current for both the single- and double-diode models. An error function representing the mismatch between computed and targeted performance is minimized using different versions of the bacterial foraging (BF) algorithm, a global-search heuristic optimization method. The targeted performance is obtained from the nameplate data of the PV module. Five distinct variations of the BF algorithm are used to solve the problem independently for the single- and double-diode models. The best optimization results are obtained when swarming is eliminated, the chemotactic step size is dynamically varied, and the global best is preserved, all acting together. Under such conditions, the best global minimum of 0.0028 is reached in an average best time of 94.4 s for the single-diode model. However, it takes an average of 153 s to reach the best global minimum of 0.0021 in the case of the double-diode model. An experimental verification study involves the comparison of computed performance to measurements on an Eclipsall PV module. It is shown that all variants of the BF algorithm could reach equivalent-circuit parameters with accepted accuracy by solving the optimization problem. The good matching between analytical and experimental results indicates the effectiveness of the
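
    A minimal sketch of the optimization task follows: the implicit single-diode equation is evaluated at the three nameplate points (short circuit, open circuit, maximum power) and the squared mismatch serves as the error function. scipy's differential evolution stands in for the bacterial foraging variants, and the nameplate numbers are illustrative, not the Eclipsall module's.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Illustrative nameplate data for a 60-cell module.
        Voc, Isc, Vmp, Imp, Ns = 37.0, 8.7, 30.0, 8.2, 60
        Vt = 0.0257                           # thermal voltage at ~25 C

        def error(params):
            """Squared residuals of the implicit single-diode model
            f(V, I) = Iph - I0*(exp((V+I*Rs)/(n*Ns*Vt)) - 1) - (V+I*Rs)/Rsh - I
            at the short-circuit, open-circuit and maximum-power points."""
            Iph, I0, n, Rs, Rsh = params
            def f(V, I):
                return (Iph - I0 * (np.exp((V + I * Rs) / (n * Ns * Vt)) - 1)
                        - (V + I * Rs) / Rsh - I)
            res = np.array([f(0.0, Isc), f(Voc, 0.0), f(Vmp, Imp)])
            return np.sum(res ** 2)

        bounds = [(0.9 * Isc, 1.1 * Isc),     # Iph
                  (1e-12, 1e-6),              # I0 (reverse saturation current)
                  (0.8, 2.0),                 # diode ideality factor
                  (1e-4, 2.0),                # Rs
                  (10.0, 5000.0)]             # Rsh
        result = differential_evolution(error, bounds, seed=0, tol=1e-12)
        print(result.x, result.fun)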

  14. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    International Nuclear Information System (INIS)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    2008-01-01

    There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition result may be unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features by using the contribution analysis algorithm of the NN. Cluster analysis is applied to assess the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and shorten the time of feature extraction

  15. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
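
    The delta scattering idea is compact enough to sketch: sample free paths from a majorant cross-section valid over the whole geometry, and accept a collision as real with probability sigma(x)/sigma_max, so the photon never needs boundary or collision-distance checking at material interfaces. The 1-D slab and cross-section values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        SLAB_END = 10.0
        SIGMA_MAX = 1.5                       # majorant over the whole slab

        def sigma_total(x):
            """Heterogeneous attenuation coefficient (illustrative)."""
            return 0.2 if x < 5.0 else 1.5    # low-density, then dense region

        def first_real_collision(x=0.0):
            """Woodcock (delta-scattering) tracking of one photon."""
            while True:
                x += -np.log(rng.random()) / SIGMA_MAX   # majorant step
                if x >= SLAB_END:
                    return None                          # escaped the slab
                if rng.random() < sigma_total(x) / SIGMA_MAX:
                    return x                             # real collision accepted

        sites = np.array([s for s in (first_real_collision() for _ in range(20000))
                          if s is not None])
        print(f"{(sites >= 5.0).mean():.1%} of first collisions in the dense region")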

  16. Csf Based Non-Ground Points Extraction from LIDAR Data

    Science.gov (United States)

    Shen, A.; Zhang, W.; Shi, H.

    2017-09-01

    Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. This algorithm has two core problems: one is the selection of the seed points, the other is the setting of the growth constraints, of which the selection of the seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract the non-ground seed points effectively. The experiments have shown that this method can obtain a good group of seed points compared with the traditional methods. It is a new attempt to extract seed points

  17. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding the treatment of heparin resistance, and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is based on the activated clotting time achieved after a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  18. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: different shapes can yield the same descriptor, and its antinoise performance is poor. In this algorithm, the image contour curve is first evolved by a Gaussian function, and then the distance coherence vector is extracted from the contour of the original image and from the contours of the evolved images. The multiscale distance coherence vector is obtained by a reasonable weight distribution over the distance coherence vectors of the evolved image contours. This algorithm is not only invariant to translation, rotation, and scaling transformations but also has good antinoise performance. The experimental results show that the algorithm achieves higher recall and precision rates for the retrieval of images polluted by noise. PMID:24883416

  19. Function-Based Algorithms for Biological Sequences

    Science.gov (United States)

    Mohanty, Pragyan Sheela P.

    2015-01-01

    Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern matching algorithms in DNA sequences are presented. For gene order data, an efficient data structure is presented capable of storing all gene re-orderings in a systematic manner. A common characteristic of presented…

  20. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    Science.gov (United States)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used for different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location patterns and decision trees to solve the above-mentioned traditional decision tree problems. Cl-DT overcomes the shortcomings of existing DT algorithms, which create a node for each value of a given attribute, and it has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome these shortcomings, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks is of high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  1. AN ISOMETRIC MAPPING BASED CO-LOCATION DECISION TREE ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Zhou

    2018-05-01

    Full Text Available Decision tree (DT) induction has been widely used in pattern classification. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location patterns and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcoming of existing DT algorithms, which create a node for each value of a given attribute, and achieves a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping (Isomap) and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks achieves high accuracy; (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  2. A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map

    Directory of Open Access Journals (Sweden)

    Xizhong Wang

    2013-01-01

    Full Text Available We introduce a parallel chaos-based encryption algorithm that takes advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model using the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for single-processor architectures. The experimental results show that the chaos-based cryptosystem possesses good statistical properties. The parallel algorithm provides much better performance than the serial one and would be useful for encrypting/decrypting large files or multimedia data.
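
    As background for the record above, a minimal Python sketch of the PWLCM keystream idea follows: the standard piecewise linear chaotic map is iterated to produce bytes that are XOR-ed with the plaintext. The control parameter, initial condition, and byte-extraction rule are illustrative assumptions, not the paper's design, and the sketch omits the MPI master/slave parallelization.

        def pwlcm(x, p):
            # Standard piecewise linear chaotic map on (0, 1) with
            # control parameter 0 < p < 0.5.
            if x >= 0.5:
                x = 1.0 - x            # the map is symmetric about 0.5
            if x < p:
                return x / p
            return (x - p) / (0.5 - p)

        def keystream(n, x0=0.123456, p=0.3):
            # Iterate the map and quantize each state to one byte.
            x, out = x0, []
            for _ in range(n):
                x = pwlcm(x, p)
                out.append(int(x * 256) % 256)
            return bytes(out)

        def xor_cipher(data: bytes, x0=0.123456, p=0.3) -> bytes:
            # Encryption and decryption are the same XOR operation,
            # so xor_cipher(xor_cipher(msg)) recovers the plaintext.
            ks = keystream(len(data), x0, p)
            return bytes(a ^ b for a, b in zip(data, ks))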

  3. Algorithm of extraction optics properties from the measurement of spatially resolved diffuse reflectance

    International Nuclear Information System (INIS)

    Cunill Rodriguez, Margarita; Delgado Atencio, Jose Alberto; Castro Ramos, Jorge; Vazquez y Montiel, Sergio

    2009-01-01

    There are several methods to obtain the optical parameters of biological tissues from the measurement of spatially resolved diffuse reflectance. One of them is known as video reflectometry, in which a CCD camera is used as the detection and recording system for the lateral distribution of diffuse reflectance Rd(r) when an infinitely narrow light beam impinges on the tissue. In this paper, we present an algorithm that we have developed for the calibration and application of an experimental video reflectometry set-up intended to extract the optical properties of biological tissue models with optical properties similar to those of human skin. The accuracy of the algorithm for optical parameter extraction is evaluated on a set of test reflectance curves with known parameter values; the simulation of measurement errors was also considered in the generation of these curves. The results show that it is possible to extract the optical properties with an accuracy error of less than 1% for all the test curves. (Author)

  4. An Extended HITS Algorithm on Bipartite Network for Features Extraction of Online Customer Reviews

    Directory of Open Access Journals (Sweden)

    Chen Liu

    2018-05-01

    Full Text Available How to acquire useful information intelligently in the age of information explosion has become an important issue. In this context, sentiment analysis has emerged with the growing need for information extraction. One of the most important tasks of sentiment analysis is feature extraction for entities in consumer reviews. This paper first constructs a directed bipartite feature-sentiment relation network from a set of candidate feature-sentiment pairs extracted by dependency syntax analysis from consumer reviews. Then, a novel method called MHITS, which combines PMI with a weighted HITS algorithm, is proposed to rank these candidate product features and find the real product features. Empirical experiments indicate the effectiveness of our approach across different kinds of products and various data sizes. In addition, the performance of the proposed algorithm varies for corpora with different proportions of word pairs that include general collocation words such as “bad”, “good”, “poor”, “pretty good”, and “not bad”.
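
    The weighted HITS iteration at the core of such a ranking can be sketched in a few lines of Python. The sketch below assumes an edge-weight matrix W (for example, PMI scores between feature and sentiment terms, as the record suggests); the convergence tolerance and normalization are generic choices, not details from the paper.

        import numpy as np

        def weighted_hits(W, iters=100, tol=1e-9):
            # W[i, j] is the weight of the edge from candidate
            # feature i to sentiment word j on the bipartite graph.
            # Returns hub scores for features, authority scores for
            # sentiment words.
            n_feat, n_sent = W.shape
            hub = np.ones(n_feat)
            auth = np.ones(n_sent)
            for _ in range(iters):
                new_auth = W.T @ hub
                new_hub = W @ new_auth
                new_auth /= np.linalg.norm(new_auth) + 1e-12
                new_hub /= np.linalg.norm(new_hub) + 1e-12
                if np.allclose(new_hub, hub, atol=tol):
                    break
                hub, auth = new_hub, new_auth
            return hub, auth

        # Candidate features with the highest hub scores are ranked
        # as the most likely real product features.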

  5. An improved contour symmetry axes extraction algorithm and its application in the location of picking points of apples

    Energy Technology Data Exchange (ETDEWEB)

    Wang, D.; Song, H.; Yu, X.; Zhang, W.; Qu, W.; Xu, Y.

    2015-07-01

    The key problem for picking robots is locating the picking points of fruit. A method based on the moment of inertia and the symmetry of apples is proposed in this paper to locate the picking points of apples. Image pre-processing procedures, which are crucial to improving the accuracy of the location, were carried out to remove noise and smooth the edges of apples. The moment of inertia method has the disadvantage of high computational complexity, so the convex hull was used to alleviate this problem. To verify the validity of this algorithm, a test was conducted using four types of apple images containing 107 apple targets: single unblocked apples, single blocked apples, adjacent apples, and apples in panoramas. The root mean square error values for these four types were 6.3, 15.0, 21.6 and 18.4, respectively, and the average location errors were 4.9°, 10.2°, 16.3° and 13.8°, respectively. Furthermore, the improved algorithm was efficient in terms of average runtime, with 3.7 ms for single unblocked and 9.2 ms for single blocked apple images; for the other two types, the runtime was determined by the number of apples and blocked apples in the images. The results showed that the improved algorithm could extract symmetry axes and locate the picking points of apples more efficiently. In conclusion, the improved algorithm is feasible for extracting symmetry axes and locating the picking points of apples. (Author)

  6. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Guliyev, E. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Kavatsyuk, M., E-mail: m.kavatsyuk@rug.nl [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Lemmens, P.J.J.; Tambave, G.; Loehner, H. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands)

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA makes it possible to construct an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggering, the timing performance, and event correlations.

  7. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P.J.J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA makes it possible to construct an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggering, the timing performance, and event correlations.

  8. Seismic active control by a heuristic-based algorithm

    International Nuclear Information System (INIS)

    Tang, Yu.

    1996-01-01

    A heuristic-based algorithm for seismic active control is generalized to permit consideration of the effects of control-structure interaction and actuator dynamics. The control force is computed one time step ahead before being applied to the structure; therefore, the proposed control algorithm is free from the problem of time delay. A numerical example is presented to show the effectiveness of the proposed control algorithm. Two indices are also introduced in the paper to assess the effectiveness and efficiency of control laws.

  9. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
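
    One of the record's main findings, that a sensible feature extraction step matters more than the detector, can be illustrated with a short Python sketch: subtract an estimated seasonal cycle, then score anomalies by the mean distance to the k nearest neighbors. The period length and k are placeholder values, and this is a generic illustration of the workflow rather than the study's exact pipeline.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def deseasonalize(x, period=365):
            # Subtract the mean seasonal cycle, estimated by
            # averaging all observations that share the same phase.
            x = np.asarray(x, dtype=float)
            cycle = np.array([x[i::period].mean() for i in range(period)])
            return x - cycle[np.arange(len(x)) % period]

        def knn_anomaly_score(features, k=10):
            # Anomaly score = mean distance to the k nearest
            # neighbors (k + 1 are queried because the nearest
            # neighbor of each point is the point itself).
            nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
            dist, _ = nn.kneighbors(features)
            return dist[:, 1:].mean(axis=1)

        # Usage: stack deseasonalized variables into one feature
        # matrix, e.g.
        #   feats = np.column_stack([deseasonalize(v) for v in streams])
        #   scores = knn_anomaly_score(feats)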

  10. Stereo Matching Based On Election Campaign Algorithm

    Directory of Open Access Journals (Sweden)

    Xie Qing Hua

    2016-01-01

    Full Text Available Stereo matching is one of the significant problems in the study of computer vision. By obtaining distance information through pixels, it is possible to reconstruct a three-dimensional scene. In this paper, edges are the primitives for matching, and the grey values of the edges and the magnitude and direction of the edge gradient are used as the properties of the edge feature points. According to the constraints for stereo matching, an energy function was built, and the election campaign optimization algorithm was applied to find the route that minimizes this energy function during the matching process. Experimental results show that this algorithm is more stable and can obtain matching results with better accuracy.

  11. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  12. Driver Distraction Using Visual-Based Sensors and Algorithms

    Directory of Open Access Journals (Sweden)

    Alberto Fernández

    2016-10-01

    Full Text Available Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems, but they should be complemented with more visual cues (e.g., hands or body information) or even distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task, and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction are also reviewed. This paper presents a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved are also addressed.

  13. Driver Distraction Using Visual-Based Sensors and Algorithms.

    Science.gov (United States)

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-10-28

    Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems, but they should be complemented with more visual cues (e.g., hands or body information) or even distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task, and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction are also reviewed. This paper presents a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved are also addressed.

  14. Experimental variability and data pre-processing as factors affecting the discrimination power of some chemometric approaches (PCA, CA and a new algorithm based on linear regression) applied to (+/-)ESI/MS and RPLC/UV data: Application on green tea extracts.

    Science.gov (United States)

    Iorgulescu, E; Voicu, V A; Sârbu, C; Tache, F; Albu, F; Medvedovici, A

    2016-08-01

    The influence of experimental variability (instrumental repeatability, instrumental intermediate precision and sample preparation variability) and data pre-processing (normalization, peak alignment, background subtraction) on the discrimination power of multivariate data analysis methods (Principal Component Analysis, PCA, and Cluster Analysis, CA) and of a new algorithm based on linear regression was studied. Data used in the study were obtained through positive or negative ion monitoring electrospray mass spectrometry (+/-ESI/MS) and reversed-phase liquid chromatography with UV spectrometric detection (RPLC/UV) applied to green tea extracts. Extraction in ethanol and heated water infusion were used as sample preparation procedures. The multivariate methods were applied directly to mass spectra and chromatograms, involving strictly a holistic comparison of shapes, without assigning any structural identity to compounds. An alternative data interpretation based on linear regression analysis mutually applied to data series is also discussed. Slopes, intercepts and correlation coefficients produced by linear regression analysis applied to pairs of very large experimental data series successfully retain information resulting from high-frequency instrumental acquisition rates, better defining the profiles being compared. Consequently, each type of sample or comparison between samples produces in the Cartesian space an ellipsoidal volume defined by the normal variation intervals of the slope, intercept and correlation coefficient. Distances between volumes graphically illustrate (dis)similarities between the compared data. The instrumental intermediate precision had the major effect on the discrimination power of the multivariate data analysis methods. Mass spectra produced through ionization from the liquid state under atmospheric pressure conditions of bulk complex mixtures resulting from extracted materials of natural origin provided an excellent data

  15. Iris recognition based on key image feature extraction.

    Science.gov (United States)

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  16. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  17. Text Clustering Algorithm Based on Random Cluster Core

    Directory of Open Access Journals (Sweden)

    Huang Long-Jun

    2016-01-01

    Full Text Available Nowadays clustering has become a popular text mining technique, but huge data volumes put forward higher requirements for the accuracy and performance of text mining. In view of the performance bottleneck of traditional text clustering algorithms, this paper proposes a text clustering algorithm with random features. It is a clustering algorithm based on text density; at the same time, using neighboring heuristic rules, the concept of a random cluster core is introduced, which effectively reduces the complexity of the distance calculation.

  18. Agent-based Algorithm for Spatial Distribution of Objects

    KAUST Repository

    Collier, Nathan

    2012-06-02

    In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.
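
    As a sketch of the bubble dynamics described above, the Python fragment below integrates, with a simple explicit Euler scheme rather than a full ODE solver, a damped inter-bubble force that repels close neighbors and attracts distant ones. The force law, damping, and step size are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def spread_objects(pos, r0=1.0, k=1.0, damping=0.9, dt=0.05, steps=500):
            # pos: (n, 2) array of object positions. Each pair of
            # "bubbles" repels when closer than the rest length r0
            # and attracts when farther, like a nonlinear spring.
            pos = pos.astype(float).copy()
            vel = np.zeros_like(pos)
            for _ in range(steps):
                diff = pos[:, None, :] - pos[None, :, :]
                dist = np.linalg.norm(diff, axis=-1)
                np.fill_diagonal(dist, 1.0)      # avoid divide-by-zero
                coeff = k * (r0 - dist) / dist
                np.fill_diagonal(coeff, 0.0)     # no self-force
                force = coeff[..., None] * diff
                vel = damping * vel + dt * force.sum(axis=1)
                pos += dt * vel
            return pos

        # Usage: pos = spread_objects(np.random.rand(50, 2))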

  19. Optimization algorithm based on densification and dynamic canonical descent

    Science.gov (United States)

    Bousson, K.; Correia, S. D.

    2006-07-01

    Stochastic methods have gained some popularity in global optimization because most of them do not assume the cost functions to be differentiable. They have the capability to avoid being trapped by local optima, and may converge even faster than gradient-based optimization methods on some problems. The present paper proposes an optimization method which reduces the search space by means of densification curves, coupled with the dynamic canonical descent algorithm. The performance of the new method is shown on several known problems classically used for testing optimization algorithms, and proves to outperform competitive algorithms such as simulated annealing and genetic algorithms.

  20. Teaching learning based optimization algorithm and its engineering applications

    CERN Document Server

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.

  1. Improved artificial bee colony algorithm based gravity matching navigation method.

    Science.gov (United States)

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented which is based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
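
    For orientation, the employed-bee neighborhood search that the record builds on can be sketched in Python as below; the candidate update v = x + phi (x - x_k) is the textbook basic-ABC rule, while the fitness function and bounds are placeholders and the paper's external-speed-information modification is not reproduced.

        import numpy as np

        def abc_neighborhood_step(food, fitness, bounds):
            # One employed-bee pass of the basic ABC algorithm:
            # each food source x_i is perturbed toward/away from a
            # random partner x_k in one random dimension, and the
            # new source is kept only if it improves the fitness.
            n, dim = food.shape
            for i in range(n):
                k = np.random.choice([j for j in range(n) if j != i])
                d = np.random.randint(dim)
                phi = np.random.uniform(-1.0, 1.0)
                candidate = food[i].copy()
                candidate[d] += phi * (food[i, d] - food[k, d])
                candidate = np.clip(candidate, bounds[0], bounds[1])
                if fitness(candidate) > fitness(food[i]):
                    food[i] = candidate
            return food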

  2. Analog Circuit Design Optimization Based on Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Mansour Barari

    2014-01-01

    Full Text Available This paper investigates an evolutionary designing system for the automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. On the basis of a combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.

  3. Local Community Detection Algorithm Based on Minimal Cluster

    Directory of Open Access Journals (Sweden)

    Yong Zhou

    2016-01-01

    Full Text Available In order to discover local community structure more effectively, this paper puts forward a new local community detection algorithm based on a minimal cluster. Most local community detection algorithms begin from one node, but the agglomeration ability of a single node is less than that of multiple nodes, so the community extension in this paper no longer starts from the initial node only but from a node cluster containing this initial node, in which the nodes are relatively densely connected with each other. The algorithm mainly includes two phases: first it detects the minimal cluster, and then it finds the local community extended from the minimal cluster. Experimental results show that the quality of the local community detected by our algorithm is much better than that of other algorithms, in both real and simulated networks.

  4. Effectiveness of firefly algorithm based neural network in time series ...

    African Journals Online (AJOL)

    Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...

  5. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed

    2014-11-01

    This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and has been validated by simulation tests under different working conditions. © 2014 IEEE.

  6. New MPPT algorithm based on hybrid dynamical theory

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem; Benmansour, K.; Boucherit, M. S.; Tadjine, M.

    2014-01-01

    This paper presents a new maximum power point tracking algorithm based on hybrid dynamical theory. A multicell converter has been considered as an adaptation stage for the photovoltaic chain. The proposed algorithm is a hybrid automaton switching between eight different operating modes, and has been validated by simulation tests under different working conditions. © 2014 IEEE.

  7. A Tomographic method based on genetic algorithms

    International Nuclear Information System (INIS)

    Turcanu, C.; Alecu, L.; Craciunescu, T.; Niculae, C.

    1997-01-01

    Computerized tomography, being a non-destructive and non-invasive technique, is frequently used in medical applications to generate three-dimensional images of objects. Genetic algorithms are efficient and domain-independent for a large variety of problems. The proposed method produces good quality reconstructions even for a very small number of projection angles. It requires no a priori knowledge about the solution and takes into account the statistical uncertainties. The main drawback of the method is the amount of computer memory and time needed. (author)

  8. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one-dimensional information, and then matches and identifies through one-dimensional correlation; moreover, because normalization is performed, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
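
    A minimal Python sketch of this projection idea follows: the template and each search window are reduced to normalized row and column sums, and a 1-D normalized correlation replaces the 2-D comparison. The exhaustive window scan and the signature layout are illustrative choices, not details from the paper.

        import numpy as np

        def projection_signature(img):
            # Reduce a 2-D patch to its concatenated row and column
            # sums; mean removal and unit norm make the comparison
            # invariant to proportional brightness changes.
            sig = np.concatenate([img.sum(axis=0), img.sum(axis=1)]).astype(float)
            sig -= sig.mean()
            return sig / (np.linalg.norm(sig) + 1e-12)

        def match_by_projection(image, template):
            # Slide the template over the image and compare the 1-D
            # signatures; returns the top-left corner of the best match.
            th, tw = template.shape
            t_sig = projection_signature(template)
            best, best_pos = -np.inf, (0, 0)
            for y in range(image.shape[0] - th + 1):
                for x in range(image.shape[1] - tw + 1):
                    score = projection_signature(image[y:y + th, x:x + tw]) @ t_sig
                    if score > best:
                        best, best_pos = score, (y, x)
            return best_pos, best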

  9. A transport-based condensed history algorithm

    International Nuclear Information System (INIS)

    Tolar, D. R. Jr.

    1999-01-01

    Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s0. For example, the PENELOPE method splits each step into two substeps: one with length ξs0 and one with length (1 − ξ)s0, where ξ is a random number in (0, 1). Because s0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power

  10. CUDT: A CUDA Based Decision Tree Algorithm

    Directory of Open Access Journals (Sweden)

    Win-Tsung Lo

    2014-01-01

    Full Text Available Decision trees are one of the well-known classification methods in data mining. Many studies have been proposed focusing on improving the performance of decision trees. However, those algorithms were developed for and run on traditional distributed systems, and their latency cannot be improved when processing the huge data generated by ubiquitous sensing nodes without the help of new technology. In order to improve data processing latency in huge data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with the traditional CPU version. The results show that CUDT is 5-55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets.

  11. Identification of PV solar cells and modules parameters using the genetic algorithms: Application to maximum power extraction

    Energy Technology Data Exchange (ETDEWEB)

    Zagrouba, M.; Sellami, A.; Bouaicha, M. [Laboratoire de Photovoltaique, des Semi-conducteurs et des Nanostructures, Centre de Recherches et des Technologies de l' Energie, Technopole de Borj-Cedria, Tunis, B.P. 95, 2050 Hammam-Lif (Tunisia); Ksouri, M. [Unite de Recherche RME-Groupe AIA, Institut National des Sciences Appliquees et de Technologie (Tunisia)

    2010-05-15

    In this paper, we propose a numerical technique based on genetic algorithms (GAs) to identify the electrical parameters (Is, Iph, Rs, Rsh, and n) of photovoltaic (PV) solar cells and modules. These parameters were used to determine the corresponding maximum power point (MPP) from the illuminated current-voltage (I-V) characteristic. The one-diode model is used to describe the AM1.5 I-V characteristic of the solar cell. To extract the electrical parameters, the approach is formulated as a non-convex optimization problem. The GA approach was used as a numerical technique in order to overcome the problems involved with local minima in the case of non-convex optimization criteria. Compared to other methods, we find that the GA is a very efficient technique to estimate the electrical parameters of PV solar cells and modules. Indeed, the run of the algorithm stopped after five generations in the case of PV solar cells and seven generations in the case of PV modules. The identified parameters are then used to extract the maximum power working points for both cell and module. (author)
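
    To make the fitting loop concrete, here is a compact, hypothetical Python sketch of a GA estimating one-diode model parameters from measured (V, I) pairs. The fitness uses the one-diode equation evaluated with the measured current on the right-hand side to avoid solving the implicit equation, and the bounds, population size, and operators are illustrative choices rather than the paper's settings.

        import numpy as np

        VT = 0.0259  # thermal voltage at about 300 K (volts)

        def model_current(params, v, i_meas):
            # One-diode model; the measured current is used on the
            # right-hand side so the implicit equation is avoided.
            iph, i_s, rs, rsh, n = params
            return (iph - i_s * (np.exp((v + i_meas * rs) / (n * VT)) - 1)
                    - (v + i_meas * rs) / rsh)

        def rmse(params, v, i_meas):
            return np.sqrt(np.mean((model_current(params, v, i_meas) - i_meas) ** 2))

        def ga_extract(v, i_meas, lo, hi, pop_size=60, generations=100):
            # Plain real-coded GA: elitist truncation selection,
            # blend crossover, Gaussian mutation.
            rng = np.random.default_rng(0)
            pop = rng.uniform(lo, hi, size=(pop_size, 5))
            for _ in range(generations):
                cost = np.array([rmse(p, v, i_meas) for p in pop])
                elite = pop[np.argsort(cost)[: pop_size // 2]]
                parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
                alpha = rng.uniform(size=(pop_size, 5))
                children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
                children += rng.normal(0, 0.01, children.shape) * (hi - lo)
                pop = np.clip(children, lo, hi)
                pop[0] = elite[0]  # keep the best individual
            cost = np.array([rmse(p, v, i_meas) for p in pop])
            return pop[np.argmin(cost)]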

  12. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and the clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, the Con-Steg algorithm is used to conceal the clustered image blocks. Information concealing located in the Fourier domain of an image can achieve the security of image information; thus we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.

  13. Parameters extraction of the three diode model for the multi-crystalline solar cell/module using Moth-Flame Optimization Algorithm

    International Nuclear Information System (INIS)

    Allam, Dalia; Yousri, D.A.; Eteiba, M.B.

    2016-01-01

    Highlights: • More detailed models are proposed to emulate the multi-crystalline solar cell/module. • The Moth-Flame Optimizer (MFO) is proposed for the parameter extraction process. • The performance of the MFO technique is compared with recent optimization algorithms. • The MFO algorithm converges to the optimal solution more rapidly and more accurately. • The MFO algorithm combined with the three diode model achieves the most accurate model. - Abstract: As a result of the wide prevalence of multi-crystalline silicon solar cells, an accurate mathematical model for these cells has become an important issue. Therefore, a three diode model is proposed as a more precise model to capture the relatively complicated physical behavior of multi-crystalline silicon solar cells. The performance of this model is compared to the performance of both the double diode and the modified double diode models of the same cell/module. There is thus a persistent need to keep searching for a more accurate optimization algorithm to estimate the parameters of such more complicated models. Hence, a suitable optimization algorithm, called the Moth-Flame Optimizer (MFO), is proposed for the parameter extraction process of the three tested models, based on data measured in the laboratory and other data reported in the previous literature. To verify the performance of the suggested technique, its results are compared with the results of the most recent and powerful techniques in the literature, such as the Hybrid Evolutionary (DEIM) and Flower Pollination (FPA) algorithms. Furthermore, an evaluation analysis is performed for the three algorithms on the selected models under different environmental conditions. The results show that the MFO algorithm achieves the least Root Mean Square Error (RMSE), Mean Bias Error (MBE) and Absolute Error at the Maximum Power Point (AEMPP), and the best Coefficient of Determination. In addition, MFO is reaching to the optimal solution with the

  14. A difference tracking algorithm based on discrete sine transform

    Science.gov (United States)

    Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun

    2018-04-01

    Target tracking is an important field of computer vision. Template matching tracking algorithms based on the sum of squared differences (SSD) and the normalized correlation coefficient (NCC) are very sensitive to changes in image gray level. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the tracking target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of changes in image gray level or brightness. The algorithm, which combines the discrete sine transform with a difference algorithm, maps the target image into a digital sequence. A Kalman filter predicts the target position, the Hamming distance determines the degree of similarity between the target and the template, and the window closest to the template is taken as the target to be tracked, which then updates the template. Target tracking is achieved on this basis. The algorithm is tested in this paper: compared with SSD and NCC template matching algorithms, it tracks the target stably when image gray level or brightness changes, and the tracking speed can meet the real-time requirement.
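
    The record gives only the outline of the descriptor, so the following Python sketch is a guess at the general mechanism: each candidate window is reduced to a sign pattern of its discrete sine transform coefficients and compared to the template by Hamming distance. The binarization rule and the use of scipy's DST are assumptions for illustration, and the Kalman prediction step is omitted.

        import numpy as np
        from scipy.fft import dst

        def dst_signature(patch):
            # Map a patch to a binary sequence: take the 1-D discrete
            # sine transform of the flattened patch and keep only the
            # signs of the coefficients.
            coeffs = dst(patch.astype(float).ravel(), type=2)
            return coeffs > 0

        def hamming(a, b):
            return np.count_nonzero(a != b)

        def best_window(image, template, search_box):
            # Compare every window inside the (y0, y1, x0, x1) search
            # box, e.g. centered on the Kalman-predicted position.
            th, tw = template.shape
            t_sig = dst_signature(template)
            y0, y1, x0, x1 = search_box
            best, best_pos = np.inf, None
            for y in range(y0, y1 - th + 1):
                for x in range(x0, x1 - tw + 1):
                    d = hamming(dst_signature(image[y:y + th, x:x + tw]), t_sig)
                    if d < best:
                        best, best_pos = d, (y, x)
            return best_pos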

  15. Applications of genetic algorithms to optimization problems in the solvent extraction process for spent nuclear fuel

    International Nuclear Information System (INIS)

    Omori, Ryota, Sakakibara, Yasushi; Suzuki, Atsuyuki

    1997-01-01

    Applications of genetic algorithms (GAs) to optimization problems in the solvent extraction process for spent nuclear fuel are described. Genetic algorithms have been considered a promising tool for solving optimization problems in complicated and nonlinear systems because they require no derivatives of the objective function. In addition, they have the ability to treat a set of many possible solutions and to consider multiple objectives simultaneously, so they can calculate many Pareto-optimal points on the trade-off curve between competing objectives in a single iteration, which leads to small computing times. Genetic algorithms were applied to two optimization problems. First, process variables in the partitioning process were optimized using a weighted objective function. The average fitness of a generation was observed to increase steadily as the generations proceeded, and satisfactory solutions were obtained in all cases, which indicates that GAs are an appropriate method for such an optimization. Second, GAs were applied to a multiobjective optimization problem in the co-decontamination process, and the trade-off curve between the loss of uranium and the solvent flow rate was successfully obtained. For both optimization problems, the CPU time with the present method was estimated to be several tens of times smaller than with the random search method.

  16. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method in which a real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed to overcome the defect that the gradient descent method makes the algorithm easily fall into a local optimum during the learning process. The quantum genetic algorithm (QGA) has good directional global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. So, RQGA is introduced to explore the search space, and an improved varied learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm effectively and rapidly converges to a solution conforming to the constraint conditions.

  17. Extracting quantum dynamics from genetic learning algorithms through principal control analysis

    International Nuclear Information System (INIS)

    White, J L; Pearson, B J; Bucksbaum, P H

    2004-01-01

    Genetic learning algorithms are widely used to control ultrafast optical pulse shapes for photo-induced quantum control of atoms and molecules. An unresolved issue is how to use the solutions found by these algorithms to learn about the system's quantum dynamics. We propose a simple method based on covariance analysis of the control space, which can reveal the degrees of freedom in the effective control Hamiltonian. We have applied this technique to stimulated Raman scattering in liquid methanol. A simple model of two-mode stimulated Raman scattering is consistent with the results. (letter to the editor)

  18. Genetic Algorithm Based Microscale Vehicle Emissions Modelling

    Directory of Open Access Journals (Sweden)

    Sicong Zhu

    2015-01-01

    Full Text Available There is a need to match the accuracy of emission estimations with the outputs of transport models. The overall error rate in long-term traffic forecasts resulting from strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may have similar measurement errors if they use the outputs of strategic models to obtain traffic demand predictions. At the microlevel, this paper discusses the limitations of existing emissions estimation approaches. Emission models for predicting pollutants other than CO2 are proposed. A genetic algorithm approach is adopted to select the predictor variables for the black box model. The approach is capable of solving combinatorial optimization problems. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in terms of accuracy and robustness.

  19. A Diagnostic and Therapeutic Algorithm for Impacted Teeth for Plastic Surgeons: Outcomes of 242 Extracted Teeth

    Directory of Open Access Journals (Sweden)

    Nebil Yeşiloğlu

    2016-09-01

    Full Text Available Objective: Impacted teeth are important for plastic surgeons who frequently perform maxillofacial operations because of their tendency to affect dental occlusion and, thus, cephalometric results. Moreover, severe complications can be caused by the tooth and its surgical removal. In this study, a retrospective analysis of 242 extracted teeth and 24 extracted roots was performed, and an algorithmic approach to the different types and localizations of impacted teeth is presented. Possible complications and salvage procedures are also discussed. Material and Methods: A retrospective analysis of 128 patients who underwent impacted teeth removal surgery between 2013 and 2015 was performed. The mean age was 26 years (range: 18-42 years), and the female to male ratio was 39/89. Sixteen of the patients were operated on under regional nerve block, whereas the remaining were operated on under general anesthesia. In 107 patients, the whole tooth was removed, whereas the residual root of the tooth was removed in 21 patients. In 89 patients, bone interventions such as the creation of a bone window or peridental milling to loosen the tooth were needed, whereas only oral mucosal incisions were performed in the remaining patients. Results: The most common onset symptom was localized pain, and the most common complications were swelling and edema. The most commonly extracted tooth was the mandibular third molar. Lower lip hypoesthesia, which persisted for up to eight months, was encountered in six patients who underwent mandibular third molar extraction. Conclusion: In our opinion, the wide range of possible complications secondary to impacted teeth surgery makes them important for plastic surgeons, who are more experienced than other disciplines, and tooth extraction is an essential skill to learn in plastic surgery specialty training.

  20. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2014-01-01

    Full Text Available To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by the shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on geometric theory are extracted from flotation froth images as the input variables of the proposed soft-sensor model. The isometric mapping method is then used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy.

  1. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT

  2. Wavelet-LMS algorithm-based echo cancellers

    Science.gov (United States)

    Seetharaman, Lalith K.; Rao, Sathyanarayana S.

    2002-12-01

    This paper presents echo cancellers based on the wavelet-LMS algorithm. The performance of the least mean square algorithm in the wavelet transform domain is observed, and its application to echo cancellation is analyzed. The Widrow-Hoff least mean square algorithm is the most widely used algorithm for adaptive filters that function as echo cancellers. Present-day communication signals are largely non-stationary in nature, and errors crop up when the least mean square algorithm is used for echo cancellers handling such signals. The analysis of non-stationary signals often involves a compromise in how well transitions or discontinuities can be located. The multi-scale or multi-resolution signal analysis, which is the essence of the wavelet transform, makes wavelets popular in non-stationary signal analysis. In this paper, we present a wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the least mean square algorithm and then reconstructed to give an echo-free signal. The echo canceller based on this algorithm is found to have better convergence and a comparatively lower MSE (mean square error).
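
    As an illustration of the transform-domain LMS idea, the Python sketch below applies an orthonormal Haar transform to the tap-delay vector of the reference (far-end) signal and adapts one weight per transform coefficient with a normalized LMS update. The Haar transform, step size, and filter length are illustrative choices; the record does not state which wavelet the paper uses.

        import numpy as np

        def haar_matrix(n):
            # Orthonormal Haar transform matrix for n a power of two,
            # built recursively.
            if n == 1:
                return np.array([[1.0]])
            h = haar_matrix(n // 2)
            top = np.kron(h, [1.0, 1.0])
            bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
            return np.vstack([top, bottom]) / np.sqrt(2.0)

        def wavelet_lms(far_end, mic, n_taps=16, mu=0.5, eps=1e-6):
            # Transform-domain NLMS echo canceller: the tap vector of
            # the far-end signal is Haar-transformed, and one adaptive
            # weight per coefficient predicts the echo in the mic signal.
            T = haar_matrix(n_taps)
            w = np.zeros(n_taps)
            out = np.zeros(len(mic))
            for k in range(n_taps, len(mic)):
                u = T @ far_end[k - n_taps:k][::-1]
                e = mic[k] - w @ u
                w += mu * e * u / (u @ u + eps)
                out[k] = e  # echo-cancelled (error) signal
            return out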

  3. A SAT-Based Algorithm for Reparameterization in Symbolic Simulation

    National Research Council Canada - National Science Library

    Chauhan, Pankaj; Kroening, Daniel; Clarke, Edmund

    2003-01-01

    .... Efficient SAT solvers have been applied successfully for many verification problems. This paper presents a novel SAT-based reparameterization algorithm that is largely immune to the large number of input variables that need to be quantified...

  4. Use of a Recursive-Rule eXtraction algorithm with J48graft to achieve highly accurate and concise rule extraction from a large breast cancer dataset

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    Full Text Available To assist physicians in the diagnosis of breast cancer and thereby improve survival, a highly accurate computer-aided diagnostic system is necessary. Although various machine learning and data mining approaches have been devised to increase diagnostic accuracy, most current methods are inadequate. The recently developed Recursive-Rule eXtraction (Re-RX) algorithm provides a hierarchical, recursive consideration of discrete variables prior to the analysis of continuous data, and can generate classification rules trained on both discrete and continuous attributes. The objective of this study was to extract highly accurate, concise, and interpretable classification rules for diagnosis using the Re-RX algorithm with J48graft, a class for generating a grafted C4.5 decision tree. We used the Wisconsin Breast Cancer Dataset (WBCD), for which nine research groups have provided 10 kinds of highly accurate concrete classification rules. We compared the accuracy and characteristics of the rule set generated for the WBCD using the Re-RX algorithm with J48graft with five rule sets obtained using 10-fold cross validation (CV). We trained on the WBCD using the Re-RX algorithm with J48graft and report the average classification accuracies of 10 runs of 10-fold CV for the training and test datasets, the number of extracted rules, and the average number of antecedents. Compared with other rule extraction algorithms, the Re-RX algorithm with J48graft resulted in a lower average number of rules for diagnosing breast cancer, which is a substantial advantage. It also provided the lowest average number of antecedents per rule. These features are expected to greatly aid physicians in making accurate and concise diagnoses for patients with breast cancer. Keywords: Breast cancer diagnosis, Rule extraction, Re-RX algorithm, J48graft, C4.5

  5. A Chinese text classification system based on Naive Bayes algorithm

    Directory of Open Access Journals (Sweden)

    Cui Wei

    2016-01-01

    Full Text Available In this paper, aiming at the characteristics of Chinese text classification, the ICTCLAS (Chinese lexical analysis system of the Chinese Academy of Sciences) is used for document segmentation, stop words are filtered during data cleaning, and the information gain and document frequency feature selection algorithms are used for document feature selection. On this basis, a text classifier was implemented based on the Naive Bayes algorithm, and experiments and analysis were carried out on the system using the Chinese corpus of Fudan University.
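
    The pipeline described above (segmentation, feature selection, Naive Bayes) has a standard shape; the Python sketch below reproduces it with scikit-learn on pre-segmented, space-separated text. The chi-squared selector stands in for the information gain/document frequency selection, and the toy documents are placeholders; ICTCLAS segmentation is assumed to have happened upstream.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Toy, already-segmented documents (tokens separated by spaces).
        docs = ["股市 上涨 经济", "球队 比赛 胜利", "利率 下调 经济", "球员 进球 比赛"]
        labels = ["finance", "sports", "finance", "sports"]

        clf = make_pipeline(
            CountVectorizer(),        # bag-of-words term counts
            SelectKBest(chi2, k=4),   # keep the most informative terms
            MultinomialNB(),          # Naive Bayes classifier
        )
        clf.fit(docs, labels)
        print(clf.predict(["经济 利率 上涨"]))  # expected: ['finance']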

  6. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    Full Text Available This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user and thereby cannot be used in a multi-user system. Even when they can track the eyes of multiple users, their detection accuracy is low and they cannot identify the users individually. The proposed algorithm solves these problems and enhances detection accuracy. Specifically, it adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. Then, it calculates histogram-of-oriented-gradients features for the detected facial region to identify multiple users, and selects the template that best matches each user from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490 compared with benchmark algorithms.

  7. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The overall system architecture and its modules are briefly introduced, the core of the system is then described in detail, and finally a system prototype is given.

  8. BEaST: brain extraction based on nonlocal segmentation technique

    NARCIS (Netherlands)

    Eskildsen, Simon F.; Coupé, Pierrick; Fonov, Vladimir; Manjón, José V.; Leung, Kelvin K.; Guizard, Nicolas; Wassef, Shafik N.; Østergaard, Lasse Riis; Collins, D. Louis; Saradha, A.; Abdi, Hervé; Abdulkadir, Ahmed; Acharya, Deepa; Achuthan, Anusha; Adluru, Nagesh; Aghajanian, Jania; Agrusti, Antonella; Agyemang, Alex; Ahdidan, Jamila; Ahmad, Duaa; Ahmed, Shiek; Aisen, Paul; Akhondi-Asl, Alireza; Aksu, Yaman; Alberca, Roman; Alcauter, Sarael; Alexander, Daniel; Alin, Aylin; Almeida, Fabio; Alvarez-Linera, Juan; Amlien, Inge; Anand, Shyam; Anderson, Dallas; Ang, Amma; Angersbach, Steve; Ansarian, Reza; Aoyama, Eiji; Appannah, Arti; Arfanakis, Konstantinos; Armor, Tom; Arrighi, Michael; Arumughababu, S. Vethanayaki; Arunagiri, Vidhya; Ashe-McNalley, Cody; Ashford, Wes; Le Page, Aurelie; Avants, Brian; Chen, Wei; Richard, Edo; Schmand, Ben

    2012-01-01

    Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general purpose brain extraction algorithm challenging. To address this issue, we propose a brain extraction method based on a nonlocal segmentation technique (BEaST).

  9. Cloud Computing Task Scheduling Based on Cultural Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Li Jian-Wen

    2016-01-01

    Full Text Available A task scheduling strategy based on a cultural genetic algorithm (CGA) is proposed in order to improve the efficiency of task scheduling in the cloud computing platform, with the target of minimizing the total time and cost of task scheduling. An improved genetic algorithm is used to construct the main population space and the knowledge space under the cultural framework; the two spaces evolve independently in parallel, forming a mechanism of mutual promotion to dispatch the cloud tasks. At the same time, in order to prevent the genetic algorithm from falling into a local optimum, a non-uniform mutation operator is introduced to improve the search performance of the algorithm. The experimental results show that the CGA reduces the total time and lowers the cost of scheduling, making it an effective algorithm for cloud task scheduling.
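
    As a rough illustration of how the cultural framework and the non-uniform mutation operator can interact, here is a one-dimensional sketch: the belief space records the range spanned by the current elite, and offspring are mutated within that range with a Michalewicz-style shrinking step. The elite fraction, mutation exponent, and belief-space design are assumptions, not the paper's exact scheme.

```python
# Hedged sketch of a cultural GA with non-uniform mutation (minimisation).
import random

def nonuniform_mutate(x, lo, hi, t, T, b=3.0):
    """Michalewicz-style non-uniform mutation: step size shrinks as t -> T."""
    shrink = 1.0 - random.random() ** ((1.0 - t / T) ** b)
    if random.random() < 0.5:
        return x + (hi - x) * shrink   # move toward the upper bound
    return x - (x - lo) * shrink       # move toward the lower bound

def cultural_ga(fitness, lo, hi, pop_size=30, T=100):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for t in range(T):
        pop.sort(key=fitness)                    # best individuals first
        elite = pop[: pop_size // 5]
        b_lo, b_hi = min(elite), max(elite)      # belief space: accepted range
        # Influence step: offspring are mutated within the accepted range.
        pop = elite + [nonuniform_mutate(random.choice(elite), b_lo, b_hi, t, T)
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fitness)

# cultural_ga(lambda x: (x - 1.23) ** 2, lo=-10.0, hi=10.0)
```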

  10. A novel clustering algorithm based on quantum games

    International Nuclear Information System (INIS)

    Li Qiang; He Yan; Jiang Jingping

    2009-01-01

    Enormous successes have been achieved by quantum algorithms during the last decade. In this paper, we combine quantum games with the problem of data clustering, and then develop a quantum-game-based clustering algorithm, in which data points in a dataset are considered as players who can make decisions and implement quantum strategies in quantum games. After each round of a quantum game, each player's expected payoff is calculated. Later, he uses a link-removing-and-rewiring (LRR) function to change his neighbors and adjust the strength of the links connecting to them in order to maximize his payoff. Further, the algorithms are discussed and analyzed for two cases of strategies, two payoff matrices, and two LRR functions. Consequently, the simulation results have demonstrated that data points in datasets are clustered reasonably and efficiently, and that the clustering algorithms have fast rates of convergence. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.

  11. GPU-based fast pencil beam algorithm for proton therapy

    International Nuclear Information System (INIS)

    Fujimoto, Rintaro; Nagamine, Yoshihiko; Kurihara, Tsuneya

    2011-01-01

    Performance of a treatment planning system is an essential factor in making sophisticated plans. The dose calculation is a major time-consuming process in planning operations. The standard algorithm for proton dose calculations is the pencil beam algorithm which produces relatively accurate results, but is time consuming. In order to shorten the computational time, we have developed a GPU (graphics processing unit)-based pencil beam algorithm. We have implemented this algorithm and calculated dose distributions in the case of a water phantom. The results were compared to those obtained by a traditional method with respect to the computational time and discrepancy between the two methods. The new algorithm shows 5-20 times faster performance using the NVIDIA GeForce GTX 480 card in comparison with the Intel Core-i7 920 processor. The maximum discrepancy of the dose distribution is within 0.2%. Our results show that GPUs are effective for proton dose calculations.

  12. Tie Points Extraction for SAR Images Based on Differential Constraints

    Science.gov (United States)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then corresponding layers of the pyramids are matched from the top to the bottom. In the process, similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with its long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
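
    The similarity measure is easy to make concrete: normalized cross-correlation over a rectangular window whose long side runs along the azimuth direction. A minimal NumPy sketch follows; the window dimensions are illustrative assumptions.

```python
# Hedged sketch: NCC between two equal-size windows, e.g. 64 (azimuth) x 16 (range).
import numpy as np

def ncc(win_a, win_b):
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```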

  13. TIE POINTS EXTRACTION FOR SAR IMAGES BASED ON DIFFERENTIAL CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    X. Xiong

    2018-04-01

    Full Text Available Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then corresponding layers of the pyramids are matched from the top to the bottom. In the process, similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with its long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.

  14. A Rule-Based Model for Bankruptcy Prediction Based on an Improved Genetic Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2013-01-01

    Full Text Available In this paper, we propose a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset, since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced; the fitness scaling strategy and the chaotic operator were incorporated with GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on data from 1000 corporations collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as “net income to stockholder’s equity,” “quick ratio,” “retained earnings to total assets,” “stockholders’ equity to total assets,” and “financial expenses to sales.” The total misclassification error of the proposed FSCGACA was only 7.9%, better than the results of the genetic algorithm (GA), ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.

  15. A Nth-order linear algorithm for extracting diffuse correlation spectroscopy blood flow indices in heterogeneous tissues.

    Science.gov (United States)

    Shang, Yu; Yu, Guoqiang

    2014-09-29

    Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating a Nth-order linear model of the autocorrelation function with the Monte Carlo simulation of photon migrations in homogenous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm for extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously through utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogenous solution in a computer model of the adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogenous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and displaying. Future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noises.
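
    Because the model is linear in the per-layer flow indices, the extraction step amounts to a least-squares solve across multiple source-detector separations. The sketch below shows only that step; the sensitivity matrix, which the authors build from Monte Carlo photon migration through each layer, is replaced here by placeholder numbers.

```python
# Hedged sketch: joint least-squares recovery of per-layer BFIs (alpha*Db).
import numpy as np

def extract_bfi(A, y):
    """A: (n_measurements, n_layers) sensitivity matrix; y: measured decay terms."""
    bfi, *_ = np.linalg.lstsq(A, y, rcond=None)
    return bfi

# Synthetic example: two layers, three source-detector separations.
A = np.array([[0.8, 0.1], [0.5, 0.4], [0.2, 0.7]])   # placeholder sensitivities
true_bfi = np.array([1e-8, 5e-9])
print(extract_bfi(A, A @ true_bfi))                   # recovers the per-layer BFIs
```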

  16. A Novel Algorithm for Feature Level Fusion Using SVM Classifier for Multibiometrics-Based Person Identification

    Directory of Open Access Journals (Sweden)

    Ujwalla Gawande

    2013-01-01

    Full Text Available Recent times have witnessed many advancements in the fields of biometrics and multimodal biometrics. This is typically observed in the areas of security, privacy, and forensics. Even for the best unimodal biometric systems, it is often not possible to achieve a higher recognition rate. Multimodal biometric systems overcome various limitations of unimodal biometric systems, such as nonuniversality, lower false acceptance, and higher genuine acceptance rates. More reliable recognition performance is achievable as multiple pieces of evidence of the same identity are available. The work presented in this paper focuses on a multimodal biometric system using fingerprint and iris. Distinct textural features of the iris and fingerprint are extracted using the Haar wavelet-based technique. A novel feature-level fusion algorithm is developed to combine these unimodal features using the Mahalanobis distance technique. A support-vector-machine-based learning algorithm is used to train the system using the extracted features. The performance of the proposed algorithms is validated and compared with other algorithms using the CASIA iris database and a real fingerprint database. From the simulation results, it is evident that our algorithm has a higher recognition rate and a much lower false rejection rate than existing approaches.
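
    One plausible reading of "the Mahalanobis distance technique" for feature-level fusion is to whiten each modality's features with its inverse covariance before concatenation, then train an SVM on the fused vectors. The sketch below follows that reading; it is an assumption, not the authors' exact method, and the Haar-wavelet features are taken as given.

```python
# Hedged sketch: Mahalanobis-style whitening + feature-level fusion + SVM.
import numpy as np
from sklearn.svm import SVC

def whiten(X, eps=1e-6):
    """Map features so Euclidean distance equals Mahalanobis distance."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False) + eps * np.eye(X.shape[1])
    L = np.linalg.cholesky(np.linalg.inv(cov))
    return (X - mu) @ L

def fuse(iris_feats, finger_feats):
    return np.hstack([whiten(iris_feats), whiten(finger_feats)])

# X = fuse(iris_train, finger_train); SVC(kernel="rbf").fit(X, y_train)
```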

  17. Collaborative filtering recommendation model based on fuzzy clustering algorithm

    Science.gov (United States)

    Yang, Ye; Zhang, Yunhua

    2018-05-01

    As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: data sparsity and poor recommendation performance in big data environments. In traditional clustering analysis, each object is strictly divided into one of several classes and the boundaries of this division are very clear. However, for most objects in real life, there is no strict definition of the form and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through the hybrid optimization of a latent (implicit) semantic algorithm and a fuzzy clustering algorithm, cooperating with the collaborative filtering algorithm. The fuzzy clustering algorithm is introduced to fuzzily cluster the project attribute information, which lets each project belong to different project categories with different membership degrees, increases the density of the data, effectively reduces data sparsity, and alleviates the low accuracy that results from inaccurate similarity calculation. Finally, this paper carries out an empirical analysis on the MovieLens dataset and compares the model with the traditional user-based collaborative filtering algorithm. The proposed algorithm greatly improves recommendation accuracy.
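
    The fuzzy clustering step can be sketched with the standard fuzzy c-means membership formula, which gives each item a graded membership in every cluster, and is what lets a project belong to several categories at once:

```python
# Hedged sketch: fuzzy c-means memberships over item attribute vectors.
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """u[i, k]: membership of item i in cluster k (rows sum to 1)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```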

  18. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks that perform automatic cough detection, cough classification and whooping sound detection. Each of these blocks extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The outputs from these blocks are collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreak control. PMID:27583523
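
    A hedged sketch of one block of such a system: summary statistics of frame-level audio features classified with logistic regression. The exact feature set is not given in this record, so MFCCs via librosa are an assumption, and the file paths and labels are placeholders.

```python
# Hedged sketch: audio features + logistic regression for cough classification.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def features(wav_path):
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1)])  # summary statistics

# X = np.vstack([features(p) for p in wav_paths]); y = labels  # cough / not cough
# LogisticRegression(max_iter=1000).fit(X, y)
```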

  19. Edge Detection Algorithm Based on Fuzzy Logic Theory for a Local Vision System of Robocup Humanoid League

    Directory of Open Access Journals (Sweden)

    Andrea K. Perez-Hernandez

    2013-06-01

    Full Text Available In this paper we present the development of an algorithm to perform edge extraction based on fuzzy logic theory. This method allows recognizing landmarks on the game field for the Humanoid League of RoboCup. The proposed algorithm describes the creation of a fuzzy inference system that evaluates the existing relationship between image pixels, finding variations in the grey levels of related neighboring pixels. Subsequently, the OTSU method is implemented to binarize the image obtained from the fuzzy process and thus generate an image containing only the extracted edges, and the algorithm is validated with Humanoid League images. Analysis of the results shows a good performance of the algorithm, considering that this proposal only takes 35% more processing time than traditional methods, whereas the extracted edges are 52% less susceptible to noise.

  20. A Flocking Based algorithm for Document Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel Flocking-based approach for document clustering analysis. Our Flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks and fish schools. Unlike other partition clustering algorithms such as K-means, the Flocking-based algorithm does not require initial partition seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy retrieval and visualization of the clustering result. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the Flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithms for real document clustering.
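
    A toy version of the boid update conveys the idea: classic alignment, cohesion, and separation rules, with cohesion weighted by document similarity so that alike documents draw together on the grid. The rule weights and neighbourhood radius are invented for illustration, not taken from the paper.

```python
# Hedged sketch: one flocking step for document-boids on a 2-D grid.
import numpy as np

def flock_step(pos, vel, sim, radius=5.0, dt=1.0, gain=0.05):
    """pos, vel: (n, 2) arrays; sim: (n, n) document-similarity matrix in [0, 1]."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nb = (d < radius) & (d > 0)
        if not nb.any():
            continue
        align = vel[nb].mean(axis=0) - vel[i]                         # match heading
        cohere = (pos[nb].mean(axis=0) - pos[i]) * sim[i, nb].mean()  # attract similar
        separate = -(pos[nb] - pos[i]).sum(axis=0) * (1 - sim[i, nb]).mean()  # repel unlike
        new_vel[i] += gain * (align + cohere + separate)
    return pos + dt * new_vel, new_vel
```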

  1. AdaBoost-based algorithm for network intrusion detection.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
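
    The classifier setup described here maps onto a few lines with a standard toolkit: AdaBoost over depth-1 trees (decision stumps). This is only a hedged sketch: scikit-learn stumps need numeric input, so categorical features would have to be encoded first, whereas the paper's stumps handle categorical features natively with their own decision rules.

```python
# Hedged sketch: AdaBoost with decision stumps as weak classifiers.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

stump = DecisionTreeClassifier(max_depth=1)          # a decision stump
clf = AdaBoostClassifier(estimator=stump, n_estimators=200)
# clf.fit(X_train, y_train); clf.predict(X_test)     # X: numeric feature matrix
```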

  2. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches in cell identification. Several applications can be mentioned in this field: cellular phenotype identification, disease detection and treatment, identifying virus entry in cells, and virus classification; these applications can help to complement the opinion of medical experts. Although many surveys have been presented on medical image analysis, they focus mainly on tissues and organs, and none of the surveys about cell images consider an analysis following the stages of the typical image processing pipeline: segmentation, feature extraction, and classification. The goal of this study is to provide a comprehensive and critical analysis of the trends in each stage of cell image processing. In this paper, we present a literature survey about cell identification using different image processing techniques.

  3. A range-based predictive localization algorithm for WSID networks

    Science.gov (United States)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.

  4. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2012-01-01

    Full Text Available We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it just uses the idea of DNA subsequence operations (such as the elongation operation, truncation operation, deletion operation, etc.) combined with the logistic chaotic map to scramble the locations and the values of the pixel points in the image. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a wide secret key space and strong sensitivity to the secret key, and is able to resist exhaustive and statistical attacks.
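
    The chaotic ingredient can be sketched on its own: a logistic map keyed by (x0, mu) drives both a position permutation and a value mask for a uint8 image. The DNA subsequence operations of the full scheme are omitted; this illustrates only the logistic-map scrambling idea.

```python
# Hedged sketch: logistic-map-driven scrambling of a uint8 image.
import numpy as np

def logistic_sequence(n, x0, mu):
    xs, x = np.empty(n), x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.61, mu=3.99):
    """img: uint8 array; (x0, mu) act as the secret key."""
    seq = logistic_sequence(img.size, x0, mu)
    perm = np.argsort(seq)                      # chaotic position permutation
    mask = (seq * 256).astype(np.uint8)         # chaotic value mask
    return (img.ravel()[perm] ^ mask).reshape(img.shape)

def decrypt(cipher, x0=0.61, mu=3.99):
    seq = logistic_sequence(cipher.size, x0, mu)
    perm = np.argsort(seq)
    mask = (seq * 256).astype(np.uint8)
    flat = cipher.ravel() ^ mask
    out = np.empty_like(flat)
    out[perm] = flat                            # undo the permutation
    return out.reshape(cipher.shape)
```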

  5. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    Science.gov (United States)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator for its progression. Their automatic detection plays a key role for both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical morphology black top-hat transform; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.

  6. Validation of Agent Based Distillation Movement Algorithms

    National Research Council Canada - National Science Library

    Gill, Andrew

    2003-01-01

    Agent based distillations (ABD) are low-resolution abstract models, which can be used to explore questions associated with land combat operations in a short period of time. Movement of agents within the EINSTein and MANA ABDs...

  7. A Modularity Degree Based Heuristic Community Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Dongming Chen

    2014-01-01

    Full Text Available A community in a complex network can be seen as a subgroup of nodes that are densely connected. Discovery of community structures is a basic problem of research and can be used in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse node partitions in order to optimize a given quality function. These optimization-function-based methods share the same drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm gives competitive accuracy compared with previous modularity optimization methods, even though it has less computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally improves the resolution limit in community detection.
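
    The quantity such heuristics optimise is standard Newman modularity, which is short enough to state as code (networkx used for brevity; this is the textbook formula, not the paper's modularity-degree variant):

```python
# Hedged sketch: Newman modularity Q of a partition into communities.
import networkx as nx

def modularity(G, communities):
    """communities: iterable of sets of nodes covering G."""
    m = G.number_of_edges()
    q = 0.0
    for c in communities:
        lc = G.subgraph(c).number_of_edges()          # edges inside the community
        dc = sum(d for _, d in G.degree(c))           # total degree of the community
        q += lc / m - (dc / (2.0 * m)) ** 2
    return q

# G = nx.karate_club_graph()
# print(modularity(G, [set(range(17)), set(range(17, 34))]))
```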

  8. Research on personalized recommendation algorithm based on spark

    Science.gov (United States)

    Li, Zeng; Liu, Yu

    2018-04-01

    With the increasing amount of data in recent years, traditional recommendation algorithms have been unable to meet people's needs. How to better recommend products to interested users has therefore become one of the opportunities and challenges of the era of big data. At present, each platform enterprise has its own recommendation algorithm, but how to push information efficiently and accurately is still an urgent problem for personalized recommendation systems. In this paper, a hybrid algorithm based on user collaborative filtering and content-based recommendation is proposed on Spark to improve the efficiency and accuracy of recommendation by weighted processing. The experiments show that recommendation under this scheme is more efficient and accurate.

  9. Otsu Based Optimal Multilevel Image Thresholding Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    N. Sri Madhava Raja

    2014-01-01

    Full Text Available A histogram-based multilevel thresholding approach is proposed using a Brownian distribution (BD) guided firefly algorithm (FA). A bounded search technique is also presented to improve the optimization accuracy with fewer search iterations. Otsu's between-class variance function is maximized to obtain optimal threshold levels for gray-scale images. The performance of the proposed algorithm is demonstrated on twelve benchmark images and compared with existing FA variants such as the Lévy flight (LF) guided FA and the random operator guided FA. The performance assessment between the proposed and existing firefly algorithms is carried out using prevailing parameters such as the objective function, standard deviation, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and CPU search time. The results show that the BD guided FA provides better objective function, PSNR, and SSIM values, whereas the LF-based FA provides faster convergence with relatively lower CPU time.

  10. Visual Perception Based Rate Control Algorithm for HEVC

    Science.gov (United States)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and the limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception: for key focus regions, the bit allocation of LCUs is not ideal and the subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the bit allocation weight at the LCU level is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, with no loss of bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.

  11. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

    Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T̃, presented in natural language, and its quantitative expression; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima, by introducing a new selection mechanism, replacing the onlooker bees' search formula, and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.

  12. A new edge detection algorithm based on Canny idea

    Science.gov (United States)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and filtering based on Euclidean distance are adopted to preprocess the image; secondly, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local gradient amplitudes to obtain threshold values, the average of all computed thresholds is taken, half of this average is used as the high threshold, and half of the high threshold is used as the low threshold. Experimental results show that this new method can effectively suppress noise disturbance, preserve edge information, and improve edge detection accuracy.
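
    The threshold rule in this abstract reads almost directly as code. A sketch using OpenCV's Otsu implementation on blocks of the gradient magnitude follows; the block size is an assumption.

```python
# Hedged sketch: Otsu-derived high/low thresholds for Canny hysteresis.
import cv2
import numpy as np

def canny_thresholds(grad_mag, block=64):
    g = cv2.normalize(grad_mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    ts = []
    for r in range(0, g.shape[0], block):
        for c in range(0, g.shape[1], block):
            t, _ = cv2.threshold(g[r:r + block, c:c + block], 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            ts.append(t)
    high = 0.5 * float(np.mean(ts))   # half the average of the local Otsu thresholds
    return high, 0.5 * high           # low threshold is half the high threshold
```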

  13. Algorithmic Algebraic Combinatorics and Gröbner Bases

    CERN Document Server

    Klin, Mikhail; Jurisic, Aleksandar

    2009-01-01

    This collection of tutorial and research papers introduces readers to diverse areas of modern pure and applied algebraic combinatorics and finite geometries, with a special emphasis on algorithmic aspects and the use of the theory of Gröbner bases. Topics covered include coherent configurations, association schemes, permutation groups, Latin squares, the Jacobian conjecture, mathematical chemistry, extremal combinatorics, coding theory, designs, etc. Special attention is paid to the description of innovative practical algorithms and their implementation in software packages such as GAP and MAGMA.

  14. A Learning Algorithm based on High School Teaching Wisdom

    OpenAIRE

    Philip, Ninan Sajeeth

    2010-01-01

    A learning algorithm based on primary school teaching and learning is presented. The methodology is to continuously evaluate a student and to give them training on the examples for which they repeatedly fail, until they can correctly answer all types of questions. This incremental learning procedure produces better learning curves by demanding that the student optimally dedicate their learning time to the failed examples. When used in machine learning, the algorithm is found to train a machine...

  15. Quantum signature scheme based on a quantum search algorithm

    International Nuclear Information System (INIS)

    Yoon, Chun Seok; Kang, Min Sung; Lim, Jong In; Yang, Hyung Jin

    2015-01-01

    We present a quantum signature scheme based on a two-qubit quantum search algorithm. For secure transmission of signatures, we use a quantum search algorithm that has not been used in previous quantum signature schemes. A two-step protocol secures the quantum channel, and a trusted center guarantees non-repudiation that is similar to other quantum signature schemes. We discuss the security of our protocol. (paper)

  16. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, when implementing the SURF algorithm with a KINECT sensor for static sign language recognition, there can be some mismatched pairs in the palm area. This paper proposes a feature point selection algorithm: by filtering the SURF feature points step by step based on the number of feature points within an adaptive radius r and the distance between two points, it not only greatly improves the recognition rate, but also ensures robustness to environmental factors such as skin color, illumination intensity, complex background, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.

  17. Adaptive local backlight dimming algorithm based on local histogram and image characteristics

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Burini, Nino; Korhonen, Jari

    2013-01-01

    Liquid Crystal Displays (LCDs) with Light Emitting Diode (LED) backlight are a very popular display technology, used for instance in television sets, monitors and mobile phones. This paper presents a new backlight dimming algorithm that exploits the characteristics of the target image, such as the local histograms and the average pixel intensity of each backlight segment, to reduce the power consumption of the backlight and enhance image quality. The local histogram of the pixels within each backlight segment is calculated and, based on this average, an adaptive quantile value is extracted. The proposed algorithm provides a better trade-off between power consumption and image quality preservation than the other algorithms representing the state of the art among feature-based backlight algorithms.

  18. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineering ... numerical calculus are as important. We will ...

  19. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Shi-hua Zhan

    2016-01-01

    Full Text Available The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
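
    The list-based cooling schedule is compact enough to sketch on a random TSP instance with 2-opt moves: the current temperature is the maximum of a list, and each accepted uphill move feeds a smaller temperature back into the list. The initial temperature range and list length below are placeholders, not the paper's settings.

```python
# Hedged sketch: list-based simulated annealing (LBSA) for the TSP.
import math
import random

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[i - 1]]) for i in range(len(tour)))

def lbsa(pts, list_len=120, iters=20000):
    n = len(pts)
    tour = list(range(n))
    cur = tour_len(tour, pts)
    temps = sorted(random.uniform(1.0, 100.0) for _ in range(list_len))
    for _ in range(iters):
        t_max = temps[-1]                                      # current temperature
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        delta = tour_len(cand, pts) - cur
        r = max(random.random(), 1e-12)
        if delta <= 0:
            tour, cur = cand, cur + delta
        elif r < math.exp(-delta / t_max):
            temps[-1] = -delta / math.log(r)   # feed back a smaller temperature
            temps.sort()
            tour, cur = cand, cur + delta
    return tour, cur

# pts = [(random.random(), random.random()) for _ in range(50)]
# best_tour, best_len = lbsa(pts)
```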

  20. A Nth-order linear algorithm for extracting diffuse correlation spectroscopy blood flow indices in heterogeneous tissues

    International Nuclear Information System (INIS)

    Shang, Yu; Yu, Guoqiang

    2014-01-01

    Conventional semi-infinite analytical solutions of correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating a Nth-order linear model of autocorrelation function with the Monte Carlo simulation of photon migrations in homogenous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm for extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously through utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogenous solution in a computer model of adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogenous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and displaying. Future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noises.

  1. A Nth-order linear algorithm for extracting diffuse correlation spectroscopy blood flow indices in heterogeneous tissues

    Energy Technology Data Exchange (ETDEWEB)

    Shang, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu [Department of Biomedical Engineering, University of Kentucky, Lexington, Kentucky 40506 (United States)

    2014-09-29

    Conventional semi-infinite analytical solutions of correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating a Nth-order linear model of autocorrelation function with the Monte Carlo simulation of photon migrations in homogenous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm for extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously through utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogenous solution in a computer model of adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogenous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and displaying. Future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noises.

  2. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation algorithms for computing such diagnoses.

  3. Electron dose map inversion based on several algorithms

    International Nuclear Information System (INIS)

    Li Gui; Zheng Huaqing; Wu Yican; Fds Team

    2010-01-01

    The reconstruction of the electron dose map in radiation therapy was investigated by constructing inversion models of the electron dose map with different algorithms. An inversion model of the electron dose map based on nonlinear programming was used, and this model was applied to the penetration dose map to invert the total-space dose map. The inversion model was realized with several inversion algorithms. The test results with seven samples show that, except for the NMinimize algorithm, which worked for just one sample and with great error, all the inversion algorithms could be applied to our inversion model rapidly and accurately. The Levenberg-Marquardt algorithm, having the greatest accuracy and speed, could be considered the first choice for electron dose map inversion. Further tests show that more error would be created when the data close to the electron range was used (tail error). The tail error might be caused by the approximation of mean energy spectra, and this should be considered to improve the method. The time-saving and accurate algorithms could be used to achieve real-time dose map inversion. By selecting the best inversion algorithm, the clinical need for real-time dose verification can be satisfied. (authors)

  4. Wavelet-Based Feature Extraction in Fault Diagnosis for Biquad High-Pass Filter Circuit

    OpenAIRE

    Yuehai Wang; Yongzheng Yan; Qinyong Wang

    2016-01-01

    Fault diagnosis for analog circuits has become a prominent factor in improving the reliability of integrated circuits due to its irreplaceability in modern integrated circuits. In fact, fault diagnosis based on intelligent algorithms has become a popular research topic, as efficient feature extraction and selection are a critical and intricate task in analog fault diagnosis. Further, it is extremely important to propose some general guidelines for optimal feature extraction and selection. In ...

  5. Designers' Cognitive Thinking Based on Evolutionary Algorithms

    OpenAIRE

    Zhang Shutao; Jianning Su; Chibing Hu; Peng Wang

    2013-01-01

    The research on cognitive thinking is important for constructing efficient intelligent design systems, but it is difficult to describe a model of cognitive thinking with a reasonable mathematical theory. Based on the analysis of design strategy and innovative thinking, we investigated a design cognitive thinking model that includes the external guiding thinking of "width priority - depth priority" and the internal dominant thinking of "divergent thinking - convergent thinking", and built a reaso...

  6. Object tracking system using a VSW algorithm based on color and point features

    Directory of Open Access Journals (Sweden)

    Lim Hye-Youn

    2011-01-01

    Full Text Available An object tracking system using a variable search window (VSW) algorithm based on color and feature points is proposed. The mean-shift algorithm is an object tracking technique that works according to color probability distributions. An advantage of this color-based algorithm is that it is robust to specific-color objects; however, a disadvantage is that it is sensitive to non-specific-color objects due to illumination and noise. Therefore, to offset this weakness, we present a VSW algorithm based on robust feature points for the accurate tracking of moving objects. The proposed method extracts the feature points of a detected object, which is the region of interest (ROI), and generates a VSW using the positions of the extracted feature points. The goal of this paper is an efficient and effective object tracking system that achieves accurate tracking of moving objects. Experiments show that the implemented object tracking system performs more precisely than existing techniques.

  7. Emergency ultrasound-based algorithms for diagnosing blunt abdominal trauma.

    Science.gov (United States)

    Stengel, Dirk; Rademacher, Grit; Ekkernkamp, Axel; Güthoff, Claas; Mutze, Sven

    2015-09-14

    , laparoscopy, laparotomy), and cost-effectiveness. Two authors (DS and CG) independently selected trials for inclusion, assessed methodological quality, and extracted data. Methodological quality was assessed using the Cochrane Collaboration risk of bias tool. Where possible, data were pooled and relative risks (RRs), risk differences (RDs), and weighted mean differences, each with 95% confidence intervals (CIs), were calculated by fixed-effect or random-effects models as appropriate. We identified four studies meeting our inclusion criteria. Overall, trials were of poor to moderate methodological quality. Few trial authors responded to our written inquiries seeking to resolve controversial issues and to obtain individual patient data. Strong heterogeneity amongst the trials prompted discussion between the review authors as to whether the data should or should not be pooled; we decided in favour of a quantitative synthesis to provide a rough impression about the effect sizes achievable with US-based triage algorithms. We pooled mortality data from three trials involving 1254 patients; the RR in favour of the FAST arm was 1.00 (95% CI 0.50 to 2.00). FAST-based pathways reduced the number of CT scans (random-effects model RD -0.52, 95% CI -0.83 to -0.21), but the meaning of this result was unclear. The experimental evidence justifying FAST-based clinical pathways in diagnosing patients with suspected abdominal or multiple blunt trauma remains poor. Because of strong heterogeneity between the trial results, the quantitative information provided by this review may only be used in an exploratory fashion. It is unlikely that FAST will ever be investigated by means of a confirmatory, large-scale RCT in the future. Thus, this Cochrane Review may be regarded as a review which provides the best available evidence for clinical practice guidelines and management recommendations. It can only be concluded from the few head-to-head studies that negative US scans are likely to reduce the

  8. Plagiarism Detection Based on SCAM Algorithm

    DEFF Research Database (Denmark)

    Anzelmi, Daniele; Carlone, Domenico; Rizzello, Fabio

    2011-01-01

    Plagiarism is a complex problem and is considered one of the biggest in the publishing of scientific, engineering and other types of documents. Plagiarism has also increased with the widespread use of the Internet, as large amounts of digital data are available. Plagiarism is not just direct copying but also... paraphrasing, rewording, adapting parts, missing references or wrong citations. This makes the problem more difficult to handle adequately. Plagiarism detection techniques are applied by making a distinction between natural and programming languages. Our proposed detection process is based on natural language... document. Our plagiarism detection system, like many Information Retrieval systems, is evaluated with metrics of precision and recall....

  9. Research on machine learning framework based on random forest algorithm

    Science.gov (United States)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

    With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these have been widely used. However, existing machine learning frameworks are limited by the shortcomings of machine learning algorithms themselves, such as the choice of parameters, the interference of noise, and a high threshold for use. This paper introduces the research background of machine learning frameworks and, drawing on the random forest algorithm commonly used in machine learning classification, puts forward the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.

  10. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. And a whole quantum image is divided into a series of sub-images. These sub-images are stored into a complete binary tree array constructed previously and then randomly performed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of random phase gate, rotation angle, binary sequence and orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  11. Algorithms and procedures in the model based control of accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.

    1987-10-01

    The overall design of a Model Based Control system is presented. The system consists of PLUG-IN MODULES, governed by a SUPERVISORY PROGRAM and communicating via SHARED DATA FILES. Models can be added or replaced without affecting the overall system. There can be more than one module (algorithm) to perform the same task. The user can choose the most appropriate algorithm or can compare the results obtained using different algorithms. Calculations, algorithms, file reads and writes, etc., which are used in more than one module, are kept in a subroutine library. This feature simplifies the maintenance of the system. A partial list of modules is presented, specifying the tasks they perform. 19 refs., 1 fig

  12. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, which form a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  13. Digital Image Encryption Algorithm Design Based on Genetic Hyperchaos

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2016-01-01

    Full Text Available In view of the fact that present chaotic image encryption algorithms based on scrambling and diffusion are vulnerable to chosen-plaintext (chosen-ciphertext) attacks in the process of pixel position scrambling, we put forward an image encryption algorithm based on a genetic hyperchaotic system. By introducing plaintext feedback into the scrambling process, the algorithm makes the scrambling effect depend on the initial chaotic sequence and on the plaintext itself, realizing an organic fusion of the image features and the encryption algorithm. By introducing a plaintext feedback mechanism into the diffusion process, it improves plaintext sensitivity and resistance to chosen-plaintext and chosen-ciphertext attacks. At the same time, it makes full use of the characteristics of the image information. Finally, experimental simulation and theoretical analysis show that our proposed algorithm can not only effectively resist chosen-plaintext (chosen-ciphertext) attacks, statistical attacks, and information entropy attacks, but can also effectively improve the efficiency of image encryption, making it a relatively secure and effective approach to image communication.

  14. The application of neural network integrated with genetic algorithm and simulated annealing for the simulation of rare earths separation processes by the solvent extraction technique using EHEHPA agent

    International Nuclear Information System (INIS)

    Tran Ngoc Ha; Pham Thi Hong Ha

    2003-01-01

    In the present work, a neural network has been used for mathematically modeling the equilibrium data of a mixture of two rare earth elements, namely Nd and Pr, with the PC88A agent. A thermo-genetic algorithm, based on the ideas of the genetic algorithm and the simulated annealing algorithm, has been used in the training procedure of the neural networks, giving better results in comparison with the traditional modeling approach. The obtained neural network modeling the experimental data is further used in a computer program to simulate the solvent extraction process of the two elements Nd and Pr. Based on this computer program, various optional schemes for the separation of Nd and Pr have been investigated and proposed. (author)

  15. Flowbca : A flow-based cluster algorithm in Stata

    NARCIS (Netherlands)

    Meekes, J.; Hassink, W.H.J.

    In this article, we introduce the Stata implementation of a flow-based cluster algorithm written in Mata. The main purpose of the flowbca command is to identify clusters based on relational data of flows. We illustrate the command by providing multiple applications, from the research fields of

  16. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...

  17. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Rosenberger C

    2008-01-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme to segment images by a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth when it is available in order to set the desired level of precision of the final result. A genetic algorithm is then used in order to determine the best combination of information extracted by the selected criterion. We then show that this approach can be applied to either gray-level or multicomponent images, in a supervised context or in an unsupervised one. Last, we show the efficiency of the proposed method through some experimental results on several gray-level and multicomponent images.

  18. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    H. Laurent

    2008-05-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme for segmenting images by a genetic algorithm. The developed method uses an evaluation criterion that quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth, when it is available, in order to set the desired level of precision of the final result. A genetic algorithm is then used to determine the best combination of the information extracted by the selected criterion. We show that this approach can be applied to gray-level or multicomponent images, in either a supervised or an unsupervised context. Lastly, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.

  19. Classification of EEG-P300 Signals Extracted from Brain Activities in BCI Systems Using ν-SVM and BLDA Algorithms

    Directory of Open Access Journals (Sweden)

    Ali MOMENNEZHAD

    2014-06-01

    Full Text Available In this paper, a linear predictive coding (LPC) model is used to improve classification accuracy, speed of convergence to maximum accuracy, and maximum bitrate in a brain-computer interface (BCI) system based on extracting EEG-P300 signals. First, the EEG signal is filtered in order to eliminate high-frequency noise. Then, the parameters of the filtered EEG signal are extracted using the LPC model. Finally, the samples are reconstructed from the LPC coefficients, and two classifiers, (a) Bayesian linear discriminant analysis (BLDA) and (b) the ν-support vector machine (ν-SVM), are applied for classification. The performance of the proposed algorithm is compared with Fisher linear discriminant analysis (FLDA). Results show that our algorithm is much better at improving classification accuracy and the speed of convergence to maximum accuracy. For example, with an 8-electrode configuration for subject S1, the proposed BLDA with LPC model and ν-SVM with LPC model improve total classification accuracy by 9.4% and 1.7%, respectively. Moreover, for subject 7 the LPC+BLDA and LPC+ν-SVM algorithms converged to maximum accuracy after the 11th block, whereas the FLDA algorithm did not converge to maximum accuracy with the same configuration. The method can therefore be used as a promising tool in designing BCI systems.
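
    The LPC feature extraction step can be sketched with the standard autocorrelation method; the recursion below is generic Levinson-Durbin, not code from the paper, and the model order is an assumed free parameter.

        import numpy as np

        def lpc(x, order):
            # Autocorrelation method with the Levinson-Durbin recursion.
            x = np.asarray(x, dtype=float)
            n = len(x)
            r = np.correlate(x, x, mode='full')[n - 1:n + order]
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
                k = -acc / err                 # reflection coefficient
                prev = a.copy()
                for j in range(1, i):
                    a[j] = prev[j] + k * prev[i - j]
                a[i] = k
                err *= 1.0 - k * k             # residual prediction error
            return a, err

    In a pipeline like the record's, a[1:] would serve as the feature vector handed to the BLDA or SVM classifier.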

  20. A Turn-Projected State-Based Conflict Resolution Algorithm

    Science.gov (United States)

    Butler, Ricky W.; Lewis, Timothy A.

    2013-01-01

    State-based conflict detection and resolution (CD&R) algorithms detect conflicts and resolve them on the basis of current state information, without the use of additional intent information from aircraft flight plans. Therefore, the prediction of the trajectory of aircraft is based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, the past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
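
    A minimal sketch of the turn-projection idea: the turn rate is estimated from the heading change between two past velocity vectors, and the state is projected along a circular arc instead of a straight line. Function and parameter names are ours, not the paper's.

        import numpy as np

        def project_with_turn(pos, v_now, v_prev, dt_hist, t):
            # Estimate turn rate from the heading change between the two
            # most recent velocity vectors (dt_hist seconds apart).
            h_now = np.arctan2(v_now[1], v_now[0])
            h_prev = np.arctan2(v_prev[1], v_prev[0])
            dh = np.arctan2(np.sin(h_now - h_prev), np.cos(h_now - h_prev))
            omega = dh / dt_hist               # turn rate (rad/s)
            speed = np.hypot(v_now[0], v_now[1])
            if abs(omega) < 1e-6:              # effectively straight flight
                return np.asarray(pos) + np.asarray(v_now) * t
            # Constant-speed, constant-turn-rate circular-arc projection.
            h = h_now + omega * t
            r = speed / omega
            return np.array([pos[0] + r * (np.sin(h) - np.sin(h_now)),
                             pos[1] - r * (np.cos(h) - np.cos(h_now))])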

  1. Improved algorithm based on equivalent enthalpy drop method of pressurized water reactor nuclear steam turbine

    International Nuclear Information System (INIS)

    Wang Hu; Qi Guangcai; Li Shaohua; Li Changjian

    2011-01-01

    Because it is difficult to accurately determine the extraction steam enthalpy and the exhaust enthalpy, results calculated with the conventional equivalent enthalpy drop method for PWR nuclear steam turbines are not accurate. This paper presents an improved algorithm for the equivalent enthalpy drop method of the PWR nuclear steam turbine to solve this problem, taking the secondary-circuit thermal system calculation of a 1000 MW PWR as an example. The results show that, compared with the design value, the error in the actual thermal efficiency of the steam turbine cycle obtained by the improved algorithm is within the allowable range. Since the improved method is based on the isentropic expansion process, the extraction steam enthalpy and the exhaust enthalpy can be determined accurately, which is more reasonable and accurate than the traditional equivalent enthalpy drop method. (authors)

  2. Modeling and prediction of extraction profile for microwave-assisted extraction based on absorbed microwave energy.

    Science.gov (United States)

    Chan, Chung-Hung; Yusoff, Rozita; Ngoh, Gek-Cheng

    2013-09-01

    A modeling technique based on absorbed microwave energy was proposed to model the microwave-assisted extraction (MAE) of antioxidant compounds from cocoa (Theobroma cacao L.) leaves. By adapting a suitable extraction model on the basis of the microwave energy absorbed during extraction, the model can be developed to predict the extraction profile of MAE at various microwave irradiation powers (100-600 W) and solvent loadings (100-300 ml). Verification against experimental data confirmed that the prediction accurately captures the extraction profile of MAE (R-squared values greater than 0.87). In addition, the predicted yields from the model showed good agreement with the experimental results, with less than 10% deviation. Furthermore, suitable extraction times that ensure high extraction yield at various MAE conditions can be estimated from the absorbed microwave energy. The estimation is feasible, as more than 85% of the active compounds can be extracted in comparison with the conventional extraction technique.

  3. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC, extracting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT), and the watermark images are then embedded into the coefficients of the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.

  4. Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.

    Science.gov (United States)

    Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B

    2011-02-01

    This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and sorting error of 10%, compared to PCA, which combined with k-means had a sorting accuracy of 58% and sorting error of 10%.
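
    Assuming aligned spike snippets in a NumPy array, the pipeline of the record can be approximated in a few lines with scikit-learn, whose SpectralEmbedding implements Laplacian eigenmaps; the embedding dimension and neighbor count below are illustrative assumptions, not the paper's settings.

        import numpy as np
        from sklearn.manifold import SpectralEmbedding   # Laplacian eigenmaps
        from sklearn.cluster import KMeans

        def sort_spikes(waveforms, n_units, n_neighbors=10):
            # waveforms: (n_spikes, n_samples) array of aligned snippets.
            emb = SpectralEmbedding(
                n_components=2,
                n_neighbors=n_neighbors).fit_transform(waveforms)
            # Cluster the low-dimensional embedding into putative units.
            labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(emb)
            return labels, emb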

  5. 21 CFR 172.585 - Sugar beet extract flavor base.

    Science.gov (United States)

    2010-04-01

    21 CFR 172.585 (Food and Drugs, revised as of 2010-04-01), Food Additives Permitted for Direct Addition to Food for Human Consumption, Flavoring Agents and Related Substances: Sugar beet extract flavor base. Sugar beet extract flavor base may be safely used in food in accordance with the provisions of this section. (a...

  6. Web page sorting algorithm based on query keyword distance relation

    Science.gov (United States)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to optimize web page ranking, a query-keyword clustering idea is proposed based on the distance relationships between the search keywords within a web page, which is converted into a degree of aggregation of the search keywords in the page. Building on the PageRank algorithm, a clustering-degree factor for the query keywords is added so that it can participate in the quantitative calculation. This paper thus proposes an improved PageRank algorithm based on the distance relation between search keywords. The experimental results show the feasibility and effectiveness of the method.
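
    A hedged sketch of how such a combination might look: a plain power-iteration PageRank plus one possible definition of the keyword-aggregation factor (keywords occurring close together score higher). The combination rule and the function keyword_aggregation are our illustrative assumptions, not the paper's formulas.

        import numpy as np

        def pagerank(adj, d=0.85, tol=1e-8):
            # Plain power-iteration PageRank (dangling pages simplified away).
            n = adj.shape[0]
            out = adj.sum(axis=1, keepdims=True)
            m = np.zeros_like(adj, dtype=float)
            np.divide(adj, out, out=m, where=out > 0)  # row-stochastic links
            rank = np.full(n, 1.0 / n)
            while True:
                new = (1 - d) / n + d * (m.T @ rank)
                if np.abs(new - rank).sum() < tol:
                    return new
                rank = new

        def keyword_aggregation(positions):
            # Aggregation degree of the query keywords in one page:
            # the smaller the gaps between hits, the higher the factor.
            gaps = np.diff(np.sort(np.asarray(positions)))
            return 1.0 / (1.0 + gaps.mean()) if gaps.size else 1.0

        # Hypothetical combined score for page i:
        #     score[i] = pagerank(adj)[i] * keyword_aggregation(hits[i])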

  7. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    Directory of Open Access Journals (Sweden)

    Zhiming Song

    2015-01-01

    Full Text Available As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m-1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution whose centroid is an (m-1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in this paper.

  8. Hybrid Genetic Algorithm Optimization for Case Based Reasoning Systems

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2008-01-01

    The success of a CBR system largely depends on effective retrieval of useful prior cases for the problem. Nearest neighbor and induction are the main CBR retrieval algorithms, and each can be more suitable in different situations. Integrating the two retrieval algorithms can capture the advantages of both, but the induction retrieval algorithm still has limitations when dealing with noisy data, a large number of irrelevant features, and different types of data. This research utilizes a hybrid approach using genetic algorithms (GAs) for case-based induction retrieval within the integrated nearest-neighbor/induction algorithm, in an attempt to overcome these limitations and increase the overall classification accuracy. GAs can be used to optimize the search space of all possible subsets of the feature set; they can deal with irrelevant and noisy features while still achieving a significant improvement in retrieval accuracy. The proposed CBR-GA therefore introduces an effective general-purpose retrieval algorithm that can improve the performance of CBR systems, and it can be applied in many application areas. CBR-GA has proven its success when applied to different real-life problems.

  9. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction and also reduce the time of fingerprint preprocessing, which is of great significance for improving the performance of the whole system. Based on an analysis of the commonly used methods of fingerprint segmentation, an existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image, and it is further improved by using the global optimization of an improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.

  10. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was directly obtained with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
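
    The first, PCA-based step of the record reduces to a few lines of NumPy; the second step (the learned nonlinear rectification) is omitted here because it requires trained mapping data. Patch size and sample count are assumptions of this sketch.

        import numpy as np

        def estimate_noise_sigma(img, patch=7, n_patches=5000, seed=None):
            # Preliminary PCA-based noise level: the smallest eigenvalue of
            # the covariance of random patches approximates the noise
            # variance when the patches overdetermine the image content.
            rng = np.random.default_rng(seed)
            h, w = img.shape
            ys = rng.integers(0, h - patch, n_patches)
            xs = rng.integers(0, w - patch, n_patches)
            p = np.stack([img[y:y + patch, x:x + patch].ravel()
                          for y, x in zip(ys, xs)])
            cov = np.cov(p, rowvar=False)
            # eigvalsh returns eigenvalues in ascending order.
            return np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))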

  11. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available Realizing the recognition of generic balls by soccer robots is significant for the final goal of RoboCup. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining modified Haar-like features and the AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the offline training phase, numerous sub-images are acquired from various panoramic images, including generic balls; the modified Haar-like features are then extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the online recognition phase, according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in the window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.

  12. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    Science.gov (United States)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or malicious tampering. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm applies wavelet packet decomposition to each speech component and extracts the LPCC, LSP and ISP features of each component to constitute a speech feature tensor. Speech authentication is done by generating hash values through quantification of the feature matrix using the mid-value. Experimental results show that, compared with similar algorithms, the proposed algorithm is robust to content-preserving operations and is able to resist attacks by common background noise. The algorithm is also highly efficient computationally, and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.

  13. Algorithms

    Indian Academy of Sciences (India)

    [Fragmentary record: excerpts from an article on the algorithm design technique called 'divide-and-conquer', with references to turtle graphics (September 1996), a list 'PO' whose name is a pointer to its first element, and a program for computing matrices X and Y and placing the result in C.]

  14. Algorithms

    Indian Academy of Sciences (India)

    [Fragmentary record: excerpts from an article on algorithms, noting that it is implicitly understood that we know how to generate the next natural number, and that explicit comparisons are made in line (1) where the maximum and minimum are found; it can be shown that T(n) = 3n/2 - 2 is the solution to the stated recurrence.]

  15. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). In contrast, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but direct generation of the rules is not required; instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules, which is then used by a decision-making procedure. The reported experimental results show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers, and include experimental results showing that, by combining rule-based classifiers built on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality.

  16. Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm

    OpenAIRE

    Yumin, Dong; Li, Zhao

    2014-01-01

    Quantum-behaved particle swarm optimization is a new intelligent optimization algorithm that has few parameters and is easily implemented. In view of the premature convergence problem of the existing quantum-behaved particle swarm optimization algorithm, a quantum particle swarm optimization algorithm based on artificial fish swarm is put forward. The new algorithm is based on the quantum-behaved particle swarm algorithm, introducing the swarming and following activities, meanwhile using the a...

  17. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on surface electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an autoregressive (AR) model and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector, which was seen to give more distinct classification. The work was done with the SEMG dataset obtained from the NINAPRO database, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.

  18. Fusion of Pixel-based and Object-based Features for Road Centerline Extraction from High-resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    CAO Yungang

    2016-10-01

    Full Text Available A novel approach for road centerline extraction from high-spatial-resolution satellite imagery is proposed by fusing pixel-based and object-based features. Firstly, texture and shape features are extracted at the pixel level, and spectral features are extracted at the object level based on multi-scale image segmentation maps. Then, the extracted multiple features are utilized in the fusion framework of Dempster-Shafer evidence theory to roughly identify the road network regions. Finally, an automatic noise removal algorithm combined with a tensor voting strategy is presented to accurately extract the road centerline. Experimental results using high-resolution satellite imagery with different scenes and spatial resolutions showed that the proposed approach compares favorably with traditional methods, particularly in eliminating salt noise and the conglutination phenomenon.

  19. Gravitational wave extraction based on Cauchy-characteristic extraction and characteristic evolution

    Energy Technology Data Exchange (ETDEWEB)

    Babiuc, Maria [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Szilagyi, Bela [Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, Am Muehlenberg 1, D-14476 Golm (Germany); Hawke, Ian [Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, Am Muehlenberg 1, D-14476 Golm (Germany); School of Mathematics, University of Southampton, Southampton SO17 1BJ (United Kingdom); Zlochower, Yosef [Department of Physics and Astronomy, and Center for Gravitational Wave Astronomy, University of Texas at Brownsville, Brownsville, TX 78520 (United States)

    2005-12-07

    We implement a code to find the gravitational news at future null infinity by using data from a Cauchy code as boundary data for a characteristic code. This technique of Cauchy-characteristic extraction (CCE) allows for the unambiguous extraction of gravitational waves from numerical simulations. We first test the technique on non-radiative spacetimes: Minkowski spacetime, perturbations of Minkowski spacetime and static black hole spacetimes in various gauges. We show the convergence and limitations of the algorithm and illustrate its success in cases where other wave extraction methods fail. We further apply our techniques to a standard radiative test case for wave extraction, a linearized Teukolsky wave, presenting our results in comparison to the Zerilli technique, and we argue for the advantages of our method of extraction.

  20. WATERSHED ALGORITHM BASED SEGMENTATION FOR HANDWRITTEN TEXT IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    P. Mathivanan

    2014-02-01

    Full Text Available In this paper we develop a writer identification system involving four processing steps: preprocessing, segmentation, feature extraction and writer identification using a neural network. In the preprocessing phase, the handwritten text is subjected to slant removal in preparation for segmentation and feature extraction. After this step, the text image goes through noise removal and gray-level conversion. The preprocessed image is further segmented using the morphological watershed algorithm, where the text lines are segmented into single words and then into single letters. Features are extracted from the segmented image by the Daubechies 5/3 integer wavelet transform to reduce training complexity [1, 6]; this process is lossless and reversible [10], [14]. The extracted features are given as input to our neural network for the writer identification process, and a target image is selected for each training process in the 2-layer neural network. The outputs trained on the several different targets together help in text identification. It is a multilingual text analysis which provides simple and efficient text segmentation.

  1. Secondary Coordinated Control of Islanded Microgrids Based on Consensus Algorithms

    DEFF Research Database (Denmark)

    Wu, Dan; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2014-01-01

    This paper proposes a decentralized secondary control for islanded microgrids based on consensus algorithms. In a microgrid, the secondary control is implemented in order to eliminate the frequency changes caused by the primary control when coordinating renewable energy sources and energy storage systems. Nevertheless, the conventional decentralized secondary control, although it does not need to be implemented in a microgrid central controller (MGCC), has the limitation that all decentralized controllers must be mutually synchronized. In clear contrast, the proposed secondary control requires only a more simplified communication protocol and a sparse communication network. Moreover, the proposed approach, based on dynamic consensus algorithms, is able to achieve the coordinated secondary performance even when all units are initially out of synchronism. The control algorithm implemented

  2. A fast image encryption algorithm based on chaotic map

    Science.gov (United States)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine-ICMIC modulation map (2D-SIMM) is proposed based on a closed-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of its phase diagram, Lyapunov exponent spectrum and complexity. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed in which the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.
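
    A compact illustration of combining chaotic shifts with keystream diffusion in one pass. A logistic map stands in for the 2D-SIMM map, whose exact equations the record does not give, and uint8 grayscale images are assumed; this is a sketch of the idea, not the paper's cipher.

        import numpy as np

        def chaos(seed, n, mu=3.99):
            # Logistic map stand-in for the 2D-SIMM chaotic sequence.
            x, s = seed, np.empty(n)
            for i in range(n):
                x = mu * x * (1 - x)
                s[i] = x
            return s

        def encrypt(img, seed):
            h, w = img.shape
            s = chaos(seed, h + w + img.size)
            out = img.copy()
            for r, k in enumerate((s[:h] * w).astype(int)):
                out[r] = np.roll(out[r], k)           # chaotic row shift
            for c, k in enumerate((s[h:h + w] * h).astype(int)):
                out[:, c] = np.roll(out[:, c], k)     # chaotic column shift
            ks = (s[h + w:] * 256).astype(np.uint8).reshape(h, w)
            return out ^ ks                           # diffusion by XOR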

  3. Effective ANT based Routing Algorithm for Data Replication in MANETs

    Directory of Open Access Journals (Sweden)

    N.J. Nithya Nandhini

    2013-12-01

    Full Text Available In a mobile ad hoc network, the nodes often move and the topology keeps changing. Data packets can be forwarded from one node to another on demand. To increase data accessibility, data are replicated at nodes and made sharable to other nodes. It is commonly assumed that all mobile hosts cooperate to share their memory and to forward data packets, but in reality not all nodes share their resources for the benefit of others; such nodes may act selfishly in sharing memory and forwarding data packets. This paper focuses on the selfishness of mobile nodes in replica allocation and on a routing protocol based on the ant colony algorithm to improve efficiency. The ant colony algorithm is used to reduce the overhead in the mobile network, so that data access is more efficient than with other routing protocols. The results show the efficiency of the ant-based routing algorithm in replica allocation.

  4. Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks

    Directory of Open Access Journals (Sweden)

    Ruiyun Yu

    2014-01-01

    Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so the communications are mainly carried out by the “store-carry-forward” strategy. Selfish behaviors of rejecting packet relay requests will severely worsen the network performance. Incentive is an efficient way to reduce selfish behaviors and hence improves the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI algorithm, which exploits the dynamic repeated gaming to motivate nodes relaying packets for other nodes. The NDI algorithm presents a mechanism of tolerating selfish behaviors of nodes. Reward and punishment methods are also designed based on the node dependence degree. Simulation results show that the NDI algorithm is effective in increasing the delivery ratio and decreasing average latency when there are a lot of selfish nodes in the opportunistic networks.

  5. Algorithm for Wireless Sensor Networks Based on Grid Management

    Directory of Open Access Journals (Sweden)

    Geng Zhang

    2014-05-01

    Full Text Available This paper analyzes the key issues for wireless sensor network trust models and describes a method to build a trust model for wireless sensor networks, covering the definition of trust for such networks, its computation, and the credibility of trust model applications. For the problem that nodes are vulnerable to attack, this paper proposes a grid-based trust algorithm obtained by deep exploration of the trust model within the framework of credit management. The algorithm screens nodes for reliability and rotates the schedule so that the area is covered in a parallel manner by trusted nodes. The analysis shows that the size of the trust threshold has a great influence on the safety and quality of coverage throughout the coverage area. Simulation tests verify the validity and correctness of the algorithm.

  6. Image segmentation algorithm based on T-junctions cues

    Science.gov (United States)

    Qian, Yanyu; Cao, Fengyun; Wang, Lu; Yang, Xuejie

    2016-03-01

    To ameliorate the over-segmentation and over-merging of single image segmentation algorithms, a novel approach combining a graph-based algorithm with T-junction cues is proposed in this paper. First, L0 gradient minimization is applied to smooth the target image and eliminate artifacts caused by noise and texture detail. Then, an initial over-segmentation of the smoothed image is obtained using the graph-based algorithm. Finally, the final result is produced via a region fusion strategy driven by T-junction cues. Experimental results on a variety of images verify the new approach's efficiency in eliminating artifacts caused by noise; segmentation accuracy and time complexity are significantly improved.

  7. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but it is highly vulnerable to noise because it does not consider spatial information in image segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD that takes spatial features into account is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness, which leads to practicable and effective applications in medical image segmentation.

  8. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM. We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted from the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
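
    The SVD core of such a line detector can be sketched as a total-least-squares fit of a line to a set of range points; the hopping and reverse-hop bookkeeping over the LRS point stream described in the record is omitted here.

        import numpy as np

        def fit_line_svd(points):
            # Total-least-squares line fit: the dominant right singular
            # vector of the mean-centred points gives the line direction.
            # points: (n, 2) array of (x, y) laser range readings.
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            direction = vt[0]     # unit vector along the fitted line
            normal = vt[1]        # unit normal; residuals project onto it
            return centroid, direction, normal

    The residual spread along the normal can serve as a split criterion: when it exceeds a threshold, the point run is divided and each part is refitted, which is how SVD-based line extractors typically segment LRS scans.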

  9. An iris recognition algorithm based on DCT and GLCM

    Science.gov (United States)

    Feng, G.; Wu, Ye-qing

    2008-04-01

    With the expansion of the range of human activity, verifying a person's identity is becoming more and more important, and many different techniques for identity verification have been proposed for practical use. Conventional methods such as passwords and identification cards are not always reliable, so a wide variety of biometrics has been developed for this challenge. Among biological characteristics, the iris pattern gains increasing attention for its stability, reliability, uniqueness, noninvasiveness and difficulty of counterfeiting. These distinct merits of the iris lead to its high reliability for personal identification, and iris identification has become a hot research topic in the past several years. This paper presents an efficient algorithm for iris recognition using the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT). To obtain more representative iris features, features from both the spatial domain and the DCT transformation domain are extracted: GLCM and DCT are both applied to the iris image to form the feature sequence, and their combination makes the iris features more distinct. From GLCM and DCT an eigenvector of the iris is extracted, reflecting features of the spatial and frequency transformations. Experimental results show that the algorithm is effective and feasible for iris recognition.

  10. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    Full Text Available The optimal performance of the ant colony algorithm (ACA mainly depends on suitable parameters; therefore, parameter selection for ACA is important. We propose a parameter selection method for ACA based on the bacterial foraging algorithm (BFA, considering the effects of coupling between different parameters. Firstly, parameters for ACA are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence for each parameter set. Secondly, the operation speed for optimizing the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimal solution. In order to validate the effectiveness of this method, the results were compared with those using a genetic algorithm (GA and a particle swarm optimization (PSO, and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for ACA based on BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.

  11. An adaptive clustering algorithm for image matching based on corner feature

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms often cannot balance real-time performance and accuracy. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the vectors formed by matching point pairs, and adaptive clustering is performed on the matching pairs. Harris corner detection is carried out first: the feature points of the reference image and the perceived image are extracted, and the feature points of the two images are initially matched using the normalized cross-correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the random sample consensus (RANSAC) algorithm is applied to the matching points that remain after clustering. The experimental results show that the proposed algorithm can effectively eliminate most wrong matching points while the correct matching points are retained, improving the accuracy of RANSAC matching and reducing the computational load of the whole matching process.
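
    The initial matching step relies on normalized cross-correlation between patches around detected corners; a minimal version, assuming equal-sized patches extracted around each corner pair, looks like this.

        import numpy as np

        def ncc(a, b):
            # Normalized cross-correlation of two equal-sized patches:
            # +1 for identical structure, near 0 for unrelated content.
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom else 0.0

    Corner pairs whose NCC exceeds a chosen threshold become candidate matches, which the clustering stage then filters before RANSAC.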

  12. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    Science.gov (United States)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm is proposed. Firstly, the original secret image is encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA is adopted to encrypt M2 into M2'. Finally, a host image is enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' is multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks are extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution is facilitated; moreover, compared with image hiding based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.

  13. Cryptanalysis of a chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Cokal, Cahit; Solak, Ercan

    2009-01-01

    A chaos-based image encryption algorithm was proposed in [Z.-H. Guan, F. Huang, W. Guan, Phys. Lett. A 346 (2005) 153]. In this Letter, we analyze the security weaknesses of the proposal. By applying chosen-plaintext and known-plaintext attacks, we show that all the secret parameters can be revealed

  14. Security Analysis of A Chaos-based Image Encryption Algorithm

    OpenAIRE

    Lian, Shiguo; Sun, Jinsheng; Wang, Zhiquan

    2006-01-01

    The security of the Fridrich image encryption algorithm against brute-force, statistical, known-plaintext and chosen-plaintext attacks is analyzed by investigating the properties of the involved chaotic maps and diffusion functions. Based on the given analyses, some means are proposed to strengthen the overall performance of the focused cryptosystem.

  15. Security analysis of a chaos-based image encryption algorithm

    Science.gov (United States)

    Lian, Shiguo; Sun, Jinsheng; Wang, Zhiquan

    2005-06-01

    The security of Fridrich's algorithm against brute-force, statistical, known-plaintext and chosen-plaintext attacks is analyzed by investigating the properties of the involved chaotic maps and diffusion functions. Based on the given analyses, some means are proposed to strengthen the overall performance of the focused cryptosystem.

  16. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel

    2015-01-01

    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
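
    For reference, a minimal Needleman-Wunsch global alignment between two page-visit sequences. The scoring values are illustrative, and how the study maps alignment scores to a disorientation measure is not reproduced here.

        def needleman_wunsch(seq1, seq2, match=1, mismatch=-1, gap=-1):
            # Global alignment score between a learner's navigation path
            # and an ideal path; lower similarity suggests more
            # disorientation.
            n, m = len(seq1), len(seq2)
            f = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                f[i][0] = i * gap
            for j in range(1, m + 1):
                f[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if seq1[i - 1] == seq2[j - 1] else mismatch
                    f[i][j] = max(f[i - 1][j - 1] + s,
                                  f[i - 1][j] + gap,
                                  f[i][j - 1] + gap)
            return f[n][m]

    Aligning the visited-page sequence against the expected path yields the maximum score when they are identical; the gap and mismatch penalties are tunable to the navigation model at hand.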

  17. Resource allocation in smart homes based on Banker's algorithm

    NARCIS (Netherlands)

    Virag, A.; Bogdan, S.

    2011-01-01

    This paper proposes a method for improved energy management in smart homes by means of resource allocation. For this purpose, a Banker's algorithm based strategy has been developed. It is used to control the system and decide which of the given processes should be provided with resources at the
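
    For reference, the safety check at the heart of the Banker's algorithm, in a generic form; mapping household processes and energy resources onto the matrices below is an assumption of this sketch, not the paper's exact formulation.

        import numpy as np

        def is_safe(available, allocation, need):
            # Banker's safety check: True if some ordering lets every
            # process finish with current resources and release its holdings.
            work = np.array(available, dtype=int)
            alloc = np.asarray(allocation, dtype=int)
            need = np.asarray(need, dtype=int)
            finished = [False] * len(alloc)
            progress = True
            while progress:
                progress = False
                for i in range(len(alloc)):
                    if not finished[i] and np.all(need[i] <= work):
                        work += alloc[i]     # process i runs, then releases
                        finished[i] = True
                        progress = True
            return all(finished)

    In such a scheme, a resource request would be granted only if the state after a hypothetical grant still passes this safety check.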

  18. Model-based remote sensing algorithms for particulate organic carbon

    Indian Academy of Sciences (India)

    PCA algorithms based on the first three, four, and five modes accounted for 90, 95, and 98% of the total variance and yielded significant correlations with POC, with R² = 0.89, 0.92, and 0.93. These full-waveband approaches provided robust estimates of POC in various water types. Three different analyses (root mean square ...

  19. Algorithms

    Indian Academy of Sciences (India)

    [Fragmentary record: excerpts from an article on algorithms, which will become clear in the next article discussing a simple Logo-like programming language. The excerpt concerns the Towers of Hanoi: rod B may be used as an auxiliary store, and the problem is to find an algorithm that performs the task; the top n0 disks are moved from A to B using C as the auxiliary rod, then move_disk(A, C) moves the (n0 + 1)th disk from A to C directly, ...]

  20. An efficient similarity measure for content based image retrieval using memetic algorithm

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2017-06-01

    Full Text Available Content-based image retrieval (CBIR) systems work by retrieving images related to a query image (QI) from huge databases. Available CBIR systems extract limited feature sets, which confines their retrieval efficacy. In this work, an extensive set of robust and important features was extracted from the image database and stored in a feature repository. This feature set is composed of a color signature together with shape and color-texture features; features are extracted from the given QI in the same fashion. A novel similarity evaluation using a meta-heuristic algorithm called the memetic algorithm (a genetic algorithm with great deluge) is then carried out between the features of the QI and the features of the database images. Our proposed CBIR system is assessed by querying a number of images from the test dataset, and the efficiency of the system is evaluated by calculating precision-recall values for the results. The results were superior to other state-of-the-art CBIR systems with regard to precision.

  1. New second-order difference algorithm for image segmentation based on cellular neural networks (CNNs)

    Science.gov (United States)

    Meng, Shukai; Mo, Yu L.

    2001-09-01

    Image segmentation is one of the most important operations in many image analysis problems; it is the process that subdivides an image into its constituents and extracts the parts of interest. In this paper, we present a new second-order difference gray-scale image segmentation algorithm based on cellular neural networks. A 3x3 CNN cloning template is applied, which gives smooth processing and handles well the conflict between noise resistance and the detection of edges of complex shapes. We use a second-order difference operator to calculate the coefficients of the control template, which are not constant but depend on the input gray-scale values. The approach is similar in construction to the Contour Extraction CNN, but the algorithm differs. Experimental results show that the second-order difference CNN has a good capability for edge detection; it is better than the Contour Extraction CNN in detail detection and more effective than the Laplacian of Gaussian (LoG) algorithm.

  2. A “Tuned” Mask Learnt Approach Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Youchuan Wan

    2016-01-01

    Full Text Available Texture image classification is an important topic in many applications of machine vision and image analysis. Using texture features extracted from the original texture image with a "Tuned" mask is one of the simplest and most effective methods. However, hill-climbing-based training methods cannot acquire a satisfying mask in a single pass; on the other hand, some commonly used evolutionary algorithms, like the genetic algorithm (GA) and particle swarm optimization (PSO), easily fall into local optima. A novel approach for texture image classification, exemplified by the recognition of residential areas, is detailed in the paper. In the proposed approach, finding the "Tuned" mask is viewed as a constrained optimization problem, and the optimal "Tuned" mask is acquired by maximizing the texture energy via a newly proposed gravitational search algorithm (GSA); the optimal mask is achieved through the convergence of GSA. The proposed approach has been tested on public texture images and on remote sensing images, respectively, and the results are compared with those of GA, PSO, honey-bee mating optimization (HBMO), and the artificial immune algorithm (AIA). Moreover, features extracted by Gabor wavelets are also utilized for a further comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than the other methods involved in the paper in terms of fitness value and classification accuracy.

  3. Paroxysmal atrial fibrillation prediction based on HRV analysis and non-dominated sorting genetic algorithm III.

    Science.gov (United States)

    Boon, K H; Khalil-Hani, M; Malarvili, M B

    2018-01-01

    This paper presents a method able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing a baseline PAF prediction system that consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. These features are then used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most previous works. This accuracy rate is achieved even with the HRV signal length reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, the sensitivity rate, which is considered more important than the other performance metrics in this paper, can be improved at the trade-off of lower specificity.
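
    The time-domain part of the HRV feature extraction stage can be illustrated with a few standard measures (SDNN, RMSSD, pNN50); the exact feature set and parameters are chosen by the paper's optimizer, so this is only a generic sketch.

        import numpy as np

        def hrv_time_features(rr_ms):
            # Standard time-domain HRV features from RR intervals in ms.
            rr_ms = np.asarray(rr_ms, dtype=float)
            diffs = np.diff(rr_ms)
            return {
                'mean_rr': float(np.mean(rr_ms)),
                'sdnn': float(np.std(rr_ms, ddof=1)),
                'rmssd': float(np.sqrt(np.mean(diffs ** 2))),
                'pnn50': float(np.mean(np.abs(diffs) > 50) * 100),
            }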

  4. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Science.gov (United States)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reduced code size, improved implementation efficiency, and helping new learners to understand the AES encryption algorithm and GF(2^8) multiplication, which are necessary to correctly implement AES [1]. The method can be applied on processors with word lengths of 32 bits or above, on FPGAs and elsewhere, and can correspondingly be implemented in VHDL, Verilog, VB and other languages.
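
    The GF(2^8) multiplication the abstract refers to can be written with the standard xtime trick; AES-style lookup tables are typically precomputed from a routine of this kind. This is a generic sketch, not the paper's five-table construction.

        def xtime(a):
            # Multiply by x (i.e., by 2) in GF(2^8) with the AES reduction
            # polynomial x^8 + x^4 + x^3 + x + 1 (0x11b, folded as 0x1b).
            a <<= 1
            return (a ^ 0x1b) & 0xff if a & 0x100 else a

        def gf_mul(a, b):
            # Shift-and-add multiplication in GF(2^8).
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a = xtime(a)
                b >>= 1
            return result

    A lookup-table variant would tabulate, say, gf_mul(2, s) and gf_mul(3, s) over all 256 S-box outputs so that each MixColumns step becomes pure table indexing and XORs.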

  5. Core Business Selection Based on Ant Colony Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lan

    2014-01-01

    Full Text Available The core business is the most important business for an enterprise with diversified operations. In this paper, we first introduce the definition and characteristics of the core business and then describe the ant colony clustering algorithm. In order to test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current state of the company's development, its core business can be identified by the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.

  6. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity-planning interpolation algorithm based on NURBS curves. Firstly, a second-order Taylor expansion is applied to the parameter in the NURBS curve representation. The velocity-planning interpolation algorithm is then coupled with the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.

  7. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, provided that an inclusive component database is developed. (author)

  8. Analog Group Delay Equalizers Design Based on Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    M. Laipert

    2006-04-01

    Full Text Available This paper deals with a design method for the analog all-pass filter used to equalize the group delay frequency response of an analog filter. The method is based on an evolutionary algorithm, the Differential Evolution algorithm in particular. We are able to design equalizers that obtain an equal-ripple group delay frequency response in the pass-band of the low-pass filter. The procedure works automatically, without an input estimate. The method is presented by solving practical examples.

  9. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO. The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
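
    The feature extraction described, splitting a spike at its peak and using the two areas as the feature pair, reduces to a few lines in software. This is only a sketch: the hardware splits spikes into segments and merges local results concurrently, which is omitted here.

        import numpy as np

        def area_features(spike):
            # Split a detected spike snippet at its (absolute) peak and use
            # the areas of the two portions as the feature pair.
            spike = np.asarray(spike, dtype=float)
            peak = int(np.argmax(np.abs(spike)))
            left = np.abs(spike[:peak + 1]).sum()
            right = np.abs(spike[peak:]).sum()
            return left, right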

  10. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  11. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  12. The Heeger & Bergen Pyramid Based Texture Synthesis Algorithm

    Directory of Open Access Journals (Sweden)

    Thibaud Briand

    2014-11-01

    Full Text Available This contribution deals with the Heeger-Bergen pyramid-based texture analysis/synthesis algorithm. It gives a detailed explanation of the original algorithm, tested on many characteristic examples. Our analysis reproduces the original results and also brings a minor improvement concerning non-periodic textures. Inspired by visual perception theories, Heeger and Bergen proposed to characterize a texture by the first-order statistics of both its color and its responses to multiscale and multi-orientation filters, namely the steerable pyramid. The Heeger-Bergen algorithm consists of the following procedure: starting from a white noise image, histogram matchings are applied to the noise alternately in the image domain and the steerable pyramid domain, so that the corresponding histograms match those of the input texture.
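
    The core histogram-matching operation can be sketched as follows; a single channel and a sorting-based matching are shown, whereas the full algorithm alternates this operation over the image domain and the steerable pyramid domain.

        # A minimal sketch of sorting-based histogram matching: the noise values are
        # forced to take on the empirical distribution of the example texture.
        import numpy as np

        def histogram_match(source, reference):
            """Remap 'source' so its sorted values equal those of 'reference'."""
            shape = source.shape
            src, ref = source.ravel(), np.sort(reference.ravel())
            order = np.argsort(src)           # rank of each source pixel
            matched = np.empty_like(src)
            matched[order] = ref              # i-th smallest source -> i-th smallest reference
            return matched.reshape(shape)

        texture = np.random.rand(64, 64)      # stand-in for the input texture
        noise = np.random.randn(64, 64)       # white-noise initialization
        synth = histogram_match(noise, texture)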

  13. Algorithm of Defect Segmentation for AFP Based on Prepregs

    Directory of Open Access Journals (Sweden)

    CAI Zhiqiang

    2017-04-01

    Full Text Available In order to ensure the performance of parts formed by automated fiber placement, and exploiting the homogeneity of the prepreg surface image along the fiber direction, a defect segmentation algorithm combining gray compensation and subtraction, based on image processing technology, was proposed. The gray compensation matrix of the image was used to compensate the gray image, and the maximum error point of the image matrix was eliminated according to the characteristic that the gray error obeys a normal distribution. A standard image was established, using the allowed deviation coefficient K as the criterion for subtraction segmentation. Experiments show that the algorithm is effective and fast in segmenting the two typical kinds of laying defects, bubbles and foreign objects, and provides a good theoretical basis for online monitoring of laying defects.
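
    A rough sketch of gray compensation followed by K-sigma subtraction segmentation is given below, assuming the fiber direction runs along image rows; the compensation scheme, the value of K, and the synthetic image are illustrative simplifications of the paper's method.

        # A minimal sketch of gray compensation plus subtraction segmentation.
        import numpy as np

        def segment_defects(img, K=3.0):
            img = img.astype(float)
            # Gray compensation: remove per-row illumination bias along the fibers.
            row_bias = img.mean(axis=1, keepdims=True) - img.mean()
            compensated = img - row_bias
            # Standard (defect-free reference) image: the expected gray level.
            standard = np.full_like(compensated, compensated.mean())
            residual = compensated - standard
            sigma = residual.std()
            # Pixels deviating more than K*sigma are labeled as defects.
            return np.abs(residual) > K * sigma

        prepreg = np.random.normal(128, 2, (100, 100))
        prepreg[40:45, 60:70] += 40            # simulated foreign-object defect
        mask = segment_defects(prepreg)
        print(mask.sum(), "defect pixels")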

  14. Design of synthetic biological logic circuits based on evolutionary algorithm.

    Science.gov (United States)

    Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei

    2013-08-01

    The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico computer-based modelling has been verified, demonstrating its advantages for this purpose.

  15. A homology sound-based algorithm for speech signal interference

    Science.gov (United States)

    Jiang, Yi-jiao; Chen, Hou-jin; Li, Ju-peng; Zhang, Zhan-song

    2015-12-01

    Aiming at secure analog speech communication, a homology sound-based algorithm for speech signal interference is proposed in this paper. We first split the speech signal into phonetic fragments by a short-term energy method and establish an interference noise cache library with the phonetic fragments. Then we implement the homology sound interference by mixing randomly selected interferential fragments with the original speech in real time. The computer simulation results indicate that the interference produced by this algorithm has the advantages of real-time operation, randomness, and high correlation with the original signal, compared with traditional noise interference methods such as white noise interference. With further study, the proposed algorithm may be readily used in secure speech communication.
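
    The splitting-and-mixing idea can be sketched as follows; the frame length, energy threshold, and mixing gain are illustrative assumptions.

        # A minimal sketch of homology interference: split speech into fragments by
        # short-term energy, cache them, then mix randomly chosen fragments back in.
        import numpy as np

        def split_by_energy(speech, frame=256, thresh_ratio=0.1):
            frames = speech[: len(speech) // frame * frame].reshape(-1, frame)
            energy = (frames ** 2).sum(axis=1)           # short-term energy per frame
            active = energy > thresh_ratio * energy.max()
            return [f for f, a in zip(frames, active) if a]   # phonetic fragments

        def homology_interference(speech, frame=256, gain=0.8, rng=None):
            rng = rng or np.random.default_rng()
            cache = split_by_energy(speech, frame)       # interference cache library
            out = speech.copy()
            for start in range(0, len(out) - frame, frame):
                frag = cache[rng.integers(len(cache))]   # randomly selected fragment
                out[start:start + frame] += gain * frag
            return out

        speech = np.random.randn(8000)                   # stand-in for a speech signal
        jammed = homology_interference(speech)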

  16. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  17. Community Clustering Algorithm in Complex Networks Based on Microcommunity Fusion

    Directory of Open Access Journals (Sweden)

    Jin Qi

    2015-01-01

    Full Text Available With further research in recent years on the physical meaning and digital features of community structure in complex networks, improving the effectiveness and efficiency of community mining algorithms has become an important subject in this area. This paper puts forward the concept of the microcommunity and obtains final community mining results by fusing different microcommunities. The paper starts with the basic definition of the network community and applies expansion to microcommunity clustering, which provides the prerequisites for microcommunity fusion. Analysis of test results on network datasets shows that the proposed algorithm is more efficient and achieves higher solution quality than other similar algorithms.

  18. Dynamic Sensor Management Algorithm Based on Improved Efficacy Function

    Directory of Open Access Journals (Sweden)

    TANG Shujuan

    2016-01-01

    Full Text Available A dynamic sensor management algorithm based on an improved efficacy function is proposed to solve the multi-target, multi-sensor management problem. The tracking task precision requirements (TPR), target priority and sensor use cost are considered in establishing the efficacy function as a weighted sum of the normalized values of the three factors. The dynamic sensor management algorithm is accomplished by controlling the divergence between the desired covariance matrix (DCM) and the filtering covariance matrix (FCM). The DCM is preassigned in terms of the TPR, and the FCM is obtained by a centralized sequential Kalman filtering algorithm. The simulation results show that the proposed method meets the requirements of the desired tracking precision and adjusts sensor selection according to target priority and sensor usage cost. This makes the sensor management scheme more reasonable and effective.
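
    A minimal sketch of such a weighted-sum efficacy function is shown below; the weights and the normalization are illustrative assumptions rather than the paper's values.

        # A minimal sketch of a weighted-sum efficacy function over sensor-target pairings.
        import numpy as np

        def efficacy(tpr, priority, cost, weights=(0.5, 0.3, 0.2)):
            """Each argument is an array over candidate pairings, scaled to [0, 1]."""
            w1, w2, w3 = weights
            # Higher precision satisfaction and target priority raise the score;
            # higher sensor-use cost lowers it.
            return w1 * tpr + w2 * priority - w3 * cost

        tpr = np.array([0.9, 0.6, 0.4])       # normalized precision satisfaction
        priority = np.array([1.0, 0.5, 0.2])  # normalized target priority
        cost = np.array([0.3, 0.1, 0.8])      # normalized sensor-use cost
        best = int(np.argmax(efficacy(tpr, priority, cost)))
        print(best)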

  19. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    Science.gov (United States)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily deteriorate communication performance. Traditional compressive-sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair with an impulse-like correlation property. Simulation results show that the proposed algorithm outperforms the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  20. LAI inversion algorithm based on directional reflectance kernels.

    Science.gov (United States)

    Tang, S; Chen, J M; Zhu, Q; Li, X; Chen, M; Sun, R; Zhou, Y; Deng, F; Xie, D

    2007-11-01

    Leaf area index (LAI) is an important ecological and environmental parameter. A new LAI algorithm is developed using the principles of ground LAI measurements based on canopy gap fraction. First, the relationship between LAI and gap fraction at various zenith angles is derived from the definition of LAI. Then, the directional gap fraction is acquired from a remote sensing bidirectional reflectance distribution function (BRDF) product, using a kernel-driven model and a large-scale directional gap fraction algorithm. The algorithm has been applied to estimate an LAI distribution in China in mid-July 2002. Ground data acquired from two field experiments in Changbai Mountain and Qilian Mountain were used to validate the algorithm. To resolve the scale discrepancy between high resolution ground observations and low resolution remote sensing data, two TM images with a resolution approaching the size of the ground plots were used to relate the coarse resolution LAI map to the ground measurements. First, an empirical relationship between the measured LAI and a vegetation index was established. Next, a high resolution LAI map was generated using this relationship. The LAI value of a low resolution pixel was then calculated as the area-weighted sum of the high resolution LAIs composing that pixel. The results of this comparison showed that the inversion algorithm has an accuracy of 82%. Factors that may influence the accuracy are also discussed in this paper.
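
    The gap-fraction inversion at the core of such methods can be illustrated with the standard Beer's-law canopy model P(theta) = exp(-G(theta) * LAI / cos(theta)); the projection coefficient G = 0.5 used below assumes a spherical leaf-angle distribution, an illustrative choice.

        # A minimal sketch of inverting LAI from the directional gap fraction.
        import numpy as np

        def lai_from_gap_fraction(gap_fraction, zenith_deg, G=0.5):
            theta = np.radians(zenith_deg)
            # Invert P = exp(-G * LAI / cos(theta)) for LAI.
            return -np.cos(theta) * np.log(gap_fraction) / G

        # Example: gap fraction 0.2 observed at a 57.5-degree view zenith angle.
        print(lai_from_gap_fraction(0.2, 57.5))   # about 1.73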

  1. Kriging-based algorithm for nuclear reactor neutronic design optimization

    International Nuclear Information System (INIS)

    Kempf, Stephanie; Forget, Benoit; Hu, Lin-Wen

    2012-01-01

    Highlights: ► A Kriging-based algorithm was selected to guide research reactor optimization. ► We examined impacts of parameter values upon the algorithm. ► The best parameter values were incorporated into a set of best practices. ► Algorithm with best practices used to optimize thermal flux of concept. ► Final design produces thermal flux 30% higher than other 5 MW reactors. - Abstract: Kriging, a geospatial interpolation technique, has been used in the present work to drive a search-and-optimization algorithm which produces the optimum geometric parameters for a 5 MW research reactor design. The technique has been demonstrated to produce an optimal neutronic solution after a relatively small number of core calculations. It has additionally been successful in producing a design which significantly improves thermal neutron fluxes by 30% over existing reactors of the same power rating. Best practices for use of this algorithm in reactor design were identified and indicated the importance of selecting proper correlation functions.
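
    A minimal sketch of a Kriging-guided search loop in this spirit is shown below, with a Gaussian-process surrogate standing in for the expensive core calculation; the toy objective, kernel choice, and upper-confidence-bound infill rule are illustrative assumptions, not the paper's setup.

        # A minimal sketch of Kriging-guided optimization over geometric parameters.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def thermal_flux(x):                       # hypothetical expensive simulation
            return -np.sum((x - 0.6) ** 2, axis=-1)

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, (8, 2))              # initial design points
        y = thermal_flux(X)

        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        for _ in range(20):                        # sequential infill iterations
            gp.fit(X, y)
            cand = rng.uniform(0, 1, (500, 2))
            mu, sigma = gp.predict(cand, return_std=True)
            x_new = cand[np.argmax(mu + 1.96 * sigma)]   # upper-confidence-bound pick
            X = np.vstack([X, x_new])
            y = np.append(y, thermal_flux(x_new))

        print(X[np.argmax(y)], y.max())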

  2. Evolving Stochastic Learning Algorithm based on Tsallis entropic index

    Science.gov (United States)

    Anastasiadis, A. D.; Magoulas, G. D.

    2006-03-01

    In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and the temperature T on the convergence speed and stability of the proposed method.

  3. Incident Light Frequency-Based Image Defogging Algorithm

    Directory of Open Access Journals (Sweden)

    Wenbo Zhang

    2017-01-01

    Full Text Available To solve the color distortion problem produced by the dark channel prior algorithm, an improved method for calculating the transmittance of each channel separately is proposed in this paper. Based on the Beer-Lambert law, the relationship between the frequency of the incident light and the transmittance was analyzed, and the ratios between the channels' transmittances were derived. Then, in order to increase efficiency, the input image was resized to a smaller size before acquiring the refined transmittance, which is afterwards resized back to the original image size. Finally, all the transmittances were obtained with the help of the proportions between the color channels, and then used to restore the defogged image. Experiments suggest that the improved algorithm produces a much more natural result image than the original algorithm, eliminating the problem of high color saturation. Moreover, the improved algorithm runs four to nine times faster than the original algorithm.

  4. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    Science.gov (United States)

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might no longer be valid. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  5. A Greedy Algorithm for Neighborhood Overlap-Based Community Detection

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2016-01-01

    Full Text Available The neighborhood overlap (NOVER) of an edge u-v is defined as the ratio of the number of nodes that are neighbors of both u and v to the number of nodes that are neighbors of at least one of u and v. In this paper, we hypothesize that an edge u-v with a lower NOVER score bridges two or more sets of vertices, with very few edges (other than u-v) connecting vertices from one set to another. Accordingly, we propose a greedy algorithm that iteratively removes the edges of a network in increasing order of their neighborhood overlap and calculates the modularity score of the resulting network component(s) after the removal of each edge. The network component(s) with the largest cumulative modularity score are identified as the different communities of the network. We evaluate the performance of the proposed NOVER-based community detection algorithm on nine real-world network graphs and compare it against the multi-level aggregation-based Louvain algorithm, as well as the original and time-efficient versions of the edge betweenness-based Girvan-Newman (GN) community detection algorithm.
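
    The greedy procedure can be sketched directly from this description. The sketch below computes NOVER once on the original graph and scores partitions by modularity, a simplification of the exact procedure; the karate-club example graph is an illustrative choice.

        # A minimal sketch of NOVER-based greedy community detection using networkx.
        import networkx as nx
        from networkx.algorithms.community import modularity

        def nover(G, u, v):
            nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
            union = nu | nv
            return len(nu & nv) / len(union) if union else 0.0

        def nover_communities(G):
            H = G.copy()
            order = sorted(G.edges(), key=lambda e: nover(G, *e))  # increasing NOVER
            best, best_q = [set(G)], float("-inf")
            for u, v in order:
                H.remove_edge(u, v)
                comms = [set(c) for c in nx.connected_components(H)]
                q = modularity(G, comms)   # score the partition on the original graph
                if q > best_q:
                    best, best_q = comms, q
            return best, best_q

        G = nx.karate_club_graph()
        comms, q = nover_communities(G)
        print(len(comms), round(q, 3))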

  6. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    Directory of Open Access Journals (Sweden)

    Jiayin Liu

    2017-06-01

    Full Text Available Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. By incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is increasingly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism makes the energy terms local and relative, and thus the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC achieves consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to handle only boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm against other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while its computational efficiency remains competitive.

  7. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    Science.gov (United States)

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in urban environments' monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.

  8. Lagrangian relaxation based algorithm for trigeneration planning with storages

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto; Luh, Peter

    2008-01-01

    of three energy commodities follows a joint characteristic. This paper presents a Lagrangian relaxation (LR) based algorithm for trigeneration planning with storages based on deflected subgradient optimization method. The trigeneration planning problem is modeled as a linear programming (LP) problem...... an effective method for the long-term planning problem based on the proper strategy to form Lagrangian subproblems and solve the Lagrangian dual (LD) problem based on deflected subgradient optimization method. We also develop a heuristic for restoring feasibility from the LD solution. Numerical results based...

  9. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. First, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each map is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and directly optimize the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Unlike a gradient-based iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves, and the third for the weights of the segments defined by the first two chromosomes. Convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case, and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study confirms previous findings that fewer segments are generally needed to generate plans comparable with those obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.

  10. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve/search digital information or data contained in images. Traditional text-based image retrieval systems are not sufficient, since they are time consuming and require manual image annotation; moreover, annotations differ from person to person. An alternative is the Content Based Image Retrieval (CBIR) system, which retrieves/searches for images using their content rather than text, keywords, etc. Much exploration has been carried out in the area of CBIR with various feature extraction techniques. Shape is a significant image feature as it reflects human perception. Moreover, shape is quite simple for the user to apply when defining an object in an image, compared to other features such as color or texture. Over and above, no descriptor applied alone will give fruitful results; by combining it with an improved classifier, one can use the positive features of both the descriptor and the classifier. Thus, an attempt is made to establish an algorithm for accurate shape feature extraction in CBIR. The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm, and (c) to compare the proposed algorithm with state-of-the-art techniques.

  11. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    Energy Technology Data Exchange (ETDEWEB)

    Bae, JangPyo [Interdisciplinary Program, Bioengineering Major, Graduate School, Seoul National University, Seoul 110-744, South Korea and Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Kim, Hee Chan [Department of Biomedical Engineering, College of Medicine and Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul 110-744 (Korea, Republic of)

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart

  12. RED WINE EXTRACT OBTAINED BY MEMBRANE-BASED SUPERCRITICAL FLUID EXTRACTION: PRELIMINARY CHARACTERIZATION OF CHEMICAL PROPERTIES.

    Directory of Open Access Journals (Sweden)

    W. Silva

    Full Text Available ABSTRACT This study aims to obtain an extract from red wine by using membrane-based supercritical fluid extraction. This technique involves the use of porous membranes as contactors during the dense gas extraction process from liquid matrices. In this work, a Cabernet Sauvignon wine extract was obtained by supercritical fluid extraction using pressurized carbon dioxide as solvent and a hollow fiber contactor as the extraction setup. The process was conducted continuously at pressures between 12 and 18 MPa and temperatures ranging from 30 to 50ºC. Meanwhile, flow rates of feed wine and supercritical CO2 varied from 0.1 to 0.5 mL min-1 and from 60 to 80 mL min-1 (NCPT), respectively. From the extraction assays, the highest extraction percentage obtained for the total amount of phenolic compounds was 14% in only one extraction step at 18 MPa and 35ºC. A summarized chemical characterization of the obtained extract is reported in this work; one of the main compounds in this extract could be a low molecular weight organic acid with an aromatic structure and methyl and carboxyl groups. Finally, this preliminary characterization shows a remarkable ORAC value equal to 101737 ± 5324 µmol Trolox equivalents (TE) per 100 g of extract.

  13. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    Science.gov (United States)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
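
    The fitness evaluation described here can be sketched as follows; the stand-in extractor, the synthetic image chips, and the fold count are illustrative assumptions rather than the paper's setup.

        # A minimal sketch of scoring a candidate feature extractor by the
        # cross-validated accuracy of an SVM trained on its output features.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def candidate_extractor(chip):
            """Stand-in for an evolved GP program: per-cell means over a 4x4 grid."""
            cells = [c for row in np.array_split(chip, 4)
                       for c in np.array_split(row, 4, axis=1)]
            return np.array([c.mean() for c in cells])

        def fitness(extractor, chips, labels, folds=5):
            X = np.array([extractor(c) for c in chips])
            return cross_val_score(SVC(), X, labels, cv=folds).mean()

        chips = np.random.rand(60, 32, 32)            # hypothetical FLIR image chips
        labels = np.random.randint(0, 2, 60)          # buried object present / absent
        print(fitness(candidate_extractor, chips, labels))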

  14. A similarity based agglomerative clustering algorithm in networks

    Science.gov (United States)

    Liu, Zhiyuan; Wang, Xiujuan; Ma, Yinghong

    2018-04-01

    The detection of clusters is beneficial for understanding the organization and function of networks. Clusters, or communities, are usually groups of nodes densely interconnected but sparsely linked to other clusters. To identify communities, an efficient and effective agglomerative community detection algorithm based on node similarity is proposed. The proposed method initially calculates the similarity between each pair of nodes and forms pre-partitions according to the principle that each node is in the same community as its most similar neighbor. After that, each partition is checked against a community criterion; pre-partitions that do not satisfy it are merged with the partitions to which they have the biggest attraction, until no further changes occur. To measure the attraction of a partition, we propose an attraction index based on the importance of the linked nodes in the network. Therefore, our proposed method can better exploit the nodes' properties and the network's structure. To test the performance of our algorithm, both synthetic and empirical networks of different scales are tested. Simulation results show that the proposed algorithm obtains superior clustering results compared with six other widely used community detection algorithms.
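
    The pre-partition step can be sketched as below, assuming Jaccard similarity of closed neighborhoods as the node-similarity measure; the paper's exact similarity and the subsequent attraction-based merging are not reproduced here.

        # A minimal sketch of the pre-partition step: each node joins the group
        # of its most similar neighbor.
        import networkx as nx

        def pre_partition(G):
            def sim(u, v):
                nu, nv = set(G[u]) | {u}, set(G[v]) | {v}
                return len(nu & nv) / len(nu | nv)
            links = nx.Graph()
            links.add_nodes_from(G)
            for u in G:
                if G[u]:
                    best = max(G[u], key=lambda v: sim(u, v))
                    links.add_edge(u, best)        # union with the most similar neighbor
            return [set(c) for c in nx.connected_components(links)]

        G = nx.karate_club_graph()
        parts = pre_partition(G)
        print(len(parts), [len(p) for p in parts])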

  15. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Nearby Search Indekos Based Android Using A Star (A*) Algorithm

    Science.gov (United States)

    Siregar, B.; Nababan, EB; Rumahorbo, JA; Andayani, U.; Fahmi, F.

    2018-03-01

    Indekos (a rented room) is a temporary residence rented for months or years. Members of the academic community who come from out of town need such a temporary residence during their education, teaching, or other duties. They often find it difficult to locate an Indekos because of a lack of information about them, and newcomers who do not know the areas around the campus want the shortest path from the Indekos to the campus. This problem can be solved by implementing the A Star (A*) algorithm, a shortest-path algorithm, in an application that finds the shortest path from the campus to the Indekos, with the faculties on campus as the starting points of the search. Using the faculties as starting points allows students to begin the search from their own faculty. The mobile-based application makes the search possible anytime and anywhere. Based on the experimental results, the A* algorithm finds the shortest path with 86.67% accuracy.
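
    A minimal grid-based A* sketch of the shortest-path core is shown below; the grid, unit step costs, and Manhattan heuristic are illustrative assumptions rather than the application's road network.

        # A minimal sketch of A* search on a 2-D grid.
        import heapq

        def a_star(grid, start, goal):
            """grid: 2-D list, 0 = free, 1 = blocked; start/goal: (row, col)."""
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
            open_set = [(h(start), 0, start, [start])]
            seen = set()
            while open_set:
                f, g, node, path = heapq.heappop(open_set)
                if node == goal:
                    return path
                if node in seen:
                    continue
                seen.add(node)
                r, c = node
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                        nxt = (nr, nc)
                        heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
            return None

        grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
        print(a_star(grid, (0, 0), (2, 0)))   # route around the blocked row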

  17. A Dynamic Neighborhood Learning-Based Gravitational Search Algorithm.

    Science.gov (United States)

    Zhang, Aizhu; Sun, Genyun; Ren, Jinchang; Li, Xiaodong; Wang, Zhenjie; Jia, Xiuping

    2018-01-01

    Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its simplicity in theory and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive, named Kbest, which stores those superior agents after fitness sorting in each iteration. Since the global property of Kbest remains unchanged in the whole evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, in this paper, we propose a dynamic neighborhood learning (DNL) strategy to replace the Kbest model and thereby present a DNL-based GSA (DNLGSA). The method incorporates local and global neighborhood topologies to enhance exploration and obtain an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of the evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and various difficulties. The results reveal that DNLGSA exhibits competitive performance when compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of calculations of gravitational force and thus alleviates the high computational cost of GSA.

  18. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    Full Text Available Assembly lines are widely used in manufacturing systems. The assembly line balancing problem is a crucial issue in the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data of the assembly line balancing problem are confirmed, and the precedence relations diagram is described. A double-objective optimization model based on takt time and smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Simulation experiments on examples demonstrate the feasibility and validity of the PSO-based assembly line balancing method.

  19. Oximeter-based autonomic state indicator algorithm for cardiovascular risk assessment.

    Science.gov (United States)

    Grote, Ludger; Sommermeyer, Dirk; Zou, Ding; Eder, Derek N; Hedner, Jan

    2011-02-01

    Cardiovascular (CV) risk assessment is important in clinical practice. An autonomic state indicator (ASI) algorithm based on pulse oximetry was developed and validated for CV risk assessment. One hundred forty-eight sleep clinic patients (98 men, mean age 50 ± 13 years) underwent an overnight study using a novel photoplethysmographic sensor. CV risk was classified according to the European Society of Hypertension/European Society of Cardiology (ESH/ESC) risk factor matrix. Five signal components reflecting cardiac and vascular activity (pulse wave attenuation, pulse rate acceleration, pulse propagation time, respiration-related pulse oscillation, and oxygen desaturation) extracted from 99 randomly selected subjects were used to train the classification algorithm. The capacity of the algorithm for CV risk prediction was validated in 49 additional patients. Each signal component contributed independently to CV risk prediction. The sensitivity and specificity of the algorithm to distinguish high/low CV risk in the validation group were 80% and 77%, respectively. The area under the receiver operating characteristic curve for high CV risk classification was 0.84. β-Blocker treatment was identified as an important factor for classification that was not in line with the ESH/ESC reference matrix. Signals derived from overnight oximetry recording provide a novel potential tool for CV risk classification. Prospective studies are warranted to establish the value of the ASI algorithm for prediction of outcome in CV disease.

  20. Effective pathfinding for four-wheeled robot based on combining Theta* and hybrid A* algorithms

    Directory of Open Access Journals (Sweden)

    Віталій Геннадійович Михалько

    2016-07-01

    Full Text Available An effective pathfinding algorithm based on the Theta* and Hybrid A* algorithms was developed for a four-wheeled robot. Pseudocode for the algorithm is shown and explained. The algorithm and a simulator for the four-wheeled robot were implemented in the Java programming language. The algorithm was tested on U-shaped obstacles, complex maps, and a parking problem.

  1. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    Full Text Available In this work, local features are used in the feature extraction process for texture images. The local binary pattern feature extraction method for textures is introduced. Filtering is also used during the feature extraction process to obtain discriminative features. To show the effectiveness of the algorithm, three different types of noise are added to both training and test images before the extraction process. Wiener and median filters are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier. We conduct a comparative analysis on a benchmark dataset with different filters and sizes. Our experiments demonstrate that the feature extraction process combined with filtering gives promising results on noisy images.
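
    The described pipeline can be sketched as follows: median filtering for denoising, uniform LBP histograms as features, and a naive Bayes classifier. The LBP parameters, patch size, and synthetic textures are illustrative assumptions.

        # A minimal sketch of filtered LBP feature extraction plus naive Bayes classification.
        import numpy as np
        from scipy.ndimage import median_filter
        from skimage.feature import local_binary_pattern
        from sklearn.naive_bayes import GaussianNB

        P, R = 8, 1                                   # LBP neighbors and radius

        def lbp_histogram(img):
            img = median_filter(img, size=3)          # noise removal before extraction
            img8 = np.clip(img * 255, 0, 255).astype(np.uint8)
            codes = local_binary_pattern(img8, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        rng = np.random.default_rng(0)
        smooth = rng.normal(0.5, 0.05, (40, 32, 32))  # two synthetic texture classes
        rough = rng.normal(0.5, 0.25, (40, 32, 32))
        X = np.array([lbp_histogram(t) for t in np.vstack([smooth, rough])])
        y = np.array([0] * 40 + [1] * 40)

        clf = GaussianNB().fit(X[::2], y[::2])        # train on even, test on odd samples
        print(clf.score(X[1::2], y[1::2]))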

  2. Genetic algorithm based on qubits and quantum gates

    International Nuclear Information System (INIS)

    Silva, Joao Batista Rosa; Ramos, Rubens Viana

    2003-01-01

    Full text: The genetic algorithm, a computational technique based on the evolution of species, in which a possible solution of the problem is coded in a binary string called a chromosome, has been used successfully in several kinds of problems where the search for a minimal or a maximal value is necessary, even when local minima are present. A natural generalization of a binary string is a qubit string. Hence, it is possible to use the structure of a genetic algorithm having a sequence of qubits as a chromosome and using quantum operations in the reproduction in order to find the best solution in some problems of quantum information. For example, given a unitary matrix U, what is the pair of qubits that, when applied at the input, provides the output state with maximal entanglement? In order to solve this problem, a population of chromosomes of two qubits was created. The crossover was performed by applying the quantum gates CNOT and SWAP to the pair of qubits, while the mutation was performed by applying the quantum gates Hadamard, Z and NOT to a single qubit. The result was compared with a classical genetic algorithm used to solve the same problem. A hundred simulations using the same U matrix were performed. Both algorithms, hereafter named CGA (classical) and QGA (using qubits), reached good results close to 1; however, the number of generations needed to find the best result was lower for the QGA. Another problem where the QGA can be useful is the calculation of the relative entropy of entanglement. We have tested our algorithm using 100 pure states chosen randomly. The stop criterion used was an error lower than 0.01. The main advantages of the QGA are its good precision, robustness and very easy implementation. The main disadvantage is its low speed, as happens for all kinds of genetic algorithms. (author)

  3. Idioms-based Business Rule Extraction

    NARCIS (Netherlands)

    R Smit (Rob)

    2011-01-01

    This thesis studies the extraction of embedded business rules, using the idioms of the framework in use to identify them. Embedded business rules exist as source code in the software system, and knowledge about them may get lost. Extraction of those business rules could make them accessible.

  4. Citizen-Centric Urban Planning through Extracting Emotion Information from Twitter in an Interdisciplinary Space-Time-Linguistics Algorithm

    Directory of Open Access Journals (Sweden)

    Bernd Resch

    2016-07-01

    Full Text Available Traditional urban planning processes typically happen in offices and behind desks. Modern types of civic participation can enhance those processes by acquiring citizens’ ideas and feedback in participatory sensing approaches like “People as Sensors”. As such, citizen-centric planning can be achieved by analysing Volunteered Geographic Information (VGI) data such as Twitter tweets and posts from other social media channels. These user-generated data comprise several information dimensions, such as spatial and temporal information, and textual content. However, in previous research, these dimensions were generally examined separately in single-disciplinary approaches, which does not allow for holistic conclusions in urban planning. This paper introduces TwEmLab, an interdisciplinary approach towards extracting citizens’ emotions in different locations within a city. More concretely, we analyse tweets in three dimensions (space, time, and linguistics), based on similarities between each pair of tweets as defined by a specific set of functional relationships in each dimension. We use a graph-based semi-supervised learning algorithm to classify the data into discrete emotions (happiness, sadness, fear, anger/disgust, none). Our proposed solution allows tweets to be classified into emotion classes in a multi-parametric approach. Additionally, we created a manually annotated gold standard that can be used to evaluate TwEmLab’s performance. Our experimental results show that we are able to identify tweets carrying emotions and that our approach bears extensive potential to reveal new insights into citizens’ perceptions of the city.

  5. Review of algorithms for modeling metal distribution equilibria in liquid-liquid extraction processes

    Directory of Open Access Journals (Sweden)

    Lozano, L. J.

    2005-10-01

    Full Text Available This work focuses on general guidelines to be considered for the application of least-squares routines and artificial neural networks (ANN) in the estimation of metal distribution equilibria in liquid-liquid extraction processes. The goal of the procedure in the statistical method is to find the values of the equilibrium constants (Kj) for the reactions involved in the metal extraction that minimize the differences between the experimental distribution coefficients (Dexp) and the theoretical distribution coefficients (Dtheor) predicted by the proposed mechanism. In the first part of the article, results obtained with the routine most frequently reported in the literature are compared with those obtained using the algorithms previously discussed. In the second part, the main features of a single back-propagation neural network for the same purpose are discussed, and the results obtained are compared with those of the classical methods.


  6. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    Directory of Open Access Journals (Sweden)

    Lu Ping-ping

    2014-06-01

    Full Text Available Road network extraction from SAR images is a key task for military and civilian applications. To address road extraction from HJ-1-C SAR images, a road extraction algorithm is proposed based on the integration of ratio and directional information. Due to the characteristically narrow dynamic range and low signal-to-noise ratio of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extraction algorithm based on information fusion, which considers ratio and direction information, is also proposed. Main road directions can be extracted by applying the Radon transform. Cross interference can be suppressed, and road continuity can then be improved by main-direction alignment and secondary road extraction. An HJ-1-C SAR image acquired in Wuhan, China was used to evaluate the proposed method. The experimental results show good performance, with correctness (80.5%) and quality (70.1%), when applied to a SAR image with complex content.
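
    The direction-information step can be illustrated with the Radon transform: a linear feature concentrates projection energy at its orientation. The synthetic image and the variance-based selection criterion below are illustrative assumptions.

        # A minimal sketch of finding a main road direction via the Radon transform.
        import numpy as np
        from skimage.transform import radon

        img = np.zeros((128, 128))
        img[:, 60:64] = 1.0                        # synthetic vertical "road"

        theta = np.arange(180.0)                   # projection angles in degrees
        sinogram = radon(img, theta=theta, circle=False)
        # The dominant direction maximizes the variance of the projection profile.
        main_direction = theta[np.argmax(sinogram.var(axis=0))]
        print(main_direction)                      # ~0 degrees for a vertical line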

  7. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    Science.gov (United States)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2013-03-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.

  8. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    Science.gov (United States)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently important in the application of high-resolution remote sensing imagery. At present, quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information or trading off extraction rate against extraction accuracy. The purpose of this research is to develop an effective method to detect building information from Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image and image enhancement is used to highlight the useful information. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain the candidate building objects. Furthermore, in order to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission errors (OE), commission errors (CE), overall accuracy (OA) and Kappa are used. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%, while the Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.

  9. Embedded algorithms within an FPGA-based system to process nonlinear time series data

    Science.gov (United States)

    Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.

    2008-03-01

    This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better

  10. A new JPEG-based steganographic algorithm for mobile devices

    Science.gov (United States)

    Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.

    2006-05-01

    Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates improved retention of first-order statistics compared to existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.

  11. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable, and there is a pressing need to develop efficient exact and approximation algorithms to solve it. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l,d and n biological strings, find all strings of length l that appear in each input string with at most d errors of the types substitution, insertion, and deletion. One popular technique is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of that input string, and to output those common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood that are at distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), and (13,4), our algorithms are much faster still. Our parallel algorithm achieves more than 600% scaling performance when using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers, and we believe that the techniques introduced in
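
    For intuition only, here is a naive Python baseline for the (l,d) EMS definition above: it enumerates d-neighborhoods by repeated 1-edit expansion and intersects the candidate sets across input strings. It uses none of the paper's compact wildcard representation, trie sorting, or parallelism, and scales only to tiny instances.

```python
from itertools import chain

ALPHABET = "ACGT"

def neighbors_1(s):
    """Strings at edit distance <= 1 from s (substitution, insertion, deletion)."""
    subs = (s[:i] + c + s[i+1:] for i in range(len(s)) for c in ALPHABET if c != s[i])
    ins = (s[:i] + c + s[i:] for i in range(len(s) + 1) for c in ALPHABET)
    dels = (s[:i] + s[i+1:] for i in range(len(s)))
    return set(chain([s], subs, ins, dels))

def d_neighborhood(s, d):
    """Naive d-neighborhood by repeated 1-edit expansion (tiny instances only)."""
    frontier = {s}
    for _ in range(d):
        frontier = set().union(*(neighbors_1(t) for t in frontier))
    return frontier

def ems_naive(strings, l, d):
    """All l-mers appearing in every input string with at most d edit errors."""
    def candidates(x):
        subs = {x[i:i+j] for j in range(max(1, l - d), l + d + 1)
                for i in range(len(x) - j + 1)}
        return {m for s in subs for m in d_neighborhood(s, d) if len(m) == l}
    common = candidates(strings[0])
    for x in strings[1:]:
        common &= candidates(x)
    return common

print(sorted(ems_naive(["ACGTACGT", "ACGAACGT", "TACGTACG"], l=4, d=1))[:5])
```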

  12. A street rubbish detection algorithm based on Sift and RCNN

    Science.gov (United States)

    Yu, XiPeng; Chen, Zhong; Zhang, Shuo; Zhang, Ting

    2018-02-01

    This paper presents a street rubbish detection algorithm based on image registration with SIFT features and an RCNN. First, rubbish region proposals are obtained on the real-time street image, and a CNN (convolutional neural network) is trained on a sample set consisting of rubbish and non-rubbish images. Second, for every clean street image, SIFT features are extracted and used to register the clean image to the real-time street image, yielding a differential image that filters out most of the background; rubbish region proposals, rectangles where rubbish may appear, are then obtained from the differential image using the selective search algorithm. The CNN model is then applied to the pixel data of each region proposal on the real-time street image, and the output vector of the CNN determines whether the proposal contains rubbish; if so, the region proposal is marked on the real-time street image. Because the CNN classifies only those regions of the real-time street image where rubbish may appear, the algorithm avoids the large number of false detections caused by running detection on the whole image. Unlike traditional region-proposal-based object detection algorithms, the region proposals are obtained from the differential image rather than from the whole real-time street image, which greatly reduces the number of invalid proposals. The algorithm achieves a high mean average precision (mAP).
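
    A rough OpenCV sketch of the registration-and-differencing stage is given below; the file names, Lowe ratio, and binarization threshold are placeholders, and the trained CNN classifier that scores each proposal is not shown.

```python
import cv2
import numpy as np

clean = cv2.imread("clean_street.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder files
live = cv2.imread("live_street.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(clean, None)
k2, d2 = sift.detectAndCompute(live, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# warp the clean image onto the live frame and difference the two
warped = cv2.warpPerspective(clean, H, (live.shape[1], live.shape[0]))
diff = cv2.absdiff(live, warped)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
# connected regions of `mask` become region proposals for the CNN classifier
```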

  13. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction

  14. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems

    Directory of Open Access Journals (Sweden)

    Vivek Patel

    2012-08-01

    Full Text Available Nature-inspired population-based algorithms form a research field that simulates different natural phenomena to solve a wide range of problems. Researchers have proposed several algorithms based on different natural phenomena. Teaching-Learning-Based Optimization (TLBO) is a recently proposed population-based algorithm that simulates the teaching-learning process of the classroom. This algorithm does not require any algorithm-specific control parameters. In this paper, the elitism concept is introduced into the TLBO algorithm and its effect on the performance of the algorithm is investigated. The effects of common controlling parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 35 constrained benchmark functions with different characteristics, and its performance is compared with that of other well-known optimization algorithms. The proposed algorithm can be applied to various optimization problems of the industrial environment.
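
    For concreteness, a minimal parameter-free TLBO loop (without the paper's elitism extension) on the sphere function might look like the sketch below; the population size, dimension, bounds, and iteration count are illustrative.

```python
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
pop, dim, iters = 20, 5, 100
X = rng.uniform(-5, 5, (pop, dim))

for _ in range(iters):
    f = sphere(X)
    teacher = X[np.argmin(f)]
    TF = rng.integers(1, 3)                        # teaching factor, 1 or 2
    # teacher phase: move the class toward the teacher, away from the mean
    Xnew = X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0))
    improved = sphere(Xnew) < f
    X[improved] = Xnew[improved]
    # learner phase: learn pairwise from a random classmate
    f = sphere(X)
    for i in range(pop):
        j = rng.integers(pop)
        step = (X[i] - X[j]) if f[i] < f[j] else (X[j] - X[i])
        cand = X[i] + rng.random(dim) * step
        if sphere(cand) < f[i]:
            X[i], f[i] = cand, sphere(cand)

print("best:", sphere(X).min())
```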

  15. Quantum Cryptography Based on the Deutsch-Jozsa Algorithm

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Farouk, Ahmed

    2017-09-01

    Recently, secure quantum key distribution based on Deutsch's algorithm using the Bell state was reported (Nagata and Nakamura, Int. J. Theor. Phys., doi: 10.1007/s10773-017-3352-4, 2017). Our aim is to extend that result to a multipartite system. In this paper, we propose a highly speedy key distribution protocol. We present secure quantum key distribution based on a special Deutsch-Jozsa algorithm using Greenberger-Horne-Zeilinger states. Bob has promised to use a function f which is of one of two kinds: either f(x) is constant for all values of x, or else f(x) is balanced, that is, equal to 1 for exactly half of the possible x and 0 for the other half. Here, we introduce an additional condition on the function when it is balanced. Our quantum key distribution overcomes the classical counterpart by a factor of O(2^N).

  16. Memoryless cooperative graph search based on the simulated annealing algorithm

    International Nuclear Information System (INIS)

    Hou Jian; Yan Gang-Feng; Fan Zhen

    2011-01-01

    We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of such agents. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. It is shown that under both proposed control strategies the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, building on the single-agent algorithms. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment. (interdisciplinary physics and related areas of science and technology)
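
    The agent-level control strategies build on the standard simulated-annealing acceptance rule, which in generic form looks like the following sketch; the cooling schedule, neighbor move, and toy cost function are assumptions, and the paper's graph-search and gossip-consensus machinery is omitted.

```python
import math
import random

def anneal(cost, neighbor, x0, T0=1.0, alpha=0.995, steps=10_000):
    """Generic simulated annealing: always accept improvements, accept worse
    moves with probability exp(-delta/T), and cool T geometrically."""
    x, fx, T = x0, cost(x0), T0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
        T *= alpha
    return x, fx

# toy usage: minimize a 1-D function over the integers
best, val = anneal(lambda x: (x - 7)**2,
                   lambda x: x + random.choice((-1, 1)),
                   x0=100)
print(best, val)
```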

  17. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Shan Li

    2014-01-01

    Full Text Available With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. Analyzing and interpreting these data with conventional intelligent techniques, for example support vector machines, is becoming a big challenge for bioinformaticians. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. In particular, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields owing to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. Specifically, we focus on their applications to three common problems in bioinformatics: feature selection, parameter estimation, and the reconstruction of biological networks.

  18. Fuzzy Sets-based Control Rules for Terminating Algorithms

    Directory of Open Access Journals (Sweden)

    Jose L. VERDEGAY

    2002-01-01

    Full Text Available In this paper, some problems arising at the interface between two different areas, Decision Support Systems and Fuzzy Sets and Systems, are considered. The Model-Base Management System of a Decision Support System that involves some fuzziness is considered, and in that context questions about managing the fuzziness in some optimisation models, and about using fuzzy rules for terminating conventional algorithms, are presented, discussed, and analyzed. Finally, for the concrete case of the Travelling Salesman Problem, and as an illustration of determining, managing, and using such fuzzy rules, a new algorithm that is easy to implement in the Model-Base Management System of any oriented Decision Support System is shown.

  19. APPECT: An Approximate Backbone-Based Clustering Algorithm for Tags

    DEFF Research Database (Denmark)

    Zong, Yu; Xu, Guandong; Jin, Pin

    2011-01-01

    Tag clustering has traditionally relied on methods such as K-means and Agglomerative Clustering on tagging data, which possess inherent drawbacks such as sensitivity to initialization. In this paper, we instead make use of the approximate backbone of tag clustering results to find better tag clusters. In particular, we propose an APProximate backbonE-based Clustering algorithm for Tags (APPECT). The main steps of APPECT are: (1) we execute the K-means algorithm on a tag similarity matrix M times and collect a set of tag clustering results Z={C1,C2,…,Cm}; (2) we form the approximate backbone of Z by executing a greedy search; (3) we fix the approximate backbone as the initial tag clustering result and then assign the remaining tags to the corresponding clusters based on similarity. Experimental results on three real-world datasets, namely MedWorm, MovieLens and Dmoz, demonstrate the effectiveness and superiority of the proposed method over the traditional approaches.
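
    A compact way to see the backbone idea is to run K-means several times and keep only what is stable across all runs. The sketch below (scikit-learn assumed, with a random stand-in for the tag similarity matrix) computes such a co-association backbone, replacing the paper's greedy search with an exact intersection.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.random((60, 8))              # stand-in for a tag similarity matrix
M, k = 10, 4                         # M independent runs, k clusters each

labels = np.array([KMeans(n_clusters=k, n_init=10, random_state=m).fit_predict(X)
                   for m in range(M)])

# co-association: fraction of runs in which tags i and j share a cluster
co = np.mean(labels[:, :, None] == labels[:, None, :], axis=0)
backbone_pairs = np.argwhere(np.triu(co == 1.0, k=1))
print(f"{len(backbone_pairs)} pairs co-clustered in all {M} runs")
```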

  20. An Alternative to Chaid Segmentation Algorithm Based on Entropy.

    Directory of Open Access Journals (Sweden)

    María Purificación Galindo Villardón

    2010-07-01

    Full Text Available The CHAID (Chi-Squared Automatic Interaction Detection) tree-based segmentation technique has been found to be an effective approach for obtaining meaningful segments that are predictive of a K-category (nominal or ordinal) criterion variable. CHAID was designed to detect, in an automatic way, the interaction between several categorical or ordinal predictors in explaining a categorical response; however, this may not hold when Simpson's paradox is present, because CHAID is a forward selection algorithm based on marginal counts. In this paper we propose a backwards elimination algorithm that starts with the full set of predictors (the full tree) and eliminates predictors progressively. The elimination procedure is based on conditional independence tests using the concept of entropy. The proposed procedure is compared to CHAID.

  1. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm achieves faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
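
    The flavor of an MCC update whose step size is modulated by the correntropy kernel can be sketched as below; the kernel width, the error-driven step rule, and the impulsive-noise model are assumptions, not the paper's MSD-optimal derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 5000, 8
w_true = rng.standard_normal(L)
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)
impulses = rng.random(N) < 0.01                   # occasional impulsive interference
d[impulses] += 50 * rng.standard_normal(impulses.sum())

w = np.zeros(L)
sigma, mu_max = 2.0, 0.05
for n in range(L - 1, N):
    u = x[n - L + 1:n + 1][::-1]                  # regressor [x_n, ..., x_{n-L+1}]
    e = d[n] - w @ u
    g = np.exp(-e**2 / (2 * sigma**2))            # correntropy kernel: ~0 for impulses
    mu = mu_max * g                               # crude variable step (assumption)
    w += mu * e * u

print("weight error:", np.linalg.norm(w - w_true))
```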

  2. Continuous Extraction of Subway Tunnel Cross Sections Based on Terrestrial Point Clouds

    Directory of Open Access Journals (Sweden)

    Zhizhong Kang

    2014-01-01

    Full Text Available An efficient method for the continuous extraction of subway tunnel cross sections using terrestrial point clouds is proposed. First, the continuous central axis of the tunnel is extracted using a 2D projection of the point cloud and curve fitting with the RANSAC (RANdom SAmple Consensus) algorithm, and the axis is optimized using a global extraction strategy based on segment-wise fitting. The cross-sectional planes, which are orthogonal to the central axis, are then determined at every interval. The cross-sectional points are extracted by intersecting the tunnel point cloud with straight lines that rotate orthogonally around the central axis within the cross-sectional plane. An interpolation algorithm based on quadric parametric surface fitting, using the BaySAC (Bayesian SAmpling Consensus) algorithm, is proposed to compute a cross-sectional point when it cannot be acquired directly from the tunnel points along the extraction direction of interest. Because the standard shape of the tunnel cross section is a circle, circle fitting is implemented using RANSAC to reduce the noise. The proposed approach is tested on terrestrial point clouds covering a 150-m-long segment of a Shanghai subway tunnel, acquired using an LMS VZ-400 laser scanner. The results indicate that the proposed quadric parametric surface fitting using the optimized BaySAC achieves a higher overall fitting accuracy (0.9 mm) than that obtained by the plain RANSAC (1.6 mm). The results also show that the proposed cross-section extraction algorithm achieves high accuracy (millimeter level), assessed by comparing the fitted radii with the designed radius of the cross section and by comparing corresponding chord lengths in different cross sections, and high efficiency (less than 3 s per section on average).
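
    The RANSAC circle-fitting step used to denoise each cross section can be illustrated with a few lines of NumPy; the inlier tolerance, iteration count, and the synthetic 2.75 m cross section below are illustrative only.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumcenter and radius of three non-collinear 2-D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    if abs(d) < 1e-12:
        return None                               # nearly collinear sample
    ux = ((ax**2+ay**2)*(by-cy) + (bx**2+by**2)*(cy-ay) + (cx**2+cy**2)*(ay-by)) / d
    uy = ((ax**2+ay**2)*(cx-bx) + (bx**2+by**2)*(ax-cx) + (cx**2+cy**2)*(bx-ax)) / d
    c = np.array([ux, uy])
    return c, np.linalg.norm(p1 - c)

def ransac_circle(pts, tol=0.005, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        fit = circle_from_3pts(*pts[rng.choice(len(pts), 3, replace=False)])
        if fit is None:
            continue
        c, r = fit
        inliers = np.sum(np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = (c, r), inliers
    return best

# toy cross section: a 2.75 m-radius circle with millimeter-level noise
theta = np.linspace(0, 2*np.pi, 400)
pts = 2.75 * np.c_[np.cos(theta), np.sin(theta)] + np.random.normal(0, 0.002, (400, 2))
print(ransac_circle(pts))
```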

  3. DNA-based watermarks using the DNA-Crypt algorithm

    Science.gov (United States)

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434

  4. DNA-based watermarks using the DNA-Crypt algorithm

    Directory of Open Access Journals (Sweden)

    Barnekow Angelika

    2007-05-01

    Full Text Available Abstract Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.

  5. DNA-based watermarks using the DNA-Crypt algorithm.

    Science.gov (United States)

    Heider, Dominik; Barnekow, Angelika

    2007-05-29

    The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise and multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
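
    To make the least-significant-base principle concrete, here is a toy encoder/decoder that writes one bit into every third base (suggestive of a codon's wobble position); the bit convention and base-flip table are assumptions, and DNA-Crypt's actual codon handling, encryption layers, and mutation-correcting codes are all omitted.

```python
LSB = {"A": 0, "C": 1, "G": 0, "T": 1}            # assumed bit convention
FLIP = {"A": "C", "C": "A", "G": "T", "T": "G"}   # toggle the carried bit

def embed(seq, bits, step=3):
    """Write one bit into every `step`-th base of the sequence."""
    s = list(seq)
    for bit, i in zip(bits, range(step - 1, len(s), step)):
        if LSB[s[i]] != bit:
            s[i] = FLIP[s[i]]
    return "".join(s)

def extract(seq, nbits, step=3):
    return [LSB[seq[i]] for i in range(step - 1, step * nbits, step)]

stego = embed("ATGGCATTAGCGTAC", [1, 0, 1, 1])
assert extract(stego, 4) == [1, 0, 1, 1]
```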

  6. Adaptive algorithm for mobile user positioning based on environment estimation

    Directory of Open Access Journals (Sweden)

    Grujović Darko

    2014-01-01

    Full Text Available This paper analyzes the challenges of realizing an infrastructure-independent, low-cost positioning method in cellular networks based on the RSS (Received Signal Strength) parameter, an auxiliary timing parameter, and environment estimation. The proposed algorithm has been evaluated using field measurements collected from a GSM (Global System for Mobile Communications) network, but it is technology-independent and can also be applied in UMTS (Universal Mobile Telecommunications System) and LTE (Long-Term Evolution) networks.

  7. Support vector machines optimization based theory, algorithms, and extensions

    CERN Document Server

    Deng, Naiyang; Zhang, Chunhua

    2013-01-01

    Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs): classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs, since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twi

  8. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    Full Text Available The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence-based handover decision algorithm is developed. The developed model is a hybrid of an Artificial Neural Network (ANN)-based prediction model and Fuzzy Logic. On accessing the network, the Received Signal Strength (RSS) is acquired over a period of time to form a time series. These data are then fed to the newly proposed k-step-ahead ANN-based RSS prediction system for estimation of the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN are then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value is then codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the Fuzzy Logic controller to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated data and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to within about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated, and the results show that the newly proposed hybrid approach reduces the ping-pong effect associated with other handover techniques.

  9. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  10. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    Full Text Available The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image is divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy is created, and a series of features (water bodies, vegetation, roads, residential areas, bare land, and other information) are extracted using spectral and geometric features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% according to the confusion matrix.
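
    Since true object-oriented multiresolution segmentation is not available as a single open call, the sketch below only approximates the edge-detection-plus-region step with OpenCV's Canny detector and a watershed fill; the file name and all thresholds are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("scene.tif")                     # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

edges = cv2.Canny(gray, 50, 150)                  # edge-detection step

# seed regions away from edges, then flood the image with watershed
dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)
_, markers = cv2.connectedComponents((dist > 5).astype(np.uint8))
regions = cv2.watershed(img, markers.astype(np.int32))   # -1 marks boundaries
print("regions found:", int((np.unique(regions) > 0).sum()))
```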

  11. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    International Nuclear Information System (INIS)

    Fan, W J; Lu, Y

    2006-01-01

    Wavelet denoising is studied to improve the extraction of an OAS (optical aperture synthesis) object's Fourier information. Translation-invariant wavelet denoising based on Donoho's wavelet soft-threshold denoising is investigated to remove pseudo-Gibbs artifacts from the soft-thresholded image. Extraction of the OAS object's information based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting the object's information from an interferogram, and that information extraction with translation-invariant wavelet denoising is better than with plain soft-threshold wavelet denoising.
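
    Translation-invariant soft-threshold denoising is commonly implemented by cycle spinning: threshold shifted copies of the signal and average the unshifted results. Below is a PyWavelets sketch under assumed wavelet, level, and shift settings.

```python
import numpy as np
import pywt

def ti_denoise(x, wavelet="db4", level=4, n_shifts=8):
    """Cycle-spinning (translation-invariant) soft-threshold denoising."""
    # estimate noise level from the finest detail band, then apply the
    # Donoho universal threshold
    sigma = np.median(np.abs(pywt.wavedec(x, wavelet, level=level)[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    out = np.zeros_like(x)
    for s in range(n_shifts):                     # average over circular shifts
        coeffs = pywt.wavedec(np.roll(x, s), wavelet, level=level)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        out += np.roll(pywt.waverec(coeffs, wavelet), -s)
    return out / n_shifts

t = np.linspace(0, 1, 1024)
noisy = np.sin(8 * np.pi * t) + 0.3 * np.random.randn(1024)
clean = ti_denoise(noisy)
```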

  12. Cryptographic protocol security analysis based on bounded constructing algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An efficient approach to analyzing cryptographic protocols is to develop automatic analysis tools based on formal methods. However, this approach encounters a high computational complexity problem because protocol participants are arbitrary, their message structures are complex, and their executions are concurrent. We propose an efficient automatic verification algorithm for analyzing cryptographic protocols based on the recently proposed Cryptographic Protocol Algebra (CPA) model, in which algebraic techniques are used to simplify the description of cryptographic protocols and their executions. Redundant states generated during analysis are greatly reduced by introducing a new algebraic technique called the Universal Polynomial Equation, and the algorithm can verify the correctness of protocols in an infinite state space. We have implemented an efficient automatic analysis tool for cryptographic protocols, called ACT-SPA, based on this algorithm, and used the tool to check more than 20 cryptographic protocols. The analysis results show that this tool is more efficient, and an attack instance not reported previously was detected using this tool.

  13. Multifeature Fusion Vehicle Detection Algorithm Based on Choquet Integral

    Directory of Open Access Journals (Sweden)

    Wenhui Li

    2014-01-01

    Full Text Available Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of the vehicle detection methods. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. This algorithm divides the vehicle detection problem into two phases: feature similarity measurement and multifeature fusion. In the feature similarity measurement phase, we first propose a taillight-based vehicle detection method and then define a vehicle taillight feature similarity measure. Second, in line with the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused by the Choquet integral. Evaluated on public test collections and our own test images, the experimental results show that our method achieves effective and robust multivehicle detection in complicated environments. Our method not only improves the detection rate but also reduces the false alarm rate, meeting the engineering requirements of Advanced Driving Assistance Systems (ADAS).
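
    The fusion step reduces to evaluating a Choquet integral of the three feature scores with respect to a fuzzy measure. Here is a self-contained sketch with toy scores and a toy measure, not the values used in the paper.

```python
def choquet(values, measure):
    """values: {feature: score in [0,1]}; measure: {frozenset: weight in [0,1]}."""
    feats = sorted(values, key=values.get)        # ascending by score
    total, prev = 0.0, 0.0
    for i, f in enumerate(feats):
        coalition = frozenset(feats[i:])          # features scoring >= values[f]
        total += (values[f] - prev) * measure[coalition]
        prev = values[f]
    return total

v = {"taillight": 0.9, "symmetry": 0.6, "hog": 0.7}
mu = {frozenset(["symmetry", "hog", "taillight"]): 1.0,
      frozenset(["hog", "taillight"]): 0.8,
      frozenset(["taillight"]): 0.5}
print(choquet(v, mu))    # 0.6*1.0 + 0.1*0.8 + 0.2*0.5 = 0.78
```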

  14. A motion algorithm to extract physical and motion parameters of mobile targets from cone-beam computed tomographic images.

    Science.gov (United States)

    Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad

    2016-05-17

    A motion algorithm has been developed to extract the length, CT number level, and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses measurable parameters, the apparent length and the blurred CT number distribution of the mobile target obtained from CBCT images, to determine the length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes, made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from the CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solves for the three unknowns using the measured length, CT number level, and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length, and the motion amplitude. Motion frequency and phase do not affect the elongation or the CT number distribution of the mobile target and could not be determined. In summary, a motion algorithm has been developed to extract the length, CT number level, and motion amplitude or speed of mobile targets directly from reconstructed CBCT images, without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes and high contrast relative to surrounding tissues and that move in a nearly regular motion pattern

  15. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly

  16. Analysis of computational complexity for HT-based fingerprint alignment algorithms on java card environment

    CSIR Research Space (South Africa)

    Mlambo, CS

    2015-01-01

    Full Text Available In this paper, implementations of three Hough Transform-based fingerprint alignment algorithms are analyzed with respect to time complexity on the Java Card environment. The three algorithms are: Local Match Based Approach (LMBA), Discretized Rotation Based...

  17. A New Curve Tracing Algorithm Based on Local Feature in the Vectorization of Paper Seismograms

    Directory of Open Access Journals (Sweden)

    Maofa Wang

    2014-02-01

    Full Text Available Historical paper seismograms are a very important source of information for earthquake monitoring and prediction, and their vectorization is an important problem to be solved. Automatic tracing of waveform curves is a key technology for the vectorization of paper seismograms: it transforms an original scanned image into digital waveform data. Accurately tracing out all the key points of each curve in a seismogram is the foundation of vectorization. In this paper, we present a new curve tracing algorithm based on local features, applied to the automatic extraction of earthquake waveforms from paper seismograms.

  18. Software design of automatic counting system for nuclear track based on mathematical morphology algorithm

    International Nuclear Information System (INIS)

    Pan Yi; Mao Wanchong

    2010-01-01

    The measurement of nuclear track parameters occupies an important position in the field of nuclear technology. However, the traditional manual counting method has many limitations. In recent years, DSP and digital image processing technology have been applied in the nuclear field more and more widely. To reduce the visual measurement errors of manual counting, this article introduces an automatic counting system for nuclear tracks based on the DM642 real-time image processing platform, which effectively removes background interference and noise points and automatically extracts nuclear track points using a mathematical morphology algorithm. (authors)
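
    The morphology-based counting chain (threshold, opening to suppress noise points, connected-component labelling) can be sketched with SciPy on a stand-in frame; the threshold, structuring element, and size cutoff are illustrative.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(512, 512)                # placeholder for a scanned frame
binary = img > 0.995                          # toy threshold; the real one is tuned

# opening removes isolated noise pixels that are not track pits
opened = ndimage.binary_opening(binary, structure=np.ones((2, 2)))
labels, n = ndimage.label(opened)             # each surviving blob = one candidate track
sizes = ndimage.sum_labels(opened, labels, range(1, n + 1))
print("tracks counted:", int(np.sum(sizes >= 2)))   # drop one-pixel residue
```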

  19. A Location-Based Business Information Recommendation Algorithm

    Directory of Open Access Journals (Sweden)

    Shudong Liu

    2015-01-01

    Full Text Available Recently, much research on location-based information recommendation (e.g., POIs, ads) has been done in both academia and industry. In this paper, we first construct a region-based location graph (RLG), in which region nodes connect with user nodes and business information nodes, and then we propose a location-based recommendation algorithm built on the RLG. It combines users' short-range mobility, formed by daily activity, with their long-distance mobility, formed by social network ties, and can accordingly recommend both local and long-distance business information to users. Moreover, it combines user-based collaborative filtering with item-based collaborative filtering, which alleviates the cold-start problem from which traditional recommender systems often suffer. Empirical studies on large-scale real-world data from Yelp demonstrate that our method outperforms other methods in terms of recommendation accuracy.

  20. Creating Very True Quantum Algorithms for Quantum Energy Based Computing

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc

    2018-04-01

    An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, for fast determination of a homogeneous linear function f(x) := s.x = s_1 x_1 + s_2 x_2 + ... + s_N x_N is proposed. Here x = (x_1, ..., x_N), x_j ∈ R, and the coefficients s = (s_1, ..., s_N), s_j ∈ N. Given the interpolation values (f(1), f(2), ..., f(N)) = y, the unknown coefficients s = (s_1(y), ..., s_N(y)) of the linear function are to be determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Our method is based on the generalization of the Bernstein-Vazirani algorithm to qudit systems. Next, by using M parallel quantum systems, M homogeneous linear functions are determined simultaneously. The speed of obtaining the set of M homogeneous linear functions is shown to outperform the classical case by a factor of N × M.

  1. Expeditious 3D poisson vlasov algorithm applied to ion extraction from a plasma

    International Nuclear Information System (INIS)

    Whealton, J.H.; McGaffey, R.W.; Meszaros, P.S.

    1983-01-01

    A new 3D Poisson-Vlasov algorithm is under development which differs from a previous algorithm, referenced in this paper, in two respects: the mesh lines are Cartesian, and the Poisson equation is solved iteratively. The resulting algorithm has been used to examine the same boundary value problem as considered with the earlier algorithm, except that the number of nodes is 2 times greater. The same physical results were obtained, except that the computation time was reduced by a factor of 60 and the memory requirement by a factor of 10. This algorithm at present restricts Neumann boundary conditions to orthogonal planes lying along mesh lines; no such restriction applies to Dirichlet boundaries. In the emittance diagram presented in the paper, points lying on the y = 0 line start on the axis of symmetry and those near the y = 1 line start near the slot end

  2. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Peilin Wu

    2018-01-01

    Full Text Available Aeromagnetic exploration is an important exploration method in geophysics. The data are typically measured by an optically pumped magnetometer mounted on an aircraft, but any aircraft produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity in the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. In this algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane that compensates the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.
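
    The core PCA step, discarding near-collinear feature directions before fitting the compensation coefficients, can be sketched in plain NumPy; the feature matrix here is a random stand-in with an injected collinearity, not actual compensation-model terms.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 18))                      # stand-in feature matrix
A[:, 1] = A[:, 0] + 1e-6 * rng.standard_normal(5000)     # near-collinear columns
b = A @ rng.standard_normal(18) + 0.01 * rng.standard_normal(5000)

Ac = A - A.mean(axis=0)
U, S, Vt = np.linalg.svd(Ac, full_matrices=False)
k = int(np.sum(S / S[0] > 1e-3))              # keep well-conditioned directions only
Z = Ac @ Vt[:k].T                             # principal-component scores
coef, *_ = np.linalg.lstsq(Z, b - b.mean(), rcond=None)
pred = Z @ coef + b.mean()                    # interference estimate to subtract
print("k =", k, "residual std:", np.std(b - pred))
```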

  3. ABC Algorithm based Fuzzy Modeling of Optical Glucose Detection

    Directory of Open Access Journals (Sweden)

    SARACOGLU, O. G.

    2016-08-01

    Full Text Available This paper presents a modeling approach based on a fuzzy reasoning mechanism to describe a measured data set obtained from an optical sensing circuit. For this purpose, we implemented a simple but effective in vitro optical sensor to measure the glucose content of an aqueous solution. The measured data contain analog voltages representing the absorbance values at three wavelengths, measured from an RGB LED at different glucose concentrations. To achieve the desired model performance, the parameters of the fuzzy models are optimized using the artificial bee colony (ABC) algorithm. The modeling results presented in this paper indicate that the fuzzy model optimized by the algorithm provides successful modeling performance, with a minimum mean squared error (MSE) of 0.0013, in clearly good agreement with the measurements.

  4. Algorithms for MDC-Based Multi-locus Phylogeny Inference

    Science.gov (United States)

    Yu, Yun; Warnow, Tandy; Nakhleh, Luay

    One of the criteria for inferring a species tree from a collection of gene trees, when gene tree incongruence is assumed to be due to incomplete lineage sorting (ILS), is minimize deep coalescence (MDC). Exact algorithms for inferring the species tree from rooted, binary trees under MDC were recently introduced. Nevertheless, in phylogenetic analyses of biological data sets, estimated gene trees may differ from true gene trees, be incompletely resolved, and not necessarily be rooted. In this paper, we propose new MDC formulations for the cases where the gene trees are unrooted/binary, rooted/non-binary, and unrooted/non-binary. Further, we prove structural theorems that allow us to extend the algorithms for the rooted/binary gene tree case to these cases in a straightforward manner. Finally, we study the performance of these methods in coalescent-based computer simulations.

  5. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Yuping Hu

    2014-01-01

    Full Text Available An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Exploiting the chaotic system's sensitivity to initial key values and system parameters, as well as its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order; instead, processing alternates between the beginning and the end of the image. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attacks.
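
    For reference, the classic (unmodified) piecewise linear chaotic map that the MPWLCM builds on can be iterated as below to produce a keystream and a pixel permutation; the seeds and control parameters are illustrative, and the paper's modifications are not shown.

```python
import numpy as np

def pwlcm(x, p, n):
    """Iterate the piecewise linear chaotic map; x in (0,1), p in (0,0.5)."""
    out = np.empty(n)
    for i in range(n):
        y = x if x < 0.5 else 1.0 - x        # the map is symmetric about x = 0.5
        x = y / p if y < p else (y - p) / (0.5 - p)
        out[i] = x
    return out

keystream = (pwlcm(0.3456, 0.25, 256 * 256) * 256).astype(np.uint8)   # diffusion bytes
perm = np.argsort(pwlcm(0.7121, 0.31, 256 * 256))                     # pixel permutation
```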

  6. Image Retrieval Algorithm Based on Discrete Fractional Transforms

    Science.gov (United States)

    Jindal, Neeru; Singh, Kulbir

    2013-06-01

    The discrete fractional transforms are signal processing tools which suggest computational algorithms and solutions for various sophisticated applications. In this paper, a new technique to retrieve an encrypted and scrambled image based on discrete fractional transforms is proposed. A two-dimensional image is encrypted using discrete fractional transforms with three fractional orders and two random phase masks placed in the two intermediate planes. The significant feature of discrete fractional transforms is the extra degree of freedom provided by their fractional orders. The security strength is enhanced (1024!)^4 times by scrambling the encrypted image. In the decryption process, image retrieval is sensitive to both the correct fractional-order keys and the scrambling algorithm, which makes brute-force attacks infeasible. Mean square error and relative error are the performance parameters used to verify the validity of the proposed method.

  7. Multirobot FastSLAM Algorithm Based on Landmark Consistency Correction

    Directory of Open Access Journals (Sweden)

    Shi-Ming Chen

    2014-01-01

    Full Text Available Considering the influence of uncertain map information on the multirobot SLAM problem, a multirobot FastSLAM algorithm based on landmark consistency correction is proposed. First, an electromagnetism-like mechanism is introduced into the resampling procedure of single-robot FastSLAM: each sampled particle is treated as a charged electron, and the attraction-repulsion mechanism of the electromagnetic field is used to simulate interactive forces between the particles and improve their distribution. Second, when multiple robots observe the same landmarks, every robot is regarded as a node and a Kalman-Consensus Filter is proposed to update the landmark information, which further improves the accuracy of localization and mapping. Finally, simulation results show that the algorithm is suitable and effective.

  8. Research on Wavelet-Based Algorithm for Image Contrast Enhancement

    Institute of Scientific and Technical Information of China (English)

    Wu Ying-qian; Du Pei-jun; Shi Peng-fei

    2004-01-01

    A novel wavelet-based algorithm for image enhancement is proposed in this paper. On the basis of multiscale analysis, the proposed algorithm efficiently solves the problem of noise over-enhancement, which commonly occurs in traditional methods for contrast enhancement. The decomposed coefficients at the same scale are processed by a nonlinear method, and the coefficients at different scales are enhanced to different degrees. During the procedure, the method takes full advantage of the properties of the human visual system to achieve better performance. The simulations demonstrate that these characteristics of the proposed approach enable it to fully enhance image content, efficiently limit the amplification of noise, and achieve a much better enhancement effect than traditional approaches.

  9. Genetic algorithm based reactive power dispatch for voltage stability improvement

    Energy Technology Data Exchange (ETDEWEB)

    Devaraj, D. [Department of Electrical and Electronics, Kalasalingam University, Krishnankoil 626 190 (India); Roselyn, J. Preetha [Department of Electrical and Electronics, SRM University, Kattankulathur 603 203, Chennai (India)

    2010-12-15

    Voltage stability assessment and control form the core function in a modern energy control centre. This paper presents an improved Genetic algorithm (GA) approach for voltage stability enhancement. The proposed technique is based on the minimization of the maximum of L-indices of load buses. Generator voltages, switchable VAR sources and transformer tap changers are used as optimization variables of this problem. The proposed approach permits the optimization variables to be represented in their natural form in the genetic population. For effective genetic processing, the crossover and mutation operators which can directly deal with the floating point numbers and integers are used. The proposed algorithm has been tested on IEEE 30-bus and IEEE 57-bus test systems and successful results have been obtained. (author)

  10. An approach of traffic signal control based on NLRSQP algorithm

    Science.gov (United States)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

    This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective function of the model is the minimization of the weighted total queue length at the end of each cycle. A combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is then proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are reported that study how to set the initial solution of the algorithm so as to reach a better local optimum more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution is, the better the local optimal solution that can be obtained.

  11. Machine learning based global particle identification algorithms at the LHCb experiment

    CERN Multimedia

    Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor

    2017-01-01

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks, including a deep architecture, and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency of efficiency on spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.

  12. Fitchi: haplotype genealogy graphs based on the Fitch algorithm.

    Science.gov (United States)

    Matschiner, Michael

    2016-04-15

    In population genetics and phylogeography, haplotype genealogy graphs are important tools for the visualization of population structure based on sequence data. In this type of graph, node sizes are often drawn in proportion to haplotype frequencies, and edge lengths represent the minimum number of mutations separating adjacent nodes. I here present Fitchi, a new program that produces publication-ready haplotype genealogy graphs based on the Fitch algorithm. Fitchi is available at http://www.evoinformatics.eu/fitchi.htm. Supplementary data are available at Bioinformatics online.

  13. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer-vision-based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS, and laser sensors, suffer several drawbacks related to either the physical limitations of the sensor or high cost. Vision sensing has emerged as a popular alternative, where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility, and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri

  14. Electronic Health Record Based Algorithm to Identify Patients with Autism Spectrum Disorder.

    Directory of Open Access Journals (Sweden)

    Todd Lingren

    Full Text Available Cohort selection is challenging for large-scale electronic health record (EHR) analyses, as International Classification of Diseases, 9th edition (ICD-9) diagnostic codes are notoriously unreliable disease predictors. Our objective was to develop, evaluate, and validate an automated algorithm for determining an Autism Spectrum Disorder (ASD) patient cohort from EHRs. We demonstrate its utility via the largest investigation to date of the co-occurrence patterns of medical comorbidities in ASD. We extracted ICD-9 codes and concepts derived from the clinical notes. A gold-standard patient set was labeled by clinicians at Boston Children's Hospital (BCH) (N = 150) and Cincinnati Children's Hospital and Medical Center (CCHMC) (N = 152). Two algorithms were created: (1) a rule-based algorithm implementing the ASD criteria from the Diagnostic and Statistical Manual of Mental Disorders, 4th edition; and (2) a predictive classifier. The positive predictive values (PPVs) achieved by these algorithms were compared to an ICD-9 code baseline. We clustered the patients based on grouped ICD-9 codes and evaluated subgroups. The rule-based algorithm produced the best PPV: (a) BCH: 0.885 vs. 0.273 (baseline); (b) CCHMC: 0.840 vs. 0.645 (baseline); (c) combined: 0.864 vs. 0.460 (baseline). A validation at the Children's Hospital of Philadelphia yielded a PPV of 0.848. Clustering analyses of comorbidities in the large three-site cohort (N = 20,658 ASD patients) identified psychiatric, developmental, and seizure disorder clusters. In a large cross-institutional cohort, co-occurrence patterns of comorbidities in ASD provide further hypothetical evidence for distinct courses in ASD. The proposed automated algorithms for cohort selection open avenues for other large-scale EHR studies and individualized treatment of ASD.

  15. Earthquake—explosion discrimination using genetic algorithm-based boosting approach

    Science.gov (United States)

    Orlic, Niksa; Loncaric, Sven

    2010-02-01

    An important and challenging problem in seismic data processing is to discriminate between natural seismic events such as earthquakes and artificial seismic events such as explosions. Many automatic techniques for seismogram classification have been proposed in the literature. Most of these methods take a similar approach: a predefined set of features, chosen by ad hoc feature selection criteria, is extracted from the seismogram waveform or spectral data, and these features are used for signal classification. In this paper we propose a novel approach to seismogram classification. A specially formulated genetic algorithm is employed to automatically search for a near-optimal seismogram feature set, instead of using ad hoc feature selection criteria. A boosting method is added to the genetic algorithm when searching for multiple features, in order to improve classification performance. A learning set of seismogram data is used by the genetic algorithm to discover a near-optimal feature set, which is then used for seismogram classification. The described method is developed to classify seismograms into two groups, and a brief overview of the method's extension to multiple-group classification is given. For method verification, a learning set consisting of 40 local earthquake seismograms and 40 explosion seismograms was used. The method was validated on a seismogram set consisting of 60 local earthquake seismograms and 60 explosion seismograms, with correct classification of 85%.

  16. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    Science.gov (United States)

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to include acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
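
    The K matrix mentioned above comes from the classical "optimal quaternion" (Davenport q-method) construction. A minimal sketch of that construction follows; the vector-observation inputs and weights are illustrative assumptions, and the filter's velocity/acceleration measurement model is not reproduced.

        # Minimal sketch of Davenport's q-method; inputs are illustrative.
        import numpy as np

        def q_method(body_vecs, ref_vecs, weights):
            """Return the quaternion (x, y, z, w) maximizing Wahba's gain."""
            B = sum(w * np.outer(b, r)
                    for w, b, r in zip(weights, body_vecs, ref_vecs))
            z = np.array([B[1, 2] - B[2, 1],
                          B[2, 0] - B[0, 2],
                          B[0, 1] - B[1, 0]])
            sigma = np.trace(B)
            K = np.zeros((4, 4))
            K[:3, :3] = B + B.T - sigma * np.eye(3)
            K[:3, 3] = z
            K[3, :3] = z
            K[3, 3] = sigma
            # Optimal attitude = eigenvector of K's largest eigenvalue.
            eigvals, eigvecs = np.linalg.eigh(K)
            return eigvecs[:, -1]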

  17. Ship Block Transportation Scheduling Problem Based on Greedy Algorithm

    Directory of Open Access Journals (Sweden)

    Chong Wang

    2016-05-01

    Full Text Available Ship block transportation problems are crucial issues to address in reducing construction cost and improving the productivity of shipyards. Shipyards aim to maximize the workload balance of transporters under the time constraint that all blocks must be transported within the planning horizon. This process involves three types of penalty time: empty transporter travel time, delay time, and tardy time. This study aims to minimize the sum of these penalty times. First, this study formulates the ship block transportation problem with a generalized block transportation restriction on multi-type transporters. Second, the problem is transformed into the classical traveling salesman problem and assignment problem through a reasonable model simplification and by adding a virtual node to the proposed directed graph. Then, a heuristic based on a greedy algorithm is proposed to simultaneously assign blocks to available transporters and sequence the blocks for each transporter. Finally, numerical experiments are used to validate the model, and the results show that the proposed algorithm is effective in realizing the efficient use of transporters in shipyards. Numerical simulation results demonstrate the promising application of the proposed method to efficiently improve the utilization of transporters and to reduce the cost of ship block logistics for shipyards.
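
    As a rough illustration of the greedy idea, the sketch below repeatedly assigns the block that currently incurs the least extra penalty (empty travel plus lateness). The travel-time functions and the penalty definition here are my assumptions, not the paper's exact model.

        # Illustrative greedy assignment/sequencing sketch (assumed model).
        def greedy_schedule(blocks, transporters, empty_travel, loaded_travel):
            """blocks: list of (pickup, dropoff, due_time); transporters: ids."""
            pos = {t: "depot" for t in transporters}   # current location
            clock = {t: 0.0 for t in transporters}     # time transporter is free
            plan = {t: [] for t in transporters}
            remaining = list(blocks)
            while remaining:
                best = None
                for t in transporters:
                    for blk in remaining:
                        pickup, dropoff, due = blk
                        empty = empty_travel(pos[t], pickup)
                        finish = clock[t] + empty + loaded_travel(pickup, dropoff)
                        penalty = empty + max(0.0, finish - due)  # empty + tardy
                        if best is None or penalty < best[0]:
                            best = (penalty, t, blk, finish, dropoff)
                _, t, blk, finish, dropoff = best
                plan[t].append(blk)
                clock[t], pos[t] = finish, dropoff
                remaining.remove(blk)
            return plan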

  18. Time-advance algorithms based on Hamilton's principle

    International Nuclear Information System (INIS)

    Lewis, H.R.; Kostelec, P.J.

    1993-01-01

    Time-advance algorithms based on Hamilton's variational principle are being developed for application to problems in plasma physics and other areas. Hamilton's principle was applied previously to derive a system of ordinary differential equations in time whose solution provides an approximation to the evolution of a plasma described by the Vlasov-Maxwell equations. However, the variational principle was not used to obtain an algorithm for solving the ordinary differential equations numerically. The present research addresses the numerical solution of systems of ordinary differential equations via Hamilton's principle. The basic idea is first to choose a class of functions for approximating the solution of the ordinary differential equations over a specific time interval. Then the parameters in the approximating function are determined by applying Hamilton's principle exactly within the class of approximating functions. For example, if an approximate solution is desired between time t and time t + Δ t, the class of approximating functions could be polynomials in time up to some degree. The issue of how to choose time-advance algorithms is very important for achieving efficient, physically meaningful computer simulations. The objective is to reliably simulate those characteristics of an evolving system that are scientifically most relevant. Preliminary numerical results are presented, including comparisons with other computational methods
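
    A toy illustration of this idea for a harmonic oscillator, with a piecewise-linear approximating class and midpoint quadrature (my own derivation, not the authors' code): requiring the discrete action to be stationary at each interior node yields the explicit update below.

        # Toy variational time advance for L = v**2/2 - w**2 * q**2 / 2.
        import numpy as np

        def variational_step(q_prev, q_curr, h, w):
            # Discrete Euler-Lagrange equation:
            # D2 Ld(q_prev, q_curr) + D1 Ld(q_curr, q_next) = 0, solved for q_next.
            a = 1.0 / h + h * w**2 / 4.0
            rhs = (2 * q_curr - q_prev) / h - h * w**2 * (q_prev + 2 * q_curr) / 4.0
            return rhs / a

        h, w = 0.1, 1.0
        q = [1.0, np.cos(w * h)]          # start near the exact solution cos(w t)
        for _ in range(200):
            q.append(variational_step(q[-2], q[-1], h, w))
        print(q[-1], np.cos(w * h * (len(q) - 1)))  # stays close to the exact cosine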

  19. Quad-Rotor Helicopter Autonomous Navigation Based on Vanishing Point Algorithm

    Directory of Open Access Journals (Sweden)

    Jialiang Wang

    2014-01-01

    Full Text Available Quad-rotor helicopters are becoming increasingly popular, as they can carry out many flight missions in challenging environments with a lower risk of damaging themselves and their surroundings. They are employed in many applications, from military operations to civilian tasks. Quad-rotor helicopter autonomous navigation based on the vanishing point fast estimation (VPFE) algorithm using the clustering principle is implemented in this paper. For images collected by the camera of the quad-rotor helicopter, the system preprocesses the image, removes noise interference, extracts edges using the Canny operator, and extracts straight lines by the randomized Hough transform (RHT) method. The system then obtains the position of the vanishing point, regards it as the destination point, and controls the autonomous navigation of the quad-rotor helicopter by continuous correction according to the calculated navigation error. The experimental results show that the quad-rotor helicopter can perform destination navigation well in an indoor environment.
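
    A rough sketch of the described pipeline using OpenCV follows; cv2.HoughLinesP serves as a stand-in for the randomized Hough transform, and the median of the pairwise line intersections is a simplification of the paper's clustering-based VPFE estimate.

        # Sketch: Canny edges -> line segments -> intersection cloud -> centre.
        import cv2
        import numpy as np

        def vanishing_point(gray):
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                    minLineLength=40, maxLineGap=5)
            segs = [l[0] for l in lines] if lines is not None else []
            pts = []
            for i in range(len(segs)):
                for j in range(i + 1, len(segs)):
                    x1, y1, x2, y2 = segs[i]
                    x3, y3, x4, y4 = segs[j]
                    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
                    if abs(d) < 1e-6:
                        continue  # (nearly) parallel segments: no intersection
                    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
                    pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
            if not pts:
                return None
            return np.median(np.array(pts), axis=0)  # robust cloud centre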

  20. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results in the deep Web; they rely on the different structures of Web pages and the various design modes of databases. However, some semantic meanings and relations are ignored. Therefore, in this paper we present an approach for post-processing deep Web query results based on a domain ontology, which can exploit these semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. The feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limitation of Web page structures when extracting data information but also makes use of the semantic meanings of the domain ontology. After extracting the basic information of Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall results show that our proposed method is feasible and efficient.

  1. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    Directory of Open Access Journals (Sweden)

    Chung-Cheng Chiu

    2016-06-01

    Full Text Available Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods.

  2. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    Science.gov (United States)

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
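
    The records above do not reproduce CegaHE's adjustment equation, so the sketch below only illustrates the general mechanism: compute the histogram-equalization mapping, then cap the gaps between adjacent output levels to curb over-enhancement. The cap rule is a stand-in assumption, not the paper's equation.

        # Generic gap-capped histogram equalization (stand-in for CegaHE).
        import numpy as np

        def he_with_gap_cap(img, max_gap=8):
            hist = np.bincount(img.ravel(), minlength=256)
            cdf = np.cumsum(hist) / img.size
            mapping = np.round(255 * cdf)
            gaps = np.diff(mapping, prepend=mapping[0])
            gaps = np.clip(gaps, 0, max_gap)          # cap jumps between levels
            adjusted = np.cumsum(gaps)
            adjusted = 255 * adjusted / adjusted[-1]  # re-stretch to full range
            return adjusted.astype(np.uint8)[img]     # apply as a lookup table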

  3. An Efficient Sleepy Algorithm for Particle-Based Fluids

    Directory of Open Access Journals (Sweden)

    Xiao Nie

    2014-01-01

    Full Text Available We present a novel Smoothed Particle Hydrodynamics (SPH) based algorithm for efficiently simulating compressible and weakly compressible particle fluids. Prior particle-based methods simulate all fluid particles; however, in many cases some particles appearing to be at rest can be safely ignored without notably affecting the fluid flow behavior. To identify these particles, a novel sleepy strategy is introduced. By utilizing this strategy, only a portion of the fluid particles requires computational resources, and thus an obvious performance gain can be achieved. In addition, in order to resolve the unphysical clumping issue caused by tensile instability in SPH-based methods, a new artificial repulsive force is introduced. We demonstrate that our approach can be easily integrated with existing SPH-based methods to improve efficiency without sacrificing visual quality.
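
    A minimal sketch of the sleepy bookkeeping alone; the thresholds and the wake-up rule are my assumptions, and the actual SPH force evaluation is omitted.

        # Sleepy-particle bookkeeping sketch (assumed thresholds, no SPH solver).
        import numpy as np

        V_EPS, REST_FRAMES, WAKE_RADIUS = 1e-3, 10, 0.1

        def update_sleep_state(pos, vel, rest_count, asleep):
            slow = np.linalg.norm(vel, axis=1) < V_EPS
            rest_count = np.where(slow, rest_count + 1, 0)
            asleep = rest_count >= REST_FRAMES
            # Wake sleepers with an awake particle nearby (O(n^2) for clarity).
            awake_pos = pos[~asleep]
            for i in np.flatnonzero(asleep):
                if (np.linalg.norm(awake_pos - pos[i], axis=1) < WAKE_RADIUS).any():
                    asleep[i], rest_count[i] = False, 0
            return rest_count, asleep

        # In the main loop, only pos[~asleep] / vel[~asleep] enter the SPH solver.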

  4. Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations

    National Research Council Canada - National Science Library

    Zilberman, Eric R

    2006-01-01

    ...), uses the marginal frequency distribution and the adaptive threshold binarization algorithm to determine the start and stop frequencies of the modulation energy to locate and adapt the size of the cropping window...

  5. Ant-based extraction of rules in simple decision systems over ontological graphs

    Directory of Open Access Journals (Sweden)

    Pancerz Krzysztof

    2015-06-01

    Full Text Available In the paper, the problem of extracting complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with a dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose a heuristic algorithm, utilizing the ant-based clustering approach, that searches the semantic spaces of concepts presented by means of ontological graphs. Concepts included in the semantic spaces are values of attributes describing objects in simple decision systems.

  6. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a backpropagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we apply GA-BPN to image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is utilized to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to restore the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  7. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video

    Directory of Open Access Journals (Sweden)

    Gil-beom Lee

    2017-03-01

    Full Text Available Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.

  8. A Coupled User Clustering Algorithm Based on Mixed Data for Web-Based Learning Systems

    Directory of Open Access Journals (Sweden)

    Ke Niu

    2015-01-01

    Full Text Available In traditional Web-based learning systems, learning behaviors are insufficiently analyzed and personalized study guidance is limited, so several user clustering algorithms have been introduced. While analyzing behaviors with these algorithms, researchers generally focus on continuous data but easily neglect discrete data, both of which are generated from online learning actions. Moreover, there are implicit coupled interactions among the data that are frequently ignored by these algorithms. Therefore, a mass of significant information which could positively affect clustering accuracy is neglected. To solve the above issues, we propose a coupled user clustering algorithm for Web-based learning systems that takes into account both discrete and continuous data, as well as intracoupled and intercoupled interactions of the data. The experimental results in this paper demonstrate that the proposed algorithm outperforms existing approaches.

  9. Super-pixel extraction based on multi-channel pulse coupled neural network

    Science.gov (United States)

    Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, image description based on super-pixels has the advantages of less computation and easier perceptual interpretation, and has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its contextual spatial structural information. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm, first dividing the image into blocks of equal size. Then, for each image block, the pixels adjacent to each seed with similar color are grouped into a super-pixel. Finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.

  10. Evaluation of Item-Based Top-N Recommendation Algorithms

    Science.gov (United States)

    2000-09-15

    Furthermore, one of the advantages of the item-based algorithm is that it has much smaller computational requirements ... items, utilized by many e-commerce sites, cannot take advantage of pre-computed user-to-user similarities. Consequently, even though the throughput of ... [figure residue removed; only the dataset table below is recoverable]

    Table 1: Characteristics of the evaluation datasets.

        Dataset     Users   Items   Non-Zeros
        ecommerce    6667   17491       91222
        catalog     50918   39080      435524
        ccard       42629   68793      398619
        skills       4374    2125       82612
        movielens     943    1682      100000
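
    The item-based top-N scheme evaluated in this report follows the textbook pattern sketched below (cosine item-item similarities, candidate scores summed over the user's items); the specific similarity measure and the absence of similarity pruning are my assumptions.

        # Textbook item-based top-N recommendation sketch.
        import numpy as np

        def item_based_topn(R, user, n=10):
            """R: users x items 0/1 interaction matrix; returns top-n item ids."""
            norms = np.linalg.norm(R, axis=0) + 1e-12
            S = (R.T @ R) / np.outer(norms, norms)   # item-item cosine similarity
            np.fill_diagonal(S, 0.0)
            scores = S @ R[user]                     # similarity to owned items
            scores[R[user] > 0] = -np.inf            # never recommend owned items
            return np.argsort(scores)[::-1][:n]

        R = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [1, 0, 1, 1]], dtype=float)
        print(item_based_topn(R, user=0, n=2))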

  11. The wind power prediction research based on mind evolutionary algorithm

    Science.gov (United States)

    Zhuang, Ling; Zhao, Xinjian; Ji, Tianming; Miao, Jingwen; Cui, Haina

    2018-04-01

    When wind power is connected to the power grid, its fluctuating, intermittent and random characteristics affect the stability of the power system. Wind power prediction can guarantee power quality and reduce the operating cost of the power system. Several traditional wind power prediction methods have notable limitations. On this basis, a wind power prediction method based on the Mind Evolutionary Algorithm (MEA) is put forward and a prediction model is provided. The experimental results demonstrate that MEA performs efficiently in terms of wind power prediction. The MEA method has broad prospects for engineering application.

  12. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    Science.gov (United States)

    Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting

    2010-12-01

    We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on a chaotic map whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.

  13. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lifang

    2010-01-01

    Full Text Available We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on a chaotic map whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.
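
    A spatial-domain sketch of the shuffle-then-embed idea from the two records above; the papers operate on JPEG coefficients and select the chaotic-map parameters with a genetic algorithm, both of which are simplified away here.

        # Chaos-shuffled LSB embedding sketch (grayscale array stand-in for JPEG).
        import numpy as np

        def logistic_perm(n, x0=0.7, r=3.99):
            x, keys = x0, np.empty(n)
            for i in range(n):
                x = r * x * (1 - x)     # logistic map iteration
                keys[i] = x
            return np.argsort(keys)     # chaotic sequence -> permutation

        def embed(cover, bits):
            shuffled = bits[logistic_perm(len(bits))]    # chaos-shuffled order
            flat = cover.ravel().copy()
            flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | shuffled
            return flat.reshape(cover.shape)

        def extract(stego, n):
            bits = stego.ravel()[:n] & 1
            inv = np.argsort(logistic_perm(n))           # invert the permutation
            return bits[inv]

        cover = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
        msg = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
        assert (extract(embed(cover, msg), len(msg)) == msg).all()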

  14. Entropy-Based Algorithm for Supply-Chain Complexity Assessment

    Directory of Open Access Journals (Sweden)

    Boris Kriheli

    2018-03-01

    Full Text Available This paper considers a graph model of hierarchical supply chains. The goal is to measure the complexity of links between different components of the chain, for instance, between the principal equipment manufacturer (a root node) and its suppliers (preceding supply nodes). The information entropy is used to serve as a measure of knowledge about the complexity of shortages and pitfalls in the relationships between the supply chain components under uncertainty. The concept of conditional (relative) entropy is introduced, which is a generalization of the conventional (non-relative) entropy. An entropy-based algorithm providing efficient assessment of the supply chain complexity as a function of the SC size is developed.
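
    A toy illustration of the two entropy measures involved; the joint distribution over link states is invented for the example.

        # Conventional and conditional entropy over supply-chain link states.
        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def conditional_entropy(joint):
            """H(X|Y) for a joint distribution P(X, Y) given as a 2-D array."""
            p_y = joint.sum(axis=0)
            h = 0.0
            for j, py in enumerate(p_y):
                if py > 0:
                    h += py * entropy(joint[:, j] / py)
            return h

        # X: supplier state (ok/shortage), Y: root-node state (ok/shortage).
        joint = np.array([[0.6, 0.1],
                          [0.1, 0.2]])
        print(entropy(joint.sum(axis=1)))   # H(X), the conventional entropy
        print(conditional_entropy(joint))   # H(X|Y), reduced by knowing Y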

  15. FPGA based algorithms for data reduction at Belle II

    Energy Technology Data Exchange (ETDEWEB)

    Muenchow, David; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Liu, Ming; Spruck, Bjoern [II. Physikalisches Institut, Universitaet Giessen (Germany)

    2011-07-01

    Belle II, the upgrade of the existing Belle experiment at Super-KEKB in Tsukuba, Japan, is an asymmetric e+e- collider with a design luminosity of 8×10^35 cm^-2 s^-1. At Belle II the estimated event rate is ≤30 kHz. The resulting data rate at the Pixel Detector (PXD) will be ≤7.2 GB/s. This data rate needs to be reduced to be able to process and store the data. A region of interest (ROI) selection is based upon two mechanisms: (a) a tracklet finder using the silicon strip detector and (b) the HLT using all other Belle II subdetectors. These ROIs and the pixel data are forwarded to an FPGA-based Compute Node for processing. Here a VHDL-based algorithm is implemented on the FPGA, benefiting from pipelining and parallelisation. For fast data handling we developed a dedicated memory management system for buffering and storing the data. The status of the implementation and performance tests of the memory manager and data reduction algorithm are presented.

  16. High-speed MRF-based segmentation algorithm using pixonal images

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Hassanpour, H.; Naimi, H. M.

    2013-01-01

    Segmentation is one of the most complicated procedures in image processing and plays an important role in image analysis. In this paper, an improved pixon-based method for image segmentation is proposed. In the proposed algorithm, complex partial differential equations (PDEs) are used as a kernel function to produce the pixonal image. Using this kernel function reduces image noise and prevents over-segmentation when the pixon-based method is used. Utilising the PDE-based method leads to the elimination of some unnecessary details and results in fewer pixons, faster performance and more robustness against unwanted environmental noises. As the next step, the appropriate pixons are extracted and, eventually, the image is segmented with the use of a Markov random field. The experimental results indicate that the proposed pixon-based approach has a reduced computational load...

  17. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    Science.gov (United States)

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing

  18. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    Science.gov (United States)

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  19. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based detection of vehicles in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. Practical vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filtering, feature detection, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes a tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained from the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.

  20. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing materials, this paper studied two different interpolation methods, Lagrange interpolation and Hermite interpolation, on the electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated from the interpolated electromagnetic parameters is, on the whole, consistent with that obtained through experiment. - Highlights: • We use interpolation algorithms to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Reflection loss calculated from interpolated parameters is consistent with experiment
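
    The comparison described above can be reproduced in miniature with scipy; the sample points below are made up, and the derivative values needed by Hermite interpolation are approximated by finite differences.

        # Lagrange vs. Hermite interpolation on made-up permittivity samples.
        import numpy as np
        from scipy.interpolate import lagrange, CubicHermiteSpline

        x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # e.g. filler volume fraction
        y = np.array([1.0, 1.8, 2.9, 4.5, 6.1])   # e.g. measured permittivity
        dydx = np.gradient(y, x)                  # derivative estimates for Hermite

        f_lag = lagrange(x, y)                    # global Lagrange polynomial
        f_her = CubicHermiteSpline(x, y, dydx)    # piecewise Hermite spline

        xq = np.linspace(0, 8, 9)
        print(f_lag(xq))
        print(f_her(xq))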

  1. A class of kernel based real-time elastography algorithms.

    Science.gov (United States)

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pair of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed one, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real-time using Java where the computations are serially executed or parallely executed in multiple processors with efficient memory management. Simulation results using finite element modeling simulation phantom show that the proposed method significantly improves the strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% as compared to other reported techniques in the literature. Strain images obtained for the experimental phantom as well as in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other reported techniques in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. A Karnaugh-Map based fingerprint minutiae extraction method

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Singla

    2010-07-01

    Full Text Available Fingerprint is one of the most promising methods among all the biometric techniques and has been used for personal authentication for a long time because of its wide acceptance and reliability. Features (minutiae) are extracted from the fingerprint in question and are compared with the features already stored in the database for authentication. Crossing number (CN) is the most commonly used minutiae extraction method for fingerprints. In this paper, a new Karnaugh-Map-based fingerprint minutiae extraction method has been proposed and discussed. In the proposed algorithm the 8 neighbors of a pixel in a 3×3 window are arranged as 8 bits of a byte and the corresponding hexadecimal (hex) value is calculated. These hex values are simplified using the standard Karnaugh-Map (K-map) technique to obtain the minimized logical expression. Experiments conducted on the FVC2002/Db1_a database reveal that the developed method is better than the crossing number (CN) method.
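
    For reference, a minimal version (my own) of the standard crossing-number computation that the paper compares against, applied to a thinned binary fingerprint image:

        # Crossing-number minutiae extraction on a thinned binary skeleton.
        import numpy as np

        # 8 neighbours of a pixel in a 3x3 window, in clockwise order.
        OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]

        def crossing_number(skel, r, c):
            p = [skel[r + dr, c + dc] for dr, dc in OFFSETS]
            return sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2

        def minutiae(skel):
            ends, bifs = [], []
            for r in range(1, skel.shape[0] - 1):
                for c in range(1, skel.shape[1] - 1):
                    if skel[r, c]:
                        cn = crossing_number(skel, r, c)
                        if cn == 1:
                            ends.append((r, c))    # ridge ending
                        elif cn == 3:
                            bifs.append((r, c))    # ridge bifurcation
            return ends, bifs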

  3. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    Science.gov (United States)

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on long short-term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense, provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, and we demonstrate the superiority of our LSTM-based approach in the sequential prediction task on different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to conventional methods over several different benchmark real-life data sets.

  4. New Collaborative Filtering Algorithms Based on SVD++ and Differential Privacy

    Directory of Open Access Journals (Sweden)

    Zhengzheng Xian

    2017-01-01

    Full Text Available Collaborative filtering technology has been widely used in recommender systems, and its implementation is supported by the large amount of real and reliable user data from the big-data era. However, as users' information-security awareness increases, these data are reduced or their quality becomes worse. Singular Value Decomposition (SVD) is one of the common matrix factorization methods used in collaborative filtering; it introduces the bias information of users and items and is realized by algebraic feature extraction. SVD++, a derivative model of SVD, achieves better predictive accuracy due to the addition of implicit feedback information. Differential privacy has a strict, provable definition and has become an effective means of preventing attackers from indirectly deducing personal privacy information using background knowledge. In this paper, differential privacy is applied to the SVD++ model through three approaches: gradient perturbation, objective-function perturbation, and output perturbation. Through theoretical derivation and experimental verification, the proposed algorithms better protect the privacy of the original data while ensuring predictive accuracy. In addition, an effective scheme is given that can measure the privacy-protection strength and predictive accuracy, and a reasonable range for selection of the differential privacy parameter is provided.
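
    Of the three approaches, gradient perturbation is the most direct to sketch. The code below applies it to plain matrix factorization (the SVD++ bias and implicit-feedback terms are omitted), and the Laplace-noise calibration is schematic rather than a proven privacy accounting.

        # Schematic gradient-perturbation DP training for matrix factorization.
        import numpy as np

        def dp_mf(ratings, n_users, n_items, k=8, lr=0.01, lam=0.05,
                  epochs=20, epsilon=1.0, clip=1.0, seed=0):
            rng = np.random.default_rng(seed)
            P = 0.1 * rng.standard_normal((n_users, k))
            Q = 0.1 * rng.standard_normal((n_items, k))
            scale = clip / epsilon          # Laplace scale per clipped gradient
            for _ in range(epochs):
                for u, i, r in ratings:     # ratings: iterable of (user, item, value)
                    err = r - P[u] @ Q[i]
                    g_p = np.clip(err * Q[i] - lam * P[u], -clip, clip)
                    g_q = np.clip(err * P[u] - lam * Q[i], -clip, clip)
                    P[u] += lr * (g_p + rng.laplace(0.0, scale, k))
                    Q[i] += lr * (g_q + rng.laplace(0.0, scale, k))
            return P, Q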

  5. The Parallel Algorithm Based on Genetic Algorithm for Improving the Performance of Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2018-01-01

    Full Text Available The intercarrier interference (ICI problem of cognitive radio (CR is severe. In this paper, the machine learning algorithm is used to obtain the optimal interference subcarriers of an unlicensed user (un-LU. Masking the optimal interference subcarriers can suppress the ICI of CR. Moreover, the parallel ICI suppression algorithm is designed to improve the calculation speed and meet the practical requirement of CR. Simulation results show that the data transmission rate threshold of un-LU can be set, the data transmission quality of un-LU can be ensured, the ICI of a licensed user (LU is suppressed, and the bit error rate (BER performance of LU is improved by implementing the parallel suppression algorithm. The ICI problem of CR is solved well by the new machine learning algorithm. The computing performance of the algorithm is improved by designing a new parallel structure and the communication performance of CR is enhanced.

  6. A Novel Immune-Inspired Shellcode Detection Algorithm Based on Hyperellipsoid Detectors

    Directory of Open Access Journals (Sweden)

    Tianliang Lu

    2018-01-01

    Full Text Available Shellcodes are machine language codes injected into target programs in the form of network packets or malformed files. Shellcodes can trigger buffer overflow vulnerabilities and execute malicious instructions. The signature matching technology used by antivirus software or intrusion detection systems has a low detection rate for unknown or polymorphic shellcodes; to solve this problem, an immune-inspired shellcode detection algorithm, named ISDA, was proposed. Static analysis and dynamic analysis were both applied. The shellcodes were disassembled to assembly instructions during static analysis, and for dynamic analysis the API function sequences of shellcodes were obtained by simulated execution to capture the behavioral features of polymorphic shellcodes. The extracted features of shellcodes were encoded to antigens based on the n-gram model. Immature detectors become mature after immune tolerance based on the negative selection algorithm. To improve the nonself space coverage rate, the immune detectors were encoded as hyperellipsoids. To generate better antibody offspring, the detectors were optimized through a clonal selection algorithm with genetic mutation. Finally, shellcode samples were collected and tested, and the results show that the proposed method has higher detection accuracy for both non-encoded and polymorphic shellcodes.

  7. A Detection Algorithm for the BOC Signal Based on Quadrature Channel Correlation

    Directory of Open Access Journals (Sweden)

    Bo Qian

    2018-01-01

    Full Text Available In order to solve the problem of detecting a BOC signal, which uses a long-period pseudo random sequence, an algorithm is presented based on quadrature channel correlation. The quadrature channel correlation method eliminates the autocorrelation component of the carrier wave, allowing for the extraction of the absolute autocorrelation peaks of the BOC sequence. If the same lag difference and height difference exist for the adjacent peaks, the BOC signal can be detected effectively using a statistical analysis of the multiple autocorrelation peaks. The simulation results show that the interference of the carrier wave component is eliminated and the autocorrelation peaks of the BOC sequence are obtained effectively without demodulation. The BOC signal can be detected effectively when the SNR is greater than −12 dB. The detection ability can be improved further by increasing the number of sampling points. The higher the ratio of the square wave subcarrier speed to the pseudo random sequence speed is, the greater the detection ability is with a lower SNR. The algorithm presented in this paper is superior to the algorithm based on the spectral correlation.

  8. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    International Nuclear Information System (INIS)

    Chouakri, S A; Djaafri, O; Taleb-Ahmed, A

    2013-01-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission over a telecommunication channel. Basically, the proposed ECG compression algorithm is built on the wavelet transform, leading to low/high frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested upon a set of ECG records extracted from the MIT-BIH Arrhythmia Database including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the restored ECG signal, where the different ECG waves are recovered correctly.

  9. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI)

    International Nuclear Information System (INIS)

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-01-01

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting

  10. Hausdorff-Based RC and IESIL Combined Positioning Algorithm for Underwater Geomagnetic Navigation

    Directory of Open Access Journals (Sweden)

    Lin Yi

    2010-01-01

    Full Text Available This paper presents a primitive solution with a novel scheme and algorithm for Underwater geoMagnetic Navigation (UMN), which is now a hot topic in navigation research. UMN as an independent or supplementary technique can theoretically supply accurate locations for marine vehicles, but in practice there are plenty of restrictions on UMN's application (e.g., geomagnetic daily variation). After an analysis of the theoretical model of geomagnetic positioning in the correlation-matching mode from the viewpoint of pattern recognition, this paper proposes an appropriate matching scenario and a combined positioning algorithm for UMN. The subalgorithm of Hausdorff-based Relative Correlation (RC), corresponding to the pattern classification module, implements the coarse positioning, and the subalgorithm of Isograms Equidistance-Segmenting the Intersection Lines (IESIL), associated with the feature extraction module, carries out the fine positioning. Experiments based on both a simulation platform and real-surveyed data validate the new algorithm, and its efficiency and accuracy are also discussed. It can be concluded that the work introduced in this paper gives an initial, practical validation of UMN's potential.
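
    The coarse stage relies on a Hausdorff-type distance between the measured geomagnetic profile and map candidates; a plain symmetric Hausdorff distance, in my own minimal version, looks as follows.

        # Plain (symmetric) Hausdorff distance between two point sets.
        import numpy as np

        def directed_hausdorff(A, B):
            """max over a in A of min over b in B of ||a - b|| (A, B: n x d)."""
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
            return d.min(axis=1).max()

        def hausdorff(A, B):
            return max(directed_hausdorff(A, B), directed_hausdorff(B, A))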

  11. Resizing Technique-Based Hybrid Genetic Algorithm for Optimal Drift Design of Multistory Steel Frame Buildings

    Directory of Open Access Journals (Sweden)

    Hyo Seon Park

    2014-01-01

    Full Text Available Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for the convergence, a genetic algorithm is combined with a resizing technique that is an efficient optimal technique to control the drift of buildings without the repetitive structural analysis. The resizing technique-based hybrid genetic algorithm proposed in this paper is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.

  12. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.

  13. Generalized SMO algorithm for SVM-based multitask learning.

    Science.gov (United States)

    Cai, Feng; Cherkassky, Vladimir

    2012-06-01

    Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.

  14. Particle swarm optimization algorithm based low cost magnetometer calibration

    Science.gov (United States)

    Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provide inertial digital data, from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on the Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor values of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are also statistically significant. The algorithm can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
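
    A minimal PSO sketch for this calibration: calibrated readings (m - b) / s should have a constant magnitude, so the cost is the variance of the magnitudes over the samples. The swarm parameters below are common defaults, not the paper's values.

        # PSO estimation of per-axis magnetometer bias b and scale s.
        import numpy as np

        def calibrate(mags, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)

            def cost(theta):
                b = theta[:3]
                s = np.abs(theta[3:]) + 1e-9        # guard against zero scale
                return np.linalg.norm((mags - b) / s, axis=1).var()

            dim = 6
            x = np.hstack([rng.uniform(-0.5, 0.5, (n_particles, 3)),   # biases
                           rng.uniform(0.5, 1.5, (n_particles, 3))])   # scales
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                vals = np.array([cost(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                g = pbest[pbest_val.argmin()].copy()
            return g[:3], g[3:]    # estimated bias and scale factors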

  15. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

    Full Text Available This paper puts forward a prediction model based on a membrane computing optimization algorithm for chaotic time series; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) by using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt optimal actions. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares the forecast model with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).

  16. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    Science.gov (United States)

    Xiao, Xiaoxu

    2018-04-01

    In view of the various orders of FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise, thus optimizing warehouse logistics and improving the external processing speed of orders. In addition, the relevant intelligent algorithms for optimizing the stocking route problem are analyzed. The ant colony algorithm and genetic algorithm, which have good applicability, are studied in depth. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to prove the effectiveness of the parameter optimization.

  17. Smoke regions extraction based on two steps segmentation and motion detection in early fire

    Science.gov (United States)

    Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan

    2018-03-01

    Aiming at the problems of video-based smoke detection in early fire video, this paper proposes a method to extract suspected smoke regions by combining two-step segmentation and motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained by using a two-step segmentation method. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on six test videos containing smoke. The experimental results demonstrate the effectiveness of the proposed method.
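
    A sketch of the combination only, using OpenCV: an Otsu segmentation ANDed with a motion mask. Stock OpenCV ships no ViBe implementation, so the MOG2 background subtractor stands in for it here.

        # Otsu segmentation combined with a background-subtraction motion mask.
        import cv2

        bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

        def smoke_candidates(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            _, seg = cv2.threshold(gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            motion = bg.apply(frame_bgr)              # foreground/motion mask
            mask = cv2.bitwise_and(seg, motion)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)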

  18. A probabilistic fragment-based protein structure prediction algorithm.

    Directory of Open Access Journals (Sweden)

    David Simoncini

    Full Text Available Conformational sampling is one of the bottlenecks in fragment-based protein structure prediction approaches. They generally start with a coarse-grained optimization where mainchain atoms and centroids of side chains are considered, followed by a fine-grained optimization with an all-atom representation of proteins. It is during this coarse-grained phase that fragment-based methods sample intensely the conformational space. If the native-like region is sampled more, the accuracy of the final all-atom predictions may be improved accordingly. In this work we present EdaFold, a new method for fragment-based protein structure prediction based on an Estimation of Distribution Algorithm. Fragment-based approaches build protein models by assembling short fragments from known protein structures. Whereas the probability mass functions over the fragment libraries are uniform in the usual case, we propose an algorithm that learns from previously generated decoys and steers the search toward native-like regions. A comparison with Rosetta AbInitio protocol shows that EdaFold is able to generate models with lower energies and to enhance the percentage of near-native coarse-grained decoys on a benchmark of [Formula: see text] proteins. The best coarse-grained models produced by both methods were refined into all-atom models and used in molecular replacement. All atom decoys produced out of EdaFold's decoy set reach high enough accuracy to solve the crystallographic phase problem by molecular replacement for some test proteins. EdaFold showed a higher success rate in molecular replacement when compared to Rosetta. Our study suggests that improving low resolution coarse-grained decoys allows computational methods to avoid subsequent sampling issues during all-atom refinement and to produce better all-atom models. EdaFold can be downloaded from http://www.riken.jp/zhangiru/software.html [corrected].

  19. Species-specific audio detection: a comparison of three template-based detection algorithms using random forests

    Directory of Open Access Journals (Sweden)

    Carlos J. Corrada Bravo

    2017-04-01

    Full Text Available We developed a web-based, cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of template-based detection. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts the presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
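
    A sketch of the detection chain under stated assumptions (cosine similarity over sliding spectrogram windows, four summary features); the system's exact template matching and feature set may differ.

        # Template similarity vector -> summary features -> Random Forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def similarity_vector(spec, template):
            """Cosine similarity of the template at each time offset."""
            f, w = template.shape              # template must fit inside spec
            t_flat = template.ravel()
            t_norm = np.linalg.norm(t_flat) + 1e-12
            sims = []
            for start in range(spec.shape[1] - w + 1):
                win = spec[:f, start:start + w].ravel()
                sims.append(win @ t_flat / (np.linalg.norm(win) * t_norm + 1e-12))
            return np.array(sims)

        def features(sim):
            return [sim.max(), sim.mean(), sim.std(), np.percentile(sim, 90)]

        def train(specs, template, y):
            # One feature row per recording; y = species present (1) / absent (0).
            X = [features(similarity_vector(s, template)) for s in specs]
            return RandomForestClassifier(n_estimators=100).fit(X, y)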

  20. Comparative performance of an elitist teaching-learning-based optimization algorithm for solving unconstrained optimization problems

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2013-01-01

    Full Text Available Teaching-Learning-Based Optimization (TLBO) is a recently proposed population-based algorithm that simulates the teaching-learning process of the classroom. This algorithm requires only the common control parameters and no algorithm-specific control parameters. In this paper, the effect of elitism on the performance of the TLBO algorithm is investigated while solving unconstrained benchmark problems. The effects of common control parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 76 unconstrained benchmark functions with different characteristics, and its performance is compared with that of other well-known optimization algorithms. A statistical test is also performed to compare the results obtained using the different algorithms. The results prove the effectiveness of the proposed elitist TLBO algorithm.
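
    A compact sketch of TLBO with elitism on a minimization problem, consistent with the description above; only the common control parameters (population size, number of generations, elite size) appear. The update rules follow the standard teacher and learner phases, but this is an illustrative implementation, not the authors' code.

```python
import numpy as np

def elitist_tlbo(f, lb, ub, pop_size=30, generations=100, n_elite=2):
    dim = len(lb)
    pop = lb + (ub - lb) * np.random.rand(pop_size, dim)
    for _ in range(generations):
        fit = np.apply_along_axis(f, 1, pop)
        elite = pop[np.argsort(fit)[:n_elite]].copy()   # remember best solutions
        teacher = pop[np.argmin(fit)]
        # Teacher phase: move the class toward the teacher.
        tf = np.random.randint(1, 3)                    # teaching factor, 1 or 2
        cand = pop + np.random.rand(pop_size, dim) * (teacher - tf * pop.mean(axis=0))
        cand = np.clip(cand, lb, ub)
        better = np.apply_along_axis(f, 1, cand) < fit
        pop[better] = cand[better]
        # Learner phase: learn pairwise from a random classmate.
        fit = np.apply_along_axis(f, 1, pop)
        partner = np.random.permutation(pop_size)
        step = np.where((fit < fit[partner])[:, None],
                        pop - pop[partner], pop[partner] - pop)
        cand = np.clip(pop + np.random.rand(pop_size, dim) * step, lb, ub)
        better = np.apply_along_axis(f, 1, cand) < fit
        pop[better] = cand[better]
        # Elitism: the stored elite replace the current worst individuals.
        worst = np.argsort(np.apply_along_axis(f, 1, pop))[-n_elite:]
        pop[worst] = elite
    fit = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fit)], fit.min()

# x, fx = elitist_tlbo(lambda v: (v ** 2).sum(), np.full(5, -10.0), np.full(5, 10.0))
```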

  1. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    Science.gov (United States)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security is analyzed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.

  2. Fusion of phase congruency and harris algorithm for extraction of iris corner points

    CSIR Research Space (South Africa)

    Mabuza-Hocquet, G

    2015-12-01

    Full Text Available iris identification systems. Low cost devices used under uncontrolled environments acquire poor iris images with inconsistent illumination and specular reflections. These factors inflict challenges towards the accurate identification and extraction...

  3. Distributed Classification of Localization Attacks in Sensor Networks Using Exchange-Based Feature Extraction and Classifier

    Directory of Open Access Journals (Sweden)

    Su-Zhe Wang

    2016-01-01

    Full Text Available Secure localization under different forms of attack has become an essential task in wireless sensor networks. Despite significant research efforts in detecting malicious nodes, the problem of localization attack type recognition has not yet been well addressed. Motivated by this concern, we propose a novel exchange-based attack classification algorithm. This is achieved by a distributed expectation maximization extractor integrated with the PECPR-MKSVM classifier. First, the mixed distribution features based on probabilistic modeling are extracted using a distributed expectation maximization algorithm. After feature extraction, by introducing theory from support vector machines, an extensive contractive Peaceman-Rachford splitting method is derived to build the distributed classifier that diffuses the iterative calculation among neighbor sensors. To verify the efficiency of the distributed recognition scheme, four groups of experiments were carried out under various conditions. The average success rate of the proposed classification algorithm for external attacks in the presented experiments is excellent, reaching about 93.9% in some cases. These testing results demonstrate that the proposed algorithm produces a much higher recognition rate and is more robust and efficient, even in heavily malicious scenarios.

  4. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently: each has advantages and disadvantages compared with the others, which suggests that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent performance in fusing independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods, while all the advantages of the individual fusion algorithms are retained.
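
    The deterministic-initialization step can be sketched with scikit-learn's NMF, which accepts user-supplied factors via init='custom'. The W0/H0 values below are placeholders for the feature-degree-based initial weights described in the abstract; the data and dimensions are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.rand(64, 64))       # stacked fused-image data (illustrative)
k = 4
W0 = np.full((X.shape[0], k), X.mean())  # replace with feature-degree weights
H0 = np.full((k, X.shape[1]), X.mean())

model = NMF(n_components=k, init='custom', max_iter=500)
W = model.fit_transform(X, W=W0, H=H0)   # deterministic start instead of random
H = model.components_
```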

  5. Automatic feature learning using multichannel ROI based on deep structured algorithms for computerized lung cancer diagnosis.

    Science.gov (United States)

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2017-10-01

    This study aimed to analyze the ability of automatically generated features extracted by deep structured algorithms to support lung nodule CT image diagnosis, and to compare their performance with traditional computer-aided diagnosis (CADx) systems using hand-crafted features. All 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI based deep structured algorithms were designed and implemented in this study: a convolutional neural network (CNN), a deep belief network (DBN), and a stacked denoising autoencoder (SDAE). For comparison, we also implemented a CADx system using hand-crafted features including density, texture, and morphological features. The performance of every scheme was evaluated using 10-fold cross-validation with the area under the receiver operating characteristic curve (AUC) as the assessment index. The highest observed AUC was 0.899±0.018, achieved by the CNN, which was significantly higher than the traditional CADx with AUC=0.848±0.026. The results from the DBN were also slightly higher than those of the CADx, while the SDAE results were slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy stroke detectors, in the deep structured schemes. The study results showed that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can perform better than the current popular CADx. We believe deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well. Copyright © 2017. Published by Elsevier Ltd.

  6. Medical imaging in clinical applications algorithmic and computer-based approaches

    CERN Document Server

    Bhateja, Vikrant; Hassanien, Aboul

    2016-01-01

    This volume comprises 21 selected chapters, including two overview chapters: one devoted to abdominal imaging in clinical applications supported by computer-aided diagnosis approaches, and one on techniques for solving the pectoral muscle extraction problem in the preprocessing part of CAD systems for detecting breast cancer at an early stage using digital mammograms. The aim of this book is to stimulate further research in algorithmic and computer-based approaches to medical imaging applications and to apply them in real-world clinical settings. The book is divided into four parts. Part I: Clinical Applications of Medical Imaging; Part II: Classification and Clustering; Part III: Computer Aided Diagnosis (CAD) Tools and Case Studies; Part IV: Bio-inspired Computer Aided Diagnosis Techniques.

  7. Image Blocking Encryption Algorithm Based on Laser Chaos Synchronization

    Directory of Open Access Journals (Sweden)

    Shu-Ying Wang

    2016-01-01

    Full Text Available In view of digital image transmission security, a novel image encryption scheme based on laser chaos synchronization and the Arnold cat map is proposed. A parameter generated from the pixel values of the plain image influences the secret key. Sequences from the drive and response systems are pretreated by the same method and used in a block-wise encryption scheme for the plain image. Finally, pixel positions are scrambled by a general Arnold transformation. In the decryption process, the chaotic synchronization accuracy is fully considered and the relationship between synchronization quality and decryption is analyzed; the scheme has high precision, high efficiency, simplicity, flexibility, and good controllability. The experimental results show that the encrypted image has high security and good anti-jamming performance.
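
    The Arnold cat map stage is easy to sketch: the pixel positions of an N x N image are permuted by the map (x, y) -> (x + y, x + 2y) mod N, applied a chosen number of rounds. A minimal, illustrative version:

```python
import numpy as np

def arnold_scramble(img, rounds=1):
    """Permute pixel positions of a square N x N image with the Arnold cat map."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        nx, ny = (x + y) % n, (x + 2 * y) % n   # (x, y) -> (x + y, x + 2y) mod N
        nxt = np.empty_like(out)
        nxt[nx, ny] = out[x, y]                 # the map is a bijection (det = 1)
        out = nxt
    return out
```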

  8. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which entails a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data-transfer time and parallel-computing time. Further, exploiting the features of the different GPU memory types, we develop an improved scheme that uses shared memory instead of global memory, which further increases efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.

  9. Retinal biometrics based on Iterative Closest Point algorithm.

    Science.gov (United States)

    Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi

    2017-07-01

    The pattern of blood vessels in the eye is unique to each person because it rarely changes over time. Therefore, it is well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched by the centers of their optic discs. The image pairs were then aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons; for registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
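
    The final verification step reduces to a Jaccard similarity of two binary vessel masks; a minimal sketch (the acceptance threshold below is an assumption, not the paper's value):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """JSC of two boolean vessel masks: |A & B| / |A | B|."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

def same_person(mask_a, mask_b, threshold=0.35):  # threshold is illustrative
    return jaccard(mask_a, mask_b) >= threshold
```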

  10. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well across different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS) based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization, and disparity refinement to develop a stereo matching system. The performance of the DAS method is evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, uses fewer parameters, and is suitable for parallel computing.

  11. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    Science.gov (United States)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric cryptography algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that XORs the plaintext with the key. To improve the security of the symmetric stream cipher, we improvise with a Triple Transposition Key, developed from the transposition cipher, and also apply the Base64 algorithm as the final step of the encryption process. Experiments show that the produced ciphertext is sufficiently random.
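
    A toy sketch of the described construction: an XOR stream stage, a columnar transposition applied three times with different keys, and Base64 applied to the result. The key handling and the particular transposition variant are simplifying assumptions; decryption (the inverse transpositions and XOR) is omitted.

```python
import base64
from itertools import cycle

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Stream-cipher stage: XOR the plaintext with the repeating key.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def transpose(data: bytes, key: str) -> bytes:
    """Columnar transposition: read columns in the sorted order of the key."""
    cols = len(key)
    order = sorted(range(cols), key=lambda i: key[i])
    rows = [data[i:i + cols] for i in range(0, len(data), cols)]
    return bytes(row[c] for c in order for row in rows if c < len(row))

def encrypt(plaintext: bytes, stream_key: bytes, t_keys=('key1', 'key2', 'key3')):
    data = xor_stream(plaintext, stream_key)   # stream-cipher stage
    for k in t_keys:                           # triple transposition stage
        data = transpose(data, k)
    return base64.b64encode(data)              # Base64 finishing stage
```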

  12. A study of Hough Transform-based fingerprint alignment algorithms

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-10-01

    Full Text Available the implementation of each algorithm. The comparison is performed by considering the alignment results computed using each group of algorithms when varying number of minutiae points, rotation angle, and translation. In addition, the memory usage, computing time...

  13. Packet-Based Control Algorithms for Cooperative Surveillance and Reconnaissance

    National Research Council Canada - National Science Library

    Murray, Richard M

    2007-01-01

    ..., and repeated transmissions. Results include analysis and design of estimation and control algorithms in the presence of packet loss and across multi-hop data networks, distributed estimation and sensor fusion algorithms...

  14. A time domain phase-gradient based ISAR autofocus algorithm

    CSIR Research Space (South Africa)

    Nel, W

    2011-10-01

    Full Text Available Results on simulated and measured data show that the algorithm performs well. Unlike many other ISAR autofocus techniques, the algorithm does not make use of several computationally intensive iterations between the data and image domains as part

  15. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate. (geophysics, astronomy, and astrophysics)

  16. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing, and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to keep the transmission rate clear of the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  17. Effective arithmetic in finite fields based on Chudnovsky's multiplication algorithm

    OpenAIRE

    Atighehchi , Kévin; Ballet , Stéphane; Bonnecaze , Alexis; Rolland , Robert

    2016-01-01

    Thanks to a new construction of the Chudnovsky and Chudnovsky multiplication algorithm, we design efficient algorithms for both exponentiation and multiplication in finite fields. They are tailored to hardware implementation and allow computations to be parallelized, while maintaining a low number of bilinear multiplications.

  18. A Novel Quad Harmony Search Algorithm for Grid-Based Path Finding

    Directory of Open Access Journals (Sweden)

    Saso Koceski

    2014-09-01

    Full Text Available A novel approach to the problem of grid-based path finding is introduced. The method is a block-based search algorithm built on two algorithms: the quad-tree algorithm, which offers a great opportunity for decreasing the time needed to compute the solution, and the harmony search (HS) algorithm, a meta-heuristic used to obtain the optimal solution. This quad HS algorithm uses the quad-tree decomposition of free space in the grid to mark the free areas and treat each as a single node, which greatly speeds up execution. The results of the quad HS algorithm were compared to other meta-heuristic algorithms, i.e., ant colony, genetic algorithm, particle swarm optimization, and simulated annealing, and it obtained the best results in terms of time while still finding the optimal path.

  19. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    Science.gov (United States)

    Růžek, B.; Kolář, P.

    2009-04-01

    Solving inverse problems is a meaningful task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computing facilities are making great technical progress; therefore, the development of new and efficient algorithms and computer codes for both forward and inverse modeling remains relevant. ANNIT contributes to this stream, as it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and the response of the system by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way, so the number of forward evaluations is minimized. ANNIT is implemented in both MATLAB and SCILAB. Numerical tests show good

  20. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Science.gov (United States)

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e., log sigmoid and Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found to be in close agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which attests to the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
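
    The scheme can be sketched end to end on a toy problem, y' + y = 0 with y(0) = 1, using a log-sigmoid trial solution. SciPy has no GA/IPA pair under those names, so differential_evolution stands in for the global evolutionary stage and a gradient-based local minimizer for the interior-point refinement; the basis size, bounds, and collocation grid are all illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

xs = np.linspace(0.0, 1.0, 20)               # collocation points

def trial(p, x):
    """Trial solution y(x) = sum_i a_i * sigmoid(b_i * x + c_i)."""
    a, b, c = p.reshape(3, -1)
    return (a[:, None] / (1 + np.exp(-(b[:, None] * x + c[:, None])))).sum(0)

def fitness(p):
    h = 1e-5
    dy = (trial(p, xs + h) - trial(p, xs - h)) / (2 * h)
    residual = dy + trial(p, xs)             # example ODE: y' + y = 0
    bc = trial(p, np.array([0.0])) - 1.0     # boundary condition y(0) = 1
    return float((residual ** 2).sum() + (bc ** 2).sum())

bounds = [(-5, 5)] * 9                       # three basis functions, 9 parameters
rough = differential_evolution(fitness, bounds, seed=0)   # global (GA-like) stage
polished = minimize(fitness, rough.x)                     # local (IPA-like) stage
print("global error after refinement:", polished.fun)
```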

  1. Neighborhood Hypergraph Based Classification Algorithm for Incomplete Information System

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2015-01-01

    Full Text Available The problem of classification in incomplete information systems is a hot issue in intelligent information processing. The hypergraph is a new intelligent method for machine learning. However, it is hard to process an incomplete information system with the traditional hypergraph, for two reasons: (1) the hyperedges are generated randomly in the traditional hypergraph model; (2) the existing methods are unsuitable for incomplete information systems because of their missing values. In this paper, we propose a novel classification algorithm for incomplete information systems based on the hypergraph model and rough set theory. First, we initialize the hypergraph. Second, we classify the training set by the neighborhood hypergraph. Third, under the guidance of rough set theory, we replace the poor hyperedges. After that, a good classifier is obtained. The proposed approach is tested on 15 data sets from the UCI machine learning repository and compared with existing methods such as C4.5, SVM, Naive Bayes, and KNN. The experimental results show that the proposed algorithm has better performance in terms of precision, recall, AUC, and F-measure.

  2. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Directory of Open Access Journals (Sweden)

    Azmat Ullah

    Full Text Available In this paper, a hybrid heuristic scheme based on two different basis functions, i.e., log sigmoid and Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found to be in close agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which attests to the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.

  3. Disaster Monitoring using Grid Based Data Fusion Algorithms

    Directory of Open Access Journals (Sweden)

    Cătălin NAE

    2010-12-01

    Full Text Available This is a study of the application of Grid technology and high-performance parallel computing to a candidate algorithm for jointly accomplishing data fusion from different sensors. This includes applications for both image analysis and/or data processing for simultaneously tracking multiple targets in real time. The emphasis is on comparing the architectures of the serial and parallel algorithms, and characterizing the performance benefits achieved by the parallel algorithm with both on-ground and in-space hardware implementations. The improved performance levels achieved by the use of Grid technology (middleware) for Parallel Data Fusion are presented for the main metrics of interest in near-real-time applications, namely latency, total computation load, and total sustainable throughput. The objective of this analysis is, therefore, to demonstrate an implementation of multi-sensor data fusion and/or multi-target tracking functions within an integrated multi-node portable HPC architecture based on emerging Grid technology. The key metrics to be determined in support of ongoing system analyses include: required computational throughput in MFLOPS; latency between receipt of input data and resulting outputs; and scalability, processor utilization and memory requirements. Furthermore, the standard MPI functions are considered for inter-node communications in order to promote code portability across multiple HPC computer platforms, both in space and on-ground.

  4. Genetic Algorithm-Based Identification of Fractional-Order Systems

    Directory of Open Access Journals (Sweden)

    Shengxi Zhou

    2013-05-01

    Full Text Available Fractional calculus has become an increasingly popular tool for modeling the complex behaviors of physical systems from diverse domains. One of the key issues in applying fractional calculus to engineering problems is the parameter identification of fractional-order systems. A time-domain identification algorithm based on a genetic algorithm (GA) is proposed in this paper. The multi-variable parameter identification is converted into a parameter optimization problem by applying the GA to the identification of fractional-order systems. To evaluate the identification accuracy and stability, the time-domain output error considering the condition variation is designed as the fitness function for parameter optimization. The identification process is established under various noise levels and excitation levels. The effects of external excitation and of the noise level on the identification accuracy are analyzed in detail. The simulation results show that the proposed method can identify the parameters of both commensurate-rate and non-commensurate-rate fractional-order systems from data with noise. It is also observed that the excitation signal is an important factor influencing the identification accuracy of fractional-order systems.

  5. An airport surface surveillance solution based on fusion algorithm

    Science.gov (United States)

    Liu, Jianliang; Xu, Yang; Liang, Xuelin; Yang, Yihuang

    2017-01-01

    In this paper, we propose an airport surface surveillance solution that combines Multilateration (MLAT) and Automatic Dependent Surveillance-Broadcast (ADS-B). The moving target to be monitored is regarded as a linear stochastic hybrid system moving freely, and each surveillance technology is simplified as a sensor with white Gaussian noise. The dynamic model of the target and the observation model of the sensors are established in this paper. The sensor measurements are properly filtered by estimators to obtain estimation results for the current time. We then analyze the characteristics of the two proposed fusion solutions and choose the scheme based on sensor-estimation fusion for our surveillance solution. In the proposed fusion algorithm, the estimation error is quantified according to the output of the estimators, and the fusion weight of each sensor is calculated. The two estimation results are fused with these weights, and the position estimate of the target is computed accurately. Finally, the proposed solution and algorithm are validated by an illustrative target-tracking simulation.
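
    The fusion step is consistent with standard inverse-variance weighting: each sensor's estimate is weighted by the inverse of its quantified estimation error. A minimal sketch, assuming the error is expressed as a variance (the numbers in the usage comment are made up):

```python
import numpy as np

def fuse(estimates, error_vars):
    """Inverse-variance weighted fusion of per-sensor position estimates."""
    w = 1.0 / np.asarray(error_vars, dtype=float)
    w /= w.sum()
    return (w[:, None] * np.asarray(estimates, dtype=float)).sum(axis=0)

# e.g. MLAT (var 4.0) and ADS-B (var 1.0): the fused point leans toward ADS-B.
# fused = fuse([(100.0, 52.0), (101.0, 50.0)], [4.0, 1.0])
```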

  6. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Q. Zhou

    2017-07-01

    Full Text Available Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the next left image is matched against the current left image, EDC and RANSAC are performed iteratively. Since some mismatched points occasionally remain after these steps, RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
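
    The matching front end can be sketched directly with OpenCV: ORB features, brute-force Hamming matching, then RANSAC rejection of outliers. The fundamental-matrix model below stands in for the paper's Euclidean distance constraint, which OpenCV does not provide out of the box; thresholds and feature counts are illustrative.

```python
import cv2
import numpy as np

def robust_matches(img_left, img_right):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC outlier rejection on the matched correspondences.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = mask.ravel().astype(bool)
    return pts1[keep], pts2[keep]
```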

  7. Design of SVC Controller Based on Improved Biogeography-Based Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Feifei Dong

    2014-01-01

    Full Text Available Considering that common subsynchronous resonance (SSR) controllers cannot adapt to the time-varying and nonlinear behavior of a power system, the cosine migration model, an improved migration operator, and the mutative-scale chaos and Cauchy mutation strategies are introduced into an improved biogeography-based optimization (IBBO) algorithm in order to design an optimal subsynchronous damping controller based on the mechanism of suppressing SSR by a static var compensator (SVC). The effectiveness of the improved controller is verified by eigenvalue analysis and electromagnetic simulations. The simulation results for the Jinjie plant indicate that the subsynchronous damping controller optimized by the IBBO algorithm can remarkably improve the damping of torsional modes, effectively suppress SSR, and ensure the safety and stability of the units and power grid operation. Moreover, the IBBO algorithm has the merits of faster search speed and higher search accuracy in seeking the optimal control parameters compared with traditional algorithms such as the BBO, PSO, and GA algorithms.

  8. Petri nets SM-cover-based on heuristic coloring algorithm

    Science.gov (United States)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In this paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The algorithm reduces the Petri net in order to lower the computational complexity, and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  9. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.

  10. GPR identification of voids inside concrete based on the support vector machine algorithm

    International Nuclear Information System (INIS)

    Xie, Xiongyao; Li, Pan; Qin, Hui; Liu, Lanbo; Nobes, David C

    2013-01-01

    Voids inside reinforced concrete, which affect structural safety, are identified from ground penetrating radar (GPR) images using a completely automatic method based on the support vector machine (SVM) algorithm. The entire process can be characterized in four steps: (1) the original SVM model is built by training on synthetic GPR data generated by finite difference time domain simulation, after data preprocessing, segmentation, and feature extraction. (2) The classification accuracy of different kernel functions is compared with the cross-validation method, and the penalty factor (c) of the SVM and the coefficient (σ²) of the kernel functions are optimized using the grid algorithm and the genetic algorithm. (3) To test the success of classification, this model is then verified and validated by applying it to another set of synthetic GPR data. The result shows a high success rate for classification. (4) This original classifier model is finally applied to a set of real GPR data to identify and classify voids. Compared with its application to synthetic data, the result is less than ideal until the original model is improved. In general, this study shows that the SVM exhibits promising performance in the GPR identification of voids inside reinforced concrete. Nevertheless, the recognition of the shape and distribution of voids may need further improvement. (paper)
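
    Step (2), the cross-validated search over the penalty factor c and the kernel coefficient, maps naturally onto scikit-learn's grid search (the genetic-algorithm alternative used in the paper is omitted here); the grid values and variable names are illustrative:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
# search.fit(features, labels)   # features from segmented GPR images
# best_params = search.best_params_
```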

  11. Multiple Sclerosis Identification Based on Fractional Fourier Entropy and a Modified Jaya Algorithm

    Directory of Open Access Journals (Sweden)

    Shui-Hua Wang

    2018-04-01

    Full Text Available Aim: Currently, the identification of multiple sclerosis (MS) by human experts may run into the problem of "normal-appearing white matter", which causes low sensitivity. Methods: In this study, we presented a computer vision based approach to identify MS in an automatic way. The proposed method first extracted the fractional Fourier entropy map from a specified brain image. Afterwards, it sent the features to a multilayer perceptron trained by a proposed improved parameter-free Jaya algorithm. We used cost-sensitive learning to handle the imbalanced data problem. Results: The 10 × 10-fold cross validation showed our method yielded a sensitivity of 97.40 ± 0.60%, a specificity of 97.39 ± 0.65%, and an accuracy of 97.39 ± 0.59%. Conclusions: We validated by experiments that the proposed improved Jaya performs better than the plain Jaya algorithm and other recent bioinspired algorithms in terms of classification performance and training speed. In addition, our method is superior to four state-of-the-art MS identification approaches.
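
    For reference, the plain parameter-free Jaya update on which the paper's improved variant builds moves every candidate toward the current best solution and away from the current worst; a minimal, illustrative sketch:

```python
import numpy as np

def jaya(f, lb, ub, pop_size=20, iters=200):
    """Plain Jaya: X' = X + r1*(best - |X|) - r2*(worst - |X|)."""
    dim = len(lb)
    pop = lb + (ub - lb) * np.random.rand(pop_size, dim)
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, pop)
        best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
        r1 = np.random.rand(pop_size, dim)
        r2 = np.random.rand(pop_size, dim)
        cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
        cand = np.clip(cand, lb, ub)
        better = np.apply_along_axis(f, 1, cand) < fit
        pop[better] = cand[better]       # greedy acceptance of improvements
    fit = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fit)], fit.min()
```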

  12. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms

    Science.gov (United States)

    Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan

    2017-01-01

    Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909

  13. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms.

    Science.gov (United States)

    Liu, Rensong; Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan

    2017-01-01

    Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interface (BCI). However, classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then, canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.

  14. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution: an algorithm may work very well on one set of images with, say, illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of any given classifier for this task. The study is based on the outcome of a comprehensive comparative analysis of combinations of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that successfully handles varying facial image conditions of illumination, aging, and facial expressions.

  15. An algorithm to construct Groebner bases for solving integration by parts relations

    International Nuclear Information System (INIS)

    Smirnov, Alexander V.

    2006-01-01

    This paper is a detailed description of an algorithm based on a generalized Buchberger algorithm for constructing Groebner-type bases associated with polynomials of shift operators. The algorithm is used to calculate Feynman integrals and has proved to be efficient in several complicated cases

  16. An Agent-Based Co-Evolutionary Multi-Objective Algorithm for Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    Rafał Dreżewski

    2017-08-01

    Full Text Available Algorithms based on the process of natural evolution are widely used to solve multi-objective optimization problems. In this paper we propose an agent-based co-evolutionary algorithm for multi-objective portfolio optimization. The proposed technique is compared experimentally to the genetic algorithm, the co-evolutionary algorithm, and a more classical approach, the trend-following algorithm. In the experiments, historical data from the Warsaw Stock Exchange is used to assess the performance of the compared algorithms. Finally, we draw conclusions from these experiments, showing the strong and weak points of all the techniques.

  17. A Novel Sub-pixel Measurement Algorithm Based on Mixed the Fractal and Digital Speckle Correlation in Frequency Domain

    Directory of Open Access Journals (Sweden)

    Zhangfang Hu

    2014-10-01

    Full Text Available The digital speckle correlation is a non-contact in-plane displacement measurement method based on machine vision. Motivated by the low accuracy and large amount of calculation of the traditional digital speckle correlation method in the spatial domain, we introduce a sub-pixel displacement measurement algorithm that employs a fast interpolation method based on fractal theory together with digital speckle correlation in the frequency domain. This algorithm overcomes both the blocking effect and the blurring caused by traditional interpolation methods, and the frequency-domain processing avoids the repeated searching of correlation recognition in the spatial domain; thus the amount of computation is largely reduced and the speed of information extraction is improved. A comparative experiment verifies that the proposed algorithm is effective.
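
    The frequency-domain correlation step can be sketched with FFT-based phase correlation, which locates the displacement peak without spatial-domain searching; here a simple quadratic fit around the peak supplies the sub-pixel refinement in place of the paper's fractal interpolation, so this is an illustrative stand-in rather than the authors' method.

```python
import numpy as np

def _subpixel_peak(y0, y1, y2):
    """Vertex of the parabola through (-1, y0), (0, y1), (1, y2)."""
    denom = y0 - 2 * y1 + y2
    return 0.5 * (y0 - y2) / denom if denom else 0.0

def phase_correlate(a, b):
    """Displacement of b relative to a (modulo image size), to sub-pixel."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    dy = _subpixel_peak(corr[(py - 1) % h, px], corr[py, px], corr[(py + 1) % h, px])
    dx = _subpixel_peak(corr[py, (px - 1) % w], corr[py, px], corr[py, (px + 1) % w])
    return py + dy, px + dx
```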

  18. Leads Detection Using Mixture Statistical Distribution Based CRF Algorithm from Sentinel-1 Dual Polarization SAR Imagery

    Science.gov (United States)

    Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting

    2017-04-01

    Synthetic Aperture Radar (SAR) is of great importance for polar remote sensing, since it provides continuous observations in all weather, day and night. SAR can be used to extract surface roughness information, characterized by the variance of dielectric properties across different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea-ice zone. An accurate spatial distribution of leads in the sea-ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea-ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering contextual information and the statistical characteristics of sea ice, to improve leads detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. For parameter estimation of the mixture statistical distribution, the Method of Logarithmic Cumulants (MoLC) is exploited for the single-distribution parameters, and the iterative Expectation Maximization (EM) algorithm is investigated to calculate the parameters of the mixture-distribution-based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial leads detection. Post-processing procedures, including an aspect-ratio constraint and spatial smoothing, are utilized to improve the visual result. The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a

  19. A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems

    Directory of Open Access Journals (Sweden)

    Xuhao Zhang

    2014-01-01

    Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during successive optimizations of the current solutions and then used as skeletons to improve the ability to seek optima, speed up the optimization process, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, the differing parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. By adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link splitting and the roulette approach are employed to choose FLs. Eighteen LSMDVRP instances in three groups are studied, and new optimal solution values are obtained for nine of them, with higher computation speed and reliability.

  20. Academic Activities Transaction Extraction Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Xiangqian Wang

    2017-01-01

    Full Text Available Extracting information about academic activity transactions from unstructured documents is a key problem in the analysis of the academic behavior of researchers. An academic activity transaction includes five elements: person, activity, object, attributes, and time phrases. The traditional approach to information extraction is to extract shallow text features and then recognize advanced features from the text with supervision. Since the information processing of the different levels is completed in steps, the errors generated at the various steps accumulate and affect the accuracy of the final results. Because the Deep Belief Network (DBN) model has the ability to learn advanced features from shallow text features automatically and without supervision, the model is employed to extract academic activity transactions. In addition, we use character-based features to describe the raw features of the named entities of academic activities, so as to improve the accuracy of named entity recognition. In this paper, the accuracy of academic activity extraction using character-based feature vectors to express the text features is compared with that using word-based feature vectors, and with traditional text information extraction based on Conditional Random Fields. The results show that the DBN model is more effective for extracting academic activity transaction information.