WorldWideScience

Sample records for bathymetric features detected

  1. Variability In Long-Wave Runup as a Function of Nearshore Bathymetric Features

    Energy Technology Data Exchange (ETDEWEB)

    Dunkin, Lauren McNeill [Texas A & M Univ., College Station, TX (United States)]

    2010-05-01

    Beaches and barrier islands are vulnerable to extreme storm events, such as hurricanes, that can cause severe erosion and overwash to the system. Having dunes and a wide beach in front of coastal infrastructure can provide protection during a storm, but the influence that nearshore bathymetric features have in protecting the beach and barrier island system is not completely understood. The spatial variation in nearshore features, such as sand bars and beach cusps, can alter nearshore hydrodynamics, including wave setup and runup. The influence of bathymetric features on long-wave runup can be used in evaluating the vulnerability of coastal regions to erosion and dune overtopping, evaluating the changing morphology, and implementing plans to protect infrastructure. In this thesis, long-wave runup variation due to changing bathymetric features is quantified with the numerical model XBeach (eXtreme Beach behavior model). Wave heights are analyzed to determine the energy through the surfzone. XBeach assumes that coastal erosion at the land-sea interface is dominated by bound long-wave processes. Several hydrodynamic conditions are used to force the numerical model. The XBeach simulation results suggest that bathymetric irregularity induces significant changes in the extreme long-wave runup at the beach and the energy indicator through the surfzone.

  2. Bathymetric Contours

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Coastal bathymetric depth, measured in meters at depth values of: -10, -20, -30, -40, -50, -60, -70, -80, -90, -100, -150, -200, -400, -600

  3. Features Based Text Similarity Detection

    CERN Document Server

    Kent, Chow Kok

    2010-01-01

    As the Internet helps us cross cultural borders by providing access to different information, the issue of plagiarism is bound to arise, and plagiarism detection becomes more demanding in overcoming it. Different plagiarism detection tools have been developed based on various detection techniques. Nowadays, the fingerprint matching technique plays an important role in those detection tools. However, in handling large articles, the fingerprint matching technique has some weaknesses, especially in space and time consumption. In this paper, we propose a new approach to detect plagiarism which integrates the fingerprint matching technique with four key features to assist the detection process. These proposed features are capable of choosing the main points or key sentences in the articles to be compared. The selected sentences then undergo the fingerprint matching process in order to detect the similarity between them. Hence, time and space usage for the comparison process is r...

  4. Linear feature detection based on ridgelet

    Institute of Scientific and Technical Information of China (English)

    HOU Biao (侯彪); LIU Fang (刘芳); JIAO Licheng (焦李成)

    2003-01-01

    Linear feature detection is very important in image processing. The detection efficiency will directly affect the performance of pattern recognition and pattern classification. Based on the idea of the ridgelet, this paper presents a new discrete localized ridgelet transform and a new method for detecting linear features in anisotropic images. Experimental results prove the efficiency of the proposed method.

  5. Feature detection of triangular meshes via neighbor supporting

    Institute of Scientific and Technical Information of China (English)

    Xiao-chao WANG; Jun-jie CAO; Xiu-ping LIU; Bao-jun LI; Xi-quan SHI; Yi-zhen SUN

    2012-01-01

    We propose a robust method for detecting features on triangular meshes by combining normal tensor voting with neighbor supporting. Our method contains two stages: feature detection and feature refinement. First, the normal tensor voting method is modified to detect the initial features, which may include some pseudo features. Then, at the feature refinement stage, a novel saliency measure derived from the idea of neighbor supporting is developed. Benefiting from this reliable integrated saliency measure, pseudo features can be effectively discriminated from the initially detected features and removed. Compared to previous methods based on differential geometric properties, the main advantage of our method is that it can detect both sharp and weak features. Numerical experiments show that our algorithm is robust, effective, and produces more accurate results. We also discuss how detected features are incorporated into applications, such as feature-preserving mesh denoising and hole-filling, and present visually appealing results by integrating feature information.

  6. NOS Bathymetric Maps

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This collection of bathymetric contour maps which represent the seafloor topography includes over 400 individual titles and covers US offshore areas including Hawaii...

  7. Feature Sets for Screenshot Detection

    Science.gov (United States)

    2013-06-01

    Python Imaging Library [35] and OpenCV [25] provided image processing and feature extraction capabilities; NumPy [36] was used for general numeric...

  8. Patch layout generation by detecting feature networks

    KAUST Repository

    Cao, Yuanhao

    2015-02-01

    The patch layout of a 3D surface reveals its high-level geometric and topological structure. In this paper, we study patch layout computation by detecting and enclosing feature loops on surfaces. We present a hybrid framework which combines several key ingredients, including feature detection, feature filtering, feature curve extension, patch subdivision and boundary smoothing. Our framework is able to compute patch layouts through concave features as previous approaches do, but is also able to generate good layouts through smooth regions. We demonstrate the effectiveness of our framework by comparing with state-of-the-art methods.

  9. Community Detection in Networks with Node Features

    CERN Document Server

    Zhang, Yuan; Zhu, Ji

    2015-01-01

    Many methods have been proposed for community detection in networks, but most of them do not take into account additional information on the nodes that is often available in practice. In this paper, we propose a new joint community detection criterion that uses both the network edge information and the node features to detect community structures. One advantage our method has over existing joint detection approaches is the flexibility of learning the impact of different features which may differ across communities. Another advantage is the flexibility of choosing the amount of influence the feature information has on communities. The method is asymptotically consistent under the block model with additional assumptions on the feature distributions, and performs well on simulated and real networks.

  10. Elderly fall detection using SIFT hybrid features

    Science.gov (United States)

    Wang, Xiaoxiao; Gao, Chao; Guo, Yongcai

    2015-10-01

    With the trend toward an aging society, countries all over the world are dealing with demographic change. Falls have been shown to have the highest fatality rate among the elderly. To realize elderly fall detection, the proposed algorithm uses a hybrid feature: based on the rate of centroid change, it adopts the VEI to provide the posture feature, thus combining the motion feature with the posture feature. The algorithm also takes advantage of the SIFT descriptor of the VEI (V-SIFT) to capture more details of behaviors under occlusion. An improved motion detection method is proposed to improve the accuracy of front-view motion detection. Experimental results on the CASIA database and a self-built database show that the proposed approach has high efficiency and strong robustness, and effectively improves the accuracy of fall detection.

  11. Feature detection techniques for preprocessing proteomic data.

    Science.gov (United States)

    Sellers, Kimberly F; Miecznikowski, Jeffrey C

    2010-01-01

    Numerous gel-based and nongel-based technologies are used to detect protein changes potentially associated with disease. The raw data, however, are abundant with technical and structural complexities, making statistical analysis a difficult task. Low-level analysis issues (including normalization, background correction, gel and/or spectral alignment, feature detection, and image registration) are substantial problems that need to be addressed, because any large-level data analyses are contingent on appropriate and statistically sound low-level procedures. Feature detection approaches are particularly interesting due to the increased computational speed associated with subsequent calculations. Such summary data corresponding to image features provide a significant reduction in overall data size and structure while retaining key information. In this paper, we focus on recent advances in feature detection as a tool for preprocessing proteomic data. This work highlights existing and newly developed feature detection algorithms for proteomic datasets, particularly relating to time-of-flight mass spectrometry, and two-dimensional gel electrophoresis. Note, however, that the associated data structures (i.e., spectral data, and images containing spots) used as input for these methods are obtained via all gel-based and nongel-based methods discussed in this manuscript, and thus the discussed methods are likewise applicable.

  12. Fall Detection Using Smartphone Audio Features.

    Science.gov (United States)

    Cheffena, Michael

    2016-07-01

    An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four different machine learning classifiers: k-nearest neighbor classifier (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN) are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirements for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
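    The spectrogram feature this record builds on can be sketched with a plain NumPy short-time FFT; the frame and hop sizes, the Hann window, and the synthetic low-frequency "thud" below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # shape (n_frames, frame//2 + 1)

# One second of faint noise at 8 kHz with a short 60 Hz burst, standing in
# for a fall-like sound event (entirely synthetic).
fs = 8000
t = np.arange(fs) / fs
signal = 0.01 * np.random.default_rng(1).standard_normal(fs)
signal[3000:3400] += np.sin(2 * np.pi * 60 * t[:400])

spec = spectrogram(signal)
# The event shows up as a localised energy peak in time.
loud_frame = spec.sum(axis=1).argmax()
```

    In the paper these spectrogram frames (or MFCC/LPC/MP features) would then feed a classifier such as an ANN; that stage is omitted here.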

  13. Detection and tracking of facial features

    Science.gov (United States)

    De Silva, Liyanage C.; Aizawa, Kiyoharu; Hatori, Mitsutoshi

    1995-04-01

    Detection and tracking of facial features without using any head-mounted devices may become required in various future visual communication applications, such as teleconferencing and virtual reality. In this paper we propose an automatic method of face feature detection using a method called edge pixel counting. Instead of utilizing color or gray-scale information of the facial image, the proposed edge pixel counting method utilizes edge information to estimate the face feature positions, such as eyes, nose and mouth, in the first frame of a moving facial image sequence, using a variable-size face feature template. For the remaining frames, feature tracking is carried out alternately using a method called deformable template matching and edge pixel counting. One main advantage of using edge pixel counting in feature tracking is that it does not require the high inter-frame correlation around the feature areas that template matching requires. Some experimental results are shown to demonstrate the effectiveness of the proposed method.

  14. Separated Same Rectangle Feature for Face Detection

    Institute of Scientific and Technical Information of China (English)

    Yong-hee HONG; Hwan-ik CHUNG; Hern-soo HAHN

    2010-01-01

    The paper proposes a new method of "Separated Same Rectangle Feature (SSRF)" for face detection. Generally, a Haar-like feature is used to build a strong classifier with the Adaboost training algorithm. A Haar-like feature is composed of two or more attached same-size rectangles, and this adjacency is often a source of inefficiency. The proposed SSRF is instead composed of two separated same-size rectangles, so it is very flexible and detailed, and therefore creates a more accurate strong classifier than the Haar-like feature. SSRF uses the integral image to reduce execution time. A Haar-like feature calculates the sum of pixel intensities over two or more rectangles, but SSRF always calculates the sum of pixel intensities over only two rectangles, so the weak classifier of the Adaboost algorithm based on SSRF is faster than one based on a Haar-like feature. In the experiment, we use 1,000 face images and 1,000 non-face images for Adaboost training. The proposed SSRF shows about 0.9% higher accuracy than Haar-like features.
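    The integral-image trick behind both Haar-like and separated-rectangle features can be sketched as follows; the function names and the toy 4×4 image are ours, and the last line is a simplified reading of an SSRF value (difference of two equal-size, non-adjacent rectangle sums), not the authors' exact definition:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of pixel intensities inside a rectangle in O(1) via four lookups."""
    a = ii[top + height - 1, left + width - 1]
    b = ii[top - 1, left + width - 1] if top > 0 else 0
    c = ii[top + height - 1, left - 1] if left > 0 else 0
    d = ii[top - 1, left - 1] if top > 0 and left > 0 else 0
    return a - b - c + d

# Toy image; a separated-rectangle feature value is the difference of two
# equal-size rectangle sums that need not be adjacent.
img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
f = rect_sum(ii, 0, 0, 2, 2) - rect_sum(ii, 2, 2, 2, 2)
```

    Whatever the rectangle positions, the cost stays at two rectangle sums of four lookups each, which is the speed advantage the abstract describes.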

  15. Toward Automated Feature Detection in UAVSAR Images

    Science.gov (United States)

    Parker, J. W.; Donnellan, A.; Glasscoe, M. T.

    2014-12-01

    Edge detection identifies seismic or aseismic fault motion, as demonstrated in repeat-pass interferograms obtained by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) program. But this identification is not robust at present: it requires a flattened background image, interpolation into missing data (holes) and outliers, and background noise that is either sufficiently small or roughly white Gaussian. Identification and mitigation of non-Gaussian background image noise is essential to creating a robust, automated system to search for such features. Clearly a robust method is needed for machine scanning of the thousands of UAVSAR repeat-pass interferograms for evidence of fault slip, landslides, and other local features. Empirical examination of detrended noise, based on 20 km east-west profiles through desert terrain with little tectonic deformation for a suite of flight interferograms, shows non-Gaussian characteristics. Statistical measurement of curvature with varying length scale (Allan variance) shows nearly white behavior (Allan variance slope with spatial distance from roughly -1.76 to -2) from 25 to 400 meters; deviations from -2 suggest short-range differences (such as those used in detecting edges) are often freer of noise than longer-range differences. At distances longer than 400 m the Allan variance flattens out without consistency from one interferogram to another. We attribute this additional noise afflicting difference estimates at longer distances to atmospheric water vapor and uncompensated aircraft motion. Paradoxically, California interferograms made with increasing time intervals before and after the El Mayor-Cucapah earthquake (2010, M7.2, Mexico) show visually stronger and more interesting edges, but edge detection methods developed for the first year do not produce reliable results over the first two years, because longer time spans suffer reduced coherence in the interferogram. The changes over time reflect fault slip and block

  16. 2011 Groundhog Reservoir Bathymetric Contours

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The U.S. Geological Survey performed a bathymetric survey of Groundhog Reservoir using a man-operated boat-mounted multibeam echo sounder integrated with a global...

  17. Wireless distributed computing for cyclostationary feature detection

    Directory of Open Access Journals (Sweden)

    Mohammed I.M. Alfaqawi

    2016-02-01

    Full Text Available Recently, the wireless distributed computing (WDC) concept has emerged, promising manifold improvements over current wireless technologies. Despite the various expected benefits of this concept, significant drawbacks have been addressed in the open literature. One of WDC's key challenges is the impact of wireless channel quality on the load of distributed computations. Therefore, this research investigates the impact of the wireless channel on WDC performance when the latter is applied to spectrum sensing in cognitive radio (CR) technology. There is a trade-off between accuracy and computational complexity in spectrum sensing approaches: increasing accuracy is accompanied by an increase in computational complexity, which results in greater power consumption and processing time. A novel WDC scheme for the cyclostationary feature detection spectrum sensing approach is proposed in this paper and thoroughly investigated. The benefits of the proposed scheme are first presented. Then, the impact of the wireless channel on the proposed scheme is addressed in two scenarios. In the first scenario, workload matrices are distributed over the wireless channel and a fusion center combines these matrices in order to make a decision. In the second scenario, local decisions are made by the CRs and only a binary flag is sent to the fusion center.

  18. A new approach for detecting local features

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen

    2010-01-01

    Local features have, up to now, often been mentioned in the sense of interest points. A patch around each point is formed to compute descriptors or feature vectors. Therefore, in order to satisfy different invariant imaging conditions such as scales and viewpoints, an input image is often represented i...

  19. Lake Bathymetric Aquatic Vegetation

    Data.gov (United States)

    Minnesota Department of Natural Resources — Aquatic vegetation represented as polygon features, coded with vegetation type (emergent, submergent, etc.) and field survey date. Polygons were digitized from...

  20. Structure damage detection based on random forest recursive feature elimination

    Science.gov (United States)

    Zhou, Qifeng; Zhou, Hao; Zhou, Qingqing; Yang, Fan; Luo, Linkai

    2014-05-01

    Feature extraction is a key preliminary step in structural damage detection. In this paper, a structural damage detection method based on wavelet packet decomposition (WPD) and random forest recursive feature elimination (RF-RFE) is proposed. In order to obtain the most effective feature subset and to improve the identification accuracy, a two-stage feature selection method is adopted after WPD. First, the damage features are sorted according to the original random forest variable importance analysis. Second, RF-RFE is used to eliminate the least important feature and reorder the feature list at each step, producing a new feature importance sequence. Finally, the k-nearest neighbor (KNN) algorithm, as a benchmark classifier, is used to evaluate the extracted feature subset. A four-storey steel shear building model is chosen as an example for method verification. The experimental results show that using the fewer features obtained from the proposed method can achieve higher identification accuracy and reduce the detection time cost.
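    The RF-RFE stage can be sketched with scikit-learn's `RFE` driven by a random-forest ranker and benchmarked with k-NN, as in the abstract; the synthetic data stands in for the WPD energy features, and every size and hyperparameter below is an assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for wavelet-packet damage features.
X, y = make_classification(n_samples=300, n_features=32, n_informative=6,
                           random_state=0)

# RF-RFE: rank features by random forest importance and drop the least
# important one per iteration (step=1) until 8 features remain.
selector = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
               n_features_to_select=8, step=1)
X_sel = selector.fit_transform(X, y)

# Evaluate the reduced subset with a k-NN benchmark classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
score = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
```

    `selector.ranking_` gives the elimination order, which corresponds to the reordered importance sequence the method relies on.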

  1. Multi-features Based Approach for Moving Shadow Detection

    Institute of Scientific and Technical Information of China (English)

    ZHOU Ning; ZHOU Man-li; XU Yi-ping; FANG Bao-hong

    2004-01-01

    In video-based surveillance applications, moving shadows can affect the correct localization and detection of moving objects. This paper presents a method for shadow detection and suppression used for moving visual object detection. The major novelty of the shadow suppression is the integration of several features, including a photometric invariant color feature, a motion edge feature, and a spatial feature. By modifying the handling of falsely detected shadows, the average detection rate of moving objects reaches above 90% in tests on the Hall-Monitor sequence.

  2. Evaluation of feature detection algorithms for structure from motion

    CSIR Research Space (South Africa)

    Govender, N

    2009-11-01

    Full Text Available such as Harris corner detectors and feature descriptors such as SIFT (Scale Invariant Feature Transform) and SURF (Speeded Up Robust Features) given a set of input images. This paper implements state-of-the-art feature detection algorithms and evaluates...

  3. Using Polarization features of visible light for automatic landmine detection

    NARCIS (Netherlands)

    Jong, W. de; Schavemaker, J.G.M.

    2007-01-01

    This chapter describes the usage of polarization features of visible light for automatic landmine detection. The first section gives an introduction to landmine detection and the usage of camera systems. In Section 2, detection concepts and methods that use polarization features are described. Secti

  4. Breast Cancer Detection with Reduced Feature Set.

    Science.gov (United States)

    Mert, Ahmet; Kılıç, Niyazi; Bilgili, Erdem; Akan, Aydin

    2015-01-01

    This paper explores the feature reduction properties of independent component analysis (ICA) for a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The comparison of the proposed classification using the IC with the original feature set is also tested under different validation (5/10-fold cross-validation) and partitioning (20%-40%) methods. These classifiers are evaluated on how effectively they categorize tumors as benign or malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values, including area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system, while reducing computational complexity.
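    The ICA reduction can be approximated with scikit-learn, which ships the WDBC dataset: reduce the 30 standardized features to a single independent component and score a k-NN classifier by 10-fold cross-validation. The scaler, neighbor count, and random seed are our guesses, not the paper's exact protocol:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FastICA
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# WDBC: 569 samples, 30 features, binary benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Reduce the 30 features to one independent component.
ic = FastICA(n_components=1, random_state=0).fit_transform(X)

# 10-fold cross-validated k-NN accuracy on the one-dimensional feature.
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), ic, y, cv=10).mean()
```

    Even this single IC preserves most of the class separation, which is the computational-complexity argument the abstract makes.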

  5. Breast Cancer Detection with Reduced Feature Set

    Directory of Open Access Journals (Sweden)

    Ahmet Mert

    2015-01-01

    Full Text Available This paper explores the feature reduction properties of independent component analysis (ICA) for a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The comparison of the proposed classification using the IC with the original feature set is also tested under different validation (5/10-fold cross-validation) and partitioning (20%–40%) methods. These classifiers are evaluated on how effectively they categorize tumors as benign or malignant in terms of specificity, sensitivity, accuracy, F-score, Youden’s index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values, including area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system, while reducing computational complexity.

  6. Voronoi poles-based saliency feature detection from point clouds

    Science.gov (United States)

    Xu, Tingting; Wei, Ning; Dong, Fangmin; Yang, Yuanqin

    2016-12-01

    In this paper, we present a novel algorithm for point cloud feature detection. First, the algorithm estimates a local feature for each sample point by computing the ratio of the distances from the inner Voronoi pole and the outer Voronoi pole to the surface. Then the global surface saliency feature is detected by summing the difference-of-Gaussian results for the local feature at different scales. Compared with state-of-the-art methods, our algorithm has higher computational efficiency and more accurate feature detection for sharp edges. The detected saliency features are applied as weights for surface mesh simplification. The numerical results for mesh simplification show that our method keeps more details of key features than traditional methods.

  7. Detection of Fraudulent Emails by Employing Advanced Feature Abundance

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Glasdam, Mathies

    2014-01-01

    In this paper, we present a fraudulent email detection model using advanced feature choice. We extracted various kinds of features and compared the performance of each category of features with the others in terms of the fraudulent email detection rate. The different types of features...... are incorporated step by step. The detection of fraudulent email has been considered as a classification problem and it is evaluated using various state-of-the-art algorithms and on CCM [1], which is the authors' previous cluster-based classification model. The experiments have been performed on diverse feature sets...... and the different classification methods. The comparison of the results is also presented and the evaluations show that for fraudulent email detection tasks, the feature set is more important regardless of classification method. The results of the study suggest that the task of fraudulent emails detection...

  8. Boosted cascade of scattered rectangle features for object detection

    Institute of Scientific and Technical Information of China (English)

    ZHANG WeiZe; TONG RuoFeng; DONG JinXiang

    2009-01-01

    This paper presents a variant of the Haar-like feature used in the Viola and Jones detection framework, called the scattered rectangle feature, based on a common-component analysis of local region features. Three common components, feature filter, feature structure and feature form, are extracted without concerning the details of the studied region features, which casts a new light on region feature design for specific applications and requirements: modifying some component(s) of a feature for an improved one, or combining different components of existing features for a new favorable one. The scattered rectangle feature follows the former way, extending the feature structure component of the Haar-like feature beyond the restriction of the geometry adjacency rule. This results in a richer representation that explores many more orientations than horizontal, vertical and diagonal, as well as misaligned, detached and non-rectangular shape information that is unreachable to the Haar-like feature. The training results of the two face detectors in the experiments illustrate the benefits of the scattered rectangle feature empirically; the comparison of the ROC curves under a rigid and objective detection criterion on the MIT+CMU upright face test set shows that the cascade based on scattered rectangle features outperforms that based on Haar-like features.

  9. A Subset Feature Elimination Mechanism for Intrusion Detection System

    OpenAIRE

    Herve Nkiama; Syed Zainudeen Mohd Said; Muhammad Saidu

    2016-01-01

    Several studies have suggested that by selecting relevant features for an intrusion detection system, it is possible to considerably improve the detection accuracy and performance of the detection engine. Nowadays, with the emergence of new technologies such as Cloud Computing or Big Data, large amounts of network traffic are generated, and the intrusion detection system must dynamically collect and analyze the data produced by the incoming traffic. However, in a large dataset not all features con...

  10. Epidemic features affecting the performance of outbreak detection algorithms

    Directory of Open Access Journals (Sweden)

    Kuang Jie

    2012-06-01

    Full Text Available Abstract Background Outbreak detection algorithms play an important role in effective automated surveillance. Although many algorithms have been designed to improve the performance of outbreak detection, few published studies have examined how epidemic features of infectious disease impact on the detection performance of algorithms. This study compared the performance of three outbreak detection algorithms stratified by epidemic features of infectious disease and examined the relationship between epidemic features and performance of outbreak detection algorithms. Methods Exponentially weighted moving average (EWMA, cumulative sum (CUSUM and moving percentile method (MPM algorithms were applied. We inserted simulated outbreaks into notifiable infectious disease data in China Infectious Disease Automated-alert and Response System (CIDARS, and compared the performance of the three algorithms with optimized parameters at a fixed false alarm rate of 5% classified by epidemic features of infectious disease. Multiple linear regression was adopted to analyse the relationship of the algorithms’ sensitivity and timeliness with the epidemic features of infectious diseases. Results The MPM had better detection performance than EWMA and CUSUM through all simulated outbreaks, with or without stratification by epidemic features (incubation period, baseline counts and outbreak magnitude. The epidemic features were associated with both sensitivity and timeliness. Compared with long incubation, short incubation had lower probability (β* = −0.13, P  Conclusions The results of this study suggest that the MPM is a prior algorithm for outbreak detection and differences of epidemic features in detection performance should be considered in automatic surveillance practice.
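    An EWMA-style alerting rule of the kind compared above can be sketched in a few lines; the smoothing weight, threshold multiplier, baseline window, and simulated outbreak below are illustrative choices, not CIDARS parameters:

```python
import numpy as np

def ewma_alerts(counts, lam=0.3, k=3.0, baseline=28):
    """Flag days where the EWMA statistic exceeds the moving baseline
    mean plus k standard deviations (scaled for EWMA variance)."""
    counts = np.asarray(counts, dtype=float)
    alerts = []
    z = counts[:baseline].mean()            # initialise EWMA at baseline mean
    for t in range(baseline, len(counts)):
        hist = counts[t - baseline:t]
        z = lam * counts[t] + (1 - lam) * z
        sigma = hist.std() * np.sqrt(lam / (2 - lam))
        if sigma > 0 and z > hist.mean() + k * sigma:
            alerts.append(t)
    return alerts

# Flat Poisson background of ~10 counts/day with an outbreak inserted
# at days 50-54, mimicking the simulated-outbreak evaluation.
rng = np.random.default_rng(0)
series = rng.poisson(10, 80)
series[50:55] += 40
alerts = ewma_alerts(series)
```

    Timeliness in the study corresponds to how soon after day 50 the first alert fires; sensitivity, to whether any alert falls inside the outbreak window.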

  11. Modeling Suspicious Email Detection using Enhanced Feature Selection

    OpenAIRE

    2013-01-01

    The paper presents a suspicious email detection model which incorporates enhanced feature selection. In the paper we propose the use of feature selection strategies along with classification techniques for terrorist email detection. The presented model focuses on the evaluation of machine learning algorithms such as decision tree (ID3), logistic regression, Naïve Bayes (NB), and Support Vector Machine (SVM) for detecting emails containing suspicious content. In the literature, various algo...

  12. A Subset Feature Elimination Mechanism for Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Herve Nkiama

    2016-04-01

    Full Text Available Several studies have suggested that by selecting relevant features for an intrusion detection system, it is possible to considerably improve the detection accuracy and performance of the detection engine. Nowadays, with the emergence of new technologies such as Cloud Computing or Big Data, large amounts of network traffic are generated, and the intrusion detection system must dynamically collect and analyze the data produced by the incoming traffic. However, in a large dataset not all features contribute to representing the traffic; therefore, reducing and selecting a number of adequate features may improve the speed and accuracy of the intrusion detection system. In this study, a feature selection mechanism has been proposed which aims to eliminate non-relevant features as well as identify the features which will contribute to improving the detection rate, based on the score each feature establishes during the selection process. To achieve that objective, a recursive feature elimination process was employed and associated with a decision-tree-based classifier, and the suitable relevant features were later identified. This approach was applied to the NSL-KDD dataset, which is an improved version of the previous KDD 1999 dataset; scikit-learn, a machine learning library written in Python, was used in this paper. Using this approach, relevant features were identified inside the dataset and the accuracy rate was improved. These results lend support to the idea that feature selection significantly improves classifier performance. Understanding the factors that help identify relevant features will allow the design of a better intrusion detection system.

  13. Lightweight Phishing URLs Detection Using N-gram Features

    Directory of Open Access Journals (Sweden)

    Ammar Yahya Daeef

    2016-06-01

    Full Text Available Phishing is a social engineering attack that seeks to trick people into revealing their confidential information. Several methods have been introduced to detect phishing websites using different types of features. Unfortunately, these techniques are each implemented for a specific attack vector, such as detecting phishing emails, which makes a wide-scope detection system a crucial demand. Previous research has shown URL analysis to be a strong method for detecting malicious attacks. This technique uses various URL features such as host information, lexical features, and others. In this paper, we present a wide-scope, lightweight phishing detection system using lexical features only. The proposed classifier provides an accuracy of 93% with a processing time of 0.12 seconds per URL.
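
    The lexical-features-only idea can be sketched with a few string-level measurements. The exact feature set below (lengths, dot/digit/hyphen counts, path depth) is an assumption for illustration; the paper additionally uses n-gram features.

```python
from urllib.parse import urlparse

def lexical_features(url):
    """A handful of lexical URL features of the kind phishing
    classifiers use (illustrative set, not the paper's full list)."""
    parsed = urlparse(url)
    host, path = parsed.netloc, parsed.path
    return {
        "url_len": len(url),
        "host_len": len(host),
        "n_dots": host.count("."),        # many subdomains is suspicious
        "n_digits": sum(ch.isdigit() for ch in url),
        "n_hyphens": url.count("-"),
        "has_at": "@" in url,             # '@' can hide the real host
        "path_depth": path.count("/"),
    }

feats = lexical_features("http://secure-login.example.com.evil.io/update/account")
print(feats["n_dots"], feats["n_hyphens"], feats["path_depth"])   # -> 4 1 2
```

    The resulting dictionary can be vectorized and fed to any standard classifier.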

  14. Detection of fraudulent emails by employing advanced feature abundance

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2014-11-01

    Full Text Available In this paper, we present a fraudulent email detection model using advanced feature choice. We extracted various kinds of features and compared the performance of each category of features with the others in terms of the fraudulent email detection rate. The different types of features are incorporated step by step. The detection of fraudulent email is treated as a classification problem and is evaluated using various state-of-the-art algorithms as well as CCM (Nizamani et al., 2011 [1]), the authors' previous cluster-based classification model. The experiments were performed on diverse feature sets and with different classification methods. A comparison of the results is also presented, and the evaluation shows that for the fraudulent email detection task, the feature set matters more than the classification method. The results of the study suggest that the task of fraudulent email detection requires a good choice of feature set, while the choice of classification method is of less importance.

  15. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high-resolution remote sensing (HRRS) satellite images, a huge research effort has been placed on the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Network (CNN) features-based HR satellite image change detection method is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, avoiding the limited performance of hand-crafted features. First, CNN features are extracted from different convolutional layers. They are then concatenated after a normalization step, resulting in a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
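
    The final pixel-wise distance step can be sketched as below, assuming two aligned (H, W, C) feature maps have already been extracted and normalized; the toy arrays stand in for real CNN activations.

```python
import numpy as np

def change_map(feat_a, feat_b, threshold=None):
    """Pixel-wise Euclidean distance between two (H, W, C) feature maps,
    optionally binarized into a change mask."""
    dist = np.linalg.norm(feat_a - feat_b, axis=-1)
    if threshold is None:
        return dist
    return (dist > threshold).astype(np.uint8)

# Toy 4x4 maps with 3 "CNN" channels; a single pixel changes.
a = np.zeros((4, 4, 3))
b = a.copy()
b[0, 0] = [1.0, 2.0, 2.0]           # distance sqrt(1 + 4 + 4) = 3
mask = change_map(a, b, threshold=1.0)
print(int(mask.sum()))              # -> 1: only the changed pixel remains
```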

  16. Mismatched feature detection with finer granularity for emotional speaker recognition

    Institute of Scientific and Technical Information of China (English)

    Li CHEN; Ying-chun YANG; Zhao-hui WU

    2014-01-01

    The shapes of speakers' vocal organs change under different emotional states, which causes the emotional acoustic space of short-time features to deviate from the neutral acoustic space and thereby degrades speaker recognition performance. Features deviating greatly from the neutral acoustic space are considered mismatched features, and they negatively affect speaker recognition systems. Emotion variation produces different feature deformations for different phonemes, so it is reasonable to build a finer model that detects mismatched features under each phoneme. However, given the difficulty of phoneme recognition, three sorts of acoustic class recognition (phoneme classes, a Gaussian mixture model (GMM) tokenizer, and a probabilistic GMM tokenizer) are proposed to replace it. We propose feature pruning and feature regulation methods that process the mismatched features to improve speaker recognition performance. In the feature regulation method, a strategy of maximizing the between-class distance and minimizing the within-class distance is adopted to train the transformation matrix that regulates the mismatched features. Experiments conducted on the Mandarin affective speech corpus (MASC) show that our feature pruning and feature regulation methods increase the identification rate (IR) by 3.64% and 6.77%, respectively, compared with the baseline GMM-UBM (universal background model) algorithm. Corresponding IR increases of 2.09% and 3.32% are obtained when our methods are applied to the state-of-the-art i-vector algorithm.

  17. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person using the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. To perform the experiments, the FormSpring.me dataset is used, and the effects of preprocessing methods; several classifiers such as C4.5, Naïve Bayes, kNN, and SVM; and the information gain and chi-square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbully detection performance. When the classifiers are compared, C4.5 performs best on this dataset.
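
    The chi-square selection mentioned above can be illustrated on binary term-presence features; this is a generic 2x2 chi-square statistic, not the exact configuration used in the study.

```python
import numpy as np

def chi2_score(term_present, labels):
    """Chi-square statistic between one binary term feature and binary
    class labels; higher scores mean stronger association."""
    obs = np.zeros((2, 2))
    for t, c in zip(term_present, labels):
        obs[int(t), int(c)] += 1
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row * col / obs.sum()           # expected counts under independence
    return float(((obs - exp) ** 2 / exp).sum())

labels = [1, 1, 1, 0, 0, 0]                               # 1 = bullying post
s_informative = chi2_score([1, 1, 1, 0, 0, 0], labels)    # term only in class 1
s_uninformative = chi2_score([1, 1, 0, 1, 1, 0], labels)  # term spread evenly
print(s_informative, s_uninformative)                     # -> 6.0 0.0
```

    Features are then ranked by this score and the top-k retained before classification.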

  18. Fusion of polarimetric infrared features and GPR features for landmine detection

    NARCIS (Netherlands)

    Cremer, F.; Jong, W. de; Schutte, K.

    2003-01-01

    Currently no single sensor reaches the performance requirements for humanitarian landmine detection. Using sensor-fusion methods, multiple sensors can be combined for improved detection performance. This paper focuses on the feature-level fusion procedure for a sensor combination consisting of a pol

  19. Detection of Seed Methods for Quantification of Feature Confinement

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Bouwers, Eric; Jørgensen, Bo Nørregaard

    2012-01-01

    The way features are implemented in source code has a significant influence on multiple quality aspects of a software system. Hence, it is important to regularly evaluate the quality of feature confinement. Unfortunately, existing approaches to such measurement rely on expert judgement for tracing links between features and source code, which hinders the ability to perform cost-efficient and consistent evaluations over time or on a large portfolio of systems. In this paper, we propose an approach to automating the measurement of feature confinement by detecting the methods which play a central role...

  20. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in computer vision research. The types of features used in current studies of moving object detection are typically chosen to improve the detection rate rather than to provide fast and computationally simple feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a measurement of motion pixel intensity, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
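
    As a rough illustration of moments as coherent intensity measures (the full two-layer bucket scheme is not reproduced here), raw image moments and the intensity centroid, whose frame-to-frame shift gives a cheap motion cue, can be computed as:

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment M_pq = sum over pixels of x**p * y**q * I(y, x)."""
    y, x = np.indices(img.shape)
    return float((x ** p * y ** q * img).sum())

def centroid(img):
    """Intensity centroid (x, y) from zeroth- and first-order moments."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

frame = np.zeros((5, 5))
frame[1, 3] = 1.0             # one bright "moving object" pixel
print(centroid(frame))        # -> (3.0, 1.0)
```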

  1. The Effect of Feature Selection on Phish Website Detection

    Directory of Open Access Journals (Sweden)

    Hiba Zuhair

    2015-10-01

    Full Text Available Recently, limited anti-phishing campaigns have given phishers more opportunities to get through with their advanced deceptions. Moreover, failure to devise appropriate classification techniques to effectively identify these deceptions has degraded the detection of phishing websites. Consequently, exploiting features that are as new, few, predictive, and effective as possible has emerged as a key challenge for keeping detection resilient. Some prior works have investigated and applied selected methods to develop their own classification techniques. However, no study has reached general agreement on which feature selection method could best enhance classification performance. Hence, this study empirically examined these methods and their effects on classification performance. Furthermore, it recommends criteria to assess their outcomes and offers a contribution to the problem at hand. Hybrid features, low- and high-dimensional datasets, different feature selection methods, and classification models were examined in this study. The findings showed notably improved detection precision with low latency, as well as noteworthy gains in robustness and predictive power. Although selecting an ideal feature subset is a challenging task, the findings of this study provide the most advantageous feature subset possible for robust selection and effective classification in the phishing detection domain.

  2. Dominant Local Binary Pattern Based Face Feature Selection and Detection

    Directory of Open Access Journals (Sweden)

    Kavitha.T

    2010-04-01

    Full Text Available Face detection plays a major role in biometrics. Feature selection is a problem of formidable complexity. This paper proposes a novel approach to extracting face features for face detection. LBP features can be extracted quickly in a single scan through the raw image and lie in a lower-dimensional space, whilst still retaining facial information efficiently. LBP features are also robust to low-resolution images. The dominant local binary pattern (DLBP) is used to extract features accurately. A number of trainable methods are emerging in empirical practice due to their effectiveness. The proposed method is a trainable system for selecting face features from over-complete dictionaries of image measurements. After the feature selection procedure is completed, an SVM classifier is used for face detection. The main advantage of this proposal is that it is trained on a very small training set. The classifier is used to increase the selection accuracy. This is advantageous not only in facilitating the data-gathering stage but, more importantly, in limiting the training time. The CBCL frontal faces dataset is used for training and validation.
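
    The single-scan LBP extraction the abstract refers to can be sketched as below; dominant-pattern selection (DLBP) and the SVM stage are omitted for brevity.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for every interior pixel, computed in
    one vectorized pass over the image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8 neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << bit
    return out

flat = np.full((3, 3), 5, dtype=np.uint8)
print(lbp_image(flat))        # -> [[255]]: all neighbours >= centre
```

    A histogram of these codes over an image window is the usual LBP feature vector.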

  3. Detecting feature interactions in Web services with model checking techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As a platform-independent software system, a Web service is designed to offer interoperability among diverse and heterogeneous applications. With the introduction of service composition into Web service creation, the various message interactions among the atomic services result in a problem resembling the feature interaction problem in the telecommunication area. This article defines this problem as feature interaction in Web services and proposes a model checking-based detection method. In the method, the Web service description is translated into the Promela language, the input language of the model checker Simple Promela Interpreter (SPIN), and the specific properties, expressed as linear temporal logic (LTL) formulas, are formulated according to our classification of feature interactions. SPIN is then used to check these properties to detect feature interactions in Web services.

  4. GPR-Based Landmine Detection and Identification Using Multiple Features

    Directory of Open Access Journals (Sweden)

    Kwang Hee Ko

    2012-01-01

    Full Text Available This paper presents a method to identify landmines in various burial conditions. A ground-penetrating radar is used to generate a data set, which is then processed to reduce the ground effect and noise in order to obtain the landmine signals. Principal components and Fourier coefficients of the landmine signals are computed and used as features of each landmine for detection and identification. A database is constructed from the features of various types of landmines and ground conditions, including different levels of moisture, types of ground, and burial depths of the landmines. Detection and identification are performed by searching for features in the database. For a robust decision, the counting method and the Mahalanobis distance-based likelihood ratio test are employed. Four landmines, differing in size and material, are considered as examples that demonstrate the efficiency of the proposed method for detecting and identifying landmines.
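
    The Mahalanobis distance at the heart of the likelihood-ratio decision can be sketched as follows; the 2-D feature class below (e.g. two principal components) is hypothetical.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of a feature vector x from a class model
    described by its mean vector and covariance matrix."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

mean = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0],        # first component varies strongly
                [0.0, 1.0]])
# The same Euclidean offset scores differently once variance is accounted for:
d_high_var = mahalanobis(np.array([2.0, 0.0]), mean, cov)
d_low_var = mahalanobis(np.array([0.0, 2.0]), mean, cov)
print(d_high_var, d_low_var)       # -> 1.0 2.0
```

    Deviations along a high-variance feature axis are penalized less, which is what makes the distance suitable for class-conditional likelihood tests.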

  5. Features Extraction for Object Detection Based on Interest Point

    Directory of Open Access Journals (Sweden)

    Amin Mohamed Ahsan

    2013-05-01

    Full Text Available In computer vision, object detection is an essential prerequisite for further processes such as object tracking and analysis. In the same context, feature extraction plays an important role in detecting objects correctly. In this paper we present a method to extract local features based on an interest point detector, which finds key points within an image; a histogram of oriented gradients (HOG) is then computed for the region surrounding each point. The proposed method uses the speeded-up robust features (SURF) method as the interest point detector and excludes its descriptor. The new descriptor is computed using the HOG method, so the proposed method gains the advantages of both. To evaluate the proposed method, we used the well-known Caltech101 dataset. The initial result is encouraging despite using only a small amount of training data.

  6. Saliency Detection Using Sparse and Nonlinear Feature Representation

    Science.gov (United States)

    Zhao, Qingjie; Manzoor, Muhammad Farhan; Ishaq Khan, Saqib

    2014-01-01

    An important aspect of visual saliency detection is how the features that form an input image are represented. A popular theory supports sparse feature representation, an image being represented with a basis dictionary having sparse weighting coefficients. Another method uses a nonlinear combination of image features for representation. In our work, we combine the two methods and propose a scheme that takes advantage of both sparse and nonlinear feature representation. To this end, we use independent component analysis (ICA) and covariance matrices, respectively. To compute saliency, we use a biologically plausible center-surround difference (CSD) mechanism. Our sparse features are adaptive in nature; the ICA basis functions are learned at every image representation rather than being fixed. We show that adaptive sparse features, when used with a CSD mechanism, yield better results than fixed sparse representations. We also show that covariance matrices consisting of a nonlinear integration of color information alone are sufficient to efficiently estimate saliency from an image. The proposed dual representation scheme is then evaluated against human eye fixation prediction, response to psychological patterns, and salient object detection on well-known datasets. We conclude that having two forms of representation complements one another and results in better saliency detection. PMID:24895644

  7. Hemorrhage detection in MRI brain images using images features

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    Abnormalities appear frequently on magnetic resonance images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, this operation is very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected using texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, de-noising is performed using Daubechies wavelets; ii) the original images are transformed into image features using first-order descriptors; iii) regions of interest (ROIs) are cropped from the image features following the axial symmetry with respect to the mid-sagittal plane; iv) the variation in the features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the meaningfulness of the image features is analyzed using the t-test, with p-values computed for each pair of features in order to measure their efficacy.
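
    Step iv can be illustrated with a small co-occurrence computation in plain NumPy (libraries such as scikit-image provide equivalent descriptors); the toy region of interest is invented.

```python
import numpy as np

def glcm(gray, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    h, w = gray.shape
    m = np.zeros((levels, levels))
    for yy in range(h - dy):
        for xx in range(w - dx):
            m[gray[yy, xx], gray[yy + dy, xx + dx]] += 1
    return m / m.sum()

def energy_homogeneity(p):
    """The two GLCM descriptors compared between ROIs in the study."""
    i, j = np.indices(p.shape)
    energy = float((p ** 2).sum())
    homogeneity = float((p / (1.0 + np.abs(i - j))).sum())
    return energy, homogeneity

uniform_roi = np.zeros((4, 4), dtype=int)     # perfectly homogeneous region
e, hom = energy_homogeneity(glcm(uniform_roi, levels=2))
print(e, hom)          # -> 1.0 1.0: a constant ROI maximizes both descriptors
```

    Textured or lesioned regions spread probability mass off the GLCM diagonal, lowering both values.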

  8. An automated detection of glaucoma using histogram features

    Institute of Scientific and Technical Information of China (English)

    Karthikeyan; Sakthivel; Rengarajan; Narayanan

    2015-01-01

    Glaucoma is a chronic and progressive optic neurodegenerative disease leading to vision deterioration, and in most cases it produces increased pressure within the eye. This is due to the backup of fluid in the eye, which causes damage to the optic nerve. Hence, early detection, diagnosis, and treatment help to prevent loss of vision. In this paper, a novel method is proposed for the early detection of glaucoma using a combination of magnitude and phase features from digital fundus images. Local binary patterns (LBP) and Daugman's algorithm are used to perform the feature extraction. Histogram features are computed for both the magnitude and phase components, and the Euclidean distance between the feature vectors is analyzed to predict glaucoma. The performance of the proposed method is compared with higher-order spectra (HOS) features in terms of sensitivity, specificity, classification accuracy, and execution time. The proposed system achieves 95.45% for sensitivity, specificity, and classification accuracy, and its execution time is shorter than that of the existing HOS-based method. Hence, the proposed system is more accurate, reliable, and robust than the existing approach for predicting glaucoma.

  9. Unified Saliency Detection Model Using Color and Texture Features.

    Science.gov (United States)

    Zhang, Libo; Yang, Lin; Luo, Tiejian

    2016-01-01

    Saliency detection has attracted the attention of many researchers and has become a very active area of research. Recently, many saliency detection models have been proposed and have achieved excellent performance in various fields. However, most of these models consider only low-level features. This paper proposes a novel saliency detection model that uses both color and texture features and incorporates higher-level priors. The SLIC superpixel algorithm is applied to form an over-segmentation of the image. The color saliency map and texture saliency map are calculated based on the region contrast method and adaptive weights. Higher-level priors, including a location prior and a color prior, are incorporated into the model to achieve better performance, and a full-resolution saliency map is obtained by up-sampling. Experimental results on three datasets demonstrate that the proposed saliency detection model outperforms the state-of-the-art models.

  10. Feature analysis for detecting people from remotely sensed images

    Science.gov (United States)

    Sirmacek, Beril; Reinartz, Peter

    2013-01-01

    We propose a novel approach using airborne image sequences for detecting dense crowds and individuals. Although airborne images of this resolution range are not enough to see each person in detail, we can still notice a change of color and intensity components of the acquired image in the location where a person exists. Therefore, we propose a local feature detection-based probabilistic framework to detect people automatically. Extracted local features behave as observations of the probability density function (PDF) of the people locations to be estimated. Using an adaptive kernel density estimation method, we estimate the corresponding PDF. First, we use estimated PDF to detect boundaries of dense crowds. After that, using background information of dense crowds and previously extracted local features, we detect other people in noncrowd regions automatically for each image in the sequence. To test our crowd and people detection algorithm, we use airborne images taken over Munich during the Oktoberfest event, two different open-air concerts, and an outdoor festival. In addition, we apply tests on GeoEye-1 satellite images. Our experimental results indicate possible use of the algorithm in real-life mass events.

  11. Computed Tomography Features of Incidentally Detected Diffuse Thyroid Disease

    Directory of Open Access Journals (Sweden)

    Myung Ho Rho

    2014-01-01

    Full Text Available Objective. This study aimed to evaluate the CT features of incidentally detected diffuse thyroid disease (DTD) in patients who underwent thyroidectomy and to assess the diagnostic accuracy of CT diagnosis. Methods. We enrolled 209 consecutive patients who received preoperative neck CT and subsequent thyroid surgery. Each neck CT was retrospectively reviewed by a single radiologist. We evaluated the diagnostic accuracy of individual CT features and of cut-off CT criteria for detecting DTD by comparing the CT features with the histopathological results. Results. Histopathological examination of the 209 cases revealed normal thyroid (n = 157), Hashimoto thyroiditis (n = 17), non-Hashimoto lymphocytic thyroiditis (n = 34), and diffuse hyperplasia (n = 1). The CT features suggestive of DTD included low attenuation, inhomogeneous attenuation, increased glandular size, lobulated margin, and inhomogeneous enhancement. ROC curve analysis revealed that CT diagnosis of DTD based on the classification of "3 or more" abnormal CT features was superior. With the "3 or more" classification, the sensitivity, specificity, positive and negative predictive values, and accuracy of CT diagnosis for DTD were 55.8%, 95.5%, 80.6%, 86.7%, and 85.6%, respectively. Conclusion. Neck CT may be helpful for the detection of incidental DTD.

  12. Lean histogram of oriented gradients features for effective eye detection

    Science.gov (United States)

    Sharma, Riti; Savakis, Andreas

    2015-11-01

    Reliable object detection is very important in computer vision and robotics applications. The histogram of oriented gradients (HOG) is established as one of the most popular hand-crafted features, and along with support vector machine (SVM) classification it provides excellent performance for object recognition. We investigate dimensionality reduction on HOG features in combination with SVM classifiers to obtain an efficient feature representation and improved classification performance. In addition to lean HOG features, we explore descriptors resulting from dimensionality reduction on histograms of binary descriptors. We consider three dimensionality reduction techniques: standard principal component analysis; random projections, a computationally efficient linear mapping that is data independent; and locality preserving projections (LPP), which learn the manifold structure of the data. Our methods focus on the application of eye detection and were tested on an eye database created using the BioID and FERET face databases. Our results indicate that manifold learning is beneficial to classification utilizing HOG features. To demonstrate the broader usefulness of lean HOG features for object class recognition, we evaluated our system's classification performance on the CalTech-101 dataset with favorable outcomes.
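
    Of the three techniques, the random projection is simple enough to sketch directly: a data-independent Gaussian matrix maps mock HOG descriptors (3780-D, the size of a standard 64x128 HOG window) down to 128 dimensions while roughly preserving pairwise distances. The dimensions here are illustrative, not the paper's configuration.

```python
import numpy as np

def random_projection(X, out_dim, seed=0):
    """Project rows of X to out_dim dimensions with a random Gaussian
    matrix scaled for approximate distance preservation (JL lemma)."""
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(X.shape[1], out_dim))
    return X @ R

X = np.random.default_rng(1).normal(size=(100, 3780))  # mock HOG vectors
Z = random_projection(X, out_dim=128)
print(Z.shape)                                         # -> (100, 128)
ratio = np.linalg.norm(Z[0] - Z[1]) / np.linalg.norm(X[0] - X[1])
print(round(ratio, 2))    # close to 1: distances approximately preserved
```

    Because the mapping never looks at the data, it is far cheaper than PCA or LPP, at the cost of being blind to any manifold structure.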

  13. DROIDSWAN: DETECTING MALICIOUS ANDROID APPLICATIONS BASED ON STATIC FEATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Babu Rajesh V

    2015-07-01

    Full Text Available Android, being a widely used mobile platform, has witnessed an increase in the number of malicious samples on its marketplace. The availability of multiple sources for downloading applications has also contributed to users falling prey to malicious applications. Classifying an Android application as malicious or benign remains a challenge, as malicious applications maneuver to pose as benign. This paper presents an approach that extracts various features from Android Application Package files (APKs) using static analysis and subsequently classifies them using machine learning techniques. The contribution of this work includes deriving, extracting, and analyzing crucial features of Android applications that aid in efficient classification. The analysis is carried out using various machine learning algorithms with both weighted and non-weighted approaches. It was observed that the weighted approach achieves higher detection rates using fewer features. The Random Forest algorithm exhibited a high detection rate and the lowest false positive rate.

  14. Gamelan Music Onset Detection based on Spectral Features

    Directory of Open Access Journals (Sweden)

    Yoyon Kusnendar Suprapto

    2013-03-01

    Full Text Available This research detects onsets of percussive instruments by examining performance on the sound signals of gamelan instruments, a traditional music ensemble of Indonesia. Onsets play an important role in determining musical rhythmic structure, such as beat and tempo, and are required in many applications of music information retrieval. Four onset detection methods employing spectral features (magnitude, phase, and the combination of both) are compared: phase slope (PS), weighted phase deviation (WPD), spectral flux (SF), and rectified complex domain (RCD). These features are extracted by representing the sound signals in the time-frequency domain using an overlapped short-time Fourier transform (STFT) with varying window lengths. The onset detection functions are processed through peak-picking with a dynamic threshold. The results show that with a suitable window length and dynamic threshold parameters, an F-measure greater than 0.80 can be obtained for certain methods.
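
    The spectral flux detection function with simple dynamic-threshold peak-picking can be sketched as follows; the window and bias parameters are illustrative, not the paper's settings, and the toy spectrogram stands in for a real STFT of a gamelan recording.

```python
import numpy as np

def spectral_flux(mag):
    """Half-wave rectified spectral flux from an STFT magnitude matrix
    of shape (frames, bins): summed positive bin-wise increases."""
    diff = np.diff(mag, axis=0)
    return np.maximum(diff, 0.0).sum(axis=1)

def pick_onsets(flux, win=2, bias=0.1):
    """Peak-picking with a dynamic threshold: a frame is an onset when
    its flux is the local maximum and exceeds the local mean + bias."""
    onsets = []
    for n in range(len(flux)):
        lo, hi = max(0, n - win), min(len(flux), n + win + 1)
        if flux[n] == flux[lo:hi].max() and flux[n] > flux[lo:hi].mean() + bias:
            onsets.append(n)
    return onsets

mag = np.full((8, 4), 0.1)     # quiet frames...
mag[3:] = 1.0                  # ...then a strike raises all bins at frame 3
onsets = pick_onsets(spectral_flux(mag))
print(onsets)                  # -> [2]: flux peaks where the energy rises
```

    The phase-based variants (PS, WPD, RCD) differ only in the detection function; the peak-picking stage is shared.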

  15. Analyzing edge detection techniques for feature extraction in dental radiographs

    Directory of Open Access Journals (Sweden)

    Kanika Lakhani

    2016-09-01

    Full Text Available Several dental problems can be detected using radiographs, but the main issue with radiographs is that the features are not very prominent. In this paper, two well-known edge detection techniques are implemented for a set of 20 radiographs, and the number of pixels in each image is calculated. Further, a Gaussian filter is applied to smoothen the images so as to highlight the defect in the tooth. If image data are available in the form of pixels for both healthy and decayed teeth, the images can easily be compared using edge detection techniques and diagnosis is much easier. Further, the Laplacian edge detection technique is applied to sharpen the edges of a given image. The aim is to detect discontinuities in dental radiographs when compared to the original healthy tooth. Future work includes feature extraction on the images for the classification of dental problems.
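
    The Laplacian sharpening step can be sketched as a direct 3x3 filtering pass; the step-edge image below is invented in place of a radiograph.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def filter3x3(img, kernel):
    """Direct 3x3 filtering with zero padding at the border (for the
    symmetric Laplacian, correlation and convolution coincide)."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = (kernel * padded[y:y + 3, x:x + 3]).sum()
    return out

tooth = np.zeros((5, 5))
tooth[:, 2:] = 1.0                  # vertical intensity step (an "edge")
resp = filter3x3(tooth, LAPLACIAN)
print(resp[2, 1], resp[2, 2])       # -> 1.0 -1.0: response on both sides
```

    The sign change across the step is the zero-crossing that marks the discontinuity.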

  16. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detecting salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses logistic regression to map a regional feature vector to a saliency score. Four saliency cues are used in our approach: color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which is an atomic feature of each segmented region in the image. By mapping the four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce a weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.

  17. Massachusetts Bay - Internal wave packets digitized from SAR imagery and intersected with a bathymetrically derived slope surface

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This feature class contains internal wave packets digitized from SAR imagery and intersected with a bathymetrically derived slope surface for Massachusetts Bay. The...

  18. HYBRID FEATURE SELECTION ALGORITHM FOR INTRUSION DETECTION SYSTEM

    Directory of Open Access Journals (Sweden)

    Seyed Reza Hasani

    2014-01-01

    Full Text Available Network security is a serious global concern, and the usefulness of Intrusion Detection Systems (IDS) is increasing rapidly in information security research using soft computing techniques. Previous research has recognized that irrelevant and redundant features increase the processing time of evaluating known intrusive patterns. An efficient feature selection method reduces the dimensionality of the data and removes the redundancy and ambiguity caused by unimportant attributes; feature selection methods are therefore well-known means of overcoming this problem. Various approaches have been utilized in intrusion detection, each achieving some improvement. This work enhances the algorithm with the highest Detection Rate (DR), Linear Genetic Programming (LGP), by incorporating the Bees Algorithm to reduce the False Alarm Rate (FAR). Finally, the Support Vector Machine (SVM) is one of the best candidate solutions for IDS problems. In this study, four sample datasets containing 4000 random records each were extracted randomly from the full dataset for training and testing purposes. Experimental results show that the LGP_BA method improves accuracy and efficiency compared with previous related research, and the feature subset offered by LGP_BA gives a superior representation of the data.

  19. Mariana Trench Bathymetric Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) created a bathymetric digital elevation model (DEM) for the Mariana Trench and adjacent seafloor in the Western...

  20. Farsi License Plate Detection and Recognition Based on Characters Features

    Directory of Open Access Journals (Sweden)

    Sedigheh Ghofrani

    2011-06-01

    Full Text Available In this paper a license plate detection and recognition system for Iranian private cars is implemented. The proposed license plate localization algorithm is based on region element analysis and works properly independent of distance (how far the vehicle is), rotation (the angle between camera and vehicle), and contrast (the plate being dirty, reflected, or deformed). In addition, more than one car may exist in the image. The proposed method extracts edges and then determines candidate regions by applying a moving window. The region element analysis includes binarization, character analysis, character continuity analysis, and character parallelism analysis. After detecting license plates, we estimate the rotation angle and compensate for it. In order to identify a detected plate, every character must be recognized; for this purpose, we present 25 features and use them as the input to an artificial neural network classifier. The experimental results show that the proposed method achieves appropriate performance for both detection and recognition of Iranian license plates.

  1. Scale Invariant Feature Transform Based Fingerprint Corepoint Detection

    Directory of Open Access Journals (Sweden)

    Madasu Hanmandlu

    2013-07-01

    Full Text Available Accurate and reliable detection of singular points (core and delta) is very important for the classification and matching of fingerprints. This paper presents a new approach for core point detection based on the scale invariant feature transform (SIFT). First, SIFT points are extracted; then reliability and ridge frequency criteria are applied to reduce the candidate points required to make a decision on the core point. Finally, a suitable mask is applied to detect an accurate core point. Experiments on the FVC2002 and FVC2004 databases show that this approach locates a unique reference point with high accuracy. Results of this approach are compared with those of existing methods in terms of the accuracy of core point detection. Defence Science Journal, 2013, 63(4), pp. 402-407, DOI: http://dx.doi.org/10.14429/dsj.63.2708

  2. Feature Extraction and Selection From the Perspective of Explosive Detection

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image that summarize its information content, providing an essential tool in image understanding. In particular, they are useful for classifying images into pre-defined classes or for grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of the pixels in an image, which may be derived from a variety of sources: for example, the temperature measurement (using an infra-red camera) of the area representing a pixel, the X-ray attenuation in a given volume element of a 3-D image, or even the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered features; examples are area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components and the Euler number (the number of connected components less the number of 'holes'). Occupying an intermediate level in the feature hierarchy are texture features, which are typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in segmenting an image into different objects/regions of interest. At the present state of our investigation, we are engaged in finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-ray device with provisions for computed tomography (CT) that generates one or more images (depending on the number of energy levels used).

  3. Face liveness detection using shearlet-based feature descriptors

    Science.gov (United States)

    Feng, Litong; Po, Lai-Man; Li, Yuming; Yuan, Fang

    2016-07-01

    Face recognition is a widely used biometric technology due to its convenience, but it is vulnerable to spoofing attacks made by non-real faces such as photographs or videos of valid users. The antispoofing problem must be well resolved before face recognition can be widely applied in daily life. Face liveness detection is a core technology for making sure that the input face is a live person. However, this is still very challenging with conventional liveness detection approaches based on texture analysis and motion detection. The aim of this paper is to propose a feature descriptor and an efficient framework that can effectively deal with the face liveness detection problem. In this framework, new feature descriptors are defined using a multiscale directional transform (the shearlet transform). Then, stacked autoencoders and a softmax classifier are concatenated to detect face liveness. We evaluated this approach using the CASIA face antispoofing database and the Replay-Attack database. The experimental results show that our approach performs better than the state-of-the-art techniques following the provided protocols of these databases, and that it can significantly enhance the security of a face recognition biometric system. In addition, the experimental results also demonstrate that this framework can be easily extended to classify different spoofing attacks.

  4. Stamps Detection and Classification Using Simple Features Ensemble

    Directory of Open Access Journals (Sweden)

    Paweł Forczmański

    2015-01-01

    Full Text Available The paper addresses the problem of detection and classification of rubber stamp instances in scanned documents. A variety of methods from the fields of image processing and pattern recognition, along with some heuristics, are utilized. The presented method works on typical stamps of different colors and shapes. For color images, a color space transformation is applied in order to find potential color stamps; monochrome stamps are detected through shape-specific algorithms. Following the feature extraction stage, identified candidates are subjected to a classification task using a set of shape descriptors. The selected elementary properties form an ensemble of features which is rotation, scale, and translation invariant; hence this approach is independent of document size and orientation. We perform two-tier classification in order to discriminate between stamps and non-stamps and then classify stamps in terms of their shape. The experiments carried out on a considerable set of real documents gathered from the Internet showed the high potential of the proposed method.
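    The abstract does not name its shape descriptors, but normalised central image moments are a classic family with the invariance properties mentioned: normalising by the centre of mass and by image area gives translation and scale invariance, and Hu's seven combinations of these moments add rotation invariance. A minimal pure-Python sketch for a binary image given as a list of rows (an illustration, not the paper's descriptor set):

```python
def raw_moments(img):
    """Raw moments m00, m10, m01 of a binary image (rows of 0/1)."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    return m00, m10, m01

def eta(img, p, q):
    """Normalised central moment eta_pq: invariant to translation
    (centre-of-mass subtraction) and to scale (division by
    m00 ** (1 + (p + q) / 2))."""
    m00, m10, m01 = raw_moments(img)
    cx, cy = m10 / m00, m01 / m00
    mu = sum((x - cx) ** p * (y - cy) ** q * v
             for y, row in enumerate(img) for x, v in enumerate(row))
    return mu / m00 ** (1 + (p + q) / 2.0)
```

    Translating the same blob inside a larger canvas leaves eta_pq unchanged, which is the property that makes such descriptors independent of document size and position.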

  5. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude, such as rhythms or pulse rates, require processing by central auditory neurons. Beyond several theoretical concepts, the brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
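    The delay-and-coincide principle is easy to sketch as a toy model: a detector fires only when the direct response to the current pulse coincides with an internally delayed response to the previous pulse, so only pulse trains whose period matches the intrinsic delay drive it. This is an abstraction of the mechanism, not the authors' biophysical circuit; the 20 ms delay and the tolerance are invented numbers.

```python
def coincidences(pulse_times, delay, tolerance):
    """Count pulses whose direct response coincides with the delayed
    response to the preceding pulse, i.e. pulses arriving at roughly
    (previous pulse time + intrinsic delay)."""
    hits = 0
    for prev, cur in zip(pulse_times, pulse_times[1:]):
        if abs((prev + delay) - cur) <= tolerance:
            hits += 1
    return hits

song = [0, 20, 40, 60, 80]   # pulse period matches the 20 ms delay
fast = [0, 10, 20, 30, 40]   # mismatched period: no coincidences
```

    The matching train scores a coincidence on every pulse after the first, while the mismatched train scores none, giving the pattern selectivity described in the abstract.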

  6. Human detection using HOG-HSC feature and PLS

    Institute of Scientific and Technical Information of China (English)

    HU Bin; ZHAO Chunxia; YUAN Xia; SUN Ling

    2012-01-01

    By combining the histogram of oriented gradients (HOG) with histograms of shearlet coefficients (HSC), which analyze images at multiple scales and orientations based on shearlet transforms, as the feature set, we propose a novel human detection feature. We employ partial least squares analysis, an efficient dimensionality reduction technique, to project the feature onto a much lower dimensional subspace. We test it on the INRIA person dataset using a linear SVM, and it yields an error rate of 1.38%, with a false negative (FN) rate of 0.40% and a false positive (FP) rate of 0.98%, while the error rate of HOG alone is 7.11%, with a FN rate of 4.09% and a FP rate of 3.02%.
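    As an illustration of the HOG half of the feature, here is a minimal unsigned-gradient orientation histogram for a single cell, in pure Python. It is a sketch of the standard HOG building block (central-difference gradients, 9 unsigned orientation bins, magnitude voting), not the authors' exact implementation.

```python
import math

def hog_cell(patch, n_bins=9):
    """Unsigned-gradient orientation histogram for one cell of a
    grayscale patch given as a list of rows; L1-normalised."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]
```

    For a patch containing a vertical edge, all gradient energy lands in the first (horizontal-gradient) bin, which is the orientation selectivity the descriptor exploits.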

  7. Face Detection Based on Feature Tailoring and Skin Color Space

    Directory of Open Access Journals (Sweden)

    Jiang Wenbo

    2015-01-01

    Full Text Available This paper addresses the time-consuming training of samples in the Adaboost algorithm and proposes an improved FTAdaboost algorithm based on feature tailoring. First, all samples are given the same weight and trained once, and the features before the first inflection point of the error-rate curve, which have high error rates and poor classification ability, are tailored away, which reduces the training workload and saves training time. According to the distribution of facial organs, the algorithm then determines whether the specified area meets the characteristics of the skin-color space, eliminating the influence of wrong facial images. The experimental results show that the algorithm based on feature tailoring shortens the training time significantly and that detection with the skin-color space decreases the error rate to some extent.

  8. Clinical Detection and Feature Analysis on Neuro Signals

    Institute of Scientific and Technical Information of China (English)

    张晓文; 杨煜普; 许晓鸣; 胡天培; 高忠华; 张键; 陈中伟; 陈统一

    2004-01-01

    Research on neuro signals is challenging and significant in modern natural science. In clinical experiments, signals from three main nerves (the median, radial, and ulnar nerves) were successfully detected and recorded without any infection. Further analysis of their features under different movements, and of their mechanisms and correlations in dominating actions, was also performed. This original discovery and first-hand material make it possible to develop a practical neuro-prosthesis.

  9. Decision Cost Feature Weighting and Its Application in Intrusion Detection

    Institute of Scientific and Technical Information of China (English)

    QIAN Quan; GENG Huan-tong; WANG Xu-fa

    2004-01-01

    This paper introduces a cost-sensitive feature weighting strategy and its application in intrusion detection. Cost factors and a cost matrix are proposed to model the misclassification cost for an IDS, and how to obtain the minimal overall risk is discussed in detail. Experiments show that although decision-cost-based weight learning still misclassifies some attacks, it achieves relatively low misclassification costs while keeping a relatively high recognition precision.
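    The minimal-risk idea behind a cost matrix can be written down in a few lines: with C[a][j] the cost of taking action a when the true class is j, choose the action minimising the expected cost. This is a generic Bayes minimum-risk sketch, not the paper's weighting scheme, and the cost values below are invented for illustration.

```python
def min_risk_decision(posteriors, cost):
    """Return (best action, risks), where the risk of action a is
    R(a) = sum_j cost[a][j] * P(class j | x)."""
    risks = [sum(c * p for c, p in zip(row, posteriors)) for row in cost]
    return min(range(len(risks)), key=risks.__getitem__), risks

# Classes: 0 = normal, 1 = attack; rows: decide "normal", decide "attack".
# Missing an attack is assumed to be 10x as costly as a false alarm.
COST = [[0.0, 10.0],
        [1.0, 0.0]]
```

    With P(attack|x) = 0.2, the risk of declaring "normal" is 2.0 against 0.8 for raising an alarm, so the cost-sensitive rule alarms even though "normal" is the more probable class; this is exactly the trade-off between misclassification cost and recognition precision the abstract describes.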

  10. Asymmetry features for classification of thermograms in breast cancer detection

    Science.gov (United States)

    Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz

    2016-09-01

    The computer system for automatic interpretation of thermographic images created by the Braster devices uses image processing and machine learning algorithms. The large set of attributes analyzed by this software includes asymmetry measurements between corresponding images, and these features are analyzed in the presented paper. The system was tested on real data and achieves accuracy comparable to other popular techniques used for breast tumour detection.

  11. Wavelet features for failure detection and identification in vibration systems

    Science.gov (United States)

    Deckert, James C.; Rhenals, Alonso E.; Tenney, Robert R.; Willsky, Alan S.

    1992-12-01

    The result of this effort is an extremely flexible and powerful methodology for failure detection and identification (FDI) in vibrating systems. The essential elements of this methodology are: (1) an off-line set of techniques to identify high-energy, statistically significant features in the continuous wavelet transform (CWT); (2) a CWT-based preprocessor to extract the most useful features from the sensor signal; and (3) simple artificial neural networks (incorporating a mechanism to defer any decision if the current feature sample is determined to be ambiguous) for the subsequent classification task. For the helicopter intermediate gearbox test-stand data and the centrifugal and fire pump shipboard (mild operating condition) data used, the algorithms designed with this method achieved perfect detection performance (1.000 probability of detection and 0.000 false alarm probability), with a probability of less than 0.04 that a decision would be deferred, based on only 500 milliseconds of data from each sample case. While this effort shows the exceptional promise of our wavelet-based method for FDI in vibrating systems, more demanding applications, which also have other sources of high-energy vibration, raise additional technical issues that could provide the focus for a Phase 2 effort.

  12. Capabilities of the bathymetric Hawk Eye LiDAR for coastal habitat mapping: A case study within a Basque estuary

    Science.gov (United States)

    Chust, Guillem; Grande, Maitane; Galparsoro, Ibon; Uriarte, Adolfo; Borja, Ángel

    2010-10-01

    The bathymetric LiDAR system is an airborne laser that detects the sea bottom at high vertical and horizontal resolutions in shallow coastal waters. This study assesses the capabilities of the airborne bathymetric LiDAR sensor (Hawk Eye system) for coastal habitat mapping in the Oka estuary (within the Biosphere Reserve of Urdaibai, SE Bay of Biscay, northern Spain), where water conditions are moderately turbid. Three specific objectives were addressed: 1) to assess the data quality of the Hawk Eye LiDAR, both for terrestrial and subtidal zones, in terms of height measurement density, coverage, and vertical accuracy; 2) to compare bathymetric LiDAR with a ship-borne multibeam echosounder (MBES) for different bottom types and depth ranges; and 3) to test the discrimination potential of LiDAR height and reflectance information, together with multi-spectral imagery (three visible and near-infrared bands), for the classification of 22 salt marsh and rocky shore habitats covering the supralittoral, intertidal, and subtidal zones. The bathymetric LiDAR Hawk Eye data enabled the generation of a digital elevation model (DEM) of the Oka estuary at 2 m horizontal spatial resolution in the terrestrial zone (with a vertical accuracy of 0.15 m) and at 4 m in the subtidal zone, extending to a water depth of 21 m. Data gaps occurred in 14.4% of the area surveyed with the LiDAR (13.69 km²). Comparison of the LiDAR system and the MBES showed no significant mean difference in depth; however, the root mean square error of the former was high (0.84 m), concentrated on rocky (0.55-1.77 m) rather than sediment bottoms (0.38-0.62 m). The potential of LiDAR topographic variables and reflectance alone for discriminating 15 intertidal and submerged habitats was low (overall classification accuracy between 52.4 and 65.4%); in particular, the reflectance retrieved for this case study was found not to be particularly useful for classification purposes. The combination of the Li…
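    The LiDAR/MBES comparison statistics quoted in this abstract (no significant mean depth difference, but a high RMS error) reduce to a few lines over co-located soundings. A sketch with invented depth values:

```python
import math

def depth_stats(lidar, mbes):
    """Mean difference and root-mean-square error between co-located
    LiDAR and multibeam echosounder (MBES) depth soundings, in metres."""
    diffs = [a - b for a, b in zip(lidar, mbes)]
    mean_diff = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_diff, rmse
```

    Note how errors of opposite sign cancel in the mean difference but not in the RMSE, which is why the study can report a near-zero mean bias alongside a 0.84 m RMS error.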

  13. Crowding is unlike ordinary masking: distinguishing feature integration from detection.

    Science.gov (United States)

    Pelli, Denis G; Palomares, Melanie; Majaj, Najib J

    2004-12-30

    A letter in the peripheral visual field is much harder to identify in the presence of nearby letters. This is "crowding." Both crowding and ordinary masking are special cases of "masking," which, in general, refers to any effect of a "mask" pattern on the discriminability of a signal. Here we characterize crowding, and propose a diagnostic test to distinguish it from ordinary masking. In ordinary masking, the signal disappears. In crowding, it remains visible, but is ambiguous, jumbled with its neighbors. Masks are usually effective only if they overlap the signal, but the crowding effect extends over a large region. The width of that region is proportional to signal eccentricity from the fovea and independent of signal size, mask size, mask contrast, signal and mask font, and number of masks. At 4 deg eccentricity, the threshold contrast for identification of a 0.32 deg signal letter is elevated (up to six-fold) by mask letters anywhere in a 2.3 deg region, 7 times wider than the signal. In ordinary masking, threshold contrast rises as a power function of mask contrast, with a shallow log-log slope of 0.5 to 1, whereas, in crowding, threshold is a sigmoidal function of mask contrast, with a steep log-log slope of 2 at close spacing. Most remarkably, although the threshold elevation decreases exponentially with spacing, the threshold and saturation contrasts of crowding are independent of spacing. Finally, ordinary masking is similar for detection and identification, but crowding occurs only for identification, not detection. More precisely, crowding occurs only in tasks that cannot be done based on a single detection by coarsely coded feature detectors. 
    These results (and observers' introspections) suggest that ordinary masking blocks feature detection, so the signal disappears, while crowding (like "illusory conjunction") is excessive feature integration: detected features are integrated over an inappropriately large area because there are no smaller integration fields.

  14. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and the many format properties a malware author may use to disguise the presence of malware. Current research focuses on the executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified, and features are extracted using an instrumented open-source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes; as a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout the paper.
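    Lightweight indicator features of the kind the abstract collects can be sketched by counting occurrences of risky PDF name objects in the raw bytes. The names below are standard PDF dictionary keys (JavaScript actions, document open actions, additional actions, launch actions, embedded files, object streams) commonly flagged in malware triage; this naive byte scan is an illustration, not the study's instrumented-viewer feature extractor.

```python
SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
              b"/Launch", b"/EmbeddedFile", b"/ObjStm"]

def pdf_indicator_features(raw: bytes) -> dict:
    """Count simple structural indicators in a raw PDF byte stream.
    Naive byte counting: e.g. 'streams' also counts 'endstream'
    markers, and compressed object streams are not decoded."""
    feats = {name.decode(): raw.count(name) for name in SUSPICIOUS}
    feats["objects"] = raw.count(b" obj")
    feats["streams"] = raw.count(b"stream")
    return feats
```

    The resulting count vector per file is the sort of feature description that can feed the machine learning model mentioned above; obfuscated or encrypted payloads defeat this naive scan, which is exactly why the paper parses the format deeply instead.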

  16. Multimodal spectroscopy detects features of vulnerable atherosclerotic plaque

    Science.gov (United States)

    Šćepanović, Obrad R.; Fitzmaurice, Maryann; Miller, Arnold; Kong, Chae-Ryon; Volynskaya, Zoya; Dasari, Ramachandra R.; Kramer, John R.; Feld, Michael S.

    2011-01-01

    Early detection and treatment of rupture-prone vulnerable atherosclerotic plaques is critical to reducing patient mortality associated with cardiovascular disease. The combination of reflectance, fluorescence, and Raman spectroscopy, termed multimodal spectroscopy (MMS), provides detailed biochemical information about tissue and can detect vulnerable plaque features: thin fibrous cap (TFC), necrotic core (NC), superficial foam cells (SFC), and thrombus. Ex vivo MMS spectra were collected from 12 patients who underwent carotid endarterectomy or femoral bypass surgery. Data were collected by means of a unitary MMS optical fiber probe and a portable clinical instrument, and blinded histopathological analysis was used to assess the vulnerability of each spectrally evaluated artery lesion. Modeling of the ex vivo MMS spectra produces objective parameters that correlate with the presence of vulnerable plaque features: TFC with fluorescence parameters indicative of collagen presence; NC/SFC with a combination of diffuse reflectance β-carotene/ceroid absorption and the Raman spectral signature of lipids; and thrombus with its Raman signature. Using these parameters, suspected vulnerable plaques can be detected with a sensitivity of 96% and a specificity of 72%. These encouraging results warrant the continued development of MMS as a catheter-based clinical diagnostic technique for early detection of vulnerable plaques.

  17. Multispectral image feature fusion for detecting land mines

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Fields, D.J.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-11-15

    Our system fuses information contained in registered images from multiple sensors to reduce the effect of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six visible wavelength bands, dual-band infrared (5 micron and 10 micron), and ground-penetrating radar. Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, holes made by animals and natural processes, etc.) and artifacts.

  18. Non-contact feature detection using ultrasonic Lamb waves

    Science.gov (United States)

    Sinha, Dipen N [Los Alamos, NM

    2011-06-28

    Apparatus and method for non-contact ultrasonic detection of features on or within the walls of hollow pipes are described. An air-coupled, high-power ultrasonic transducer for generating guided waves in the pipe wall and a high-sensitivity, air-coupled transducer for detecting these waves are disposed at a distance apart and at a chosen angle with respect to the surface of the pipe, either inside or outside the pipe. Measurements may be made in reflection or transmission modes depending on the relative position of the transducers and the pipe. Data are taken by sweeping the frequency of the incident ultrasonic waves, using a tracking narrow-band filter to reduce detected noise, and transforming the frequency-domain data into the time domain using fast Fourier transformation, if required.

  19. Breast Cancer Detection with Gabor Features from Digital Mammograms

    Directory of Open Access Journals (Sweden)

    Yufeng Zheng

    2010-01-01

    Full Text Available A new breast cancer detection algorithm, named the "Gabor Cancer Detection" (GCD) algorithm, utilizing Gabor features is proposed. Three major steps are involved in the GCD algorithm: preprocessing, segmentation (generating alarm segments), and classification (reducing false alarms). In preprocessing, a digital mammogram is down-sampled, quantized, denoised, and enhanced; nonlinear diffusion is used for noise suppression. In segmentation, a band-pass filter is formed by rotating a 1-D Gaussian filter (off center) in frequency space, termed a "Circular Gaussian Filter" (CGF). A CGF can be uniquely characterized by specifying a central frequency and a frequency band. A mass or calcification is a space-occupying lesion and usually appears as a bright region on a mammogram. The alarm segments (suspected masses/calcifications) can be extracted using a threshold that is decided adaptively from histogram analysis of the CGF-filtered mammogram. In classification, a Gabor filter bank is formed with five bands by four orientations (horizontal, vertical, 45 degrees, and 135 degrees) in the Fourier frequency domain. For each mammographic image, twenty Gabor-filtered images are produced. A set of edge histogram descriptors (EHD) is then extracted from the 20 Gabor images for classification. An EHD signature is computed along the four orientations of the Gabor images in each band, and the five EHD signatures are joined together to form an EHD feature vector of 20 dimensions. With the EHD features, the fuzzy C-means clustering technique and a k-nearest neighbor (KNN) classifier are used to reduce the number of false alarms. The experimental results tested on the DDSM database (University of South Florida) show the promise of the GCD algorithm in breast cancer detection, which achieved a TP (true positive) rate of 90% at FPI (false positives per image) = 1.21 in mass detection, and TP = 93% at FPI = 1.19 in calcification detection.
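    The Circular Gaussian Filter described above has a one-line frequency response: a Gaussian ring centred at radius f0, obtained by rotating an off-centre 1-D Gaussian about the origin. A sketch of that response (the parameter values in the test are invented; the paper's actual central frequency and bandwidth are not reproduced):

```python
import math

def cgf(u, v, f0, sigma):
    """Circular Gaussian Filter response at frequency (u, v): a 1-D
    Gaussian centred off-origin at radius f0 and rotated about the
    origin, giving an isotropic band-pass ring of width ~sigma."""
    r = math.hypot(u, v)
    return math.exp(-((r - f0) ** 2) / (2.0 * sigma ** 2))
```

    The response is 1 everywhere on the ring r = f0, regardless of direction, and decays away from it, so the filter passes one frequency band at all orientations, which is why a central frequency and a bandwidth characterize it uniquely.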

  20. 2011 NOAA Bathymetric Lidar: U.S. Virgin Islands - St. Thomas, St. John, St. Croix (Salt River Bay, Buck Island)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data represents a LiDAR (Light Detection & Ranging) gridded bathymetric surface and a gridded relative seafloor reflectivity surface (incorporated into the...

  2. Feature detection on 3D images of dental imprints

    Science.gov (United States)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars, and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
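    The first stage of such a minima-tracking procedure, finding the strict local minima of a height map at a single scale, can be sketched in a few lines (4-connected neighbourhood; the paper's tracking across four scales is not reproduced here):

```python
def local_minima(grid):
    """Positions (y, x) of strict local minima in a 2-D height map,
    using a 4-connected neighbourhood."""
    h, w = len(grid), len(grid[0])
    mins = []
    for y in range(h):
        for x in range(w):
            v = grid[y][x]
            nbrs = [grid[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w]
            if all(v < n for n in nbrs):
                mins.append((y, x))
    return mins
```

    Running this at several smoothing scales and keeping the minima that persist is the essence of the coarse-to-fine tracking the abstract describes.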

  3. HDR Imaging for Feature Detection on Detailed Architectural Scenes

    Science.gov (United States)

    Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.

    2015-02-01

    3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes that pose needs for 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot depict properly a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture, increasing the amount of detail contained in the image. Experimental results of this study prove this assumption as they examine state-of-the-art feature detectors applied both on standard dynamic range and HDR images.

  4. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes that pose needs for 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and thereby increase the amount of detail contained in the image. Experimental results of this study support this assumption, as they examine state-of-the-art feature detectors applied to both standard dynamic range and HDR images.

  5. A Hybrid method of face detection based on Feature Extraction using PIFR and Feature Optimization using TLBO

    Directory of Open Access Journals (Sweden)

    Kapil Verma

    2016-01-01

    Full Text Available In this paper we propose a face detection method based on feature selection and feature optimization. Current research in biometric security increasingly uses feature optimization to improve face detection techniques. A face presents three main types of feature: skin color, texture, and the shape and size of the face, of which skin color and texture are the most important. The proposed detection technique uses the texture features of the face image, extracted with a partial feature extraction function, a promising approach for shape feature analysis. Feature selection and optimization are performed with a multi-objective TLBO, a population-based search algorithm in which two constraint functions drive the selection and optimization process. The face image database is first passed through the partial feature extractor function, whose transform yields the texture features of each face image. For performance evaluation, the proposed algorithm was implemented in MATLAB 7.8.0 and tested on the Google face image database, with the hit-and-miss ratio used for numerical analysis. Our empirical evaluation shows better prediction results compared with the PIFR method of face detection.

  6. A Feature-Based Forensic Procedure for Splicing Forgeries Detection

    Directory of Open Access Journals (Sweden)

    Irene Amerini

    2015-01-01

    Full Text Available Nowadays, determining if an image appeared somewhere on the web or in a magazine, or whether it is authentic, has become crucial. Image forensics methods based on features have so far proven very effective in detecting forgeries in which a portion of an image is cloned somewhere else onto the same image. However, such techniques cannot be adopted to deal with splicing attacks, in which the image portion comes from another picture that, usually, is no longer available for feature matching. In this paper, a procedure is presented in which these techniques can also be employed against splicing attacks by resorting to repositories of images available on the Internet, such as Google Images or TinEye Reverse Image Search. Experimental results are presented on real-case images retrieved from the Internet to demonstrate the capability of the proposed procedure.

  7. Inverted dipole feature in directional detection of exothermic dark matter

    CERN Document Server

    Bozorgnia, Nassim; Gondolo, Paolo

    2016-01-01

    Directional dark matter detection attempts to measure the direction of motion of nuclei recoiling after having interacted with dark matter particles in the halo of our Galaxy. Due to Earth's motion with respect to the Galaxy, the dark matter flux is concentrated around a preferential direction. An anisotropy in the recoil direction rate is expected as an unmistakable signature of dark matter. The average nuclear recoil direction is expected to coincide with the average direction of dark matter particles arriving at Earth. Here we point out that for a particular type of dark matter, inelastic exothermic dark matter, the mean recoil direction as well as a secondary feature, a ring of maximum recoil rate around the mean recoil direction, could instead be opposite to the average dark matter arrival direction. Thus, the detection of an average nuclear recoil direction opposite to the usually expected direction would constitute a spectacular experimental confirmation of this type of dark matter.

  8. Colitis detection on abdominal CT scans by rich feature hierarchies

    Science.gov (United States)

    Liu, Jiamin; Lay, Nathan; Wei, Zhuoshi; Lu, Le; Kim, Lauren; Turkbey, Evrim; Summers, Ronald M.

    2016-03-01

    Colitis is inflammation of the colon due to neutropenia, inflammatory bowel disease (such as Crohn disease), infection and immune compromise. Colitis is often associated with thickening of the colon wall. The wall of a colon afflicted with colitis is much thicker than normal. For example, the mean wall thickness in Crohn disease is 11-13 mm compared to the wall of the normal colon that should measure less than 3 mm. Colitis can be debilitating or life threatening, and early detection is essential to initiate proper treatment. In this work, we apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals to detect potential colitis on CT scans. Our method first generates around 3000 category-independent region proposals for each slice of the input CT scan using selective search. Then, a fixed-length feature vector is extracted from each region proposal using a CNN. Finally, each region proposal is classified and assigned a confidence score with linear SVMs. We applied the detection method to 260 images from 26 CT scans of patients with colitis for evaluation. The detection system can achieve 0.85 sensitivity at 1 false positive per image.
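The final stage above, scoring each region proposal's fixed-length feature vector with a linear SVM, reduces per proposal to a dot product plus a bias. The sketch below uses made-up weights and two-dimensional "CNN features" purely for illustration:

```python
def svm_confidence(feature, weights, bias):
    """Linear SVM decision value, used as the proposal's confidence score."""
    return sum(f * w for f, w in zip(feature, weights)) + bias

def detect(proposals, weights, bias, thresh=0.0):
    """Keep region proposals whose confidence exceeds the threshold.
    Each proposal is a (box, feature_vector) pair."""
    return [(box, svm_confidence(feat, weights, bias))
            for box, feat in proposals
            if svm_confidence(feat, weights, bias) > thresh]
```

In the real pipeline the feature vector would come from the CNN and the weights from SVM training; here both are toy values.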

  9. Detection and analysis of diamond fingerprinting feature and its application

    Science.gov (United States)

    Li, Xin; Huang, Guoliang; Li, Qiang; Chen, Shengyi

    2011-01-01

    Before becoming jewelry, a diamond must be artistically cut with special geometric features that form a polyhedral structure. There are subtle differences in this polyhedral structure from diamond to diamond. With spatial frequency spectrum analysis of the diamond surface structure, we can obtain the diamond fingerprint information, which represents the "Diamond ID" and has good specificity. Based on optical Fourier Transform spatial spectrum analysis, the fingerprint identification of the surface structure of diamond in the spatial frequency domain was studied in this paper. We constructed both a completely coherent diamond fingerprinting detection system illuminated by laser and a partially coherent diamond fingerprinting detection system illuminated by LED, and analyzed the effect of the coherence of the light source on the diamond fingerprinting feature. We studied the rotation invariance and translation invariance of the diamond fingerprint and verified the feasibility of real-time and accurate identification of diamond fingerprints. With the benefit of this work, we can provide customs, jewelers and consumers with a real-time and reliable diamond identification instrument, which will curb diamond smuggling, theft and other crimes, and ensure the healthy development of the diamond industry.
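The translation invariance mentioned above follows from a basic Fourier property: shifting a pattern changes only the phase of its spectrum, not the magnitudes. A minimal 1-D sketch with a pure-Python DFT (illustrative only, not the optical system's processing):

```python
import cmath

def dft_magnitudes(samples):
    """Magnitude spectrum of a 1-D discrete Fourier transform."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
```

Circularly shifting the input leaves every magnitude unchanged, which is why the spectrum can serve as a position-independent fingerprint.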

  10. Detection and analysis of diamond fingerprinting feature and its application

    Energy Technology Data Exchange (ETDEWEB)

    Li Xin; Huang Guoliang; Li Qiang; Chen Shengyi, E-mail: tshgl@tsinghua.edu.cn [Department of Biomedical Engineering, the School of Medicine, Tsinghua University, Beijing, 100084 (China)

    2011-01-01

    Before becoming jewelry, a diamond must be artistically cut with special geometric features that form a polyhedral structure. There are subtle differences in this polyhedral structure from diamond to diamond. With spatial frequency spectrum analysis of the diamond surface structure, we can obtain the diamond fingerprint information, which represents the 'Diamond ID' and has good specificity. Based on optical Fourier Transform spatial spectrum analysis, the fingerprint identification of the surface structure of diamond in the spatial frequency domain was studied in this paper. We constructed both a completely coherent diamond fingerprinting detection system illuminated by laser and a partially coherent diamond fingerprinting detection system illuminated by LED, and analyzed the effect of the coherence of the light source on the diamond fingerprinting feature. We studied the rotation invariance and translation invariance of the diamond fingerprint and verified the feasibility of real-time and accurate identification of diamond fingerprints. With the benefit of this work, we can provide customs, jewelers and consumers with a real-time and reliable diamond identification instrument, which will curb diamond smuggling, theft and other crimes, and ensure the healthy development of the diamond industry.

  11. Flow feature detection for grid adaptation and flow visualization

    Science.gov (United States)

    Kallinderis, Yannis; Lymperopoulou, Eleni M.; Antonellis, Panagiotis

    2017-07-01

    Adaptive grid refinement/coarsening is an important method for achieving increased accuracy of flow simulations with reduced computing resources. Further, flow visualization of complex 3-D fields is a major task of both computational fluid dynamics (CFD) and experimental data analysis. A primary issue of adaptive simulations and flow visualization is the reliable detection of the local regions containing features of interest. A relatively wide spectrum of detection functions (sensors) is employed for representative flow cases which include boundary layers, vortices, jets, wakes, shock waves, contact discontinuities, and expansions. The focus is on relatively simple sensors based on local flow field variation using 3-D general hybrid grids consisting of multiple types of elements. A quantitative approach for sensor evaluation and comparison is proposed and applied. It is accomplished via the employment of analytic flow fields. Automation and effectiveness of an adaptive grid or flow visualization process requires the reliable determination of an appropriate threshold for the sensor. Statistical evaluation of the distributions of the sensors results in a proposed empirical formula for the threshold. The qualified sensors, along with the automatic threshold determination, are tested with more complex flow cases exhibiting multiple flow features.
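A minimal example of a simple sensor based on local flow field variation, with a statistically derived threshold. The mean-plus-k-standard-deviations rule here is a stand-in assumption for the paper's empirical formula:

```python
def gradient_sensor(field):
    """Detection sensor: magnitude of the local solution variation."""
    return [abs(b - a) for a, b in zip(field, field[1:])]

def empirical_threshold(sensor, k=1.0):
    """Threshold derived from the sensor's own statistics (mean + k * std)."""
    n = len(sensor)
    mean = sum(sensor) / n
    std = (sum((s - mean) ** 2 for s in sensor) / n) ** 0.5
    return mean + k * std

def flag_cells(field, k=1.0):
    """Indices of cells whose sensor value exceeds the automatic threshold."""
    s = gradient_sensor(field)
    t = empirical_threshold(s, k)
    return [i for i, v in enumerate(s) if v > t]
```

On a 1-D field with a single discontinuity (a shock-like jump), only the interface cell is flagged for refinement.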

  12. Landmine detection using discrete hidden Markov models with Gabor features

    Science.gov (United States)

    Frigui, Hichem; Missaoui, Oualid; Gader, Paul

    2007-04-01

    We propose a general method for detecting landmine signatures in vehicle-mounted ground penetrating radar (GPR) using discrete hidden Markov models and Gabor wavelet features. Observation vectors are constructed based on the expansion of the signature's B-scan using a bank of scale- and orientation-selective Gabor filters. This expansion provides a localized frequency description that gets encoded in the observation sequence. These observations do not impose an explicit structure on the mine model, and are used to naturally model the time-varying signatures produced by the interaction of the GPR and the landmines as the vehicle moves. The proposed method is evaluated on real data collected by a GPR mounted on a moving vehicle at three different geographical locations that include several lanes. The model parameters are optimized using the Baum-Welch algorithm, and lane-based cross-validation, in which each mine lane is in turn treated as a test set with the rest of the lanes used for training, is used to train and test the model. Preliminary results show that observations encoded with Gabor wavelet features perform better than observations encoded with gradient-based edge features.
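For reference, the likelihood computation at the heart of a discrete-HMM detector is the forward algorithm; the sketch below scores an observation sequence against a model. The tiny two-state model in the usage is illustrative, not a trained mine model:

```python
def forward_likelihood(obs, start, trans, emit):
    """Discrete-HMM forward algorithm: P(observation sequence | model).
    start[i]: initial state probabilities; trans[i][j]: transition
    probabilities; emit[i][o]: probability of symbol o in state i."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)
```

In detection, the sequence of quantized Gabor-feature symbols would be scored against a mine model and a background model, and the likelihood ratio thresholded.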

  13. Quantification of storm-induced bathymetric change in a back-barrier estuary

    Science.gov (United States)

    Ganju, Neil K.; Suttles, Steven E.; Beudin, Alexis; Nowacki, Daniel J.; Miselis, Jennifer L.; Andrews, Brian D.

    2017-01-01

    Geomorphology is a fundamental control on ecological and economic function of estuaries. However, relative to open coasts, there has been little quantification of storm-induced bathymetric change in back-barrier estuaries. Vessel-based and airborne bathymetric mapping can cover large areas quickly, but change detection is difficult because measurement errors can be larger than the actual changes over the storm timescale. We quantified storm-induced bathymetric changes at several locations in Chincoteague Bay, Maryland/Virginia, over the August 2014 to July 2015 period using fixed, downward-looking altimeters and numerical modeling. At sand-dominated shoal sites, measurements showed storm-induced changes on the order of 5 cm, with variability related to stress magnitude and wind direction. Numerical modeling indicates that the predominantly northeasterly wind direction in the fall and winter promotes southwest-directed sediment transport, causing erosion of the northern face of sandy shoals; southwesterly winds in the spring and summer lead to the opposite trend. Our results suggest that storm-induced estuarine bathymetric change magnitudes are often smaller than those detectable with methods such as LiDAR. More precise fixed-sensor methods have the ability to elucidate the geomorphic processes responsible for modulating estuarine bathymetry on the event and seasonal timescale, but are limited spatially. Numerical modeling enables interpretation of broad-scale geomorphic processes and can be used to infer the long-term trajectory of estuarine bathymetric change due to episodic events, when informed by fixed-sensor methods.
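The point that measurement errors can be larger than the actual changes can be made concrete with a simple detectability rule: flag a bathymetric change only where it exceeds the propagated uncertainty of the two differenced surveys. The 5 cm survey uncertainty in the usage below is an illustrative value, not a figure from the study:

```python
def detectable_change(z_before, z_after, sigma, k=2.0):
    """Flag elevation change only where |after - before| exceeds k times
    the standard deviation of the difference of two surveys, each with
    independent measurement uncertainty sigma."""
    limit = k * (2 ** 0.5) * sigma   # sigma of a difference = sqrt(2) * sigma
    return [abs(b - a) > limit for a, b in zip(z_before, z_after)]
```

With 5 cm per-survey noise, a 5 cm storm-induced change is indistinguishable from error, while a 20 cm change is confidently detected.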

  14. Subsurface Characterization of Shallow Water Regions using Airborne Bathymetric Lidar

    Science.gov (United States)

    Bradford, B.; Neuenschwander, A. L.; Magruder, L. A.

    2013-12-01

    Understanding the complex interactions between air, land, and water in shallow water regions is becoming increasingly critical in the age of climate change. To effectively monitor and manage these zones, scientific data focused on changing water levels, quality, and subsurface topography are needed. Airborne remote sensing using light detection and ranging (LIDAR) is naturally suited to address this need as it can simultaneously provide detailed three-dimensional spatial data for both topographic and bathymetric applications in an efficient and effective manner. The key to useful data, however, is the correct interpretation of the incoming laser returns to distinguish between land, water, and objects. The full waveform lidar receiver captures the complete returning signal reflected from the Earth, which contains detailed information about the structure of the objects and surfaces illuminated by the beam. This study examines the characterization of this full waveform with respect to water surface depth penetration and subsurface classification, including sand, rock, and vegetation. Three assessments are performed to help characterize the laser interaction within the shallow water zone: evaluation of water surface backscatter as a function of depth and location, effects from water bottom surface roughness and reflectivity, and detection and classification of subsurface structure. Using the Chiroptera dual-laser lidar mapping system from Airborne Hydrography AB (AHAB), both bathymetric and topographic mapping are possible. The Chiroptera system combines a 1064 nm near-infrared topographic laser with a 515 nm green bathymetric laser to seamlessly map the land/water interface in coastal areas. Two survey sites are examined: Lake Travis in Austin, Texas, USA, and Lake Vättern in Jönköping, Sweden. Water quality conditions were found to impact depth penetration of the lidar, as a maximum depth of 5.5 m was recorded at Lake Travis and 11 m at Lake Vättern.

  15. Metamorphic Virus Detection in Portable Executables Using Opcodes Statistical Feature

    CERN Document Server

    Rad, Babak Bashari

    2011-01-01

    Metamorphic viruses engage different mutation techniques to escape from string-signature-based scanning. They try to change their code in new offspring so that the variants appear non-similar and have no common sequences of string as a signature. However, all versions of a metamorphic virus have similar task and performance. This obfuscation process helps to keep them safe from string-based signature detection. In this study, we make use of instruction statistical features to compare the similarity of two hosted files probably occupied by two mutated forms of a specific metamorphic virus. The solution introduced in this paper relies on static analysis and employs the frequency histogram of machine opcodes in different instances of obfuscated viruses. We use Minkowski-form histogram distance measurements in order to check the likeness of portable executables (PE). The purpose of this research is to present an idea that for a number of special obfuscation approaches the presented solution can be used to identify morphed copies of a file. Thus, it can be applied by antivirus scanners to recognize different versions of a metamorphic virus.

  16. Detection of tuberculosis using hybrid features from chest radiographs

    Science.gov (United States)

    Fatima, Ayesha; Akram, M. Usman; Akhtar, Mahmood; Shafique, Irrum

    2017-02-01

    Tuberculosis is an infectious disease that has become a major threat all over the world, yet the diagnosis of tuberculosis remains a challenging task. In the literature, chest radiographs are considered the most commonly used medical images in underdeveloped countries for the diagnosis of TB. Different methods have been proposed, but they are not helpful for radiologists due to cost and accuracy issues. Our paper presents a methodology in which different combinations of features are extracted based on the intensities, shape and texture of the chest radiograph and given to a classifier for the detection of TB. The performance of our methodology is evaluated using the publicly available standard Montgomery County (MC) dataset, which contains 138 CXRs, among which 80 CXRs are normal and 58 CXRs are abnormal, including effusion and miliary patterns. An accuracy of 81.16% was achieved, and the results show that the proposed method outperforms existing state-of-the-art methods on the MC dataset.

  17. Exploiting Product Related Review Features for Fake Review Detection

    Directory of Open Access Journals (Sweden)

    Chengai Sun

    2016-01-01

    Full Text Available Product reviews are now widely used by individuals for making their decisions. However, for profit, reviewers game the system by posting fake reviews to promote or demote target products. In the past few years, fake review detection has attracted significant attention from both industrial organizations and academic communities. However, the issue remains a challenging problem due to the lack of labelled material for supervised learning and evaluation. Current works have made many attempts to address this problem from the angles of the reviewer and the review. However, there has been little discussion of product-related review features, which are the main focus of our method. This paper proposes a novel convolutional neural network model to integrate the product-related review features through a product word composition model. To reduce overfitting and high variance, a bagging model is introduced to bag the neural network model with two efficient classifiers. Experiments on the real-life Amazon review dataset demonstrate the effectiveness of the proposed approach.

  18. Mass detection with digitized screening mammograms by using Gabor features

    Science.gov (United States)

    Zheng, Yufeng; Agyepong, Kwabena

    2007-03-01

    Breast cancer is the leading cancer among American women. The current lifetime risk of developing breast cancer is 13.4% (one in seven). Mammography is the most effective technology presently available for breast cancer screening. With digital mammograms, computer-aided detection (CAD) has proven to be a useful tool for radiologists. In this paper, we focus on mass detection, which is a common category of breast cancers relative to calcification and architectural distortion. We propose a new mass detection algorithm utilizing Gabor filters, termed "Gabor Mass Detection" (GMD). There are three steps in the GMD algorithm: (1) preprocessing, (2) generating alarms and (3) classification (reducing false alarms). Down-sampling, quantization, denoising and enhancement are done in the preprocessing step. Then a total of 30 Gabor-filtered images (along 6 bands by 5 orientations) are produced. Alarm segments are generated by thresholding four Gabor images of full orientations (Stage-I classification) with image-dependent thresholds computed via histogram analysis. Next, a set of edge histogram descriptors (EHD) are extracted from 24 Gabor images (6 by 4) that will be used for Stage-II classification. After clustering EHD features with the fuzzy C-means clustering method, a k-nearest neighbor classifier is used to reduce the number of false alarms. We initially analyzed 431 digitized mammograms (159 normal images vs. 272 cancerous images, from the DDSM project, University of South Florida) with the proposed GMD algorithm. A ten-fold cross validation was used for testing the GMD algorithm upon the available data. The GMD performance is as follows: sensitivity (true positive rate) = 0.88 at false positives per image (FPI) = 1.25, and the area under the ROC curve = 0.83. The overall performance of the GMD algorithm is satisfactory and the accuracy of locating masses (highlighting the boundaries of suspicious areas) is relatively high. Furthermore, the GMD algorithm can
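A Gabor filter bank like the one at the core of the GMD algorithm can be sketched as follows; the kernel is the real part of a standard Gabor filter (a plane wave windowed by a Gaussian), and the parameter values in the usage are chosen for illustration, not taken from the paper:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor kernel (real part), tuned to one scale (wavelength)
    and one orientation (theta, in radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

def filter_response(patch, kernel):
    """Correlation of a kernel with an equally sized image patch."""
    return sum(patch[i][j] * kernel[i][j]
               for i in range(len(kernel)) for j in range(len(kernel)))
```

A patch of vertical stripes responds strongly to the kernel matching its orientation and wavelength, and weakly to the perpendicular one, which is how the bank separates oriented texture.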

  19. Bathymetric survey of Lake Calumet, Cook County, Illinois

    Science.gov (United States)

    Duncker, James J.; Johnson, Kevin K.; Sharpe, Jennifer B.

    2015-01-01

    The U.S. Geological Survey collected bathymetric data in Lake Calumet and a portion of the Calumet River in the vicinity of Lake Calumet to produce a bathymetric map. The bathymetric survey was made over 3 days (July 26, September 11, and November 7, 2012). Lake Calumet has become a focus area for Asian carp rapid-response efforts by state and federal agencies, and very little bathymetric data existed prior to this survey. This bathymetric survey provides data for a variety of scientific and engineering studies of the area; for example, hydraulic modeling of water and sediment transport from Lake Calumet to the Calumet River.

  20. The effect of destination linked feature selection in real-time network intrusion detection

    CSIR Research Space (South Africa)

    Mzila, P

    2013-07-01

    Full Text Available Elimination of the insignificant features leads to a simplified problem and may enhance the detection rate, which is itself a problem in network intrusion detection systems. Furthermore, removal of specifically the destination linked features will allow the trained...

  1. A new morpho-bathymetric map of the Eastern Mediterranean Sea

    Science.gov (United States)

    Mascle, Jean; Brosolo, Laetitia

    2016-04-01

    A new morpho-bathymetric synthesis of the Eastern Mediterranean Sea has been compiled using a digital terrain model (DTM) based on a 100-meter grid. This DTM has been constructed using data provided by several peri-Mediterranean institutes and collected using various swath bathymetry systems operated by different research vessels. One may estimate that 90% of the seabed at water depths greater than 2000 m has been mapped using swath systems. The aim of this synthesis is chiefly to illustrate, in detail, the morphological features resulting from the various (sedimentary, tectonic, geochemical, magmatic, etc.) active geological processes operating on the four main physiographic domains which characterize the Eastern Mediterranean Sea: the Calabria outer arc (Ionian Sea), the Mediterranean Ridge (most of the central basin), the Nile sedimentary cone (off Egypt) and the Eratosthenes seamount (south of Cyprus). For areas not yet covered by swath bathymetric systems, the map has been completed with digital data extracted either from GEBCO or from EMODNET DTM files (http://www.gebco.net/data_and_products/gebco_digital_atlas/) (http://www.emodnet-hydrography.eu/). Several artifacts introduced by the use of these files, for example the occurrences of their grids, can be detected along most of the steep continental slopes not yet mapped in detail, as well as in the southern domain of the Adriatic Sea. Similarly, it has not been possible to systematically correct a few, but non-linear, discrepancies in Z values between various DTM files. Such discrepancies result either from the use of data collected by swath systems operating at different frequencies and/or from minor differences in seawater sound velocity corrections.

  2. Metamorphic Virus Detection in Portable Executables Using Opcodes Statistical Feature

    Directory of Open Access Journals (Sweden)

    Babak Bashari Rad

    2011-01-01

    Full Text Available Metamorphic viruses engage different mutation techniques to escape from string-signature-based scanning. They try to change their code in new offspring so that the variants appear non-similar and have no common sequences of string as a signature. However, all versions of a metamorphic virus have similar task and performance. This obfuscation process helps to keep them safe from string-based signature detection. In this study, we make use of instruction statistical features to compare the similarity of two hosted files probably occupied by two mutated forms of a specific metamorphic virus. The solution introduced in this paper relies on static analysis and employs the frequency histogram of machine opcodes in different instances of obfuscated viruses. We use Minkowski-form histogram distance measurements in order to check the likeness of portable executables (PE). The purpose of this research is to present an idea that for a number of special obfuscation approaches the presented solution can be used to identify morphed copies of a file. Thus, it can be applied by antivirus scanners to recognize different versions of a metamorphic virus.
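The opcode-histogram comparison described above can be sketched in a few lines. The mnemonics in the usage are illustrative, and p=1 gives the city-block form of the Minkowski distance:

```python
from collections import Counter

def opcode_histogram(opcodes):
    """Normalised frequency histogram of machine opcodes."""
    counts = Counter(opcodes)
    total = sum(counts.values())
    return {op: c / total for op, c in counts.items()}

def minkowski_distance(h1, h2, p=1):
    """Minkowski-form distance between two opcode histograms;
    p=1 is the city-block form commonly used for histogram matching."""
    ops = set(h1) | set(h2)
    return sum(abs(h1.get(op, 0.0) - h2.get(op, 0.0)) ** p
               for op in ops) ** (1 / p)
```

Reordering instructions, a typical metamorphic mutation, leaves the histogram unchanged, so mutated copies of the same virus stay close in this distance while unrelated code is far away.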

  3. Detecting Local Manifold Structure for Unsupervised Feature Selection

    Institute of Scientific and Technical Information of China (English)

    FENG Ding-Cheng; CHEN Feng; XU Wen-Li

    2014-01-01

    Unsupervised feature selection is fundamental in statistical pattern recognition, and has drawn persistent attention in the past several decades. Recently, much work has shown that feature selection can be formulated as nonlinear dimensionality reduction with discrete constraints. This line of research emphasizes utilizing the manifold learning techniques, where feature selection and learning can be studied based on the manifold assumption in data distribution. Many existing feature selection methods such as Laplacian score, SPEC (spectrum decomposition of graph Laplacian), TR (trace ratio) criterion, MSFS (multi-cluster feature selection) and EVSC (eigenvalue sensitive criterion) apply the basic properties of graph Laplacian, and select the optimal feature subsets which best preserve the manifold structure defined on the graph Laplacian. In this paper, we propose a new feature selection perspective from locally linear embedding (LLE), which is another popular manifold learning method. The main difficulty of using LLE for feature selection is that its optimization involves quadratic programming and eigenvalue decomposition, both of which are continuous procedures and different from discrete feature selection. We prove that the LLE objective can be decomposed with respect to data dimensionalities in the subset selection problem, which also facilitates constructing better coordinates from data using the principal component analysis (PCA) technique. Based on these results, we propose a novel unsupervised feature selection algorithm, called locally linear selection (LLS), to select a feature subset representing the underlying data manifold. The local relationship among samples is computed from the LLE formulation, which is then used to estimate the contribution of each individual feature to the underlying manifold structure. These contributions, represented as LLS scores, are ranked and selected as the candidate solution to feature selection. We further develop a

  4. Feature Detection, Characterization and Confirmation Methodology: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Karasaki, Kenzi; Apps, John; Doughty, Christine; Gwatney, Hope; Onishi, Celia Tiemi; Trautz, Robert; Tsang, Chin-Fu

    2007-03-01

    This is the final report of the NUMO-LBNL collaborative project: Feature Detection, Characterization and Confirmation Methodology under NUMO-DOE/LBNL collaboration agreement, the task description of which can be found in the Appendix. We examine site characterization projects from several sites in the world. The list includes Yucca Mountain in the USA, Tono and Horonobe in Japan, AECL in Canada, sites in Sweden, and Olkiluoto in Finland. We identify important geologic features and parameters common to most (or all) sites to provide useful information for future repository siting activity. At first glance, one could question whether there was any commonality among the sites, which are in different rock types at different locations. For example, the planned Yucca Mountain site is a dry repository in unsaturated tuff, whereas the Swedish sites are situated in saturated granite. However, the study concludes that indeed there are a number of important common features and parameters among all the sites--namely, (1) fault properties, (2) fracture-matrix interaction (3) groundwater flux, (4) boundary conditions, and (5) the permeability and porosity of the materials. We list the lessons learned from the Yucca Mountain Project and other site characterization programs. Most programs have by and large been quite successful. Nonetheless, there are definitely 'should-haves' and 'could-haves', or lessons to be learned, in all these programs. Although each site characterization program has some unique aspects, we believe that these crosscutting lessons can be very useful for future site investigations to be conducted in Japan. One of the most common lessons learned is that a repository program should allow for flexibility, in both schedule and approach. We examine field investigation technologies used to collect site characterization data in the field. An extensive list of existing field technologies is presented, with some discussion on usage and limitations

  6. Linear- and Repetitive-Feature Detection Within Remotely Sensed Imagery

    Science.gov (United States)

    2017-04-01

    1.1 Background The Army desires the ability to deliver cargo, equipment, and personnel to harsh locations almost anywhere on the planet. This...because the Hough transform is designed to look for straight linear features, which most real-life features are not. As mentioned previously, it is...repetitive features are differentiated based on their appearance in the images of interest; however, real-life repetitive features often correspond to

  8. Fast and efficient local features detection for building recognition

    DEFF Research Database (Denmark)

    Nguyen, Phuong Giang; Andersen, Hans Jørgen

    2011-01-01

    The vast growth of image databases creates many challenges for computer vision applications, for instance image retrieval and object recognition. Large variation in imaging conditions such as illumination and geometrical properties (including scale, rotation, and viewpoint) gives rise to the need...... for invariant features; i.e. image features should have minimal differences under these conditions. Local image features in the form of key points are widely used because of their invariant properties. In this chapter, we analyze different issues relating to existing local feature detectors. Based...

  9. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Topouzelis

    2008-10-01

    Full Text Available This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern because of their impact on fragile marine and coastal ecosystems. The amount of pollutant discharges and associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills in radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques to detect dark formations in SAR images, the features which are extracted from the detected dark formations, and the most used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  10. A behavioral role for feature detection by sensory bursts.

    Science.gov (United States)

    Marsat, Gary; Pollack, Gerald S

    2006-10-11

    Brief episodes of high-frequency firing of sensory neurons, or bursts, occur in many systems, including mammalian auditory and visual systems, and are believed to signal the occurrence of particularly important stimulus features, i.e., to function as feature detectors. However, the behavioral relevance of sensory bursts has not been established in any system. Here, we show that bursts in an identified auditory interneuron of crickets reliably signal salient stimulus features and reliably predict behavioral responses. Our results thus demonstrate the close link between sensory bursts and behavior.

  11. A general method for generating bathymetric data for hydrodynamic computer models

    Science.gov (United States)

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric database using linear or cubic shape functions as used in the finite-element method. The bathymetric database organization and preprocessing, the search algorithm used in finding the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points are included in the analysis. This report includes documentation of two computer programs which are used to: (1) organize the input bathymetric data; and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
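
    The linear shape-function interpolation described above can be sketched as follows. This is an illustrative reconstruction, not code from the report; the function name and the toy soundings are our assumptions. Depth at a model grid point inside a triangle of bounding soundings is a barycentric-weighted average of the three vertex depths.

```python
# Hypothetical sketch of linear shape-function (finite-element) depth
# interpolation from three bounding bathymetric data points.

def interp_depth(p, tri):
    """Interpolate depth at p = (x, y) from a triangle of (x, y, depth)
    soundings using linear finite-element shape functions."""
    (x1, y1, d1), (x2, y2, d2), (x3, y3, d3) = tri
    # Twice the signed area of the triangle
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    # Barycentric coordinates act as the shape functions N1, N2, N3
    n1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    n2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    n3 = 1.0 - n1 - n2
    return n1 * d1 + n2 * d2 + n3 * d3

soundings = [(0.0, 0.0, 6.0), (3.0, 0.0, 9.0), (0.0, 3.0, 12.0)]
# At the centroid the result is the mean of the vertex depths
print(interp_depth((1.0, 1.0), soundings))
```

A cubic variant would replace the linear shape functions with the higher-order polynomials of the corresponding finite element; the search algorithm's job is only to supply the bounding triangle.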

  12. Stereo vision-based pedestrian detection using multiple features for automotive application

    Science.gov (United States)

    Lee, Chung-Hee; Kim, Dongyoung

    2015-12-01

    In this paper, we propose stereo vision-based pedestrian detection using multiple features for automotive applications. The disparity map from the stereo vision system and multiple features are utilized to enhance pedestrian detection performance. The disparity map offers 3D information, which enables obstacles to be detected easily and reduces the overall detection time by removing unnecessary background. The road feature is extracted from the v-disparity map calculated from the disparity map. The road feature is a decision criterion to determine the presence or absence of obstacles on the road. Obstacle detection is performed by comparing the road feature with all columns in the disparity map. The result of obstacle detection is segmented by bird's-eye-view mapping to separate obstacle areas containing multiple objects into single-obstacle areas. Histogram-based clustering is performed in the bird's-eye-view map. Each segmented result is verified by the classifier with the training model. To enhance pedestrian recognition performance, multiple features such as HOG, CSS, and symmetry features are utilized. In particular, the symmetry feature is well suited to representing a pedestrian standing or walking. The block-based symmetry feature is utilized to minimize the influence of image type, and the best among the three symmetry features of the H, S, and V images is selected as the symmetry feature for each pixel. The ETH database is utilized to verify our pedestrian detection algorithm.
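
    The v-disparity road feature mentioned above can be illustrated with a small sketch. Everything here (function names, the toy disparity image, using the per-row dominant bin as the road profile) is our assumption, not the authors' implementation: a v-disparity map is simply a per-row histogram of disparity values, in which the road surface traces out a dominant line.

```python
# Illustrative v-disparity construction from a dense disparity map.

def v_disparity(disp, max_d):
    """Per-row histogram of disparity values: rows x (max_d + 1) counts."""
    vmap = [[0] * (max_d + 1) for _ in disp]
    for v, row in enumerate(disp):
        for d in row:
            if 0 <= d <= max_d:
                vmap[v][d] += 1
    return vmap

# Toy 4x6 disparity image: road disparity grows toward the bottom row;
# column 2 holds an obstacle at constant disparity 3.
disp = [
    [0, 0, 3, 0, 0, 0],
    [1, 1, 3, 1, 1, 1],
    [2, 2, 3, 2, 2, 2],
    [3, 3, 3, 3, 3, 3],
]
vmap = v_disparity(disp, 3)
# The dominant bin per row traces the road profile (0, 1, 2, 3);
# columns whose disparities sit above this profile are obstacle candidates.
road = [max(range(4), key=lambda d: row[d]) for row in vmap]
print(road)  # [0, 1, 2, 3]
```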

  13. Investigation of kinematic features for dismount detection and tracking

    Science.gov (United States)

    Narayanaswami, Ranga; Tyurina, Anastasia; Diel, David; Mehra, Raman K.; Chinn, Janice M.

    2012-05-01

    With recent changes in threats and methods of warfighting and the use of unmanned aircraft, ISR (Intelligence, Surveillance and Reconnaissance) activities have become critical to the military's efforts to maintain situational awareness and neutralize the enemy's activities. The identification and tracking of dismounts from surveillance video is an important step in this direction. Our approach combines advanced ultra-fast registration techniques to identify moving objects with a classification algorithm based on both static and kinematic features of the objects. Our objective was to push the acceptable resolution beyond the capability of industry-standard feature extraction methods such as SIFT (Scale-Invariant Feature Transform) based features and, inspired by it, SURF (Speeded-Up Robust Features). Both of these methods utilize single-frame images. We exploited the temporal component of the video signal to develop kinematic features. Of particular interest were the easily distinguishable frequencies characteristic of bipedal human versus quadrupedal animal motion. We examine limits of performance, frame rates, and resolution required for human, animal, and vehicle discrimination. A few seconds of video signal at an acceptable frame rate allow us to lower the resolution requirements for individual frames by as much as a factor of five, which translates into a corresponding increase in the acceptable standoff distance between the sensor and the object of interest.

  14. Improved epileptic seizure detection combining dynamic feature normalization with EEG novelty detection.

    Science.gov (United States)

    Bogaarts, J G; Hilkman, D M W; Gommer, E D; van Kranen-Mastenbroek, V H J M; Reulen, J P H

    2016-12-01

    Continuous electroencephalographic monitoring of critically ill patients is an established procedure in intensive care units. Seizure detection algorithms, such as support vector machines (SVM), play a prominent role in this procedure. To correct for inter-human differences in EEG characteristics, as well as for intra-human EEG variability over time, dynamic EEG feature normalization is essential. Recently, the median decaying memory (MDM) approach was determined to be the best method of normalization. MDM uses a sliding baseline buffer of EEG epochs to calculate feature normalization constants. However, while this method does include non-seizure EEG epochs, it also includes EEG activity that can have a detrimental effect on the normalization and subsequent seizure detection performance. In this study, EEG epochs to be incorporated into the baseline buffer are automatically selected based on a novelty detection algorithm (Novelty-MDM). Performance of an SVM-based seizure detection framework is evaluated in 17 long-term ICU registrations using the area under the sensitivity-specificity ROC curve. This evaluation compares three different EEG normalization methods, namely a fixed baseline buffer (FB), the median decaying memory (MDM) approach, and our novelty median decaying memory (Novelty-MDM) method. It is demonstrated that MDM did not improve overall performance compared to FB (p < 0.27), partly because seizure-like episodes were included in the baseline. More importantly, Novelty-MDM significantly outperforms both FB (p = 0.015) and MDM (p = 0.0065).
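
    The sliding-buffer normalization idea can be sketched as follows. This is a hedged toy version, not the paper's exact formulation: the buffer length, the plain division by the median, and the class name are all our assumptions. In Novelty-MDM, an epoch would only be passed to update() if a novelty detector first accepted it as baseline-like.

```python
# Toy median-decaying-memory (MDM) style feature normalization:
# features are scaled by the median of a sliding baseline buffer.

from collections import deque
from statistics import median

class MDMNormalizer:
    def __init__(self, buffer_len=5):
        # deque(maxlen=...) discards the oldest epoch: the "decaying memory"
        self.buffer = deque(maxlen=buffer_len)

    def update(self, feature):
        """Admit a baseline epoch's feature value into the buffer."""
        self.buffer.append(feature)

    def normalize(self, feature):
        """Divide by the buffer median (pass through while buffer is empty)."""
        if not self.buffer:
            return feature
        m = median(self.buffer)
        return feature / m if m else feature

norm = MDMNormalizer(buffer_len=3)
for f in [2.0, 4.0, 6.0]:   # assumed non-seizure baseline epochs
    norm.update(f)
print(norm.normalize(8.0))  # 8.0 / median([2, 4, 6]) = 2.0
```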

  15. Bathymetric surveys of the Neosho River, Spring River, and Elk River, northeastern Oklahoma and southwestern Missouri, 2016–17

    Science.gov (United States)

    Hunter, Shelby L.; Ashworth, Chad E.; Smith, S. Jerrod

    2017-09-26

    In February 2017, the Grand River Dam Authority filed to relicense the Pensacola Hydroelectric Project with the Federal Energy Regulatory Commission. The predominant feature of the Pensacola Hydroelectric Project is Pensacola Dam, which impounds Grand Lake O’ the Cherokees (locally called Grand Lake) in northeastern Oklahoma. Identification of information gaps and assessment of project effects on stakeholders are central aspects of the Federal Energy Regulatory Commission relicensing process. Some upstream stakeholders have expressed concerns about the dynamics of sedimentation and flood flows in the transition zone between major rivers and Grand Lake O’ the Cherokees. To relicense the Pensacola Hydroelectric Project with the Federal Energy Regulatory Commission, the hydraulic models for these rivers require high-resolution bathymetric data along the river channels. In support of the Federal Energy Regulatory Commission relicensing process, the U.S. Geological Survey, in cooperation with the Grand River Dam Authority, performed bathymetric surveys of (1) the Neosho River from the Oklahoma border to the U.S. Highway 60 bridge at Twin Bridges State Park, (2) the Spring River from the Oklahoma border to the U.S. Highway 60 bridge at Twin Bridges State Park, and (3) the Elk River from Noel, Missouri, to the Oklahoma State Highway 10 bridge near Grove, Oklahoma. The Neosho River and Spring River bathymetric surveys were performed from October 26 to December 14, 2016; the Elk River bathymetric survey was performed from February 27 to March 21, 2017. Only areas inundated during those periods were surveyed. The bathymetric surveys covered a total distance of about 76 river miles and a total area of about 5 square miles. More than 1.4 million bathymetric-survey data points were used in the computation and interpolation of bathymetric-survey digital elevation models and derived contours at 1-foot (ft) intervals. The minimum bathymetric-survey elevation of the Neosho

  16. A novel feature selection approach for intrusion detection data classification

    NARCIS (Netherlands)

    Ambusaidi, Mohammed A.; He, Xiangjian; Tan, Zhiyuan; Nanda, Priyadarsi; Lu, Liang Fu; Nagar, Upasana T.

    2014-01-01

    Intrusion Detection Systems (IDSs) play a significant role in monitoring and analyzing daily activities occurring in computer systems to detect occurrences of security threats. However, the analytical data routinely produced from computer networks are usually very large in size. This creates a

  17. The Effect of Resolution on Detecting Visually Salient Preattentive Features

    Science.gov (United States)

    2015-06-01

    distinguish a dull yellow daffodil among a field of dull yellow dandelions versus finding a bright red rose in that same field. The human eye is directed...to particular regions in a scene by highly salient features, for example, the color of the flower discussed in the previous example. These

  18. Repolarization features as detectable from electrograms and electrocardiograms

    NARCIS (Netherlands)

    Oosterom, A. van

    2013-01-01

    This contribution discusses the feasibility of extracting the major features of repolarization: its spatio-temporal behaviour, and how much of its global or local behaviour might be deduced from signals that can be observed experimentally. The analysis presented is based on source-volume-conductor c

  19. Detection of Abnormal Events via Optical Flow Feature Analysis

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2015-03-01

    Full Text Available In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on the histogram of optical flow orientation descriptor and a classification method. The details of the histogram of optical flow orientation descriptor are illustrated for describing the movement information of the global video frame or the foreground frame. By combining one-class support vector machine and kernel principal component analysis methods, abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The different abnormal detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm.
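
    The orientation-histogram descriptor can be sketched as follows. This is our minimal construction, not the paper's implementation; the bin count, the magnitude weighting, and the L1 normalization are assumptions. In the full method, such descriptors would then be fed to a one-class SVM trained on normal frames.

```python
# Minimal histogram-of-optical-flow-orientation descriptor: flow vectors
# are binned by orientation, weighted by magnitude, and L1-normalized.

import math

def hof_descriptor(flow, n_bins=4):
    """flow: list of (dx, dy) flow vectors for a frame or foreground region."""
    hist = [0.0] * n_bins
    for dx, dy in flow:
        mag = math.hypot(dx, dy)
        if mag == 0.0:
            continue  # stationary pixels carry no orientation
        ang = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(ang / (2 * math.pi) * n_bins) % n_bins] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# All motion points right, so all mass lands in the first orientation bin
print(hof_descriptor([(1.0, 0.0), (2.0, 0.0)]))  # [1.0, 0.0, 0.0, 0.0]
```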

  20. Eigenvalue-weighting and feature selection for computer-aided polyp detection in CT colonography

    Science.gov (United States)

    Zhu, Hongbin; Wang, Su; Fan, Yi; Lu, Hongbing; Liang, Zhengrong

    2010-03-01

    With the development of computer-aided polyp detection towards virtual colonoscopy screening, the trade-off between detection sensitivity and specificity has gained increasing attention. An optimum detection, with the least number of false positives and the highest true-positive rate, is desirable and involves interdisciplinary knowledge, such as feature extraction, feature selection, and machine learning. Toward that goal, various geometrical and textural features, associated with each suspicious polyp candidate, have been individually extracted and stacked together as a feature vector. However, directly inputting these high-dimensional feature vectors into a learning machine, e.g., a neural network, for polyp detection may introduce redundant information due to feature correlation and induce the curse of dimensionality. In this paper, we explored an indispensable building block of computer-aided polyp detection, i.e., principal component analysis (PCA)-weighted feature selection for a neural network classifier of true and false positives. The major concepts proposed in this paper include (1) the use of PCA to reduce feature correlation, (2) a scheme of adaptively weighting each principal component (PC) by the associated eigenvalue, and (3) the selection of feature combinations via a genetic algorithm. As such, the eigenvalue is also taken as part of the characterizing feature, and the necessary number of features can be exposed to mitigate the curse of dimensionality. Learned and tested with a radial basis neural network, the proposed computer-aided polyp detection achieved 95% sensitivity at a cost of, on average, 2.99 false positives per polyp.
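
    The eigenvalue-weighting idea can be sketched in two dimensions. This is our toy illustration of the concept only: the paper works in a much higher-dimensional feature space, and the closed-form 2x2 eigendecomposition and function names here are assumptions.

```python
# Toy eigenvalue-weighted PCA in 2-D: project a feature vector onto the
# leading principal component and scale the component by its eigenvalue,
# so directions with more variance carry more weight.

import math

def pca2_eigen(data):
    """Closed-form eigendecomposition of the 2x2 covariance of 2-D points.
    Returns (eigenvalues, leading unit eigenvector, mean)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    a = sum((x - mx) ** 2 for x, _ in data) / n
    b = sum((x - mx) * (y - my) for x, y in data) / n
    c = sum((y - my) ** 2 for _, y in data) / n
    root = math.hypot(a - c, 2 * b) / 2
    lam1, lam2 = (a + c) / 2 + root, (a + c) / 2 - root
    vx, vy = (b, lam1 - a) if b else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(vx, vy) or 1.0
    return (lam1, lam2), (vx / norm, vy / norm), (mx, my)

def eigenweighted_component(p, data):
    """Leading-PC projection of p, weighted by the PC's eigenvalue."""
    (lam1, _), (vx, vy), (mx, my) = pca2_eigen(data)
    return lam1 * ((p[0] - mx) * vx + (p[1] - my) * vy)

# Points on the line y = x: all variance lies along the diagonal PC,
# and the second eigenvalue vanishes.
pts = [(0, 0), (1, 1), (2, 2), (3, 3)]
(lam1, lam2), v1, _ = pca2_eigen(pts)
print(round(lam1, 2), round(lam2, 2))  # 2.5 0.0
```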

  1. STUDY ON THE TECHNIQUE TO DETECT TEXTURE FEATURES IN SAR IMAGES

    Institute of Scientific and Technical Information of China (English)

    Fu Yusheng; Ding Dongtao; Hou Yinming

    2004-01-01

    This letter studies the detection of texture features in Synthetic Aperture Radar (SAR) images. Through analysis of the feature detection method proposed by Lopes, an improved texture detection method is proposed, which can not only detect edges and lines but also avoid the edge stretching and line suppression of the former algorithm. Experimental results with both simulated and real SAR images verify the advantage and practicability of the improved method.

  2. Psoriasis Detection Using Skin Color and Texture Features

    Directory of Open Access Journals (Sweden)

    Nidhal K.A. Abbadi

    2010-01-01

    Full Text Available Problem statement: In this study a skin disease diagnosis system was developed and tested. The system was used for diagnosis of the psoriasis skin disease. Approach: The present study relied on both skin color and texture features (features derived from the GLCM) to give better and more efficient recognition accuracy of skin diseases. We used feed-forward neural networks to classify input images as psoriasis-infected or non-psoriasis-infected. Results: The system gave very encouraging results during the neural network training and generalization phase. Conclusion: The aim of this work was to evaluate the ability of the proposed skin texture recognition algorithm to discriminate between healthy and infected skin, and we took the psoriasis disease as an example.
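
    The GLCM texture features mentioned above can be sketched as follows. The details here (a single horizontal offset of one pixel, unnormalized counts, contrast as the derived statistic) are our assumptions for illustration, not the study's exact configuration.

```python
# Toy gray-level co-occurrence matrix (GLCM): count how often gray-level
# pairs occur at a fixed spatial offset, then reduce the matrix to scalar
# texture statistics such as contrast.

def glcm(img, levels):
    """Co-occurrence counts for horizontally adjacent pixel pairs."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """Squared gray-level differences, weighted by co-occurrence counts."""
    size = len(m)
    return sum((i - j) ** 2 * m[i][j] for i in range(size) for j in range(size))

img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
m = glcm(img, 4)
print(m[0][0], contrast(m))  # 2 7
```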

  3. DETECTION AND TRACKING OF SUBTLE CLOUD FEATURES ON URANUS

    Energy Technology Data Exchange (ETDEWEB)

    Fry, P. M.; Sromovsky, L. A. [Space Science and Engineering Center, University of Wisconsin, Madison, WI 53706 (United States); De Pater, I. [Astronomy Department, University of California, Berkeley, CA 94720 (United States); Hammel, H. B. [Association of Universities for Research in Astronomy, Washington, DC 20005 (United States); Rages, K. A., E-mail: pat.fry@ssec.wisc.edu [SETI Institute, Mountain View, CA 94043 (United States)

    2012-06-15

    The recently updated Uranus zonal wind profile (Sromovsky et al.) samples latitudes from 71°S to 73°N. But many latitudes remain grossly undersampled (outside 20°-45°S and 20°-50°N) due to a lack of trackable cloud features. Offering some hope of filling these gaps is our recent discovery of low-contrast clouds that can be revealed by imaging at much higher signal-to-noise ratios (S/Ns) than previously obtained. This is demonstrated using an average of 2007 Keck II NIRC2 near-IR observations. Eleven one-minute H-band exposures, acquired over a 1.6 hr time span, were rectilinearly remapped and zonally shifted to account for planetary rotation. This increased the S/N by about a factor of 3.3. New fine structure in latitude bands appeared, small previously unobservable cloud tracers became discernible, and some faint cloud features became prominent. While we could produce one such high-quality average, we could not produce enough to actually track the newly revealed features. This requires a specially designed observational effort. We have designed recent Hubble Space Telescope WFC3 F845M observations to allow application of the technique. We measured eight zonal winds by tracking features in these images and found that several fall off of the current zonal wind profile of Sromovsky et al., and are consistent with a partial reversal of their hemispherically asymmetric profile.

  4. Empirical Evaluation of Different Feature Representations for Social Circles Detection

    Science.gov (United States)

    2015-06-16

    Kaggle competition on learning social circles in networks [5]. The data consist of hand-labelled friendship egonets from Facebook and a set of 57...study and compare the performance on the available labelled Facebook data from the Kaggle competition on learning social circles in networks. We...from both structural egonet information and user profile features. We study and compare the performance on the available labelled Facebook data from

  5. Image Recognition and Feature Detection in Solar Physics

    Science.gov (United States)

    Martens, Petrus C.

    2012-05-01

    The Solar Dynamics Observatory (SDO) data repository will dwarf the archives of all previous solar physics missions put together. NASA recognized early on that the traditional methods of analyzing the data -- solar scientists and grad students in particular analyzing the images by hand -- would simply not work, and tasked our Feature Finding Team (FFT) with developing automated feature recognition modules for solar events and phenomena likely to be observed by SDO. Having these metadata available on-line will enable solar scientists to conduct statistical studies involving large sets of events that would be impossible now with traditional means. We have followed a two-track approach in our project: we have been developing some existing task-specific solar feature finding modules to be pipeline-ready for the stream of SDO data, plus we are designing a few new modules. Secondly, we took it upon ourselves to develop an entirely new “trainable” module capable of identifying different types of solar phenomena starting from a limited number of user-provided examples. Both approaches are now reaching fruition, and I will show examples and movies with results from several of our feature finding modules. In the second part of my presentation I will focus on our “trainable” module, which is the most innovative in character. First, there is a strong similarity between solar and medical X-ray images with regard to their texture, which has allowed us to apply some advances made in medical image recognition. Second, we have found that there is a strong similarity between the way our trainable module works and the way our brain recognizes images. The brain can quickly recognize similar images from key characteristics, just as our code does. We conclude that our approach represents the beginning of a more human-like procedure for computer image recognition.

  6. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images

    Science.gov (United States)

    Gong, Maoguo; Yang, Hailun; Zhang, Puzhao

    2017-07-01

    Ternary change detection aims to detect changes and group them into positive and negative changes. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, a sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. Firstly, the sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. The learned features are then clustered into three classes, which are taken as pseudo labels for training a CNN model as a change feature classifier. Reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Given the training samples and the corresponding pseudo labels, the CNN model can be trained using backpropagation with stochastic gradient descent. During its training procedure, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike traditional methods, the proposed framework integrates the merits of the sparse autoencoder and the CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
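
    The log-ratio difference image that feeds the pipeline above can be sketched as follows. This is our simplified toy (the epsilon guard and function name are assumptions); the paper then learns features from patches of this image rather than thresholding it directly.

```python
# Pixelwise log-ratio of two co-registered SAR intensity images: positive,
# negative and near-zero values suggest positive change, negative change
# and no change, respectively.

import math

def log_ratio(im1, im2, eps=1e-6):
    """log(im2 / im1) per pixel; eps guards against division by zero."""
    return [[math.log((b + eps) / (a + eps)) for a, b in zip(r1, r2)]
            for r1, r2 in zip(im1, im2)]

im1 = [[1.0, 4.0], [2.0, 2.0]]
im2 = [[4.0, 1.0], [2.0, 2.0]]
lr = log_ratio(im1, im2)
print([[round(v, 2) for v in row] for row in lr])  # [[1.39, -1.39], [0.0, 0.0]]
```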

  7. Chromatic Information and Feature Detection in Fast Visual Analysis

    Science.gov (United States)

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.

    2016-01-01

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic; and black-and-white movies provide compelling representations of real-world scenes. Also, contrast sensitivity for color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions in psychophysical measurements of fast-viewing discrimination of natural scenes. We conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in. PMID:27478891

  8. Motion Blobs as a Feature for Detection on Smoke

    Directory of Open Access Journals (Sweden)

    Khalid Nazim S. A.

    2011-09-01

    Full Text Available Smoke is the disturbance of visual perception of the atmosphere caused by small particles of carbonaceous matter suspended in the air, resulting mainly from the burning of organic material; the major problem is to quantify the detected smoke. The present work focuses on the detection of smoke, whether accidental, arson-related, or deliberately created, and on raising an alarm through an electrical device that senses the presence of visible or invisible particles; in simple terms, a smoke detector issuing a signal to a fire alarm system or sounding a local audible alarm from the detector itself.

  9. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Burghouts, G.J.; Eendebak, P.T.; Huis, J.R. van; Dijk, J.; Rest, J.H.C. van

    2014-01-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature

  10. Feature level fusion of polarimetric infrared and GPR data for landmine detection

    NARCIS (Netherlands)

    Cremer, F.; Jong, W. de; Schutte, K.; Yarovoy, A.G.; Kovalenko, V.; Bloemenkamp, R.F.

    2003-01-01

    Feature-level sensor fusion is the process where specific information (i.e. features) from objects detected by different sensors are combined and classified. This paper focuses on the feature-level fusion procedure for a sensor combination consisting of a polarimetric infrared (IR) imaging sensor an

  11. Towards botnet detection through features using network traffic classification

    Directory of Open Access Journals (Sweden)

    Harpinder Kaur

    2016-07-01

    Full Text Available Botnets are becoming the most significant threat to the internet world. A botnet is an automated process run by attackers that interacts with network traffic and its services. Bots are automatically installed and updated on compromised systems to collect authentication information. In this paper, we present a model to extract features which are helpful in analyzing the behaviour of bot members present in particular network traffic. In addition, various methods are evaluated to determine whether network traffic contains a bot or not. In particular, our evaluation shows whether the traffic contains any bot members in its communication.

  12. Southwest Indian Ocean Bathymetric Compilation (swIOBC)

    Science.gov (United States)

    Jensen, L.; Dorschel, B.; Arndt, J. E.; Jokat, W.

    2014-12-01

    As a result of long-term scientific activities in the southwest Indian Ocean, an extensive amount of swath bathymetric data has accumulated in the AWI database. Using these data as a backbone, supplemented by additional bathymetric data sets and predicted bathymetry, we generate a comprehensive regional bathymetric data compilation for the southwest Indian Ocean. A high-resolution bathymetric chart of this region will support geological and climate research: identification of current-induced seabed structures will help in modelling oceanic currents and, thus, provide proxy information about the paleo-climate; analysis of the sediment distribution will contribute to reconstructing the erosional history of Eastern Africa. The aim of swIOBC is to produce a homogeneous and seamless bathymetric grid with an associated meta-database and a corresponding map for the area from 5° to 39° S and 20° to 44° E. At present, multibeam data with a track length of approximately 86,000 km are held in-house. In combination with external echosounding data, this allows for the generation of a regional grid, significantly improving the existing, mostly satellite-altimetry-derived, bathymetric models. The collected data sets are heterogeneous in terms of age, acquisition system, background data, resolution, accuracy, and documentation. As a consequence, the production of a bathymetric grid requires special techniques and algorithms, which were already developed for the IBCAO (Jakobsson et al., 2012) and further refined for the IBCSO (Arndt et al., 2013). The new regional southwest Indian Ocean chart will be created based on these methods. Arndt, J.E., et al., 2013. The International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0—A new bathymetric compilation covering circum-Antarctic waters. GRL 40, 1-7, doi: 10.1002/grl.50413. Jakobsson, M., et al., 2012. The International Bathymetric Chart of the Arctic Ocean (IBCAO) Version 3.0. GRL 39, L12609, doi: 10.1029/2012GL052219.

  13. Multi-Cue-Based Face and Facial Feature Detection on Video Segments

    Institute of Scientific and Technical Information of China (English)

    PENG ZhenYun(彭振云); AI HaiZhou(艾海舟); Hong Wei(洪微); LIANG LuHong(梁路宏); XU GuangYou(徐光祐)

    2003-01-01

    An approach is presented to detect faces and facial features on a video segment based on multiple cues, including gray-level distribution, color, motion, templates, algebraic features and so on. Faces are first detected across the frames by using color segmentation, template matching and an artificial neural network. A PCA-based (Principal Component Analysis) feature detector for still images is then used to detect facial features on each single frame until the resulting features of three adjacent frames, named base frames, are consistent with each other. The features of frames neighboring the base frames are first detected by the still-image feature detector, then verified and corrected according to the smoothness constraint and the planar surface motion constraint. Experiments have been performed on video segments captured under different environments, and the presented method is proved to be robust and accurate over variable poses, ages and illumination conditions.

  14. Pair normalized channel feature and statistics-based learning for high-performance pedestrian detection

    Science.gov (United States)

    Zeng, Bobo; Wang, Guijin; Ruan, Zhiwei; Lin, Xinggang; Meng, Long

    2012-07-01

    High-performance pedestrian detection with good accuracy and fast speed is an important yet challenging task in computer vision. We design a novel feature named the pair normalized channel feature (PNCF), which simultaneously combines and normalizes two channel features in image channels, achieving high discriminative power and computational efficiency. PNCF applies to both gradient channels and color channels, so that shape and appearance information are described and integrated in the same feature. To efficiently explore the formidably large PNCF feature space, we propose a statistics-based feature learning method to select a small number of potentially discriminative candidate features, which are fed into the boosting algorithm. In addition, channel compression and a hybrid pyramid are employed to speed up the multiscale detection. Experiments illustrate the effectiveness of PNCF and its learning method. Our proposed detector outperforms the state-of-the-art on several benchmark datasets in both detection accuracy and efficiency.

  15. Novel Detection Features for SSVEP Based BCI: Coefficient of Variation and Variation Speed

    OpenAIRE

    Abdullah Talha Sözer; Can Bülent

    2017-01-01

    This paper introduces novel detection features for the steady-state visually evoked potential (SSVEP) based brain computer interfaces. The coefficient of variation and variation speed features were developed using the stability of SSVEP response. The developed features were tested on 13 subjects. On this dataset, for which the chance level is 12.5%, about 70% detection accuracy was obtained. Based on these results, it is considered that the coefficient of variation and the variation speed can...

  16. RFI detection by automated feature extraction and statistical analysis

    Science.gov (United States)

    Winkel, B.; Kerp, J.; Stanko, S.

    2007-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA-technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength RFI signals can be identified down to the 4σ_rms level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) the mean signal strength is comparable to the astronomical line emission of the Milky Way, (2) interferences are polarised, (3) electronic devices in the neighbourhood of the telescope contribute significantly to the RFI radiation. We also show that the radiometer equation is no longer fulfilled in the presence of RFI signals.
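
    The flagging step above can be sketched in a simplified form: a low-order 1-D polynomial baseline fit per spectrum (a stand-in for the paper's 2-D time-frequency fit) followed by thresholding residuals at a few robust sigma. The polynomial order, threshold and synthetic data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def flag_rfi(spectra, order=3, nsigma=4.0):
    """Per-spectrum RFI flagging: fit a low-order polynomial baseline,
    then flag channels whose residual exceeds nsigma * a robust rms."""
    x = np.arange(spectra.shape[1])
    mask = np.zeros_like(spectra, dtype=bool)
    for i, s in enumerate(spectra):
        coeffs = np.polyfit(x, s, order)
        resid = s - np.polyval(coeffs, x)
        # robust sigma estimate from the median absolute deviation
        rms = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        mask[i] = resid > nsigma * rms
    return mask

# synthetic spectrum: smooth baseline + noise + one strong interference spike
rng = np.random.default_rng(0)
x = np.arange(512)
spec = 10.0 + 0.01 * x + rng.normal(0.0, 0.5, 512)
spec[200] += 20.0                     # injected RFI
flags = flag_rfi(spec[None, :])
```

    On the synthetic spectrum, the injected spike should be flagged while the smooth baseline is not; the robust (MAD-based) sigma keeps the threshold stable even when strong RFI is present.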

  17. RFI detection by automated feature extraction and statistical analysis

    CERN Document Server

    Winkel, B; Stanko, S; Winkel, Benjamin; Kerp, Juergen; Stanko, Stephan

    2006-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA-technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength RFI signals can be identified down to the 4-sigma level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) mean signal strength is comparable to the a...

  18. Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Komeil Rokni

    2014-05-01

    Full Text Available Lake Urmia is the 20th largest lake and the second largest hypersaline lake (before September 2010) in the world. It is also the largest inland body of salt water in the Middle East. Nevertheless, the lake has been in a critical situation in recent years due to decreasing surface water and increasing salinity. This study modeled the spatiotemporal changes of Lake Urmia in the period 2000–2013 using multi-temporal Landsat 5-TM, 7-ETM+ and 8-OLI images. In doing so, the applicability of different satellite-derived indexes, including the Normalized Difference Water Index (NDWI), Modified NDWI (MNDWI), Normalized Difference Moisture Index (NDMI), Water Ratio Index (WRI), Normalized Difference Vegetation Index (NDVI), and Automated Water Extraction Index (AWEI), was investigated for the extraction of surface water from Landsat data. Overall, the NDWI was found superior to the other indexes and hence was used to model the spatiotemporal changes of the lake. In addition, a new approach based on Principal Components of multi-temporal NDWI (NDWI-PCs) was proposed and evaluated for surface water change detection. The results indicate an intense decreasing trend in Lake Urmia surface area in the period 2000–2013, especially between 2010 and 2013, when the lake lost about one third of its surface area compared to the year 2000. The results illustrate the effectiveness of the NDWI-PCs approach for surface water change detection, especially in detecting the changes between two and three different times simultaneously.
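
    As a reference point, McFeeters' NDWI used above is a simple band ratio. A minimal numpy sketch with toy reflectance values (illustrative, not actual Landsat data):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters' NDWI: (Green - NIR) / (Green + NIR); water pixels tend to be positive."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-12)   # small epsilon avoids 0/0

# toy 2x2 scene: left column water-like (high green, low NIR), right column land-like
green = np.array([[0.30, 0.10],
                  [0.28, 0.12]])
nir   = np.array([[0.05, 0.40],
                  [0.06, 0.35]])
water_mask = ndwi(green, nir) > 0.0
```

    Thresholding NDWI at zero is the usual starting point; in practice the threshold is often tuned per scene.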

  19. AUTOMATED DETECTION OF SKIN DISEASES USING TEXTURE FEATURES

    Directory of Open Access Journals (Sweden)

    DR.RANJAN PAREKH

    2011-06-01

    Full Text Available This paper proposes an automated system for recognizing disease conditions of human skin in the context of health informatics. The disease conditions are recognized by analyzing skin texture images using a set of normalized symmetrical Grey Level Co-occurrence Matrices (GLCM). The GLCM defines the probability of grey level i occurring in the neighborhood of another grey level j at a distance d in direction θ. Directional GLCMs are computed along four directions: horizontal (θ = 0°), vertical (θ = 90°), right diagonal (θ = 45°) and left diagonal (θ = 135°), and a set of features computed from each is averaged to provide an estimation of the texture class. The system is tested using 180 images pertaining to three dermatological skin conditions, viz. Dermatitis, Eczema, and Urticaria. An accuracy of 96.6% is obtained using a multilayer perceptron (MLP) as a classifier.
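
    The directional GLCM computation described above can be sketched directly in numpy. The tiny image, the distance d = 1 and the single contrast feature below are illustrative choices, not the paper's exact feature set:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Symmetric, normalized grey-level co-occurrence matrix for offset (dx, dy)."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    P = P + P.T                 # symmetrize
    return P / P.sum()          # normalize to probabilities

def contrast(P):
    """One example GLCM feature: sum of (i - j)^2 * P(i, j)."""
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

# offsets for the four directions at distance d = 1
offsets = {0: (1, 0), 45: (1, -1), 90: (0, -1), 135: (-1, -1)}
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = {a: contrast(glcm(img, dx, dy, levels=4)) for a, (dx, dy) in offsets.items()}
avg_contrast = np.mean(list(feats.values()))   # features averaged over directions
```

    Averaging each feature over the four directional matrices, as in the paper, yields an orientation-insensitive texture descriptor.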

  20. Detecting submerged features in water: modeling, sensors, and measurements

    Science.gov (United States)

    Bostater, Charles R., Jr.; Bassetti, Luce

    2004-11-01

    It is becoming more important to understand the remote sensing systems and associated autonomous or semi-autonomous methodologies (robotics and mechatronics) that may be utilized in freshwater and marine aquatic environments. This need comes not only from advances in our scientific understanding and technological capabilities, but also from the desire to ensure that the risks associated with UXO (unexploded ordnance), related submerged mines, and submerged targets (such as submerged aquatic vegetation) and debris left from previous human activities are remotely sensed and identified, followed by risk reduction through detection and removal. This paper describes (a) remote sensing systems and (b) platforms (fixed and mobile), and demonstrates (c) the value of thinking in terms of scalability as well as modularity in the design and application of new systems now being constructed within our laboratory and other laboratories, as well as future systems. New remote sensing systems, whether moving or fixed, as well as autonomous or semi-autonomous robotic and mechatronic systems, will be essential to secure domestic preparedness for humanitarian reasons. These remote sensing systems hold tremendous value if thoughtfully designed for other applications, which include environmental monitoring in ambient environments.

  1. Feature learning for a hidden Markov model approach to landmine detection

    Science.gov (United States)

    Zhang, Xuping; Gader, Paul; Frigui, Hichem

    2007-04-01

    Hidden Markov Models (HMMs) are useful tools for landmine detection and discrimination using Ground Penetrating Radar (GPR). The performance of HMMs, as well as other feature-based methods, depends not only on the design of the classifier but also on the features. Traditionally, algorithms for learning the parameters of classifiers have been intensely investigated, while algorithms for learning parameters of the feature extraction process have received much less attention. In this paper, we describe experiments for learning feature extraction and classification parameters simultaneously in the context of using hidden Markov models for landmine detection.

  2. Thermal stability of soils and detectability of intrinsic soil features

    Science.gov (United States)

    Siewert, Christian; Kucerik, Jiri

    2014-05-01

    applicability of thermogravimetry for soil property determination. Despite the extreme diversity of individual substances in soils, the thermal decay can be predicted with simple mathematical models. For example, the sum of mass losses in the large temperature interval from 100 °C to 550 °C (known in the past from organic matter determination by ignition mass loss) can be predicted using the TML in two small temperature intervals: 130–140 °C and 320–330 °C. In this case, the coefficient of determination between measured and calculated results reached an R² above 0.97. Further, we found close autocorrelations between thermal mass losses in different temperature intervals. They refer to interrelations between the evaporation of bound water and the thermal decay of organo-mineral complexes in soils less affected by human influence. In contrast, deviations from such interrelations were found under extreme environmental conditions and in soils under human use. These results confirm current knowledge about the influence of clay on both water binding and organic matter accumulation during natural soil formation. Therefore, these interrelations between soil components are discussed as intrinsic features of soils, which open the opportunity for experimentally distinguishing natural soils from organic and inorganic materials that do not have a pedogenetic origin.
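
    The two-interval prediction described above amounts to a linear model fitted by least squares. A minimal sketch on synthetic thermogravimetry data (the coefficients and noise level below are invented for illustration, not taken from the study):

```python
import numpy as np

# synthetic thermograms: mass losses in the 130-140 °C and 320-330 °C intervals
# (predictors) vs. total 100-550 °C mass loss (response); coefficients are invented
rng = np.random.default_rng(1)
n = 40
x1 = rng.uniform(0.2, 2.0, n)    # bound-water interval loss (%)
x2 = rng.uniform(0.5, 5.0, n)    # organo-mineral interval loss (%)
y = 3.1 * x1 + 2.4 * x2 + 1.5 + rng.normal(0.0, 0.1, n)

# ordinary least squares with an intercept column
X = np.column_stack([x1, x2, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    With a good two-interval signal, the coefficient of determination lands in the same high range the study reports.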

  3. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    Science.gov (United States)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.

  4. Vehicle Detection in Still Images by Using Boosted Local Feature Detector

    Institute of Scientific and Technical Information of China (English)

    Qing LIN; Young-joon HAN; Hern-soo HAHN

    2010-01-01

    Vehicle detection in still images is a comparatively difficult task. This paper presents a method for this task using a boosted local pattern detector constructed from two local features: Haar-like and oriented gradient features. The whole process is composed of three stages. In the first stage, local appearance features of vehicles and non-vehicle objects are extracted; Haar-like and oriented gradient features are extracted separately in this stage as local features. In the second stage, the Adaboost algorithm is used to select the most discriminative features as weak detectors from the two local feature sets, and a strong local pattern detector is built by the weighted combination of these selected weak detectors. Finally, vehicle detection can be performed in still images by using the boosted strong local feature detector. Experimental results show that the local pattern detector constructed in this way combines the advantages of Haar-like and oriented gradient features, and can achieve better detection results than a detector using single Haar-like features.
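
    The second-stage selection can be sketched with a small from-scratch AdaBoost over decision stumps, where each stump plays the role of a weak detector tied to one feature. The toy two-feature data below are an invented stand-in for the Haar-like and oriented gradient feature pools:

```python
import numpy as np

def train_stump(X, y, w):
    """Best decision stump (feature, threshold, polarity) under sample weights w."""
    best = (0, 0.0, 1, np.inf)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost(X, y, rounds=3):
    """Each round selects the most discriminative weak detector and reweights."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        f, thr, pol, err = train_stump(X, y, w)
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((f, thr, pol, alpha))
    return model

def predict(model, X):
    """Strong detector: weighted combination of the selected stumps."""
    score = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for f, t, p, a in model)
    return np.where(score >= 0, 1, -1)

# toy data: feature 0 is discriminative, feature 1 is pure noise
rng = np.random.default_rng(2)
y = np.repeat([1, -1], 50)
X = np.column_stack([y + rng.normal(0.0, 0.3, 100), rng.normal(0.0, 1.0, 100)])
model = adaboost(X, y)
acc = (predict(model, X) == y).mean()
```

    Boosting naturally selects the informative feature first, which is the mechanism the paper relies on to mix the two feature families.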

  5. A Blind Blur Detection Scheme Using Statistical Features of Phase Congruency and Gradient Magnitude

    Directory of Open Access Journals (Sweden)

    Shamik Tiwari

    2014-01-01

    Full Text Available The growing use of camera-based barcode readers has recently gained a lot of attention. This has boosted interest in no-reference blur detection algorithms. Blur is an undesirable phenomenon which appears as one of the most frequent causes of image degradation. In this paper we present a new no-reference blur detection scheme that is based on the statistical features of phase congruency and gradient magnitude maps. Blur detection is achieved by approximating the functional relationship between these features using a feed-forward neural network. Simulation results show that the proposed scheme yields robust blur detection.
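
    The gradient-magnitude half of the feature set can be sketched as follows (phase congruency is omitted for brevity; the box blur and random texture are illustrative stand-ins for real barcode images):

```python
import numpy as np

def grad_mag_stats(img):
    """Mean and standard deviation of the gradient magnitude map."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag.mean(), mag.std()

def box_blur(img, k=5):
    """Naive separable box blur, simulating defocus."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, out)

rng = np.random.default_rng(3)
sharp = rng.uniform(0.0, 1.0, (64, 64))   # texture-rich "sharp" patch
blurred = box_blur(sharp)
m_sharp, s_sharp = grad_mag_stats(sharp)
m_blur, s_blur = grad_mag_stats(blurred)  # both statistics drop under blur
```

    Because blur suppresses gradient magnitudes, these simple statistics already separate blurred from sharp patches; the paper's neural network learns the mapping from such features to a blur decision.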

  6. Sequential feature selection for detecting buried objects using forward looking ground penetrating radar

    Science.gov (United States)

    Shaw, Darren; Stone, Kevin; Ho, K. C.; Keller, James M.; Luke, Robert H.; Burns, Brian P.

    2016-05-01

    Forward looking ground penetrating radar (FLGPR) has the benefit of detecting objects at a significant standoff distance. The FLGPR signal is radiated over a large surface area and the radar signal return is often weak. Improving detection, especially for targets buried in roads, while maintaining an acceptable false alarm rate remains a challenging task. Various kinds of features have been developed over the years to increase FLGPR detection performance. This paper focuses on investigating the use of as many features as possible for detecting buried targets and uses the sequential feature selection technique to automatically choose the features that contribute most to improving performance. Experimental results using data collected at a government test site are presented.
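
    Sequential forward selection itself is a simple greedy wrapper. A minimal numpy sketch with a nearest-centroid scoring function (an assumed stand-in for the paper's actual detector and FLGPR features):

```python
import numpy as np

def score(X, y, feats):
    """Wrapper criterion: training accuracy of a nearest-centroid classifier
    on the chosen feature subset (deliberately simple)."""
    Z = X[:, feats]
    c1, c0 = Z[y == 1].mean(axis=0), Z[y == 0].mean(axis=0)
    pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
    return (pred == y).mean()

def sfs(X, y, k):
    """Sequential forward selection: greedily add the feature that most
    improves the wrapper score."""
    selected = []
    while len(selected) < k:
        best_f, best_s = None, -1.0
        for f in range(X.shape[1]):
            if f in selected:
                continue
            s = score(X, y, selected + [f])
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
    return selected

# toy data: features 0 and 3 carry the target signature, the rest are noise
rng = np.random.default_rng(4)
y = np.repeat([0, 1], 300)
X = rng.normal(0.0, 1.0, (600, 6))
X[:, 0] += 3.0 * y
X[:, 3] += 2.0 * y
chosen = sfs(X, y, k=2)
```

    The greedy loop keeps only the features that actually raise the wrapper score, which is how the paper prunes a large candidate feature pool automatically.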

  7. Detection of Brain Tumor and Extraction of Texture Features using Magnetic Resonance Images

    Directory of Open Access Journals (Sweden)

    Prof. Dilip Kumar Gandhi

    2012-10-01

    Full Text Available A brain cancer detection system is designed. The aim of this paper is to locate the tumor and determine the texture features from a brain-cancer-affected MRI. A computer-based diagnosis is performed in order to detect tumors from a given magnetic resonance image. Basic image processing techniques are used to locate the tumor region; these consist of image enhancement, image binarization, and image morphological operations. Texture features are computed using the Gray Level Co-occurrence Matrix and consist of five distinct features. Selective features, or combinations of selective features, will be used in the future to determine the class of the query image. For simplicity, only images affected by the Astrocytoma type of brain cancer are used.

  8. Bathymetric Contours for Prairie Rose Lake, Shelby County, Iowa

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital bathymetry contours for Prairie Rose Lake in Shelby Co., Iowa. The U.S. Geological Survey conducted a bathymetric survey of...

  9. Bathymetric Contours for Lake Minnewashta, Dickinson County, Iowa

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital bathymetry contours for Lake Minnewashta in Dickinson Co., Iowa. The U.S. Geological Survey conducted a bathymetric survey of Lake...

  10. Bathymetric Contours for Littlefield Lake, Audubon County, Iowa

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital bathymetry contours for Littlefield Lake in Audubon Co., Iowa. The U.S. Geological Survey conducted a bathymetric survey of...

  11. Bathymetric Contours for Nine Eagles Lake, Decatur County, Iowa

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital bathymetry contours for Nine Eagles Lake in Decatur Co., Iowa. The U.S. Geological Survey conducted a bathymetric survey of Nine...

  12. Topographic and Bathymetric Shaded Relief of North America - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Topographic and Bathymetric Shaded Relief of North America map layer shows depth and elevation ranges using colors, with relief enhanced by shading. The image...

  13. Bathymetric Shaded Relief of North America - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Bathymetric Shaded Relief of North America map layer shows depth ranges using colors, with relief enhanced by shading. The image was derived from the National...

  14. Bathymetric Contours for Upper Gar Lake, Dickinson County, Iowa

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital bathymetry contours for Upper Gar Lake in Dickinson Co., Iowa. The U.S. Geological Survey conducted a bathymetric survey of Upper...

  15. Bathymetric Contours for Lake Darling, Washington County, Iowa

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital bathymetry contours for Lake Darling in Washington Co., Iowa. The U.S. Geological Survey conducted a bathymetric survey of Lake...

  16. Tampa Bay Topographic/Bathymetric Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — In this joint demonstration project for the Tampa Bay region, NOAA's National Ocean Service (NOS) and the U.S. Geological Survey (USGS) have merged NOAA bathymetric...

  17. Bathymetric maps of Lake Becharof and the Ugashik Lakes

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — In order to understand the production of smolts in a sockeye salmon nursery lake, it is mandatory to produce a bathymetric map. This must be detailed enough so that...

  18. Bathymetric Surveys of the Trinity River, CA

    Science.gov (United States)

    Matthews, W. V.; Pryor, C. T.

    2012-12-01

    Shallow water (0-5m) bathymetric surveys in alluvial mountain rivers present numerous data collection challenges including highly variable flow depths, rapidly changing topography, turbulent and aerated water in riffles and around large roughness elements, and poor GPS reception. In addition, confined and shallow reaches present access challenges for survey platforms. Recently, nearly 70km of detailed bathymetric surveys were collected along the Trinity River in northwestern CA for the Trinity River Restoration Program. The data collection platform consisted of a 5m jet boat equipped with a multi-transducer hydrographic survey system (or sweep system). The system is capable of collecting data in as little as 0.4m of water and consists of seven transducers, three on each port and starboard collapsible boom and one permanently mounted in the hull of the survey boat. The total swath width is 7.5m, with each transducer evenly spaced at approximately 1.3m apart. Each boom both articulates and collapses, providing flexibility to quickly raise the booms to move upstream under full power or to reduce the boom width in confined areas. A RTK GNSS (GPS+GLONASS) receiver with internal radio for RTK positioning is located directly over the middle transducer and offsets locate the other six transducers. A GNSS heading receiver is used to provide a precise heading for the sweep system. A pitch and roll sensor is placed on the boom just below the GPS antenna and compensates for roll and pitch of the vessel. Depths are sent from the electronics package to a ruggedized laptop running Hypack™ data collection software. Mapping occurred on the falling limb of the Restoration Program's spring flow release. Most data were collected while drifting downstream with the boat matching the water velocity. Data collected along the edges required much greater maneuvering capability and occurred with the boat moving upstream. The boom system allowed data collection up to half the width of the

  19. A FEATURE SELECTION ALGORITHM DESIGN AND ITS IMPLEMENTATION IN INTRUSION DETECTION SYSTEM

    Institute of Scientific and Technical Information of China (English)

    杨向荣; 沈钧毅

    2003-01-01

    Objective To present a new feature selection algorithm. Methods The algorithm is based on rule induction and field knowledge. Results The algorithm can be applied when capturing dataflow for detecting network intrusions: only the sub-dataset including discriminating features is captured. The time spent in the subsequent behavior-pattern mining is thereby reduced, and the patterns mined are more precise. Conclusion The experimental results show that the feature subset captured by this algorithm is more informative and the dataset's quantity is reduced significantly.

  20. Feature Selection in Detection of Adverse Drug Reactions from the Health Improvement Network (THIN Database

    Directory of Open Access Journals (Sweden)

    Yihui Liu

    2015-02-01

    Full Text Available Adverse drug reactions (ADRs) are a widely recognized public health issue and among the most common reasons for withdrawing drugs from the market. Prescription event monitoring (PEM) is an important approach to detecting adverse drug reactions. The main challenge with this method is how to automatically extract the medical events or side effects from the high-throughput medical events collected in day-to-day clinical practice. In this study we propose a novel concept of a feature matrix to detect ADRs. The feature matrix, extracted from big medical data in The Health Improvement Network (THIN) database, is created to characterize the medical events of patients who take drugs, and builds the foundation for handling the irregular and large medical data. Feature selection methods are then performed on the feature matrix to detect the significant features, and finally the ADRs can be located based on those features. The experiments are carried out on three drugs: Atorvastatin, Alendronate, and Metoclopramide. Major side effects for each drug are detected and better performance is achieved compared to other computerized methods. Since the detected ADRs are based on computerized methods, further investigation is needed.
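
    The feature matrix concept can be illustrated with a toy example: one row per patient, one column per medical event, with a crude frequency score standing in for the paper's feature selection methods (all names and records below are invented):

```python
import numpy as np

# hypothetical prescription-event records: which events each patient reported
events = ["nausea", "headache", "myalgia", "rash"]
records = {
    "p1": ["nausea", "myalgia"],
    "p2": ["myalgia"],
    "p3": ["headache", "myalgia"],
    "p4": ["rash"],
}
# feature matrix: one row per patient, one column per event (1 = event observed)
M = np.array([[int(e in records[p]) for e in events] for p in sorted(records)])
# crude per-event score: fraction of patients reporting the event
freq = M.mean(axis=0)
top = events[int(freq.argmax())]
```

    Real PEM data would compare such scores against a control cohort rather than using raw frequency; the matrix layout is the part the paper builds on.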

  1. Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture

    Science.gov (United States)

    West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID

    2012-05-29

    Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.

  2. Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture

    Science.gov (United States)

    West, Phillip B.; Novascone, Stephen R.; Wright, Jerry P.

    2011-09-27

    Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.

  3. An extensive analysis of various texture feature extractors to detect Diabetes Mellitus using facial specific regions.

    Science.gov (United States)

    Shu, Ting; Zhang, Bob; Yan Tang, Yuan

    2017-04-01

    Researchers have recently discovered that Diabetes Mellitus can be detected through a non-invasive computerized method. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from facial specific regions at detecting Diabetes Mellitus using eight texture extractors. The eight methods are from four texture feature families: (1) the statistical texture feature family: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) the structural texture feature family: Voronoi Tessellation; (3) the signal processing based texture feature family: Gaussian, Steerable, and Gabor filters; and (4) the model based texture feature family: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameter(s) of each extractor are experimented with. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) are used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number=256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% by using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
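
    The best-performing extractor above, the 256-bin grey-scale histogram, is straightforward to sketch. The synthetic patches and the chi-square comparison below are illustrative (the paper classifies such features with an SVM, not a distance measure):

```python
import numpy as np

def hist_features(img, bins=256):
    """Normalized grey-level histogram feature vector."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

# two synthetic "facial region" patches with different grey-level statistics
rng = np.random.default_rng(5)
bright = rng.normal(170.0, 20.0, (32, 32)).clip(0, 255)
dark = rng.normal(90.0, 20.0, (32, 32)).clip(0, 255)
f_bright, f_dark = hist_features(bright), hist_features(dark)
# chi-square distance between the two histogram features
d = 0.5 * np.sum((f_bright - f_dark) ** 2 / (f_bright + f_dark + 1e-12))
```

    Patches with distinct grey-level statistics yield well-separated histogram features, which is what makes this simple extractor competitive.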

  4. Feature selection for anomaly–based network intrusion detection using cluster validity indices

    CSIR Research Space (South Africa)

    Naidoo, T

    2015-09-01

    Full Text Available A feature selection algorithm that is novel in the context of anomaly–based network intrusion detection is proposed in this paper. The distinguishing factor of the proposed feature selection algorithm is its complete lack of dependency on labelled...

  5. Automated mitosis detection in histopathology using morphological and multi-channel statistics features.

    Science.gov (United States)

    Irshad, Humayun

    2013-01-01

    According to the Nottingham grading system, mitosis count plays a critical role in cancer diagnosis and grading. Manual counting of mitoses is tedious and subject to considerable inter- and intra-reader variations. The aim is to improve the accuracy of mitosis detection by selecting the color channels that better capture the statistical and morphological features that distinguish mitoses from other objects. We propose a framework that includes a comprehensive analysis of statistical and morphological features in selected channels of various color spaces to assist pathologists in mitosis detection. In the candidate detection phase, we perform Laplacian of Gaussian filtering, thresholding, morphology and an active contour model on the blue-ratio image to detect and segment candidates. In the candidate classification phase, we extract a total of 143 features, including morphological, first order and second order (texture) statistics features, for each candidate in the selected channels and finally classify using a decision tree classifier. The proposed method has been evaluated on the Mitosis Detection in Breast Cancer Histological Images (MITOS) dataset provided for an International Conference on Pattern Recognition 2012 contest and achieved 74% and 71% detection rate, 70% and 56% precision and 72% and 63% F-measure on Aperio and Hamamatsu images, respectively. The proposed multi-channel features computation scheme uses a fixed image scale and extracts nuclei features in selected channels of various color spaces. This simple but robust model has proven to be highly efficient in capturing multi-channel statistical features for mitosis detection during the MITOS international benchmark. Indeed, mitosis detection, of critical importance in cancer diagnosis, is a very challenging visual task. In future work, we plan to use color deconvolution as preprocessing and Hough transform or local extrema based candidate detection in order to reduce the number of candidates in the mitosis and non-mitosis classes.
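
    The Laplacian of Gaussian step in candidate detection can be sketched from scratch in numpy. The synthetic dark blob stands in for a mitotic nucleus on a bright blue-ratio background, and the kernel-size rule is a common convention rather than the paper's setting:

```python
import numpy as np

def log_kernel(sigma, size=None):
    """Laplacian-of-Gaussian kernel; size defaults to about 6 sigma (odd)."""
    if size is None:
        size = int(6 * sigma) | 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / (2 * np.pi * sigma ** 6) * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()      # zero-mean: flat regions give no response

def convolve2d(img, k):
    """Naive 'valid' 2-D convolution (kernel is symmetric, so no flip needed)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * k).sum()
    return out

# dark blob (nucleus-like) of radius ~4 on a bright background
img = np.full((41, 41), 200.0)
yy, xx = np.indices(img.shape)
img[(yy - 20) ** 2 + (xx - 20) ** 2 <= 16] = 60.0
resp = convolve2d(img, log_kernel(sigma=3))
peak = np.unravel_index(np.argmax(resp), resp.shape)   # strongest candidate
```

    The LoG response peaks where a dark blob of roughly matching scale sits on a bright background, which is why it works well on blue-ratio images for candidate generation; thresholding the response map yields the candidate set.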

  6. Detection of Far-Infrared Features in Star-Forming Regions

    CERN Document Server

    Onaka, T; Onaka, Takashi; Okada, Yoko

    2003-01-01

    We report the detection of a feature at 65um and a broad feature around 100um in the far-infrared spectra of the diffuse emission from two active star-forming regions, the Carina nebula and Sharpless 171. The features are seen in the spectra over a wide area of the observed regions, indicating that the carriers are fairly ubiquitous species in the interstellar medium. A similar 65um feature has been detected in evolved stars and attributed to diopside, a Ca-bearing crystalline silicate. The present observations indicate the first detection of a crystalline silicate in the interstellar medium if this identification holds true also for the interstellar feature. A similar broad feature around 90um reported in the spectra of evolved stars has been attributed to calcite, a Ca-bearing carbonate mineral. The interstellar feature seems to be shifted to longer wavelengths and have a broader width although the precise estimate of the feature profile is difficult. As a carrier for the interstellar 100um feature, we inve...

  7. Integration of Image-Derived and Pos-Derived Features for Image Blur Detection

    Science.gov (United States)

    Teo, Tee-Ann; Zhan, Kai-Zhi

    2016-06-01

    Image quality plays an important role in Unmanned Aerial Vehicle (UAV) applications. Small fixed-wing UAVs suffer from image blur due to crosswind and turbulence. A Position and Orientation System (POS), which provides position and orientation information, is installed on the UAV to enable acquisition of the UAV trajectory. It can be used to calculate the positional and angular velocities when the camera shutter is open. This study proposes a POS-assisted method to detect blurred images. The major steps include feature extraction, blur image detection and verification. In feature extraction, this study extracts different features from images and POS. The image-derived features include the mean and standard deviation of the image gradient. For POS-derived features, we modify the traditional degree-of-linear-blur (b_linear) method to degree-of-motion-blur (b_motion) based on the collinearity condition equations and POS parameters. Besides, POS parameters such as positional and angular velocities are also adopted as POS-derived features. In blur detection, this study uses a Support Vector Machine (SVM) classifier and the extracted features (i.e., image information, POS data, b_linear and b_motion) to separate blurred and sharp UAV images. The experiment utilizes a SenseFly eBee UAV system. The number of images is 129. In blur image detection, we use the proposed degree-of-motion-blur and other image features to classify the blurred and sharp images. The classification result shows that the overall accuracy using image features alone is only 56%. The integration of image-derived and POS-derived features has improved the overall accuracy from 56% to 76% in blur detection. Besides, this study indicates that the performance of the proposed degree-of-motion-blur is better than the traditional degree-of-linear-blur.
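    The image-derived features above (mean and standard deviation of the image gradient) can be sketched in a few lines; the SVM stage and the POS-derived b_motion computation are omitted, and the images below are synthetic stand-ins, not the eBee data.

```python
import numpy as np

def gradient_features(img):
    # Image-derived blur cues: mean and standard deviation of the
    # gradient magnitude. Sharp images show stronger, more varied
    # gradients than blurred ones.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag.mean(), mag.std()

# Hypothetical illustration: random texture vs. a smoothed copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32)) * 255.0
blurred = (sharp
           + np.roll(sharp, 1, axis=0) + np.roll(sharp, -1, axis=0)
           + np.roll(sharp, 1, axis=1) + np.roll(sharp, -1, axis=1)) / 5.0
mean_s, std_s = gradient_features(sharp)
mean_b, std_b = gradient_features(blurred)
```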

  8. Co-Occurrence of Local Binary Patterns Features for Frontal Face Detection in Surveillance Applications

    Directory of Open Access Journals (Sweden)

    Louis Wael

    2011-01-01

    Face detection in video sequences is becoming popular in surveillance applications. The tradeoff between obtaining discriminative features to achieve accurate detection and the computational overhead of extracting these features, which affects the classification speed, is a persistent problem. This paper proposes to use multiple instances of rotational Local Binary Patterns (LBP) of pixels as features instead of using the histogram bins of the LBP of pixels. The multiple features are selected using the sequential forward selection algorithm, which we call Co-occurrence of LBP (CoLBP). CoLBP feature extraction is computationally efficient and produces a high performance rate. CoLBP features are used to implement a frontal face detector applied to a 2D low-resolution surveillance sequence. Experiments show that the CoLBP face features outperform state-of-the-art Haar-like features and various other LBP feature extensions. Also, the CoLBP features can tolerate a wide range of illumination and blurring changes.
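    The per-pixel LBP codes on which CoLBP builds can be sketched directly; the rotational multi-instance selection (the CoLBP step itself) is not reproduced here.

```python
import numpy as np

def lbp_codes(img):
    # Basic 8-neighbour Local Binary Pattern: one bit per neighbour,
    # set when that neighbour is >= the centre pixel. CoLBP selects
    # co-occurring instances of codes like these (selection not shown).
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    H, W = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

flat = np.ones((4, 4))            # uniform patch: every bit fires
peak = np.zeros((3, 3))
peak[1, 1] = 5.0                  # isolated maximum: no bit fires
```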

  9. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-07-19

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
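    The Adaboost-based feature selection at the heart of the fusion step can be sketched with one-feature decision stumps over a toy feature matrix. The SAR/IR feature definitions and the modBMVT pipeline are the paper's; the data below is invented for illustration.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=3):
    # Minimal AdaBoost with one-feature decision stumps, a sketch of
    # feature selection by boosting: each round picks the single
    # feature (with threshold and polarity) of lowest weighted error.
    # Labels y must be in {-1, +1}.
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []                 # (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for j in range(d):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((j, t, pol, alpha))
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for j, t, p, a in ensemble)
    return np.sign(score)

# Toy data: feature 0 separates the classes perfectly, feature 1 does
# not, so boosting should select feature 0 first.
X = np.array([[0.0, 5.0], [1.0, 3.0], [2.0, 9.0], [3.0, 1.0]])
y = np.array([-1, -1, 1, 1])
ensemble = adaboost_stumps(X, y)
```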

  11. AUTOMATIC SHIP DETECTION IN SINGLE-POL SAR IMAGES USING TEXTURE FEATURES IN ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    E. Khesali

    2015-12-01

    This paper presents a novel method for detecting ships from high-resolution synthetic aperture radar (SAR) images. This method categorizes ship targets from single-pol SAR images using texture features in artificial neural networks. As such, the method tries to overcome the lack of an operational solution that is able to reliably detect ships with one SAR channel. The method has the following three main stages: 1) feature extraction; 2) feature selection; and 3) ship detection. The first part extracts different texture features from the SAR image. These textures include occurrence and co-occurrence measures with different window sizes. Then, the best features are selected. Finally, the artificial neural network is used to separate ship pixels from sea ones. In a post-processing stage some morphological filters are used to improve the result. The effectiveness of the proposed method is verified using Sentinel-1 data in VV polarization. Experimental results indicate that the proposed algorithm can be implemented with time savings, high-precision ship extraction, feature analysis, and detection. The results also show that, using texture features, the algorithm properly discriminates speckle noise from ships.
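    The co-occurrence measures referred to above come from the gray-level co-occurrence matrix (GLCM); a minimal sketch for a single pixel offset, with the contrast statistic as one example texture feature (the sliding-window handling and the neural network are omitted, and the two toy images are invented):

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=4):
    # Gray-level co-occurrence matrix for one offset (dy, dx).
    # img must hold integer gray levels in [0, levels).
    H, W = img.shape
    P = np.zeros((levels, levels))
    for i in range(H - dy):
        for j in range(W - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

def contrast(P):
    # GLCM contrast: large when neighbouring pixels differ strongly.
    idx = np.arange(P.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * P).sum())

sea = np.zeros((4, 4), dtype=int)     # homogeneous background
ship = sea.copy()
ship[:, 2] = 3                        # bright vertical structure
```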

  12. Detection of corn and weed species by the combination of spectral, shape and textural features

    Science.gov (United States)

    Accurate detection of weeds in farmland can help reduce pesticide use and protect the agricultural environment. To develop intelligent equipment for weed detection, this study used an imaging spectrometer system, which supports micro-scale plant feature analysis by acquiring high-resolution hyper sp...

  13. Attentional effects on preattentive vision: Spatial precues affect the detection of simple features

    NARCIS (Netherlands)

    Theeuwes, J.; Kramer, A.F.; Atchley, P.

    1999-01-01

    Most accounts of visual perception hold that the detection of primitive features occurs preattentively, in parallel across the visual field. Evidence that preattentive vision operates without attentional limitations comes from visual search tasks in which the detection of the presence or absence of

  14. Increasing computer-aided detection specificity by projection features for CT colonography

    OpenAIRE

    Zhu, Hongbin; Liang, Zhengrong; Pickhardt, Perry J.; Barish, Matthew A.; You, Jiangsheng; Fan, Yi; Lu, Hongbing; Posniak, Erica J.; Richards, Robert J.; Cohen, Harris L.

    2010-01-01

    Purpose: A large number of false positives (FPs) generated by computer-aided detection (CAD) schemes is likely to distract radiologists’ attention and decrease their interpretation efficiency. This study aims to develop projection-based features which characterize true and false positives to increase the specificity while maintaining high sensitivity in detecting colonic polyps.

  15. Change detection in high resolution SAR images based on multiscale texture features

    Science.gov (United States)

    Wen, Caihuan; Gao, Ziqiang

    2011-12-01

    This paper studied a change detection algorithm for high-resolution (HR) Synthetic Aperture Radar (SAR) images based on multi-scale texture features. Firstly, preprocessed multi-temporal TerraSAR images were decomposed by the 2-D dual-tree complex wavelet transform (DT-CWT), and multi-scale texture features were extracted from those images. Then, the log-ratio operation was utilized to get difference images, and Bayes minimum-error theory was used to extract change information from the difference images. Lastly, a precision assessment was done. Meanwhile, we compared with the result of a method based on texture features extracted from the gray-level co-occurrence matrix (GLCM). We conclude that the change detection algorithm based on multi-scale texture features offers a substantial improvement, proving it an effective method for change detection in high-spatial-resolution SAR images.
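    The log-ratio operation used to build the difference image is simple to state; below is a sketch with a fixed threshold standing in for the Bayes minimum-error threshold (the images are synthetic, not TerraSAR data):

```python
import numpy as np

def log_ratio_change(img1, img2, thresh=0.5):
    # Log-ratio difference image common in SAR change detection:
    # |log(I2/I1)| is large where backscatter changed between dates.
    d = np.abs(np.log((img2 + 1e-6) / (img1 + 1e-6)))
    return d, d > thresh

a = np.full((4, 4), 10.0)
b = a.copy()
b[1:3, 1:3] = 40.0            # simulated change in a 2x2 patch
d, mask = log_ratio_change(a, b)
```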

  16. Shape and texture based novel features for automated juxtapleural nodule detection in lung CTs.

    Science.gov (United States)

    Taşcı, Erdal; Uğur, Aybars

    2015-05-01

    Lung cancer is one of the types of cancer with the highest mortality rate in the world. In case of early detection and diagnosis, the survival rate of patients significantly increases. In this study, a novel method and system that provide automatic detection of the juxtapleural nodule pattern have been developed from cross-sectional images of lung CT (Computerized Tomography). Seven features, shape-based as well as combined shape- and texture-based, are contributed to the literature for lung nodules. The system that we developed consists of six main stages called preprocessing, lung segmentation, detection of nodule candidate regions, feature extraction, feature selection (with five feature ranking criteria) and classification. The LIDC dataset containing cross-sectional images of lung CT has been utilized; 1410 nodule candidate regions and 40 features have been extracted from 138 cross-sectional images for 24 patients. Experimental results for 10 classifiers are obtained and presented. Adding our derived features to the 33 known features increased nodule recognition performance from 0.9639 to 0.9679 AUC value on generalized linear model regression (GLMR) for 22 selected features, reaching one of the most successful results in the literature.

  17. Spectrum based feature extraction using spectrum intensity ratio for SSVEP detection.

    Science.gov (United States)

    Itai, Akitoshi; Funase, Arao

    2012-01-01

    In recent years, the Steady-State Visual Evoked Potential (SSVEP) has been used as a basis for Brain Computer Interfaces (BCI)[1]. Various feature extraction and classification techniques have been proposed to achieve BCI based on SSVEP. The feature extraction of SSVEP is performed in the frequency domain regardless of the limitation in the flickering frequency of the visual stimulus caused by the hardware architecture. We introduce here a feature extraction using a spectrum intensity ratio. Results show that the detection ratio reaches 84% by using the spectrum intensity ratio with unsupervised classification. They also indicate that the SSVEP is enhanced by the proposed feature extraction using the second harmonic.
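    One plausible reading of a spectrum intensity ratio, sketched below, is the spectral power near the stimulus frequency divided by total power; the paper's exact definition (and its second-harmonic extension) is not given in this abstract, and the signals here are synthetic sinusoids, not EEG.

```python
import numpy as np

def spectrum_intensity_ratio(signal, fs, f_stim, bw=0.5):
    # Power in a narrow band around the stimulus frequency,
    # normalized by total spectral power.
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = np.abs(freqs - f_stim) <= bw
    return spec[band].sum() / spec.sum()

fs = 256
t = np.arange(fs * 2) / fs
ssvep = np.sin(2 * np.pi * 10 * t)        # 10 Hz evoked response
noise = np.sin(2 * np.pi * 23 * t)        # unrelated component
r_on = spectrum_intensity_ratio(ssvep + 0.2 * noise, fs, 10.0)
r_off = spectrum_intensity_ratio(noise, fs, 10.0)
```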

  18. Bathymetric Surveys of Lake Arthur and Raccoon Lake, Pennsylvania, June 2007

    Science.gov (United States)

    Hittle, Clinton D.; Ruby, A. Thomas

    2008-01-01

    In spring of 2007, bathymetric surveys of two Pennsylvania State Park lakes were performed to collect accurate data sets of lake-bed elevations and to develop methods and techniques to conduct similar surveys across the state. The lake-bed elevations and associated geographical position data can be merged with land-surface elevations acquired through Light Detection and Ranging (LIDAR) techniques. Lake Arthur in Butler County and Raccoon Lake in Beaver County were selected for this initial data-collection activity. In order to establish accurate water-surface elevations during the surveys, benchmarks referenced to NAVD 88 were established on land at each lake by use of differential global positioning system (DGPS) surveys. Bathymetric data were collected using a single beam, 210 kilohertz (kHz) echo sounder and were coupled with the DGPS position data utilizing a computer software package. Transects of depth data were acquired at predetermined intervals on each lake, and the shoreline was delineated using a laser range finder and compass module. Final X, Y, Z coordinates of the geographic positions and lake-bed elevations were referenced to NAD 83 and NAVD 88 and are available to create bathymetric maps of the lakes.
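    The reduction from a sounding to a lake-bed elevation is simple arithmetic: subtract the echo-sounder depth from the benchmark-derived water-surface elevation, both referenced to NAVD 88. The numbers below are hypothetical, not survey values from either lake.

```python
def lakebed_elevation(water_surface_elev_m, depth_m):
    # Lake-bed elevation = water-surface elevation minus sounded depth,
    # with both quantities on the same vertical datum (NAVD 88).
    return water_surface_elev_m - depth_m

# Hypothetical sounding: surface at 363.2 m NAVD 88, 4.7 m of water.
z_bed = lakebed_elevation(363.2, 4.7)
```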

  19. The relationship study between image features and detection probability based on psychology experiments

    Science.gov (United States)

    Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei

    2011-04-01

    Detection probability is an important index to represent and estimate target viability, which provides a basis for target recognition and decision-making. But it takes a great deal of time and manpower to obtain detection probability in practice. At the same time, due to differences in personnel knowledge and experience, a great difference will often exist in the data obtained. By studying the relationship between image features and perception quantity based on psychology experiments, a probability model has been established, in which the process is as follows. Firstly, four image features that directly affect detection have been extracted and quantified. Four feature similarity degrees between target and background were defined. Secondly, the relationship between single image feature similarity degree and perception quantity was set up based on psychological principles, and psychological experiments of target interpretation were designed which included about five hundred people for interpretation and two hundred images. In order to reduce image feature correlativity, a large number of artificially synthesized images have been made, including images with a single brightness feature difference, a single chromaticity feature difference, a single texture feature difference and a single shape feature difference. By analyzing and fitting a mass of experimental data, the model quantities have been determined. Finally, by applying statistical decision theory and the experimental results, the relationship between perception quantity and target detection probability has been found. With verification against a great deal of target interpretation in practice, the target detection probability can be obtained by the model quickly and objectively.

  20. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach.

    Science.gov (United States)

    Irshad, Humayun; Jalali, Sepehr; Roux, Ludovic; Racoceanu, Daniel; Hwee, Lim Joo; Naour, Gilles Le; Capron, Frédérique

    2013-01-01

    According to Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. The aim is to investigate the various texture features and Hierarchical Model and X (HMAX) biologically inspired approach for mitosis detection using machine-learning techniques. We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. Blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and Scale-invariant feature transform (SIFT) features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: Decision tree, linear kernel Support Vector Machine (SVM), and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. The proposed method has been tested on Mitosis detection in breast cancer histological images (MITOS) dataset provided for an International Conference on Pattern Recognition (ICPR) 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and classification rate.

  1. Water turbidity estimation from airborne hyperspectral imagery and full waveform bathymetric LiDAR

    Science.gov (United States)

    Pan, Z.; Glennie, C. L.; Fernandez-Diaz, J. C.

    2015-12-01

    The spatial and temporal variations in water turbidity are of great interest for the study of fluvial and coastal environments, and for predicting the performance of the remote sensing systems that are used to map them. Conventional water turbidity estimates from remote sensing observations have normally been derived using near-infrared reflectance. We have investigated the potential of determining water turbidity from additional remote sensing sources, namely airborne hyperspectral imagery and single-wavelength bathymetric LiDAR (Light Detection and Ranging). The confluence area of the Blue and Colorado Rivers, CO was utilized as a study area to investigate the capabilities of both airborne bathymetric LiDAR and hyperspectral imagery for water turbidity estimation. Discrete and full waveform bathymetric data were collected using Optech's Gemini (1064 nm) and Aquarius (532 nm) LiDAR sensors. Hyperspectral imagery (1.2 m pixel resolution and 72 spectral bands) was acquired using an ITRES CASI-1500 imaging system. As an independent reference, measurements of turbidity were collected concurrent with the airborne remote sensing acquisitions, using a WET Labs EcoTriplet deployed from a kayak, and turbidity was then derived from the measured backscatter. The bathymetric full waveform dataset contains a discretized sample of the full backscatter of the water column and benthic layer. Therefore, the full waveform records encapsulate the water column characteristics of turbidity. A nonparametric support vector regression method is utilized to estimate water turbidity from both hyperspectral imagery and voxelized full waveform LiDAR returns, both individually and as a fused dataset. Results of all the evaluations will be presented, showing an initial turbidity prediction accuracy of approximately 1.0 NTU. We will also discuss our future strategy for enhanced fusion of the full waveform LiDAR and hyperspectral imagery for improved turbidity estimation.

  2. Improved Framework for Breast Cancer Detection using Hybrid Feature Extraction Technique and FFNN

    Directory of Open Access Journals (Sweden)

    Ibrahim Mohamed Jaber Alamin

    2016-10-01

    Breast cancer early detection using image processing techniques suffers from low accuracy in various automated medical tools. To improve the accuracy, many research studies are ongoing in the different phases such as segmentation, feature extraction, detection, and classification. The proposed framework consists of four main steps: image preprocessing, image segmentation, feature extraction and finally classification. This paper presents a hybrid and automated image-processing-based framework for breast cancer detection. For image preprocessing, both Laplacian and average filtering approaches are used for smoothing and noise reduction, if any. These operations are performed on 256 x 256 sized gray-scale images. The output of the preprocessing phase is used in the segmentation phase. An algorithm is separately designed for the preprocessing step with the goal of improving the accuracy. The segmentation method contributed is an improved version of the region growing technique. Thus breast image segmentation is done by using the proposed modified region growing technique, which overcomes the limitations of orientation as well as intensity. The next step is feature extraction, for which this framework uses a combination of different types of features such as texture features, gradient features, and 2D-DWT features with higher-order statistics (HOS). Such a hybrid feature set helps to improve the detection accuracy. For the last phase, an efficient feed-forward neural network (FFNN) is used. A comparative study between the existing 2D-DWT feature extraction and the proposed HOS-2D-DWT based feature extraction methods is presented.
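    The baseline region-growing pass that the modified technique builds on can be sketched as follows; the paper's orientation/intensity modifications are not detailed in this abstract, so only the plain intensity-tolerance version is shown, on an invented image.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    # Plain intensity-based region growing: starting from `seed`,
    # flood-fill 4-connected pixels whose intensity stays within
    # `tol` of the seed value.
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    ref = float(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < H and 0 <= nx < W and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.zeros((6, 6))
img[2:4, 2:4] = 200.0          # bright "mass" region
mask = region_grow(img, (2, 2), tol=10)
```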

  3. Interstellar Carbodiimide (HNCNH): A New Astronomical Detection from the GBT PRIMOS Survey Via Maser Emission Features

    Science.gov (United States)

    McGuire, Brett; Loomis, R. A.; Charness, C.; Corby, J. F.; Blake, G. A.; Hollis, J. M.; Lovas, F.; Jewell, P. R.; Remijan, A. J.

    2013-01-01

    We present the first interstellar detection of carbodiimide (HNCNH), a tautomer of the known interstellar species cyanamide (NH2CN), in weak maser emission, using data from the GBT PRebiotic Interstellar MOlecular Survey (PRIMOS). The anticipated abundance of this molecule is such that emission features arising from a purely thermal population are below the detection limit of any current surveys. As such, HNCNH could only be detected through the observed cm-wavelength transitions which have been amplified by masing. We discuss the utility of cm-wavelength molecular line surveys in the detection of new molecular species and the possibility of future detections of low-abundance species through weakly masing transitions.

  4. Exploration of available feature detection and identification systems and their performance on radiographs

    Science.gov (United States)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features in the X-ray radiographs.

  5. Multi-Feature Based Multiple Landmine Detection Using Ground Penetration Radar

    OpenAIRE

    Park, S.; K. Kim; Ko, K. H.

    2014-01-01

    This paper presents a novel method for detection of multiple landmines using a ground penetrating radar (GPR). Conventional algorithms mainly focus on detection of a single landmine, which cannot linearly extend to the multiple-landmine case. The proposed algorithm is composed of four steps: estimation of the number of multiple objects buried in the ground, isolation of each object, feature extraction and detection of landmines. The number of objects in the GPR signal is estimated by using th...

  6. Textural feature selection for enhanced detection of stationary humans in through-the-wall radar imagery

    Science.gov (United States)

    Chaddad, A.; Ahmad, F.; Amin, M. G.; Sevigny, P.; DiFilippo, D.

    2014-05-01

    Feature-based methods have been recently considered in the literature for detection of stationary human targets in through-the-wall radar imagery. Specifically, textural features, such as contrast, correlation, energy, entropy, and homogeneity, have been extracted from gray-level co-occurrence matrices (GLCMs) to aid in discriminating the true targets from multipath ghosts and clutter that closely mimic the target in size and intensity. In this paper, we address the task of feature selection to identify the relevant subset of features in the GLCM domain, while discarding those that are either redundant or confusing, thereby improving the performance of feature-based scheme to distinguish between targets and ghosts/clutter. We apply a Decision Tree algorithm to find the optimal combination of co-occurrence based textural features for the problem at hand. We employ a K-Nearest Neighbor classifier to evaluate the performance of the optimal textural feature based scheme in terms of its target and ghost/clutter discrimination capability and use real-data collected with the vehicle-borne multi-channel through-the-wall radar imaging system by Defence Research and Development Canada. For the specific data analyzed, it is shown that the identified dominant features yield a higher classification accuracy, with lower number of false alarms and missed detections, compared to the full GLCM based feature set.
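    The K-Nearest Neighbor evaluation stage can be sketched directly; the feature values below are invented stand-ins for the selected GLCM textural features (contrast, correlation, etc.), not the radar data.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    # K-Nearest Neighbor classification: Euclidean distance to all
    # training samples, majority vote among the k closest labels.
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy textural feature space: 0 = ghost/clutter, 1 = true target.
X_tr = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y_tr = np.array([0, 0, 1, 1])
pred = knn_predict(X_tr, y_tr, np.array([[0.2, 0.5], [4.8, 5.5]]), k=3)
```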

  7. Evaluation of various feature extraction methods for landmine detection using hidden Markov models

    Science.gov (United States)

    Hamdi, Anis; Frigui, Hichem

    2012-06-01

    Hidden Markov Models (HMM) have proved to be effective for detecting buried land mines using data collected by a moving-vehicle-mounted ground penetrating radar (GPR). The general framework for a HMM-based landmine detector consists of building a HMM model for mine signatures and a HMM model for clutter signatures. A test alarm is assigned a confidence proportional to the probability of that alarm being generated by the mine model and inversely proportional to its probability in the clutter model. The HMM models are built based on features extracted from GPR training signatures. These features are expected to capture the salient properties of the 3-dimensional alarms in a compact representation. The baseline HMM framework for landmine detection is based on gradient features. It models the time varying behavior of GPR signals, encoded using edge direction information, to compute the likelihood that a sequence of measurements is consistent with a buried landmine. In particular, the HMM mine model learns the hyperbolic shape associated with the signature of a buried mine by three states that correspond to the succession of an increasing edge, a flat edge, and a decreasing edge. Recently, for the same application, other features have been used with different classifiers. In particular, the Edge Histogram Descriptor (EHD) has been used within a K-nearest neighbor classifier. Another descriptor is based on Gabor features and has been used within a discrete HMM classifier. A third feature, that is closely related to the EHD, is the Bar histogram feature. This feature has been used within a Neural Networks classifier for handwritten word recognition. In this paper, we propose an evaluation of the HMM based landmine detection framework with several feature extraction techniques. We adapt and evaluate the EHD, Gabor, Bar, and baseline gradient feature extraction methods. We compare the performance of these features using a large and diverse GPR data collection.
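    The mine-versus-clutter likelihood comparison described above rests on the forward algorithm; here is a minimal discrete-HMM sketch with an invented three-state mine model (rising/flat/falling edge symbols 0/1/2) against a one-state clutter model. The actual model parameters are learned from GPR data in the paper.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    # Scaled forward algorithm: log-likelihood of a discrete
    # observation sequence under an HMM (pi: initial state
    # probabilities, A: transitions, B: emission probabilities).
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

# Toy left-to-right mine model: increasing -> flat -> decreasing edge.
pi_m = np.array([1.0, 0.0, 0.0])
A_m = np.array([[0.5, 0.5, 0.0],
                [0.0, 0.5, 0.5],
                [0.0, 0.0, 1.0]])
B_m = np.array([[0.8, 0.1, 0.1],
                [0.1, 0.8, 0.1],
                [0.1, 0.1, 0.8]])
# One-state clutter model: all edge symbols equally likely.
pi_c = np.array([1.0])
A_c = np.array([[1.0]])
B_c = np.array([[1 / 3, 1 / 3, 1 / 3]])

hyperbola = [0, 0, 1, 2, 2]    # edge sequence typical of a mine
```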

  8. Combining heterogeneous features for colonic polyp detection in CTC based on semi-definite programming

    Science.gov (United States)

    Wang, Shijun; Yao, Jianhua; Petrick, Nicholas A.; Summers, Ronald M.

    2009-02-01

    Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer aided detection system provides a feasible combination for improving colonic polyp detection and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these features try to capture the shape information of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histogram of curvature features, are rotation, translation and scale invariant and can be treated as complementing our existing feature set. Then in order to make full use of the traditional features (defined as group A) and the new features (group B), which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to identify an optimized classification kernel based on the combined set of features. We performed a leave-one-patient-out test on a CTC dataset which contained scans from 50 patients (with 90 6-9mm polyp detections). Experimental results show that a support vector machine (SVM) based on the combined feature set and the semi-definite optimization kernel achieved higher FROC performance compared to SVMs using the two groups of features separately. At a false positive per patient rate of 7, the sensitivity on 6-9mm polyps using the combined features improved from 0.78 (Group A) and 0.73 (Group B) to 0.82 (p<=0.01).

  9. Passive Copy-Move Forgery Detection Using Halftoning-based Block Truncation Coding Feature

    Science.gov (United States)

    Harjito, Bambang; Prasetyo, Heri

    2017-06-01

    This paper presents a new method for passive copy-move forgery detection that exploits the effectiveness and usability of the Halftoning-based Block Truncation Coding (HBTC) image feature. Copy-move forgery detection precisely locates large or flat tampered regions of an image. In our method, the tampered input image is first divided into several overlapping image blocks to construct the image feature descriptors. Each image block is further divided into several non-overlapping image blocks for HBTC processing. Two image feature descriptors, namely the Color Feature (CF) and the Bit Pattern Feature (BF), are computed from the HBTC compressed data stream of each image block. Lexicographic sorting rearranges the image feature descriptors of the whole image in ascending order. The similarity between tampered image regions is measured based on their CF and BF under a specific shift frequency threshold. As documented in the experimental results, the proposed method yields promising results for detecting tampered or copy-move forgery regions. This proves that HBTC is not only suitable for image compression but can also be used in copy-move forgery detection.
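Plain block truncation coding, the basis of the HBTC feature, can be sketched as follows. The halftoning step of HBTC is omitted here, and the block values are illustrative:

```python
import numpy as np

def btc_block(block):
    """Standard BTC of one image block: a bitmap plus two quantization levels.
    (HBTC replaces the bitmap with a halftoned one; this is the plain variant.)"""
    m = block.mean()
    bitmap = block >= m                               # bit pattern feature
    hi = block[bitmap].mean() if bitmap.any() else m  # high quantization level
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap, lo, hi                             # (lo, hi) ~ color feature

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 199, 212],
                  [10, 12, 203, 207]], dtype=float)
bitmap, lo, hi = btc_block(block)
print(lo, hi)   # → 11.375 205.5
```

In a detector along these lines, `(lo, hi)` and the flattened `bitmap` of every block would be collected, lexicographically sorted, and compared between blocks.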

  10. Human Detection Using Random Color Similarity Feature and Random Ferns Classifier.

    Science.gov (United States)

    Zhang, Miaohui; Xin, Ming

    2016-01-01

    We explore a novel approach for human detection based on a random color similarity feature (RCS) and a random ferns classifier, which is also known as a semi-naive Bayesian classifier. In contrast to other existing features employed for human detection, color-based features are rarely used in vision-based human detection because of large intra-class variations. In this paper, we propose a novel color-based feature, the RCS feature, which is obtained by simple color similarity computation between image cells randomly picked in still images and can effectively characterize human appearances. In addition, a histogram of oriented gradients based local binary feature (HOG-LBF) is also introduced to enrich the human descriptor set. Furthermore, the random ferns classifier is used in the proposed approach because it is faster in training and testing than traditional classifiers such as the Support Vector Machine (SVM) classifier, without a loss in performance. Finally, the proposed method is evaluated on public datasets and achieves competitive detection results.
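The random ferns idea can be sketched as follows: binary features are grouped into ferns, modelled jointly within a fern and independently across ferns (hence "semi-naive" Bayes). For simplicity the ferns here partition the feature set rather than sampling groups at random, and the data and label rule are synthetic:

```python
import numpy as np

class RandomFerns:
    """Semi-naive Bayes: joint distributions within each fern, independence
    assumed across ferns. Ferns here partition the features for simplicity."""
    def __init__(self, n_ferns, fern_size, n_features, seed=0):
        rng = np.random.default_rng(seed)
        self.groups = rng.permutation(n_features).reshape(n_ferns, fern_size)
        self.fern_size = fern_size

    def _codes(self, X):
        # encode each fern's binary responses as one integer per fern
        w = 1 << np.arange(self.fern_size)
        return np.stack([X[:, g] @ w for g in self.groups], axis=1)

    def fit(self, X, y):
        codes, k = self._codes(X), 1 << self.fern_size
        self.logp = np.zeros((2, len(self.groups), k))
        for c in (0, 1):
            for f in range(len(self.groups)):
                counts = np.bincount(codes[y == c, f], minlength=k) + 1.0  # Laplace
                self.logp[c, f] = np.log(counts / counts.sum())
        return self

    def predict(self, X):
        codes = self._codes(X)
        f_idx = np.arange(codes.shape[1])
        s0 = self.logp[0, f_idx, codes].sum(1)   # class log-scores: sum over ferns
        s1 = self.logp[1, f_idx, codes].sum(1)
        return (s1 > s0).astype(int)

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 16))
y = X[:, 0]   # toy label carried by a single binary feature
clf = RandomFerns(n_ferns=4, fern_size=4, n_features=16).fit(X, y)
print((clf.predict(X) == y).mean())
```

Training and testing cost one table lookup per fern, which is what makes ferns fast relative to an SVM.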

  11. A COMPARATIVE ANALYSIS OF SINGLE AND COMBINATION FEATURE EXTRACTION TECHNIQUES FOR DETECTING CERVICAL CANCER LESIONS

    Directory of Open Access Journals (Sweden)

    S. Pradeep Kumar Kenny

    2016-02-01

    Full Text Available Cervical cancer is the third most common form of cancer affecting women, especially in third world countries. The predominant reason for such an alarming death rate is primarily the lack of awareness and proper health care. Since prevention is better than cure, a better strategy has to be put in place to screen a large number of women so that an early diagnosis can help in saving their lives. One such strategy is to implement an automated system. For an automated system to function properly, a proper set of features has to be extracted so that the cancer cell can be detected efficiently. In this paper we compare the performance of detecting a cancer cell using a single feature versus a combination feature set, to see which will better suit the automated system in terms of a higher detection rate. For this, each cell is segmented using a multiscale morphological watershed segmentation technique and a series of features is extracted. This process is performed on 967 images, and the extracted data are subjected to data mining techniques to determine which feature is best for which stage of cancer. The results thus obtained clearly show a higher percentage of success for the combination feature set, with a 100% accurate detection rate.

  12. A ROC-based feature selection method for computer-aided detection and diagnosis

    Science.gov (United States)

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples available in one class (disease present) than in the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is critical to improve the sensitivity and minimize the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with the selection metric FAST (feature assessment by sliding thresholds), which is based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, making it possible to find a compact subset of superior features at low cost.
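The FAST-style per-feature AUC scoring can be sketched with the rank-sum (Mann-Whitney) statistic. The data are synthetic, with one feature made discriminative on purpose, and the mRMR redundancy term is omitted:

```python
import numpy as np

def feature_auc(x, y):
    """AUC of a single feature via the Mann-Whitney rank-sum statistic."""
    ranks = x.argsort().argsort() + 1.0          # ranks 1..n (continuous data)
    n1 = y.sum()
    n0 = len(y) - n1
    auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
    return max(auc, 1 - auc)                     # orientation-free separability

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)
X = rng.normal(size=(300, 5))
X[:, 2] += 1.5 * y                               # make feature 2 discriminative

scores = [feature_auc(X[:, j], y) for j in range(X.shape[1])]
print(int(np.argmax(scores)))   # → 2
```

mR-FAST would additionally penalize features that are redundant with ones already selected before keeping the top scorers.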

  13. Landmine detection with Bayesian cross-categorization on point-wise, contextual and spatial features

    Science.gov (United States)

    Léveillé, Jasmin; Yu, Ssu-Hsin; Gandhe, Avinash

    2016-05-01

    Recently developed feature extraction methods proposed in the explosive hazard detection community have yielded many features that potentially provide complementary information for explosive detection. Finding the right combination of features that is most effective in distinguishing targets from clutter, on the other hand, is extremely challenging due to the large number of potential features to explore. Furthermore, sensors employed for mine and buried explosive hazard detection are typically sensitive to environmental conditions such as soil properties and weather as well as other operating parameters. In this work, we applied Bayesian cross-categorization (CrossCat) to a heterogeneous set of features derived from electromagnetic induction (EMI) sensor time-series for purposes of buried explosive hazard detection. The set of features used here includes simple, point-wise measurements such as the overall magnitude of the EMI response, contextual information such as soil type, and a new feature consisting of spatially aggregated Discrete Spectra of Relaxation Frequencies (DSRFs). Previous work showed that the DSRF characterizes target properties with some invariance to orientation and position. We have developed a novel approach to aggregate point-wise DSRF estimates. The spatial aggregation is based on the Bag-of-Words (BoW) model found in the machine learning and computer vision literatures and aims to enhance the invariance properties of point-wise DSRF estimates. We considered various refinements to the BoW model for the purpose of buried explosive hazard detection and tested their usefulness as part of a Bayesian cross-categorization framework on data collected from two different sites. The results show improved performance over classifiers using only point-wise features.
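The Bag-of-Words spatial aggregation step can be sketched as nearest-codeword assignment followed by a normalized count histogram. The codebook and descriptors below are random placeholders rather than DSRF estimates:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Aggregate point-wise descriptors over a spatial neighbourhood:
    assign each to its nearest codeword, then histogram the assignments."""
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()   # normalized word frequencies

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # would be learned offline, e.g. by k-means
desc = codebook[3] + 0.01 * rng.normal(size=(50, 4))   # all near codeword 3
h = bow_histogram(desc, codebook)
print(int(h.argmax()))   # → 3
```

The resulting fixed-length histogram is what gets fed, alongside point-wise and contextual features, into the classifier.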

  14. Wood defect detection method with PCA feature fusion and compressed sensing

    Institute of Scientific and Technical Information of China (English)

    Yizhuo Zhang; Chao Xu; Chao Li; Huiling Yu; Jun Cao

    2015-01-01

    We used principal component analysis (PCA) and compressed sensing to detect wood defects from wood plate images. PCA makes it possible to reduce data redundancy and feature dimensions, and compressed sensing, used as a classifier, improves identification accuracy. We extracted 25 features, including geometry and regional features, gray-scale texture features, and invariant moment features, from wood board images, integrated them using PCA, and selected eight principal components to express defects. After the fusion process, we used the features to construct a data dictionary, and realized the classification of defects by computing the optimal solution of the data dictionary in the l1 norm using the least squares method. We tested 50 Xylosma samples of live knots, dead knots, and cracks. The average detection times with and without PCA feature fusion were 0.2015 and 0.7125 ms, respectively. The original detection accuracy by SOM neural network was 87%, but after compressed sensing, it was 92%.
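The PCA fusion step, reducing the 25 raw features to eight principal components, might look like this SVD-based sketch on placeholder data:

```python
import numpy as np

def pca_fuse(X, n_components=8):
    """Project feature vectors onto the leading principal components."""
    Xc = X - X.mean(0)                             # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # scores on top components

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 25))   # 25 raw defect features per sample (toy)
Z = pca_fuse(X, 8)
print(Z.shape)   # → (50, 8)
```

The fused 8-dimensional vectors would then populate the data dictionary used by the compressed-sensing classifier.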

  15. Feature Selection and Classifier Parameters Estimation for EEG Signals Peak Detection Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Asrul Adam

    2014-01-01

    Full Text Available Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection of EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
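A minimal binary PSO for feature selection might look as follows. The sigmoid-of-velocity update is the standard binary-PSO rule (the RA-PSO asynchrony is not modelled), and the fitness function is a synthetic stand-in for classification accuracy:

```python
import numpy as np

def binary_pso(fitness, dim, n_particles=20, iters=50, seed=0):
    """Minimal binary PSO: velocities pass through a sigmoid to give
    per-bit selection probabilities; returns the global best mask."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    g = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))).astype(int)
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[pbest_f.argmax()].copy()
    return g

# Toy fitness: features 0-4 are "useful", every selected feature adds cost
relevant = np.zeros(15)
relevant[:5] = 1
fit = lambda mask: (mask * relevant).sum() - 0.2 * mask.sum()
best = binary_pso(fit, dim=15)
print(best)
```

In the paper's framework the fitness would instead be the peak-detection classification rate of a model trained on the selected feature subset.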

  16. Multi-Feature Based Multiple Landmine Detection Using Ground Penetration Radar

    Directory of Open Access Journals (Sweden)

    S. Park

    2014-06-01

    Full Text Available This paper presents a novel method for the detection of multiple landmines using a ground penetrating radar (GPR). Conventional algorithms mainly focus on the detection of a single landmine and cannot be extended linearly to the multiple-landmine case. The proposed algorithm is composed of four steps: estimation of the number of objects buried in the ground, isolation of each object, feature extraction, and detection of landmines. The number of objects in the GPR signal is estimated by using the energy projection method. Then signals for the objects are extracted by using the symmetry filtering method. Each signal is then processed for features, which are given as input to a support vector machine (SVM) for landmine detection. Tests with three landmines buried under various ground conditions demonstrate that the proposed method can successfully detect multiple landmines.

  17. Detection and Classification of Cancer from Microscopic Biopsy Images Using Clinically Significant and Biologically Interpretable Features

    Science.gov (United States)

    Kumar, Rajesh; Srivastava, Subodh

    2015-01-01

    A framework for automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages involved in the proposed methodology include enhancement of microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is employed in each of the design steps of the proposed framework after a comparative analysis of commonly used methods in each category. For highlighting the details of the tissue and structures, the contrast limited adaptive histogram equalization approach is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape as well as morphology based features are extracted from the segmented images. These include gray level texture features, color based features, color gray level texture features, Laws' texture energy based features, Tamura's features, and wavelet features. Finally, the K-nearest neighborhood method is used for classification of images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated using well-known parameters for the four fundamental tissues (connective, epithelial, muscular, and nervous) of 1000 randomly selected microscopic biopsy images. PMID:27006938

  18. Unsupervised Skin cancer detection by combination of texture and shape features in dermoscopy images

    Directory of Open Access Journals (Sweden)

    Hamed aghapanah rudsari

    2014-05-01

    Full Text Available In this paper a novel unsupervised feature extraction method for detection of melanoma in skin images is presented. First of all, normal skin surrounding the lesion is removed in a segmentation process. In the next step, some shape and texture features are extracted from the output image of the first step: GLCM, GLRLM, the proposed directional-frequency features, and some parameters of Ripplet transform are used as texture features; Also, NRL features and Zernike moments are used as shape features. Totally, 63 texture features and 31 shape features are extracted. Finally, the number of extracted features is reduced using PCA method and a proposed method based on Fisher criteria. Extracted features are classified using the Perceptron Neural Networks, Support Vector Machine, 4-NN, and Naïve Bayes. The results show that SVM has the best performance. The proposed algorithm is applied on a database that consists of 160 labeled images. The overall results confirm the superiority of the proposed method in both accuracy and reliability over previous works.

  19. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect in the field of machine learning. It entails the search of an optimal subset from a very large data set with high dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of feature also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced in the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strength of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as part of the filter method. As for wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNNs model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
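The wavelet decomposition step can be sketched with a multi-level Haar DWT (the paper does not state that the Haar wavelet was used; the EEG trace below is synthetic):

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multi-level 1-D orthonormal Haar DWT; returns the detail coefficients
    of each level followed by the final approximation."""
    coeffs = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)   # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)   # detail
        coeffs.append(d)
        approx = a
    coeffs.append(approx)
    return coeffs

t = np.arange(256) / 256
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)  # toy EEG
bands = haar_dwt(eeg, levels=3)
print([len(b) for b in bands])   # → [128, 64, 32, 32]
```

The coefficient sets are the candidate features; PCA filtering and the harmony-search wrapper would then prune them before training the wavelet neural network.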

  20. Landmine detection with ground penetrating radar using discrete hidden Markov models with symbol dependent features

    Science.gov (United States)

    Frigui, Hichem; Missaoui, Oualid; Gader, Paul

    2008-04-01

    In this paper, we propose an efficient Discrete Hidden Markov Model (DHMM) framework for landmine detection that relies on training data to learn the relevant features that characterize different signatures (mines and non-mines) and can adapt to different environments and different radar characteristics. Our work is motivated by the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Thus, ideally, different sets of specialized features may be needed to achieve high detection and low false alarm rates. The proposed approach includes three main components: feature extraction, clustering, and the DHMM. First, since we do not assume that the relevant features for the different signatures are known a priori, we proceed by extracting several sets of features for each signature. Then, we apply a clustering and feature discrimination algorithm to the training data to quantize it into a set of symbols and learn feature relevance weights for each symbol. These symbols and their weights are then used in a DHMM framework to learn the parameters of the mine and background models. Preliminary results on a large and diverse ground penetrating radar data collection show that the proposed method outperforms the basic DHMM, where all the features are treated as equally important.

  1. Detection of Harbours from High Resolution Remote Sensing Imagery via Saliency Analysis and Feature Learning

    Science.gov (United States)

    Wang, Yetianjian; Pan, Li; Wang, Dagang; Kang, Yifei

    2016-06-01

    Harbours are very important objects in civil and military fields. Detecting them in high resolution remote sensing imagery is important in various fields and also a challenging task. Traditional methods of detecting harbours mainly focus on the segmentation of water and land and the manual selection of knowledge. They do not make enough use of other features of remote sensing imagery and often fail to describe the harbours completely. In order to improve the detection, a new method is proposed. First, the image is transformed to the Hue, Saturation, Value (HSV) colour space, and saliency analysis is performed via the generation and enhancement of the co-occurrence histogram to help detect and locate the regions of interest (ROIs) that are salient and may be parts of the harbour. Next, SIFT features are extracted and feature learning is performed to help represent the ROIs. Then, using labelled harbour features, a classifier is trained and used to check whether the ROIs belong to the harbour. Finally, if the ROIs belong to the harbour, a minimum bounding rectangle is formed to include all the harbour ROIs and detect and locate the harbour. The experiment on high resolution remote sensing imagery shows that the proposed method performs better than other methods in the precision of classifying ROIs and the accuracy of completely detecting and locating harbours.

  2. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Directory of Open Access Journals (Sweden)

    Edwin B. Olson

    2010-11-01

    Full Text Available Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.
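The Kanade-Tomasi criterion at the heart of the detector, the smaller eigenvalue of the local structure tensor, can be sketched on a toy image (an unoptimized single-scale reference version, without the multi-scale machinery or LIDAR-specific adaptation):

```python
import numpy as np

def kanade_tomasi_response(img, win=1):
    """Min-eigenvalue corner response of the local structure tensor."""
    Iy, Ix = np.gradient(img.astype(float))       # row and column gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    resp = np.zeros_like(img, dtype=float)
    for r in range(win, img.shape[0] - win):
        for c in range(win, img.shape[1] - win):
            s = np.s_[r - win:r + win + 1, c - win:c + win + 1]
            A = np.array([[Ixx[s].sum(), Ixy[s].sum()],
                          [Ixy[s].sum(), Iyy[s].sum()]])
            resp[r, c] = np.linalg.eigvalsh(A)[0]  # smaller eigenvalue
    return resp

img = np.zeros((16, 16))
img[8:, 8:] = 1.0                                  # a single corner near (8, 8)
resp = kanade_tomasi_response(img)
peak = np.unravel_index(resp.argmax(), resp.shape)
print(peak)
```

Edges score near zero under this criterion (one eigenvalue vanishes), so the response peaks only where gradient energy exists in two directions, which is the property the LIDAR detector inherits.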

  3. Robust Feature Detection and Local Classification for Surfaces Based on Moment Analysis

    OpenAIRE

    2004-01-01

    The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions, respectively, is quite difficult as well as indispensable for many surface processing applications. Usually, the feature detection is done via a local curvature analysis. If concerned with large triangular and irregular grids, e.g., generated via a marching cube algorithm, the detectors are tedious to treat and a robust classification is hard to achieve. He...

  4. Feature detection in biological tissues using multi-band and narrow-band imaging.

    Science.gov (United States)

    Tamura, Yuki; Mashita, Tomohiro; Kuroda, Yoshihiro; Kiyokawa, Kiyoshi; Takemura, Haruo

    2016-12-01

    In the past decade, augmented reality systems have been expected to support surgical operations by making it possible to view invisible objects that are inside or occluded by the skull, hands, or organs. However, the properties of biological tissues, which are non-rigid and featureless, require a large number of distributed features to track the movement of tissues in detail. With the goal of increasing the number of feature points in organ tracking, we propose feature detection using multi-band and narrow-band imaging together with a new band selection method. The depth of light penetration into an object depends on the wavelength of light, based on optical characteristics. We applied typical feature detectors to detect feature points using three selected bands in a human hand. To consider surgical situations, we applied our method to a chicken liver under a variety of light conditions. Our experimental results revealed that the image of each band exhibited a different distribution of feature points. In addition, the total number of feature points determined by the proposed method exceeded that of the R, G, and B images obtained using a normal camera. The results using a chicken liver with various light sources and intensities also show different distributions for each selected band. We have proposed a feature detection method using multi-band and narrow-band imaging and a band selection method. The results of our experiments confirmed that the proposed method increased the number of distributed feature points. The proposed method was also effective under different light conditions.

  5. Object-Based Analysis of LIDAR Geometric Features for Vegetation Detection in Shaded Areas

    Science.gov (United States)

    Lin, Yu-Ching; Lin, ChinSu; Tsai, Ming-Da; Lin, Chun-Lin

    2016-06-01

    The extraction of land cover information from remote sensing data is a complex process. Spectral information has been widely utilized in classifying remote sensing images. However, shadows limit the use of multispectral images because they result in a loss of spectral radiometric information. In addition, true reflectance may be underestimated in shaded areas. In land cover classification, shaded areas are often left unclassified or simply assigned to a shadow class. Vegetation indices from remote sensing measurements are radiation-based measurements computed through spectral combination. They indicate vegetation properties and play an important role in remote sensing of forests. Airborne light detection and ranging (LiDAR) technology is an active remote sensing technique that produces a true orthophoto at a single wavelength. This study investigated three geometric lidar features for cases where NDVI values fail to represent meaningful forest information. The three features include echo width, normalized eigenvalue, and the standard deviation of the unit weight observation of the plane adjustment; they can be derived from waveform data and discrete point clouds. Various feature combinations were evaluated to assess how well the three lidar features compensate for vegetation detection in shaded areas. Echo width was found to outperform the other two features. Furthermore, surface characteristics estimated by echo width were similar to those estimated by normalized eigenvalues. Compared to the combination of only NDVI and mean height difference, combinations including one of the three features had a positive effect on the detection of the vegetation class.

  6. Automated retrieval of cloud and aerosol properties from the ARM Raman lidar, part 1: feature detection

    Energy Technology Data Exchange (ETDEWEB)

    Thorsen, Tyler J.; Fu, Qiang; Newsom, Rob K.; Turner, David D.; Comstock, Jennifer M.

    2015-11-01

    A Feature detection and EXtinction retrieval (FEX) algorithm for the Atmospheric Radiation Measurement (ARM) program's Raman lidar (RL) has been developed. Presented here is part 1 of the FEX algorithm: the detection of features including both clouds and aerosols. The approach of FEX is to use multiple quantities (scattering ratios derived using elastic and nitrogen channel signals from two fields of view, the scattering ratio derived using only the elastic channel, and the total volume depolarization ratio) to identify features using range-dependent detection thresholds. FEX is designed to be context-sensitive, with thresholds determined for each profile by calculating the expected clear-sky signal and noise. The use of multiple quantities provides complementary depictions of cloud and aerosol locations and allows for consistency checks to improve the accuracy of the feature mask. The depolarization ratio is shown to be particularly effective at detecting optically thin features containing non-spherical particles such as cirrus clouds. Improvements over the existing ARM RL cloud mask are shown. The performance of FEX is validated against a collocated micropulse lidar and observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite over the ARM Darwin, Australia, site. While we focus on a specific lidar system, the FEX framework presented here is suitable for other Raman or high spectral resolution lidars.
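The context-sensitive thresholding idea might be sketched as follows: a range bin is flagged when the scattering ratio exceeds a clear-sky-noise-derived threshold, or the depolarization ratio betrays non-spherical particles. The quantities and thresholds below are illustrative, not the FEX values:

```python
import numpy as np

def feature_mask(scat_ratio, depol, clear_sigma, k=3.0, depol_thresh=0.05):
    """Flag range bins as cloud/aerosol: scattering ratio above a
    noise-derived threshold OR elevated volume depolarization ratio."""
    return (scat_ratio > 1.0 + k * clear_sigma) | (depol > depol_thresh)

scat = np.array([1.00, 1.01, 1.60, 1.02, 1.00, 1.01])   # per range bin
depol = np.array([0.01, 0.01, 0.02, 0.30, 0.01, 0.01])
sigma = np.full(6, 0.05)   # expected clear-sky noise per bin
print(feature_mask(scat, depol, sigma))
```

Bin 2 triggers on the scattering ratio and bin 3 on depolarization alone, mirroring how the depolarization channel catches optically thin cirrus that the scattering ratio misses.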

  7. Forest Fire Smoke Video Detection Using Spatiotemporal and Dynamic Texture Features

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Smoke detection is a key part of fire recognition in forest fire surveillance video, since the smoke produced by forest fires is visible well before the flames. The performance of smoke video detection algorithms is often affected by smoke-like objects such as heavy fog. This paper presents a novel forest fire smoke video detection method based on spatiotemporal features and dynamic texture features. First, Kalman filtering is used to segment candidate smoke regions. Then, each candidate smoke region is divided into small blocks. The spatiotemporal energy feature of each block is extracted by computing the energy features of its 8-neighboring blocks in the current frame and its two adjacent frames. The flutter direction angle is computed by analyzing the centroid motion of the segmented regions in one candidate smoke video clip. The Local Binary Motion Pattern (LBMP) is used to define dynamic texture features of smoke videos. Finally, smoke video is recognized by the Adaboost algorithm. The experimental results show that the proposed method can effectively detect smoke images recorded from different scenes.

  8. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; van Huis, Jasper R.; Dijk, Judith; van Rest, Jeroen H. C.

    2014-10-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focuses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, people move freely through areas which cannot be covered by a single camera, the actual snatch is a subtle action, and collaboration is complex social behavior. We carried out an experiment with more than 20 validated pickpocket incidents. We used a top-down approach to translate expert knowledge into features and rules, and a bottom-up approach to learn discriminating patterns with a classifier. The classifier was used to separate the pickpockets from normal passers-by who are shopping in the mall. We performed a cross validation to train and evaluate our system. In this paper, we describe our method, identify the most valuable features, and analyze the results that were obtained in the experiment. We estimate the quality of these features and the performance of automatic detection of (collaborating) pickpockets. The results show that many of the pickpockets can be detected at a low false alarm rate.

  9. Model-based defect detection on structured surfaces having optically unresolved features.

    Science.gov (United States)

    O'Connor, Daniel; Henning, Andrew J; Sherlock, Ben; Leach, Richard K; Coupland, Jeremy; Giusca, Claudiu L

    2015-10-20

    In this paper, we demonstrate, both numerically and experimentally, a method for the detection of defects on structured surfaces having optically unresolved features. The method makes use of synthetic reference data generated by an observational model that is able to simulate the response of the selected optical inspection system to the ideal structure, thereby providing an ideal measure of deviation from nominal geometry. The method addresses the high dynamic range challenge faced in highly parallel manufacturing by enabling the use of low resolution, wide field of view optical systems for defect detection on surfaces containing small features over large regions.
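The core of the model-based test, deviation of the measured response from a synthetically generated reference, can be sketched as a simple tolerance check (the values and tolerance are illustrative, not the paper's metric):

```python
import numpy as np

def defect_map(measured, synthetic_ref, tol):
    """Flag positions where the measured response deviates from the
    model-generated reference by more than a tolerance."""
    return np.abs(measured - synthetic_ref) > tol

ref = np.tile([0.0, 1.0], 8)        # ideal response of a periodic structure
meas = ref.copy()
meas[5] += 0.4                      # one defective site
print(defect_map(meas, ref, tol=0.2).nonzero()[0].tolist())   # → [5]
```

Because the reference comes from a model of the instrument's response to the nominal geometry, the comparison stays meaningful even when the structure itself is optically unresolved.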

  10. Context-dependent feature selection for landmine detection with ground-penetrating radar

    Science.gov (United States)

    Ratto, Christopher R.; Torrione, Peter A.; Collins, Leslie M.

    2009-05-01

    We present a novel method for improving landmine detection with ground-penetrating radar (GPR) by utilizing a priori knowledge of environmental conditions to facilitate algorithm training. The goal of Context-Dependent Feature Selection (CDFS) is to mitigate performance degradation caused by environmental factors. CDFS operates on GPR data by first identifying its environmental context, and then fuses the decisions of several classifiers trained on context-dependent subsets of features. CDFS was evaluated on GPR data collected at several distinct sites under a variety of weather conditions. Results show that using prior environmental knowledge in this fashion has the potential to improve landmine detection.

  11. A CAD System for Lesion Detection in Cervigram Based on Laws Textural Feature

    Directory of Open Access Journals (Sweden)

    RamaPraba P.S

    2014-01-01

    Full Text Available Cervical cancer is the second most common cancer among women worldwide. A computer aided diagnosis system can help the colposcopist analyze cervical images more accurately. This work aims to detect lesions in cervical images based on Laws textural features and a nearest neighbor classifier, and it can be used as a diagnostic tool. The images used for the detection of cervical cancer are taken using a colposcope, which magnifies the cells of the cervix. The Laws textural features are extracted from the cervical images and input to the nearest neighbor classifier. A total of 240 images is used for the evaluation, and an overall accuracy of 96% is obtained.

  12. Feature detection and SLAM on embedded processors for micro-robot navigation

    Science.gov (United States)

    Robinette, Paul; Collins, Thomas R.

    2013-05-01

    We have developed software that allows a micro-robot to localize itself at 1 Hz using only onboard hardware. The Surveyor SRV-1 robot and its Blackfin processors were used to perform FAST feature detection on images. Good features selected from these images were then described using the SURF descriptor algorithm. An onboard Gumstix then correlated the features reported by the two processors and used GTSAM to estimate robot localization and landmark positions. Localization errors in this system were on the same order of magnitude as the size of the robot itself, giving the robot the potential to operate autonomously in a real-world environment.
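
    The FAST segment test at the core of the detector admits a minimal sketch: a pixel is a corner if enough contiguous pixels on a radius-3 circle are all brighter or all darker than the center. The threshold and arc length below are illustrative defaults, not the SRV-1's settings:

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t=20, n=12):
    """FAST segment test: corner if at least n contiguous circle pixels are
    all brighter than center+t or all darker than center-t."""
    center = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (+1, -1):
        # doubling the flag list handles wrap-around of the contiguous run
        flags = [(sign * (p - center)) > t for p in ring] * 2
        run = best = 0
        for f in flags:
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```

    A production detector adds the fast three-pixel rejection test and non-maximum suppression, which are omitted here.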

  13. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes on a short time scale, using observations taken a few hours apart. Each observation (previous and current) is a short video sequence acquired by a UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples of non-relevant changes are parallaxes caused by 3D structures in the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The two algorithms are combined by merging their change masks. A color mask showing the different contributions is used for visual inspection by a human image interpreter.

  14. Generalized Discriminant Analysis algorithm for feature reduction in Cyber Attack Detection System

    Directory of Open Access Journals (Sweden)

    Shailendra Singh

    2009-10-01

    Generalized Discriminant Analysis (GDA) provides an extremely powerful approach to extracting non-linear features. The network traffic data available for designing intrusion detection systems are large and contain much ineffective information, so the worthless information must be removed from the original high-dimensional database. To improve generalization ability, we usually generate a small set of features from the original input variables by feature extraction. The conventional Linear Discriminant Analysis (LDA) feature reduction technique has its limitations: it is not suitable for non-linear datasets. We therefore propose an efficient algorithm based on the GDA feature reduction technique, a novel approach in the area of cyber attack detection. It not only reduces the number of input features but also increases classification accuracy and reduces the training and testing time of the classifiers by selecting the most discriminating features. We use Artificial Neural Network (ANN) and C4.5 classifiers to compare the performance of the proposed technique. The results indicate the superiority of the algorithm.
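
    GDA is the kernelized extension of LDA. The linear baseline whose limitation the abstract mentions can be sketched as a generic Fisher LDA (not the authors' implementation): maximize between-class scatter relative to within-class scatter.

```python
import numpy as np

def lda_projection(X, y, n_components=1):
    """Fisher LDA: solve the generalized eigenproblem Sb v = lambda Sw v.
    GDA applies the same criterion in a kernel-induced feature space."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Small ridge keeps Sw invertible when it is (near-)singular
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:n_components]].real
```

    Projecting the data onto the returned directions yields the reduced feature set fed to the downstream classifier.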

  15. A Solitary Feature-based Lung Nodule Detection Approach for Chest X-Ray Radiographs.

    Science.gov (United States)

    Li, Xuechen; Shen, Linlin; Luo, Suhuai

    2017-01-31

    Lung cancer is one of the deadliest diseases. It has a high death rate and its incidence has been increasing all over the world. Lung cancer typically appears as a solitary nodule in a chest x-ray radiograph (CXR). Therefore, lung nodule detection in CXR could have a significant impact on the early detection of lung cancer. Radiologists define a lung nodule in chest x-ray radiographs as a "solitary white nodule-like blob". However, this solitary property had not previously been employed for lung nodule detection. In this paper, a solitary feature-based lung nodule detection method is proposed. We employed the stationary wavelet transform and a convergence index filter to extract texture features, and used AdaBoost to generate a white nodule-likeness map. A solitary feature is defined to evaluate the isolation degree of candidates. Both the isolation degree and the white nodule-likeness are used as the final evaluation of lung nodule candidates. The proposed method shows better performance and robustness than those reported in previous research. More than 80% and 93% of lung nodules in the lung field of the JSRT database were detected when the false positives per image were two and five, respectively. The proposed approach has the potential to be used in clinical practice.

  16. Real-Time Illumination Invariant Face Detection Using Biologically Inspired Feature Set and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Reza Azad

    2014-06-01

    In recent years, face detection has been thoroughly studied due to its wide potential applications, including face recognition, human-computer interaction, and video surveillance. In this paper, a new illumination-invariant face detection method is proposed, based on features inspired by the human visual cortex and a BP neural network applied to the extracted feature set. A feature set is extracted from face and non-face images by means of a feed-forward model, which yields view- and illumination-invariant C2 features for all images in the dataset. These C2 feature vectors, derived from a cortex-like mechanism, are then passed to a BP neural network. In the experiments, the proposed approach is applied to the FEI and Wild face detection databases and a high accuracy rate is achieved. In addition, experimental results demonstrate that the proposed face detector outperforms most of the successful face detection algorithms in the literature and gives the best result on all tested challenging face detection databases.

  17. Conditional Variational Autoencoder for Prediction and Feature Recovery Applied to Intrusion Detection in IoT.

    Science.gov (United States)

    Lopez-Martin, Manuel; Carro, Belen; Sanchez-Esguevillas, Antonio; Lloret, Jaime

    2017-08-26

    The purpose of a Network Intrusion Detection System is to detect intrusive, malicious activities or policy violations in a host or a host's network. In current networks, such systems are becoming more important as the number and variety of attacks increase along with the volume and sensitivity of the information exchanged. This is of particular interest to Internet of Things networks, where an intrusion detection system will be critical as their economic importance continues to grow, making them the focus of future intrusion attacks. In this work, we propose a new network intrusion detection method that is appropriate for an Internet of Things network. The proposed method is based on a conditional variational autoencoder with a specific architecture that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other unsupervised methods based on a variational autoencoder, and it provides better classification results than other well-known classifiers. More importantly, the method can perform feature reconstruction; that is, it is able to recover missing features from incomplete training datasets. We demonstrate that the reconstruction accuracy is very high, even for categorical features with a high number of distinct values. This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery.

  18. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    Science.gov (United States)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

    Coronal Mass Ejections (CMEs) release tremendous amounts of energy into the solar system, which has an impact on satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running-difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a decision rule based on space-time continuity. Experimental results show the efficiency and the superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
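
    The ELM step admits a compact sketch: hidden-layer weights are drawn at random and only the output weights are solved, in closed form, with the Moore-Penrose pseudoinverse. This is a generic ELM, not the authors' exact configuration:

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Extreme Learning Machine training: random input weights, output
    weights from a least-squares fit via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y          # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Linear readout of the random hidden layer."""
    return np.tanh(X @ W + b) @ beta
```

    Because training is a single pseudoinverse rather than iterative backpropagation, ELM is well suited to the fast per-region classification the detection pipeline needs.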

  19. SAR Images Unsupervised Change Detection Based on Combination of Texture Feature Vector with Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    ZHUANG Huifu

    2016-03-01

    Generally, spatial-contextual information is used in change detection because there is significant speckle noise in synthetic aperture radar (SAR) images. In this paper, exploiting the rich texture information of SAR images, an unsupervised change detection approach for high-resolution SAR images based on a texture feature vector and the maximum entropy principle is proposed. The difference image is generated using the 32-dimensional texture feature vector of the gray-level co-occurrence matrix (GLCM), and the threshold is obtained automatically from the maximum entropy principle. In this method, the appropriate window size for change detection is 11×11, according to a regression analysis of window size against the precision index. The experimental results show that the proposed approach both reduces the influence of speckle noise and effectively improves the detection accuracy for high-resolution SAR images, and that it outperforms the Markov random field approach.
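
    The automatic thresholding step can be illustrated with Kapur's maximum-entropy criterion, which picks the threshold that maximizes the summed entropies of the two classes of the difference image (a generic sketch, not necessarily the paper's exact formulation):

```python
import numpy as np

def _entropy(p_slice):
    """Entropy of one class of the histogram, or None if the class is empty."""
    s = p_slice.sum()
    if s == 0:
        return None
    q = p_slice[p_slice > 0] / s
    return -(q * np.log(q)).sum()

def max_entropy_threshold(image, bins=256):
    """Kapur's threshold: maximize H(background) + H(foreground)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        h0, h1 = _entropy(p[:t]), _entropy(p[t:])
        if h0 is None or h1 is None:
            continue
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

    Pixels of the difference image above the returned threshold would be marked as changed.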

  20. Behavioral features recognition and oestrus detection based on fast approximate clustering algorithm in dairy cows

    Science.gov (United States)

    Tian, Fuyang; Cao, Dong; Dong, Xiaoning; Zhao, Xinqiang; Li, Fade; Wang, Zhonghua

    2017-06-01

    Recognition of behavioural features is important for detecting oestrus and sickness in dairy herds, and there is a need for heat-detection aids. In this paper, the detection method is based on measuring the individual behavioural activity, standing time, and temperature of dairy cows using a vibration sensor and a temperature sensor. The data on behavioural activity index, standing time, lying time and walking time were sent to a computer by a low-power wireless communication system. A fast approximate k-means algorithm (FAKM) is proposed to process the sensor data for behavioural feature recognition. As a result of technical progress in monitoring cows using computers, automatic oestrus detection has become possible.
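
    The fast approximate clustering idea can be illustrated with a mini-batch k-means sketch. This is a stand-in for FAKM, whose exact approximation scheme is not described in the abstract; the deterministic initialization is also an assumption of the sketch:

```python
import numpy as np

def minibatch_kmeans(X, k, batch=32, iters=100, seed=0):
    """Approximate k-means on mini-batches with per-centre learning rates,
    trading a little accuracy for much lower per-iteration cost."""
    rng = np.random.default_rng(seed)
    centers = X[:k].astype(float).copy()  # simple deterministic init (sketch)
    counts = np.zeros(k)
    for _ in range(iters):
        B = X[rng.choice(len(X), min(batch, len(X)), replace=False)]
        # assign each batch point to its nearest centre
        d = ((B[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for x, c in zip(B, labels):
            counts[c] += 1
            centers[c] += (x - centers[c]) / counts[c]  # running-mean update
    return centers
```

    Each sensor reading (activity, standing time, temperature) would be one row of `X`, and the resulting clusters would correspond to behavioural states.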

  1. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...... codebook and reduced CKD are discriminative. We report superior performance of our algorithm for object recognition on benchmark datasets like Caltech-101 and CIFAR-10, as well as for detection on a challenging chicken feet dataset....

  2. Feature selection of seismic waveforms for long period event detection at Cotopaxi Volcano

    Science.gov (United States)

    Lara-Cueva, R. A.; Benítez, D. S.; Carrera, E. V.; Ruiz, M.; Rojo-Álvarez, J. L.

    2016-04-01

    Volcano Early Warning Systems (VEWS) have become a research topic in order to preserve human lives and reduce material losses. In this setting, event detection criteria based on classification using machine learning techniques have proven useful, and a number of systems have been proposed in the literature. However, to the best of our knowledge, no comprehensive and principled study has been conducted to compare the influence of the many different sets of possible features that have been used as input spaces in previous works. We present an automatic recognition system for volcano seismicity that considers feature extraction, event classification, and subsequent event detection, in order to reduce the processing time as a first step towards a highly reliable automatic detection system operating in real time. We compiled and extracted a comprehensive set of temporal, moving-average, spectral, and scale-domain features for separating long-period seismic events from background noise. We benchmarked two usual kinds of feature selection techniques, namely filter (mutual information and statistical dependence) and embedded (cross-validation and pruning), each with suitable classification algorithms such as k-Nearest Neighbors (k-NN) and Decision Trees (DT). We applied this approach to the seismicity recorded at Cotopaxi Volcano in Ecuador during 2009 and 2010. The best results were obtained by using a 15 s segmentation window, a feature matrix in the frequency domain, and a DT classifier, yielding 99% detection accuracy and sensitivity. Selected features and their interpretation were consistent among different input spaces, in simple terms of amplitude and spectral content. Our study provides the framework for an event detection system with high accuracy and reduced computational requirements.
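
    The mutual-information filter criterion used for feature ranking can be sketched with a simple histogram-based estimator (a generic version, not the authors' implementation):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI between a continuous feature x (discretised into bins) and class
    labels y; higher MI means the feature is more informative for ranking."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(xd, y):
        joint[xi, classes[yi]] += 1
    p = joint / joint.sum()
    px = p.sum(1, keepdims=True)
    py = p.sum(0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())
```

    A filter selector would compute this score for every candidate feature against the event/noise labels and keep the top-ranked subset.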

  3. Vision-based in-line fabric defect detection using yarn-specific shape features

    Science.gov (United States)

    Schneider, Dorian; Aach, Til

    2012-01-01

    We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Where state of the art detection algorithms apply texture analysis methods to operate on low-resolved (∼200 ppi) image data, we describe here a process flow to segment single yarns in high-resolved (∼1000 ppi) textile images. Four yarn shape features are extracted, allowing a precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real time requirements and face adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed followed by an evaluation using a database with real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.

  4. Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features.

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-10-01

    Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is the mitotic count, which involves quantifying the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at multiple high power fields (HPFs) on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses or features learned with convolutional neural networks (CNN). Although handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely supervised feature generation methods, there is an appeal in attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. We present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color, and texture features). 
By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing the performance

  5. Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-03-01

    Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is mitotic count, which involves quantifying the number of cells in the process of dividing (i.e. undergoing mitosis) at a specific point in time. Currently mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely unsupervised feature generation methods, there is an appeal to attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by

  6. A Local Texture-Based Superpixel Feature Coding for Saliency Detection Combined with Global Saliency

    Directory of Open Access Journals (Sweden)

    Bingfei Nan

    2015-12-01

    Because saliency can serve as prior knowledge of image content, saliency detection has been an active research area in image segmentation, object detection, image semantic understanding and other relevant image-based applications. When detecting saliency in cluttered scenes, the detected salient object/region should not only be clearly distinguished from the background but should preferably also be informative in terms of complete contour and local texture details, to facilitate subsequent processing. In this paper, a Local Texture-based Region Sparse Histogram (LTRSH) model is proposed for saliency detection in cluttered scenes. This model uses a combination of local texture patterns, color distribution and contour information to encode the superpixels, characterizing the local features of the image for region-contrast computation. Combining this region contrast with the global saliency probability, a full-resolution saliency map, in which the detected salient object/region adheres more closely to its inherent features, is obtained on the basis of the corresponding high-level saliency spatial distribution as well as pixel-level saliency enhancement. Quantitative comparisons with five state-of-the-art saliency detection methods on benchmark datasets show that the proposed method improves detection performance in terms of the corresponding measurements.

  7. A HYBRID FILTER AND WRAPPER FEATURE SELECTION APPROACH FOR DETECTING CONTAMINATION IN DRINKING WATER MANAGEMENT SYSTEM

    Directory of Open Access Journals (Sweden)

    S. VISALAKSHI

    2017-07-01

    Feature selection is an important task in predictive modelling that helps identify irrelevant features in high-dimensional datasets. For this water contamination detection dataset, a standard wrapper algorithm alone cannot be applied because of its complexity. To overcome this computational complexity and lighten the process, a filter-wrapper based algorithm is proposed. In this work, reducing the feature space is a significant component of detecting water contamination. The main findings are as follows: (1) the primary goal is to speed up the feature selection process, so the proposed filter-based feature pre-selection is applied, which guarantees that useful data are unlikely to be discarded in the initial stage, as discussed briefly in this paper; (2) the resulting features are filtered again using a Genetic Algorithm coupled with a Support Vector Machine, which helps narrow the feature subset down to one with high accuracy and decreases the expense. Experimental results show that the proposed methods trim down redundant features effectively and achieve better classification accuracy.
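
    The GA-based wrapper stage can be sketched as a search over feature-subset bitmasks. In the paper the fitness would be SVM classification accuracy; here it is stubbed by an arbitrary scoring callable, which is an assumption of the sketch:

```python
import random

def ga_feature_select(n_features, fitness, pop=20, gens=30, seed=1):
    """Tiny genetic algorithm over feature-subset bitmasks. `fitness` maps a
    0/1 mask to a score (in practice, cross-validated SVM accuracy)."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(n_features)
            child[i] ^= 1                      # bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

    The filter stage described above would shrink `n_features` first, so the GA searches a much smaller bitmask space.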

  8. Using activity-related behavioural features towards more effective automatic stress detection.

    Directory of Open Access Journals (Sweden)

    Dimitris Giakoumis

    This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on processing appropriate video and accelerometer recordings of the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour-word test. Video, accelerometer and biosignal (electrocardiogram and galvanic skin response) recordings were collected from nineteen participants. An explorative study was then conducted, following a methodology mainly based on spatiotemporal descriptors (Motion History Images) extracted from the video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, was proposed and examined. Experimental evaluation showed that several of these behavioural features correlate significantly with self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, which are commonly based on biosignal processing.
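
    The Motion History Image descriptor mentioned above has a simple update rule: pixels where motion is detected are stamped with a timestamp value and all other pixels decay, so recent motion stays brightest. A generic sketch (the decay constant and threshold are illustrative):

```python
import numpy as np

def frame_motion(prev, cur, thresh=15):
    """Binary motion mask from a simple frame difference."""
    return np.abs(cur.astype(float) - prev.astype(float)) > thresh

def update_mhi(mhi, motion_mask, tau=10):
    """MHI update: moving pixels get value tau, the rest decay towards 0."""
    return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))
```

    Statistics of the resulting MHI (e.g., its moments over a time window) would then serve as activity-related behavioural features.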

  9. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...

  10. Fast region-based object detection and tracking using correlation of features

    CSIR Research Space (South Africa)

    Senekal, F

    2010-11-01

    Full Text Available A new method for object detection using region based characteristics is proposed. The method uses correlation between features over a region as a descriptor for the region. It is shown that this region descriptor can be successfully applied...

  11. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full-size hardwood lumber at industrial speeds and automatically detect features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  12. Harnessing the Power of GPUs to Speed Up Feature Selection for Outlier Detection

    Institute of Scientific and Technical Information of China (English)

    Fatemeh Azmandian; Ayse Yilmazer; Jennifer G Dy; Javed A Aslam; David R Kaeli

    2014-01-01

    Acquiring a set of features that emphasize the differences between normal data points and outliers can drastically facilitate the task of identifying outliers. In our work, we present a novel non-parametric evaluation criterion for filter-based feature selection which has an eye towards the final goal of outlier detection. The proposed method seeks the subset of features that represent the inherent characteristics of the normal dataset while forcing outliers to stand out, making them more easily distinguished by outlier detection algorithms. Experimental results on real datasets show the advantage of our feature selection algorithm compared with popular and state-of-the-art methods. We also show that the proposed algorithm is able to overcome the small sample space problem and perform well on highly imbalanced datasets. Furthermore, due to the highly parallelizable nature of the feature selection, we implement the algorithm on a graphics processing unit (GPU) to gain significant speedup over the serial version. The benefits of the GPU implementation are two-fold, as its performance scales very well in terms of the number of features, as well as the number of data points.

  13. Evaluation of image features and classification methods for Barrett's cancer detection using VLE imaging

    Science.gov (United States)

    Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.

    2017-03-01

    Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high-resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time-consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, a benchmark is performed for widely applied machine learning techniques and feature extraction methods. Second, three new features based on the clinical detection model are proposed, having superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine and using our modified Haralick features and optimal image cropping, obtaining an area under the receiver operating characteristic curve of 0.95, compared with 0.81 for the clinical prediction model. Optimal execution time is achieved using the proposed mean and median features, which are extracted at least a factor of 2.5 faster than alternative features with comparable performance.

  14. Exploiting Higher Order and Multi-modal Features for 3D Object Detection

    DEFF Research Database (Denmark)

    Kiforenko, Lilita

    2017-01-01

    . The initial work introduces a feature descriptor that uses edge categorisation in combination with a local multi-modal histogram descriptor in order to detect objects with little or no texture or surface variation. The comparison is performed with a state-of-the-art method, which is outperformed...... by the presented edge descriptor. The second work presents an approach for robust detection of multiple objects by combining feature descriptors that capture both surface and edge information. This work presents quantitative results, where the performance of the developed feature descriptor combination is compared......-of-the-art descriptor and to this date, constant improvements of it are presented. The evaluation of PPFs is performed on seven publicly available datasets and it presents not only the performance comparison towards other popularly used methods, but also investigations of the space of possible point pair relations...

  15. Detecting Service Chains and Feature Interactions in Sensor-Driven Home Network Services

    Directory of Open Access Journals (Sweden)

    Takuya Inada

    2012-06-01

    Sensor-driven services often cause chain reactions, since one service may generate an environmental impact that automatically triggers another service. We first propose a framework that can formalize and detect such service chains based on ECA (event, condition, action) rules. Although service chains can be a major source of feature interactions, not all service chains lead to harmful interactions. Therefore, we then propose a method that identifies feature interactions within the service chains. Specifically, we characterize the degree of deviation of every service chain by evaluating the gap between expected and actual service states. An experimental evaluation demonstrates that the proposed method successfully detects 11 service chains and 6 feature interactions within 7 practical sensor-driven services.
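
    The chain detection idea can be sketched as a search over a graph in which rule B follows rule A whenever A's environmental effect matches B's triggering event. The rule names and effects below are illustrative, not from the paper, and conditions are omitted for brevity:

```python
def find_chains(rules):
    """rules: dict name -> {"event": str, "effect": str}. Returns all maximal
    chains (paths) found by DFS over the effect -> event graph."""
    chains = []

    def dfs(path):
        last = rules[path[-1]]
        extended = False
        for name, r in rules.items():
            # rule `name` chains after the path if its trigger matches
            # the last rule's environmental effect (cycles are cut off)
            if name not in path and r["event"] == last["effect"]:
                extended = True
                dfs(path + [name])
        if not extended and len(path) > 1:
            chains.append(path)

    for name in rules:
        dfs([name])
    return chains
```

    The interaction-detection step would then score each returned chain by the gap between its expected and actual service states.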

  16. On the use of log-gabor features for subsurface object detection using ground penetrating radar

    Science.gov (United States)

    Harris, Samuel; Ho, K. C.; Zare, Alina

    2016-05-01

    … regions with a significant amount of metal debris. The challenge for the handheld GPR is to reduce the false alarm rate and limit the undesirable human operator effect. This paper proposes the use of log-Gabor features to improve detection performance. In particular, we apply 36 log-Gabor filters to the B-scan of the GPR data in the time domain to extract the edge behaviors of a prescreener alarm. The 36 log-Gabor filters cover the entire frequency plane with different bandwidths and orientations. The energy of each filter output forms an element of the feature vector, and an SVM is trained to perform target versus non-target classification. Experimental results using the handheld demonstrator data collected at a government site support the increase in detection performance obtained by using the log-Gabor features.
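
    As a rough illustration of the filtering stage, the numpy sketch below builds a small log-Gabor filter bank in the frequency domain and uses per-filter output energy as a feature vector; the scale/orientation counts and bandwidth constants are illustrative assumptions, not the paper's 36-filter design.

```python
# Minimal numpy sketch of a log-Gabor filter bank applied to a B-scan
# patch; parameter values below are conventional defaults, not the
# authors' tuned configuration.
import numpy as np

def log_gabor_features(img, n_scales=3, n_orients=4, min_wavelength=4.0,
                       mult=2.0, sigma_on_f=0.65, sigma_theta=np.pi / 8):
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0) at DC
    theta = np.arctan2(fy, fx)
    spectrum = np.fft.fft2(img)

    feats = []
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)   # centre frequency
        radial = np.exp(-(np.log(radius / f0) ** 2) /
                        (2 * np.log(sigma_on_f) ** 2))
        radial[0, 0] = 0.0                        # zero DC response
        for o in range(n_orients):
            t0 = o * np.pi / n_orients
            # wrapped angular distance to the filter orientation
            dt = np.angle(np.exp(1j * (theta - t0)))
            angular = np.exp(-dt ** 2 / (2 * sigma_theta ** 2))
            response = np.fft.ifft2(spectrum * radial * angular)
            feats.append(np.sum(np.abs(response) ** 2))  # filter energy
    return np.array(feats)

patch = np.random.default_rng(0).normal(size=(64, 64))
fv = log_gabor_features(patch)
print(fv.shape)   # → (12,)  one energy value per scale/orientation pair
```

    In a full system, one such vector per prescreener alarm would be fed to the SVM classifier.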

  17. Detection of explosive hazards using spectrum features from forward-looking ground penetrating radar imagery

    Science.gov (United States)

    Farrell, Justin; Havens, Timothy C.; Ho, K. C.; Keller, James M.; Ton, Tuan T.; Wong, David C.; Soumekh, Mehrdad

    2011-06-01

    Buried explosives have proven to be a challenging problem for which ground penetrating radar (GPR) has been shown to be effective. This paper discusses an explosive hazard detection algorithm for forward-looking GPR (FLGPR). The proposed algorithm uses the fast Fourier transform (FFT) to obtain spectral features of anomalies in the FLGPR imagery. Results show that the spectral characteristics of explosive hazards differ from those of background clutter and are useful for rejecting false alarms (FAs). A genetic algorithm (GA) is developed in order to select a subset of spectral features to produce a more generalized classifier. Furthermore, a GA-based K-nearest neighbor probability density estimator is employed in which targets and false alarms are used as training data to produce a two-class classifier. The experimental results of this paper use data collected by the US Army and show the effectiveness of spectrum-based features in the detection of explosive hazards.
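
    The GA-driven feature-subset selection can be sketched with a toy genetic algorithm; the synthetic data, Fisher-style fitness with a size penalty, and GA parameters below are illustrative assumptions, not the paper's classifier-based setup.

```python
# Toy genetic algorithm for feature-subset selection over synthetic
# "spectral" features; only the first 3 of 10 features are informative.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, 200)
X[y == 1, :3] += 2.0            # class shift on the informative features

def fitness(mask):
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    within = Xs[y == 0].var(0).sum() + Xs[y == 1].var(0).sum()
    # reward class separation, lightly penalise subset size
    return ((mu0 - mu1) ** 2).sum() / within - 0.01 * mask.sum()

pop = rng.random((30, X.shape[1])) < 0.5      # random boolean chromosomes
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]   # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05         # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print(np.flatnonzero(best))    # indices of the selected features
```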

  18. Change Detection Based on DSM and Image Features in Urban Areas

    Institute of Scientific and Technical Information of China (English)

    LIU Zhifang; ZHANG Jianqing; ZHANG Zuxun; FAN Hong

    2003-01-01

    On the basis of stereo image analysis, the change detection of man-made objects in urban areas is introduced. Information on the height of man-made objects can be applied to reinforce their change detection. By comparing the new and old DSMs, the changed regions are extracted. However, our aim is to detect changes of man-made objects in urban areas, and further in the potential areas, by means of line-feature matching and gradient direction histograms. Experiments based on aerial images from Japan have proven that the algorithm is correct and efficient.
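
    The DSM-comparison step can be sketched as a simple height-difference threshold; the 2.5 m threshold and toy DSMs below are assumptions for illustration, and the paper's subsequent line-feature matching and gradient-direction-histogram verification are omitted.

```python
# Hedged sketch: change-region extraction by differencing an old and a
# new digital surface model (DSM).
import numpy as np

def changed_regions(dsm_old, dsm_new, min_height_change=2.5):
    """Boolean mask of cells whose height changed by more than the
    threshold (in metres), e.g. demolished or newly built structures."""
    diff = dsm_new - dsm_old
    return np.abs(diff) > min_height_change

old = np.zeros((50, 50))
new = old.copy()
new[10:20, 10:20] += 12.0       # a hypothetical new building, 12 m tall
mask = changed_regions(old, new)
print(mask.sum())               # → 100 changed cells
```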

  19. The impact of signal normalization on seizure detection using line length features.

    Science.gov (United States)

    Logesparan, Lojini; Rodriguez-Villegas, Esther; Casson, Alexander J

    2015-10-01

    Accurate automated seizure detection remains a desirable but elusive target for many neural monitoring systems. While much attention has been given to the different feature extractions that can be used to highlight seizure activity in the EEG, very little formal attention has been given to the normalization that these features are routinely paired with. This normalization is essential in patient-independent algorithms to correct for broad-level differences in the EEG amplitude between people, and in patient-dependent algorithms to correct for amplitude variations over time. It is crucial, however, that the normalization used does not have a detrimental effect on the seizure detection process. This paper presents the first formal investigation into the impact of signal normalization techniques on seizure discrimination performance when using the line length feature to emphasize seizure activity. Comparing five normalization methods, based upon the mean, median, standard deviation, signal peak and signal range, we demonstrate differences in seizure detection accuracy (assessed as the area under a sensitivity-specificity ROC curve) of up to 52 %. This is despite the same analysis feature being used in all cases. Further, changes in performance of up to 22 % are present depending on whether the normalization is applied to the raw EEG itself or directly to the line length feature. Our results highlight the median decaying memory as the best current approach for providing normalization when using line length features, and they quantify the under-appreciated challenge of providing signal normalization that does not impair seizure detection algorithm performance.
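
    The line-length feature and its normalisation can be sketched as follows; the streaming estimator below (a sign-based decaying-memory update) is an assumed stand-in for the paper's median decaying memory, and the synthetic EEG is illustrative.

```python
# Line-length feature per window, divided by a slowly adapting
# background level; the update rule is an illustrative assumption.
import numpy as np

def line_length(x, win):
    """Line length per non-overlapping window: sum of absolute
    sample-to-sample differences."""
    return np.array([np.abs(np.diff(x[i:i + win])).sum()
                     for i in range(0, len(x) - win + 1, win)])

def normalise(feature, step=0.01):
    m = feature[0]                 # running median-like background level
    out = np.empty_like(feature, dtype=float)
    for i, f in enumerate(feature):
        out[i] = f / m if m > 0 else 0.0
        m += step * np.sign(f - m) * m   # decaying-memory update
    return out

rng = np.random.default_rng(2)
eeg = rng.normal(size=2000)
eeg[1000:1200] *= 6.0              # synthetic high-amplitude "seizure" burst
feat = line_length(eeg, win=200)
print(np.round(normalise(feat), 2))   # the burst window stands out
```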

  20. GANN: Genetic algorithm neural networks for the detection of conserved combinations of features in DNA

    Directory of Open Access Journals (Sweden)

    Beiko Robert G

    2005-02-01

    Full Text Available Abstract Background The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results GANN (available at http://bioinformatics.org.au/gann is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.

  1. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieves pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract features necessary to improve the MLSPC accuracy to pixel level.
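
    The corner-detection step can be illustrated with a plain-numpy Harris response; the smoothing window, constant k and the synthetic test image are conventional illustrative choices, not the adaptive variant the authors implemented.

```python
# Minimal numpy Harris corner response on a synthetic "road marking".
import numpy as np

def harris_response(img, k=0.04, win=3):
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    pad = win // 2
    def box(a):          # box-filter sum over the local window
        s = np.pad(a, pad)
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += s[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out
    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2      # large positive values at corners

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                # a bright square: four corners
r = harris_response(img)
cy, cx = np.unravel_index(np.argmax(r), r.shape)
print(cy, cx)                        # strongest response at a square corner
```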

  2. Fast detection of covert visuospatial attention using hybrid N2pc and SSVEP features

    Science.gov (United States)

    Xu, Minpeng; Wang, Yijun; Nakanishi, Masaki; Wang, Yu-Te; Qi, Hongzhi; Jung, Tzyy-Ping; Ming, Dong

    2016-12-01

    Objective. Detecting the shift of covert visuospatial attention (CVSA) is vital for gaze-independent brain-computer interfaces (BCIs), which might be the only communication approach for severely disabled patients who cannot move their eyes. Although previous studies had demonstrated that it is feasible to use CVSA-related electroencephalography (EEG) features to control a BCI system, the communication speed remains very low. This study aims to improve the speed and accuracy of CVSA detection by fusing EEG features of N2pc and steady-state visual evoked potential (SSVEP). Approach. A new paradigm was designed to code the left and right CVSA with the N2pc and SSVEP features, which were then decoded by a classification strategy based on canonical correlation analysis. Eleven subjects were recruited to perform an offline experiment in this study. Temporal waves, amplitudes, and topographies for brain responses related to N2pc and SSVEP were analyzed. The classification accuracy derived from the hybrid EEG features (SSVEP and N2pc) was compared with those using the single EEG features (SSVEP or N2pc). Main results. The N2pc could be significantly enhanced under certain conditions of SSVEP modulations. The hybrid EEG features achieved significantly higher accuracy than the single features. It obtained an average accuracy of 72.9% by using a data length of 400 ms after the attention shift. Moreover, the average accuracy reached ~80% (peak values above 90%) when using 2 s long data. Significance. The results indicate that the combination of N2pc and SSVEP is effective for fast detection of CVSA. The proposed method could be a promising approach for implementing a gaze-independent BCI.
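
    The CCA-based decoding strategy can be illustrated for the SSVEP component alone; the flicker frequencies, sampling rate and synthetic single-channel signal below are assumptions for illustration.

```python
# Sketch of SSVEP frequency detection with canonical correlation
# analysis (CCA) against sinusoidal reference templates.
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def reference(freq, fs, n, harmonics=2):
    """Sin/cos templates at the flicker frequency and its harmonics."""
    t = np.arange(n) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

fs, n = 250, 500                      # 2 s of data at 250 Hz
rng = np.random.default_rng(3)
t = np.arange(n) / fs
# synthetic single-channel EEG while attending a 12 Hz stimulus
eeg = np.sin(2 * np.pi * 12 * t) + 0.8 * rng.normal(size=n)
scores = {f: cca_corr(eeg[:, None], reference(f, fs, n)) for f in (10, 12)}
print(max(scores, key=scores.get))    # → 12
```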

  3. Detecting paralinguistic events in audio stream using context in features and probabilistic decisions☆

    Science.gov (United States)

    Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok; Narayanan, Shrikanth

    2017-01-01

    Non-verbal communication involves encoding, transmission and decoding of non-lexical cues and is realized using vocal (e.g. prosody) or visual (e.g. gaze, body language) channels during conversation. These cues perform the function of maintaining conversational flow, expressing emotions, and marking personality and interpersonal attitude. In particular, non-verbal cues in speech such as paralanguage and non-verbal vocal events (e.g. laughters, sighs, cries) are used to nuance meaning and convey emotions, mood and attitude. For instance, laughters are associated with affective expressions while fillers (e.g. um, ah) are used to hold the floor during a conversation. In this paper we present an automatic non-verbal vocal event detection system focusing on the detection of laughter and fillers. We extend our system presented during the Interspeech 2013 Social Signals Sub-challenge (the winning entry in the challenge) for frame-wise event detection and test several schemes for incorporating local context during detection. Specifically, we incorporate context at two separate levels in our system: (i) the raw frame-wise features and (ii) the output decisions. Furthermore, our system processes the output probabilities based on a few heuristic rules in order to reduce erroneous frame-based predictions. Our overall system achieves an Area Under the Receiver Operating Characteristics curve of 95.3% for detecting laughter and 90.4% for fillers on the test set drawn from the data specifications of the Interspeech 2013 Social Signals Sub-challenge. We perform further analysis to understand the interrelation between the features and the obtained results. Specifically, we conduct a feature sensitivity analysis and correlate it with each feature's stand-alone performance. The observations suggest that the trained system is more sensitive to features carrying higher discriminability, with implications towards a better system design. PMID:28713197

  4. Glaucoma detection using novel optic disc localization, hybrid feature set and classification techniques.

    Science.gov (United States)

    Akram, M Usman; Tariq, Anam; Khalid, Shehzad; Javed, M Younus; Abbas, Sarmad; Yasin, Ubaid Ullah

    2015-12-01

    Glaucoma is a chronic and irreversible neuro-degenerative disease in which the neuro-retinal nerve that connects the eye to the brain (optic nerve) is progressively damaged, and patients suffer from vision loss and blindness. The timely detection and treatment of glaucoma is crucial to save the patient's vision. Computer-aided diagnostic systems are used for automated detection of glaucoma that calculate the cup-to-disc ratio from colored retinal images. In this article, we present a novel method for early and accurate detection of glaucoma. The proposed system consists of preprocessing, optic disc segmentation, extraction of features from the optic disc region of interest and classification for detection of glaucoma. The main novelty of the proposed method lies in the formation of a feature vector which consists of spatial and spectral features along with the cup-to-disc ratio and rim-to-disc ratio, and the modeling of a novel medoids-based classifier for accurate detection of glaucoma. The performance of the proposed system is tested using publicly available fundus image databases along with one locally gathered database. Experimental results using a variety of publicly available and local databases demonstrate the superiority of the proposed approach as compared to the competitors.

  5. Feature selection and definition for contours classification of thermograms in breast cancer detection

    Science.gov (United States)

    Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł

    2016-09-01

    This contribution introduces a method for detecting cancer pathologies from breast skin temperature distribution images. Thermosensitive foils applied to the breast skin are used to create thermograms, which display the amount of infrared energy emitted by all breast cells. Significant foci of hyperthermia or inflammation are typical of cancer cells. These foci can be recognized on thermograms as contours, i.e. areas of higher temperature. Every contour can be converted to a feature set that describes it, using the raw, central, Hu, outline, Fourier and colour moments of the image pixels. This paper also defines a new way of describing a set of contours through their neighbourhood relations. The contribution moreover introduces a way of ranking and selecting the most relevant features. The authors used a neural network with Gevrey's concept and recursive feature elimination to estimate feature importance.
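
    As a sketch of the moment-based contour description, the numpy snippet below computes raw and central moments and Hu's first invariant for a binary region; the test shapes are hypothetical, and the full feature set (outline, Fourier and colour moments) is omitted.

```python
# Raw moments, central moments and Hu's first invariant, numpy only.
import numpy as np

def raw_moment(img, p, q):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return (img * x ** p * y ** q).sum()

def hu_first(img):
    m00 = raw_moment(img, 0, 0)
    cx, cy = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    mu20 = (img * (x - cx) ** 2).sum()     # central moments
    mu02 = (img * (y - cy) ** 2).sum()
    # normalised central moments (scale invariant for p + q = 2)
    eta20, eta02 = mu20 / m00 ** 2, mu02 / m00 ** 2
    return eta20 + eta02                   # Hu's first invariant I1

img = np.zeros((40, 40))
img[10:30, 15:25] = 1.0           # a hypothetical warm "contour" region
big = np.zeros((80, 80))
big[20:60, 30:50] = 1.0           # the same shape at twice the scale
print(round(hu_first(img), 2), round(hu_first(big), 2))  # → 0.21 0.21
```

    The invariant is (up to discretisation) unchanged under scaling, which is what makes such moments useful contour descriptors.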

  6. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    Science.gov (United States)

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied in order to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determining the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves are given for all methods, and 100% detection sensitivity was reached with all methods except naive Bayes.

  7. Proficient Feature Extraction Strategy for Performance Enhancement of NN Based Early Breast Tumor Detection

    Directory of Open Access Journals (Sweden)

    Khondker Jahid Reza

    2014-01-01

    Full Text Available Ultra-wideband is one of the promising microwave imaging techniques for breast tumor prognosis. The basic principle of tumor detection depends on the dielectric property discrepancies between healthy and tumorous tissue. Usually, the tumor-affected tissues scatter more signal than the healthy ones, and the received pulses are used for early tumor detection. A feedforward backpropagation neural network (NN) has previously been used in research showing detection of tumors down to 1 mm in radius with 95.8% accuracy. This paper introduces an efficient feature extraction method that further improves the performance by considering four main features of the backpropagation NN, increasing the accuracy to 99.99%. The strategy classifies normal and tumor-affected breasts with 100% accuracy at an early stage. It also enhances the training and testing performance by reducing the required duration. The overall performance of 99.99% was verified using thirteen different tumor sizes.

  8. Automatic solar feature detection using image processing and pattern recognition techniques

    Science.gov (United States)

    Qu, Ming

    The objective of the research in this dissertation is to develop a software system to automatically detect and characterize solar flares, filaments and Coronal Mass Ejections (CMEs), the core of so-called solar activity. These tools will assist us to predict space weather caused by violent solar activity. Image processing and pattern recognition techniques are applied to this system. For automatic flare detection, advanced pattern recognition techniques such as the Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Support Vector Machine (SVM) are used. By tracking the entire process of flares, the motion properties of two-ribbon flares are derived automatically. In the applications of solar filament detection, the Stabilized Inverse Diffusion Equation (SIDE) is used to enhance and sharpen filaments; a new method for automatic threshold selection is proposed to extract filaments from the background; an SVM classifier with nine input features is used to differentiate between sunspots and filaments. Once a filament is identified, morphological thinning, pruning, and adaptive edge linking methods are applied to determine filament properties. Furthermore, a filament matching method is proposed to detect filament disappearance. The automatic detection and characterization of flares and filaments have been successfully applied on Halpha full-disk images that are continuously obtained at Big Bear Solar Observatory (BBSO). For automatically detecting and classifying CMEs, image enhancement, segmentation, and pattern recognition techniques are applied to Large Angle Spectrometric Coronagraph (LASCO) C2 and C3 images. The processed LASCO and BBSO images are saved to a file archive, and the physical properties of detected solar features such as intensity and speed are recorded in our database. Researchers are able to access the solar feature database and analyze the solar data efficiently and effectively. The detection and characterization system greatly improves …

  9. Intrusion Detection In Mobile Ad Hoc Networks Using GA Based Feature Selection

    CERN Document Server

    Nallusamy, R; Duraiswamy, K

    2009-01-01

    Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology and lack of centralized monitoring point. It is important to search new architecture and mechanisms to protect the wireless networks and mobile computing application. IDS analyze the network activities by means of audit data and use patterns of well-known attacks or normal profile to detect potential attacks. There are two methods to analyze: misuse detection and anomaly detection. Misuse detection is not effective against unknown attacks and therefore, anomaly detection method is used. In this approach, the audit data is collected from each mobile node after simulating the attack and compared with the normal behavior of the system. If there is any deviation from normal behavior then the event is considered as an attack. Some of the features of collected audi...

  10. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to the state-of-the-art methods for the task.

  11. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    Science.gov (United States)

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using Steady State Visual Evoked Potential (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by examining only the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP arising when misclassification is observed by a user operating a non-invasive BMI to control a robot in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-band energies. The experimental results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy of up to 97% for 50 EEG segments using a 2-class SVM classifier.

  12. A Dynamic Feature-Based Method for Hybrid Blurred/Multiple Object Detection in Manufacturing Processes

    Directory of Open Access Journals (Sweden)

    Tsun-Kuo Lin

    2016-01-01

    Full Text Available Vision-based inspection has been applied for quality control and product sorting in manufacturing processes. Blurred or multiple objects are common causes of poor performance in conventional vision-based inspection systems, and detecting hybrid blurred/multiple objects has long been a challenge in manufacturing. For example, single-feature-based algorithms might fail to extract features exactly when concurrently detecting hybrid blurred/multiple objects. To resolve this problem, this study proposes a novel vision-based inspection algorithm that selects a dynamic feature-based method, on the basis of a multiclassifier of support vector machines (SVMs), for inspecting hybrid blurred/multiple-object images. The proposed algorithm dynamically selects suitable inspection schemes for classifying the hybrid images. The inspection schemes include discrete wavelet transform, spherical wavelet transform, moment invariants, and edge-feature-descriptor-based classification methods. The classification methods for single and multiple objects are adaptive region growing- (ARG-) based and local adaptive region growing- (LARG-) based learning approaches, respectively. The experimental results demonstrate that the proposed algorithm can dynamically select suitable inspection schemes by applying a selection algorithm that uses SVMs to classify hybrid blurred/multiple-object samples, and then applies suitable feature-based schemes, on the basis of the classification results, employing the ARG/LARG-based methods to inspect the objects. The method improves on conventional methods for inspecting hybrid blurred/multiple objects and achieves high recognition rates for such objects in manufacturing processes.

  13. Solid waste bin level detection using gray level co-occurrence matrix feature extraction approach.

    Science.gov (United States)

    Arebey, Maher; Hannan, M A; Begum, R A; Basri, Hassan

    2012-08-15

    This paper presents solid waste bin level detection and classification using gray level co-occurrence matrix (GLCM) feature extraction methods. GLCM parameters, such as displacement, d, quantization, G, and the number of textural features, are investigated to determine the best parameter values for the bin images. The parameter values and number of texture features are used to form the GLCM database. The most appropriate features collected from the GLCM are then used as inputs to multi-layer perceptron (MLP) and K-nearest neighbor (KNN) classifiers for bin image classification and grading. The classification and grading performance was evaluated for the DB1, DB2 and DB3 feature databases with both MLP and KNN classifiers. The results demonstrated that the KNN classifier, at K = 3, d = 1 and maximum G values, performs better than the MLP classifier on the same database. Based on the results, this method has the potential to be used in solid waste bin level classification and grading to provide a robust solution for solid waste bin level detection, monitoring and management.
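
    A minimal numpy sketch of GLCM feature extraction, assuming horizontal displacement d = 1 and G = 8 quantisation levels (example values, not the paper's tuned parameters):

```python
# Grey-level co-occurrence matrix and three classic texture features.
import numpy as np

def glcm_features(img, levels=8, d=1):
    # quantise the 8-bit image to `levels` grey levels
    q = np.clip(np.floor(img.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    glcm = np.zeros((levels, levels))
    # count co-occurrences at horizontal displacement d
    for i, j in zip(q[:, :-d].ravel(), q[:, d:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()                 # normalise to probabilities
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    return {
        "contrast": (p * (ii - jj) ** 2).sum(),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1.0 + np.abs(ii - jj))).sum(),
    }

flat = np.full((32, 32), 128, dtype=np.uint8)       # uniform "empty bin"
noisy = np.random.default_rng(4).integers(0, 256, (32, 32), dtype=np.uint8)
f_flat, f_noisy = glcm_features(flat), glcm_features(noisy)
print(f_flat["energy"], f_flat["contrast"])   # → 1.0 0.0
```

    Such per-image feature vectors would then be fed to the MLP or KNN classifier for grading.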

  14. Feature-based attentional tuning during biological motion detection measured with SSVEP.

    Science.gov (United States)

    Hasan, Rakibul; Srinivasan, Ramesh; Grossman, Emily D

    2017-08-01

    Performance in detection tasks can be improved by directing attention to task-relevant features. In this study, we evaluate the direction tuning of selective attention to motion features when observers detect point-light biological motion in noise. Feature-based attention strategy is assessed by capitalizing on the sensitivity of unattended steady-state visual-evoked potential (SSVEP) to the spreading of feature-based attention to unattended regions of space. Participants monitored for the presence of a point-light walker embedded in uniform dynamic noise in the center of the screen. We analyzed the phase-locked electroencephalogram response to a flickering random-dot kinematogram (RDK) in an unattended peripheral annulus for the 1 s prior to the onset of the target. We found the highest SSVEP power to originate from electrodes over posterior parietal cortex (PPC), with power modulated by the direction of motion in the unattended annulus. The SSVEP was strongest on trials in which the unattended motion was opposite the facing direction of the walker, consistent with the backstroke of the feet and with the global direction of perceived background motion from a translating walker. Coherence between electrodes over PPC and other brain regions successfully predicted individual participant's d-prime, with the highest regression coefficients at electrodes over ventrolateral prefrontal cortex (VLPFC). The findings are evidence that functional connectivity between frontal and parietal cortex promote perceptual feature-based attention, and subsequent perceptual sensitivity, when segregating point-light figures from masking surround.

  15. Horizon Detection in Seismic Data: An Application of Linked Feature Detection from Multiple Time Series

    Directory of Open Access Journals (Sweden)

    Robert G. Aykroyd

    2014-01-01

    Full Text Available Seismic studies are a key stage in the search for large scale underground features such as water reserves, gas pockets, or oil fields. Sound waves, generated on the earth's surface, travel through the ground before being partially reflected at interfaces between regions with high contrast in acoustic properties, such as between liquid and solid. After returning to the surface, the reflected signals are recorded by acoustic sensors. Importantly, reflections from different depths return at different times, and hence the data contain depth information as well as position. A strong reflecting interface, called a horizon, indicates a stratigraphic boundary between two different regions, and it is the location of these horizons which is of key importance. This paper proposes a simple approach for the automatic identification of horizons, which avoids computationally complex and time-consuming 3D reconstruction. The new approach combines nonparametric smoothing and classification techniques which are applied directly to the seismic data, with novel graphical representations of the intermediate steps introduced. For each sensor position, potential horizon locations are identified along the corresponding time-series traces. These candidate locations are then examined across all traces, and when consistent patterns occur, the points are linked together to form coherent horizons.
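
    The linking step can be sketched as chaining per-trace candidate picks across adjacent sensor positions under a time tolerance; the tolerance and synthetic picks below are illustrative assumptions, not the paper's smoothing-and-classification pipeline.

```python
# Greedy linking of per-trace reflection-time candidates into horizons.
def link_horizons(candidates, tol=2.0):
    """candidates[i] is a list of reflection times for trace i.
    Returns lists of (trace, time) points forming coherent horizons."""
    horizons = []
    for trace, times in enumerate(candidates):
        for t in times:
            for h in horizons:
                last_trace, last_time = h[-1]
                # extend a horizon that ended on the previous trace
                if last_trace == trace - 1 and abs(last_time - t) <= tol:
                    h.append((trace, t))
                    break
            else:
                horizons.append([(trace, t)])   # start a new candidate
    # keep only chains consistent across several traces
    return [h for h in horizons if len(h) >= 3]

# a gently dipping horizon near 50 ms plus scattered spurious picks
cands = [[50.0, 80.0], [51.0], [51.5, 20.0], [52.5], [53.0, 95.0]]
for h in link_horizons(cands):
    print(h)   # only the consistent ~50 ms horizon survives
```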

  16. Detection of Cardiac Abnormalities from Multilead ECG using Multiscale Phase Alternation Features.

    Science.gov (United States)

    Tripathy, R K; Dandapat, S

    2016-06-01

    Cardiac activities such as the depolarization and relaxation of the atria and ventricles are observed in the electrocardiogram (ECG). Changes in the morphological features of the ECG are symptoms of particular heart pathologies, and it is a cumbersome task for medical experts to visually identify any subtle changes during 24 hours of ECG recording. Therefore, automated analysis of the ECG signal is needed for accurate detection of cardiac abnormalities. In this paper, a novel method for automated detection of cardiac abnormalities from multilead ECG is proposed. The method uses multiscale phase alternation (PA) features of multilead ECG and two classifiers, k-nearest neighbor (KNN) and fuzzy KNN, for classification of bundle branch block (BBB), myocardial infarction (MI), heart muscle defect (HMD) and healthy control (HC). The dual tree complex wavelet transform (DTCWT) is used to decompose the ECG signal of each lead into complex wavelet coefficients at different scales. The phase of the complex wavelet coefficients is computed, and the PA values at each wavelet scale are used as features for detection and classification of cardiac abnormalities. A publicly available multilead ECG database (PTB database) is used for testing the proposed method. The experimental results show that the proposed multiscale PA features and the fuzzy KNN classifier have better performance for detection of cardiac abnormalities, with sensitivity values of 78.12%, 80.90% and 94.31% for the BBB, HMD and MI classes. The sensitivity value of the proposed method for the MI class is compared with state-of-the-art techniques for multilead ECG.

  17. Bathymetric Map of the Bering/Chukchi Sea

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Two bathymetric maps were developed by the U.S. Geological Survey, one for the Chukchi Sea and Arctic Ocean, and one for the Aleutian Trench and Bering Sea. The 2...

  18. Improving a bathymetric resurvey policy with observed sea floor dynamics

    NARCIS (Netherlands)

    Dorst, L.L.; Roos, P.C.; Hulscher, S.J.M.H.

    2013-01-01

    Bathymetric resurveying in shallow seas is a costly process with limited resources, yet necessary for adequate nautical charts and therefore crucial for safe navigation. An important factor in an efficient resurvey policy is the type and size of sea floor dynamics. We propose four indicators, which

  20. Detailed bathymetric surveys in the central Indian Basin

    Digital Repository Service at National Institute of Oceanography (India)

    Kodagali, V.N.; KameshRaju, K.A.; Ramprasad, T.; George, P.; Jaisankar, S.

    Over 420,000 line kilometers of echo-sounding data was collected in the Central Indian Basin. This data was digitized, merged with navigation data and a detailed bathymetric map of the Basin was prepared. The Basin can be broadly classified...

  1. Robust and fast license plate detection based on the fusion of color and edge feature

    Science.gov (United States)

    Cai, De; Shi, Zhonghan; Liu, Jin; Hu, Chuanping; Mei, Lin; Qi, Li

    2014-11-01

    Extracting a license plate is an important stage in automatic vehicle identification. Image degradation and the computational intensity of the task make it difficult. In this paper, a robust and fast license plate detection method based on the fusion of color and edge features is proposed. Based on the dichromatic reflection model, two new color ratios computed from the RGB color model are introduced and proved to be color invariants. The global color feature extracted by the new color invariants improves the method's robustness, while the local Sobel edge feature guarantees its accuracy. In the experiments, the detection performance is good: the results show that the method is robust to illumination, object geometry and disturbances around the license plates, and it can detect license plates even when the color of the car body is the same as that of the plates. The processing time for an image of 1000×1000 pixels is about 0.2 s. In comparison, the performance of the new ratios is comparable to that of the commonly used HSI color model.
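
The edge half of the fusion is standard; a minimal Sobel magnitude plus a pair of illustrative RGB ratios might look like the sketch below. The paper's exact color invariants are not given in the abstract, so `color_ratios` is an invented stand-in.

```python
def sobel_edge(img):
    """|Gx| + |Gy| Sobel magnitude for a 2-D list of grey values (local edge cue)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

def color_ratios(r, g, b, eps=1e-9):
    """Two illustrative per-pixel ratios from RGB -- stand-ins, not the paper's
    proven invariants."""
    return r / (g + eps), r / (b + eps)
```

A fusion detector would threshold the global color cue first and confirm candidates with the local Sobel response.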

  2. Efficient epileptic seizure detection by a combined IMF-VoE feature.

    Science.gov (United States)

    Qi, Yu; Wang, Yueming; Zheng, Xiaoxiang; Zhang, Jianmin; Zhu, Junming; Guo, Jianping

    2012-01-01

    Automatic seizure detection from the electroencephalogram (EEG) plays an important role in an on-demand closed-loop therapeutic system. A new feature, called IMF-VoE, is proposed to predict the occurrence of seizures. The IMF-VoE feature combines three intrinsic mode functions (IMFs) from the empirical mode decomposition of an EEG signal with the variance of the range between the upper and lower envelopes (VoE) of the signal. These multiple cues encode the intrinsic characteristics of seizure states and are thus able to distinguish them from the background. The feature is tested on 80.4 hours of EEG data containing 10 seizures from 4 patients. A sensitivity of 100% is obtained with a low false detection rate of 0.16 per hour. Average time delays are 19.4 s, 13.2 s and 10.7 s at false detection rates of 0.16, 0.27 and 0.41 per hour respectively, when different thresholds are used. The result is competitive among recent studies. In addition, since the IMF-VoE feature is compact, the detection system is computationally efficient and able to run in real time.
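
A crude version of the VoE part of the feature (variance of the range between upper and lower envelopes) might look like this. Running max/min windows stand in for the true spline envelopes, and the EMD/IMF step is omitted; window size and names are invented.

```python
from statistics import pvariance

def envelope_range(x, w=2):
    """Range between crude upper and lower envelopes (running max/min, window w)."""
    return [max(x[max(0, i - w):i + w + 1]) - min(x[max(0, i - w):i + w + 1])
            for i in range(len(x))]

def voe(x, w=2):
    """Variance of the envelope range -- large during high-amplitude seizure bursts."""
    return pvariance(envelope_range(x, w))
```

On a flat signal the envelope range is constant, so `voe` is zero; a bursty signal drives it up, which is the cue the detector thresholds.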

  3. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Science.gov (United States)

    Flach, Milan; Gans, Fabian; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus; Rodner, Erik; Bathiany, Sebastian; Bodesheim, Paul; Guanche, Yanira; Sippel, Sebastian; Mahecha, Miguel D.

    2017-08-01

    Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to automatically detect anomalies
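
The finding that feature extraction (e.g., subtracting the seasonal cycle) matters more than the detector choice can be illustrated with a minimal sketch combining deseasonalization with the k-nearest-neighbours mean-distance score. The toy data are invented, not the paper's artificial experiment.

```python
def deseasonalize(series, period):
    """Subtract the mean seasonal cycle (the feature-extraction step)."""
    cycle = [sum(series[i::period]) / len(series[i::period]) for i in range(period)]
    return [x - cycle[i % period] for i, x in enumerate(series)]

def knn_mean_distance(values, k=3):
    """Anomaly score per point: mean distance to its k nearest neighbours."""
    scores = []
    for i, v in enumerate(values):
        d = sorted(abs(v - u) for j, u in enumerate(values) if j != i)
        scores.append(sum(d[:k]) / k)
    return scores

series = [1, 2, 1, 2, 1, 2, 9, 2]          # seasonal signal with one anomaly
scores = knn_mean_distance(deseasonalize(series, period=2))
print(scores.index(max(scores)))            # the anomalous time step
```

Without the deseasonalization step, normal seasonal swings would compete with the true anomaly for the highest score.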

  4. Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection

    Directory of Open Access Journals (Sweden)

    Park Jinho

    2012-06-01

    Full Text Available Abstract Background Myocardial ischemia can develop into more serious diseases. Detecting the ischemic syndrome in the electrocardiogram (ECG) early, accurately and automatically can prevent it from becoming catastrophic. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which comprises 367 ischemic ST episodes in 90 records. We first remove baseline wandering, and detect the time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used to differentiate ST episodes from normal: (1) the area between the QRS offset and T-peak points, (2) the normalized and signed sum from the QRS offset to the effective zero-voltage point, and (3) the slope from the QRS onset to offset point. We average the feature values over five successive beats to reduce the effect of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm with kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively; the KDE classifier detects 349 of the 367 ischemic ST episodes. Sensitivity and specificity for SVM were 0.941 and 0.923, respectively; the SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques for removing baseline wandering and detecting the time positions of QRS complexes by the discrete wavelet transform, and explicit feature extraction from the morphology of ECG waveforms. It was shown that the selected features were sufficient to discriminate ischemic ST episodes from normal ones. We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical
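
The five-beat averaging and the KDE-based decision rule could be sketched like this. It is a 1-D toy with a fixed bandwidth, whereas the paper selects bandwidths automatically; all names are illustrative.

```python
import math

def smooth_beats(features, n=5):
    """Average each feature over windows of five successive beats to damp outliers."""
    return [[sum(col) / n for col in zip(*features[i:i + n])]
            for i in range(0, len(features) - n + 1)]

def kde_score(x, samples, h):
    """Gaussian kernel density estimate of 1-D training samples, evaluated at x."""
    return sum(math.exp(-((x - s) / h) ** 2 / 2) for s in samples) / (
        len(samples) * h * math.sqrt(2 * math.pi))

def classify(x, normal, ischemic, h=0.5):
    """Label by whichever class-conditional density is higher at x."""
    return "ischemic" if kde_score(x, ischemic, h) > kde_score(x, normal, h) else "normal"
```

In the full method each beat would carry the three morphological features rather than a single scalar, but the decision rule is the same.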

  5. Effect of radar undesirable characteristics on the performance of spectral feature landmine detection technique

    Science.gov (United States)

    Ho, K. C.; Gader, P. D.; Wilson, J. N.; Frigui, H.

    2010-04-01

    A factor that can affect the performance of ground penetrating radar for landmine detection is the radar self-signature, which is created by the internal coupling of the radar itself and appears constant across scans. Although not varying much, the radar self-signature can create hyperbolic shapes or anomaly patterns after ground alignment, thereby increasing the number of false detections. This paper examines the effect of the radar self-signature on the performance of the subspace spectral feature landmine detection algorithm. Experimental results in the presence of strong radar self-signatures are given, and performance is compared with a pre-screener based on anomaly detection.

  6. Flying control of small-type helicopter by detecting its in-air natural features

    Directory of Open Access Journals (Sweden)

    Chinthaka Premachandra

    2015-05-01

    Full Text Available Control of a small-type helicopter is an interesting research area in unmanned aerial vehicle development. This study aims to detect a typical helicopter, unequipped with markers, as a means of resolving the various issues of prior studies. Accordingly, we propose a method of detecting the helicopter's location and pose using an infrastructure camera that recognizes its in-air natural features, such as the ellipses traced by the rotation of the helicopter's propellers. A single-rotor helicopter was used as the controlled airframe in our experiments. Helicopter location is measured by detecting the center of the main rotor ellipse, and pose is measured from the relationship between the main rotor ellipse and the tail rotor ellipse. Using these detection results, we confirmed through experiments that hovering control of the helicopter is possible.

  7. Infrared small target's detection and identification with moving platform based on motion features

    Science.gov (United States)

    Jia, Yan; Zou, Xu; Zhong, Sheng; Lu, Hongqiang

    2015-10-01

    The detection and tracking of infrared small targets are important parts of automatic target recognition. When the camera platform equipped with an infrared camera moves, the small target's position change in the imaging plane is affected by the composite motion of the target and the platform. Traditional detection and tracking algorithms may lose the small target and cause follow-up detection and tracking to fail because they do not consider the camera platform's movement. Moreover, when small targets with different motion features exist in the camera's view, some algorithms cannot distinguish them by motion because there are no trajectories in a unified coordinate system, which may leave true small targets undetected or detected incorrectly. To solve these problems, we present a method for the moving camera platform. First, the camera platform's motion information is obtained from inertial measurement values, and the platform's own motion is then removed by means of a coordinate transformation. Next, the trajectories of small targets with different motion features are estimated from their position changes in the same imaging-plane coordinate system. Finally, different small targets are recognized preliminarily by their different trajectories. Experimental results show that this method can improve the small target's detection probability. Furthermore, when the camera platform fails to track the small target, it is possible to predict the target's position in the next frame based on the fitted motion equation and realize sustained and stable tracking.
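
The decoupling step, rotating per-frame positions into one world-fixed frame using IMU-derived yaw, can be sketched as a 2-D rotation only. This is a hypothetical simplification of the full coordinate transformation described in the abstract.

```python
import math

def to_world(track_px, yaw_per_frame):
    """Rotate per-frame image-plane positions by the platform's yaw (from the
    IMU) so that all points live in one world-fixed coordinate system."""
    world = []
    for (x, y), yaw in zip(track_px, yaw_per_frame):
        c, s = math.cos(yaw), math.sin(yaw)
        world.append((c * x - s * y, s * x + c * y))
    return world
```

A target that is stationary in the world but appears to move in the image because the platform rotates maps back to a single world point, so its trajectory (and hence its motion feature) can be estimated consistently.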

  8. Pavement crack detection combining non-negative feature with fast LoG in complex scene

    Science.gov (United States)

    Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu

    2015-12-01

    Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Due to these unfavorable factors, existing crack detection methods find it difficult to distinguish cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection combining a non-negative feature with a fast LoG filter is proposed. The two key novelties and benefits of this new approach are that (1) image pixel gray-value compensation is used to acquire a uniform image, and (2) the non-negative feature is combined with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
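
A LoG response can be approximated by Gaussian smoothing followed by a Laplacian; the sketch below illustrates that stand-in. It is not the paper's fast LoG, and the non-negative feature is omitted.

```python
def convolve3(img, k):
    """3x3 convolution over a 2-D list; the one-pixel border is left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

GAUSS = [[1 / 16, 2 / 16, 1 / 16], [2 / 16, 4 / 16, 2 / 16], [1 / 16, 2 / 16, 1 / 16]]
LAPLACE = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def log_response(img):
    """Gaussian smoothing followed by a Laplacian: a stand-in for the LoG
    operator; dark, thin cracks give strong positive responses."""
    return convolve3(convolve3(img, GAUSS), LAPLACE)
```

On a uniform patch the response is zero; a dark crack column produces a positive response at its centre, which a detector would threshold.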

  9. Aircraft Detection from VHR Images Based on Circle-Frequency Filter and Multilevel Features

    Science.gov (United States)

    Gao, Feng; Li, Bo

    2013-01-01

    Aircraft automatic detection from very high-resolution (VHR) images plays an important role in a wide variety of applications. This paper proposes a novel detector for aircraft detection from very high-resolution (VHR) remote sensing images. To accurately distinguish aircrafts from background, a circle-frequency filter (CF-filter) is used to extract the candidate locations of aircrafts from a large size image. A multi-level feature model is then employed to represent both local appearance and spatial layout of aircrafts by means of Robust Hue Descriptor and Histogram of Oriented Gradients. The experimental results demonstrate the superior performance of the proposed method. PMID:24163637

  10. Aircraft Detection from VHR Images Based on Circle-Frequency Filter and Multilevel Features

    Directory of Open Access Journals (Sweden)

    Feng Gao

    2013-01-01

    Full Text Available Aircraft automatic detection from very high-resolution (VHR) images plays an important role in a wide variety of applications. This paper proposes a novel detector for aircraft detection from very high-resolution (VHR) remote sensing images. To accurately distinguish aircrafts from background, a circle-frequency filter (CF-filter) is used to extract the candidate locations of aircrafts from a large size image. A multi-level feature model is then employed to represent both local appearance and spatial layout of aircrafts by means of Robust Hue Descriptor and Histogram of Oriented Gradients. The experimental results demonstrate the superior performance of the proposed method.
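
The circle-frequency filter idea, sampling grey values on a circle around a candidate centre and examining one DFT harmonic, can be sketched as follows. The harmonic, radius and sample count here are invented; the paper's actual settings are not stated in the abstract.

```python
import cmath
import math

def circle_frequency_response(img, cx, cy, radius, harmonic=4, n=64):
    """Sample grey values on a circle around (cx, cy) and return the magnitude
    of one DFT harmonic; star-shaped aircraft silhouettes peak at low even
    harmonics, while uniform background gives ~0."""
    samples = []
    for k in range(n):
        a = 2 * math.pi * k / n
        x = min(max(int(round(cx + radius * math.cos(a))), 0), len(img[0]) - 1)
        y = min(max(int(round(cy + radius * math.sin(a))), 0), len(img) - 1)
        samples.append(img[y][x])
    coeff = sum(s * cmath.exp(-2j * math.pi * harmonic * k / n)
                for k, s in enumerate(samples))
    return abs(coeff) / n
```

Scanning this response over the image and keeping local maxima yields candidate aircraft locations for the multi-level feature model to verify.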

  11. Modeling and Detecting Feature Interactions among Integrated Services of Home Network Systems

    Science.gov (United States)

    Igaki, Hiroshi; Nakamura, Masahide

    This paper presents a framework for formalizing and detecting feature interactions (FIs) in the emerging smart home domain. We first establish a model of home network system (HNS), where every networked appliance (or the HNS environment) is characterized as an object consisting of properties and methods. Then, every HNS service is defined as a sequence of method invocations of the appliances. Within the model, we next formalize two kinds of FIs: (a) appliance interactions and (b) environment interactions. An appliance interaction occurs when two method invocations conflict on the same appliance, whereas an environment interaction arises when two method invocations conflict indirectly via the environment. Finally, we propose offline and online methods that detect FIs before service deployment and during execution, respectively. Through a case study with seven practical services, it is shown that the proposed framework is generic enough to capture feature interactions in HNS integrated services. We also discuss several FI resolution schemes within the proposed framework.
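
The two FI kinds might be checked offline roughly like this: services as lists of (appliance, method) invocations, with an invented environment-conflict table. This is a hypothetical sketch, not the paper's formal model.

```python
# Invented illustration: heater-on and aircon-cool conflict via room temperature.
CONFLICTS = {("heater", "on"): ("aircon", "cool_on"),
             ("window", "open"): ("aircon", "cool_on")}

def appliance_interactions(svc_a, svc_b):
    """Appliances both services invoke methods on (direct conflict candidates)."""
    return sorted({app for app, _ in svc_a} & {app for app, _ in svc_b})

def environment_interactions(svc_a, svc_b):
    """Invocation pairs that conflict indirectly via the environment."""
    hits = []
    for call in svc_a:
        if CONFLICTS.get(call) in svc_b:
            hits.append((call, CONFLICTS[call]))
    return hits
```

An offline checker would run these over every pair of deployed services; an online monitor would apply the same tests to the currently executing invocations.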

  12. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Olivier Aycard

    2004-12-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
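
The interpretation step can be illustrated with ordinary first-order Viterbi decoding (the paper uses second-order HMMs; the states, probabilities and observations below are invented for illustration).

```python
def viterbi(obs, states, start, trans, emit):
    """First-order Viterbi decoding of a sensor sequence into feature labels.
    A simplified stand-in for the paper's second-order HMMs."""
    v = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] * trans[p][s])
            col[s] = v[-1][prev] * trans[prev][s] * emit[s][o]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With hypothetical sonar readings ("far"/"near") the decoder labels each time step as "corridor" or "door", i.e. it detects the feature from the temporal sequence rather than a single reading.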

  13. Feature Understanding and Target Detection for Sparse Microwave Synthetic Aperture Radar Images

    Directory of Open Access Journals (Sweden)

    Zhang Zenghui

    2016-02-01

    Full Text Available Sparse microwave imaging, which uses sparse priors of observed scenes in the space, time, frequency, or polarization domain, echo data sampled below the traditional Nyquist rate, and optimization algorithms for reconstructing the microwave images of observed scenes, has many advantages over traditional microwave imaging systems. In sparse microwave imaging, image acquisition and representation vary; therefore, new feature analysis and cognitive interpretation theories and methods should be developed based on current research results. In this study, we analyze the statistical properties of sparse Synthetic Aperture Radar (SAR) images and the changes in point, line and regional features induced by sparse reconstruction. For SAR images recovered by the spatial sparse model, the statistical distribution degrades, whereas points and lines can still be accurately extracted at low sampling rates. Furthermore, a target detection method based on sparse SAR images is studied. Owing to weak background noise, target detection is easier with sparse SAR images than with traditional ones.

  14. Community Detecting and Feature Analysis in Real Directed Weighted Social Networks

    Directory of Open Access Journals (Sweden)

    Yao Liu

    2013-06-01

    Full Text Available Real social networks usually have structural features of complex networks, such as community structure, a scale-free degree distribution, clustering, the "small world" property, dynamic evolution and so on. A new community detection algorithm for directed and weighted social networks is proposed in this paper. Due to the use of more reference information, the accuracy of the algorithm is better than that of some typical detection algorithms, and because of the use of a heap structure and a multi-task modular architecture, the algorithm also achieves higher computational efficiency than other algorithms. The effectiveness and efficiency of the algorithm are validated by experiments on real social networks. Based on the theories and models of complex networks, the features of real large social networks are analyzed.

  15. Digital Image Forgery Detection Using JPEG Features and Local Noise Discrepancies

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available The wide availability of image processing software makes counterfeiting an easy and low-cost way to distort or conceal facts. Driven by the great need for valid forensic techniques, many methods have been proposed to expose such forgeries. In this paper, we propose an integrated algorithm able to detect two commonly used fraud practices in digital pictures: copy-move and splicing forgery. To achieve this, a special descriptor is created for each block, combining the feature from the JPEG block artifact grid with that from noise estimation, and a beforehand image-quality assessment procedure reconciles these different features by setting proper weights. Experimental results showed that, compared to existing algorithms, the proposed method is effective at detecting both copy-move and splicing forgery regardless of the JPEG compression ratio of the input image.

  16. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Richard Washington

    2008-11-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.

  17. Spatial-temporal features of thermal images for Carpal Tunnel Syndrome detection

    Science.gov (United States)

    Estupinan Roldan, Kevin; Ortega Piedrahita, Marco A.; Benitez, Hernan D.

    2014-02-01

    Disorders associated with repeated trauma account for about 60% of all occupational illnesses, Carpal Tunnel Syndrome (CTS) being the most consulted today. Infrared Thermography (IT) has come to play an important role in medicine: it is non-invasive and detects disease by measuring temperature variations. IT represents a possible alternative to the prevalent methods for diagnosing CTS (i.e. nerve conduction studies and electromyography). This work presents a set of spatial-temporal features extracted from thermal images taken from healthy and ill patients. Support Vector Machine (SVM) classifiers test this feature space with Leave One Out (LOO) validation error. The results of the proposed approach show linear separability and lower validation errors when compared to features used in previous works that do not account for spatial temperature variability.
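
The LOO validation protocol reads roughly as below; a nearest-class-mean rule stands in for the SVM so the sketch stays dependency-free, and the 1-D features are invented.

```python
def nearest_mean(train_x, train_y, query):
    """Nearest-class-mean rule -- a stand-in for the paper's SVM classifier."""
    means = {c: sum(x for x, y in zip(train_x, train_y) if y == c) /
                train_y.count(c) for c in set(train_y)}
    return min(means, key=lambda c: abs(query - means[c]))

def loo_error(samples, labels, classify):
    """Leave-one-out validation error: train on all-but-one, test on the held-out."""
    errors = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        if classify(train_x, train_y, samples[i]) != labels[i]:
            errors += 1
    return errors / len(samples)
```

Linearly separable features, as reported in the abstract, drive this error toward zero.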

  18. Improved Feature Detection in Fused Intensity-Range Images with Complex SIFT (ℂSIFT)

    Directory of Open Access Journals (Sweden)

    Boris Jutzi

    2011-09-01

    Full Text Available The real and imaginary parts are proposed as an alternative to the usual Polar representation of complex-valued images. It is proven that the transformation from Polar to Cartesian representation contributes to decreased mutual information, and hence to greater distinctiveness. The Complex Scale-Invariant Feature Transform (ℂSIFT) detects distinctive features in complex-valued images. An evaluation method for estimating the uniformity of feature distributions in complex-valued images derived from intensity-range images is proposed. In order to experimentally evaluate the proposed methodology on intensity-range images, three different kinds of active sensing systems were used: Range Imaging, Laser Scanning, and Structured Light Projection devices (PMD CamCube 2.0, Z+F IMAGER 5003, Microsoft Kinect).

  19. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2017-03-01

    Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras, allowing more clinically relevant retinopathy to be detected. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and UWF images provide similar results. However, in 40% of cases, more retinopathy was found outside the 7-field (ETDRS) region by UWF, and in 10% of cases retinopathy was reclassified as more severe. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages) in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local binary patterns. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best, with an AUC of 94.4% / 95.31% for bright / dark lesions.
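
One of the named features, the local binary pattern, can be computed per pixel as follows. This is a minimal 8-neighbour version; the HOG and intensity features are omitted.

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code of pixel (y, x): one bit per
    neighbour that is at least as bright as the centre."""
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[y][x]
    return sum((1 << i) for i, (dy, dx) in enumerate(nbrs)
               if img[y + dy][x + dx] >= c)
```

Histograms of these codes over image patches form the texture part of the per-pixel feature vector fed to the classifiers.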

  20. Incidental breast masses detected by computed tomography: are any imaging features predictive of malignancy?

    Energy Technology Data Exchange (ETDEWEB)

    Porter, G. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom)], E-mail: Gareth.Porter@phnt.swest.nhs.uk; Steel, J.; Paisley, K.; Watkins, R. [Primrose Breast Care Unit, Derriford Hospital, Plymouth (United Kingdom); Holgate, C. [Department of Histopathology, Derriford Hospital, Plymouth (United Kingdom)

    2009-05-15

    Aim: To review the outcome of further assessment of breast abnormalities detected incidentally by multidetector computed tomography (MDCT) and to determine whether any MDCT imaging features were predictive of malignancy. Material and methods: The outcome of 34 patients referred to the Primrose Breast Care Unit with breast abnormalities detected incidentally using MDCT was prospectively recorded. Women with a known diagnosis of breast cancer were excluded. CT imaging features and histological diagnoses were recorded and the correlation assessed using Fisher's exact test. Results: Of the 34 referred patients a malignant diagnosis was noted in 11 (32%). There were 10 breast malignancies (seven invasive ductal carcinomas, one invasive lobular carcinoma, two metastatic lesions) and one axillary lymphoma. CT features suggestive of breast malignancy were spiculation [6/10 (60%) versus 0/24 (0%) p = 0.0002] and associated axillary lymphadenopathy [3/10 (33%) versus 0/20 (0%) p = 0.030]. Conversely, a well-defined mass was suggestive of benign disease [10/24 (42%) versus 0/10 (0%); p = 0.015]. Associated calcification, ill-definition, heterogeneity, size, and multiplicity of lesions were not useful discriminating CT features. There was a non-significant trend for lesions in involuted breasts to be more frequently malignant than in dense breasts [6/14 (43%) versus 4/20 (20%) p = 0.11]. Conclusion: In the present series there was a significant rate (32%) of malignancy in patients referred to the breast clinic with CT-detected incidental breast lesions. The CT features of spiculation or axillary lymphadenopathy are strongly suggestive of malignancy.

  1. EEG-based Drowsiness Detection for Safe Driving Using Chaotic Features and Statistical Tests

    OpenAIRE

    Mardi, Zahra; Ashtiani, Seyedeh Naghmeh Miri; Mikaili, Mohammad

    2011-01-01

    Electroencephalography (EEG) is one of the most reliable sources for detecting sleep onset while driving. In this study, we have tried to demonstrate that sleepiness and alertness signals are separable with an appropriate margin by extracting suitable features. First, we recorded EEG signals from 10 volunteers, who were obliged to avoid sleeping for about 20 hours before the test. We recorded the signals while the subjects played a virtual driving game, in which they tried to pass some barriers...

  2. Detection of braking intention in diverse situations during simulated driving based on EEG feature combination

    Science.gov (United States)

    Kim, Il-Hwa; Kim, Jeong-Woo; Haufe, Stefan; Lee, Seong-Whan

    2015-02-01

    Objective. We developed a simulated driving environment for studying neural correlates of emergency braking in diversified driving situations. We further investigated to what extent these neural correlates can be used to detect a participant's braking intention prior to the behavioral response. Approach. We measured electroencephalographic (EEG) and electromyographic signals during simulated driving. Fifteen participants drove a virtual vehicle and were exposed to several kinds of traffic situations in a simulator system, while EEG signals were measured. After that, we extracted characteristic features to categorize whether the driver intended to brake or not. Main results. Our system shows excellent detection performance in a broad range of possible emergency situations. In particular, we were able to distinguish three different kinds of emergency situations (sudden stop of a preceding vehicle, sudden cutting-in of a vehicle from the side and unexpected appearance of a pedestrian) from non-emergency (soft) braking situations, as well as from situations in which no braking was required, but the sensory stimulation was similar to stimulations inducing an emergency situation (e.g., the sudden stop of a vehicle on a neighboring lane). Significance. We proposed a novel feature combination comprising movement-related potentials such as the readiness potential, event-related desynchronization features besides the event-related potentials (ERP) features used in a previous study. The performance of predicting braking intention based on our proposed feature combination was superior compared to using only ERP features. Our study suggests that emergency situations are characterized by specific neural patterns of sensory perception and processing, as well as motor preparation and execution, which can be utilized by neurotechnology based braking assistance systems.

  3. Sequential filtering processes shape feature detection in crickets: a framework for song pattern recognition

    Directory of Open Access Journals (Sweden)

    Berthold Gerhard Hedwig

    2016-02-01

    Full Text Available Intraspecific acoustic communication requires filtering processes and feature detectors in the auditory pathway of the receiver for the recognition of species-specific signals. Insects like acoustically communicating crickets allow describing and analysing the mechanisms underlying auditory processing at the behavioural and neural level. Female crickets approach male calling song, their phonotactic behaviour is tuned to the characteristic features of the song, such as the carrier frequency and the temporal pattern of sound pulses. Data from behavioural experiments and from neural recordings at different stages of processing in the auditory pathway lead to a concept of serially arranged filtering mechanisms. These encompass a filter for the carrier frequency at the level of the hearing organ, and the pulse duration through phasic onset responses of afferents and reciprocal inhibition of thoracic interneurons. Further processing by a delay line and coincidence detector circuit in the brain leads to feature detecting neurons that specifically respond to the species-specific pulse rate, and match the characteristics of the phonotactic response. This same circuit may also control the response to the species-specific chirp pattern. Based on these serial filters and the feature detecting mechanism, female phonotactic behaviour is shaped and tuned to the characteristic properties of male calling song.

  4. Sequential Filtering Processes Shape Feature Detection in Crickets: A Framework for Song Pattern Recognition.

    Science.gov (United States)

    Hedwig, Berthold G

    2016-01-01

    Intraspecific acoustic communication requires filtering processes and feature detectors in the auditory pathway of the receiver for the recognition of species-specific signals. Insects like acoustically communicating crickets allow describing and analysing the mechanisms underlying auditory processing at the behavioral and neural level. Female crickets approach male calling song; their phonotactic behavior is tuned to the characteristic features of the song, such as the carrier frequency and the temporal pattern of sound pulses. Data from behavioral experiments and from neural recordings at different stages of processing in the auditory pathway lead to a concept of serially arranged filtering mechanisms. These encompass a filter for the carrier frequency at the level of the hearing organ, and a filter for pulse duration through phasic onset responses of afferents and reciprocal inhibition of thoracic interneurons. Further processing by a delay line and coincidence detector circuit in the brain leads to feature detecting neurons that specifically respond to the species-specific pulse rate and match the characteristics of the phonotactic response. This same circuit may also control the response to the species-specific chirp pattern. Based on these serial filters and the feature detecting mechanism, female phonotactic behavior is shaped and tuned to the characteristic properties of male calling song.

  5. Finding features for real-time premature ventricular contraction detection using a fuzzy neural network system.

    Science.gov (United States)

    Lim, Joon S

    2009-03-01

    Fuzzy neural networks (FNNs) have been successfully applied to generate predictive rules for medical or diagnostic data. This brief presents an approach to detecting premature ventricular contractions (PVCs) using a neural network with weighted fuzzy membership functions (NEWFM). The NEWFM classifies normal and PVC beats by the trained bounded sum of weighted fuzzy membership functions (BSWFMs) using wavelet-transformed coefficients from the MIT-BIH PVC database. The eight generalized coefficients, locally related to the time signal, are extracted by the nonoverlap area distribution measurement method. The eight generalized coefficients are used for the three PVC data sets with reliable accuracy rates of 99.80%, 99.21%, and 98.78%, respectively, which means that the selected features are less dependent on the data sets. It is shown that the locations of the eight features are not only around the QRS complex, which represents ventricular depolarization in the electrocardiogram (ECG) and contains a Q wave, an R wave, and an S wave; the QR segment from the Q wave to the R wave also carries more discriminative information than the RS segment from the R wave to the S wave. The BSWFMs of the eight features trained by the NEWFM are shown visually, which makes the features explicitly interpretable. Since each BSWFM combines multiple weighted fuzzy membership functions into one using the bounded sum, the eight small-sized BSWFMs can realize real-time PVC detection in a mobile environment.
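
    The wavelet-coefficient features that the NEWFM classifies can be illustrated with a plain Haar decomposition of a beat window. The per-level detail energies below are a simplified stand-in for the paper's nonoverlap-area-selected coefficients, and the beat itself is synthetic.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail)
    return a, d

def wavelet_features(beat, levels=3):
    """Decompose a beat window and collect per-level detail energies
    plus the final approximation energy as a compact feature vector."""
    feats = []
    a = np.asarray(beat, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float(np.sum(d ** 2)))  # detail energy at this level
    feats.append(float(np.sum(a ** 2)))      # residual approximation energy
    return np.array(feats)

# toy "beat": a narrow spike loosely resembling a QRS complex
beat = np.zeros(64)
beat[30:34] = [0.2, 1.0, -0.6, 0.1]
print(wavelet_features(beat))
```

    Because the Haar transform is orthonormal, the feature vector conserves the beat's total energy, which makes the per-level energies easy to sanity-check.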

  6. Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Shanshan Yang

    Full Text Available Detection of dysphonia is useful for monitoring the progression of phonatory impairment in patients with Parkinson's disease (PD), and also helps assess disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto a bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform linear classification of voice records from healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity of 0.986, a specificity of 0.708, and an area of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly.
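
    The MAP decision rule over kernel-density-estimated class-conditional densities can be sketched in a few lines. This is a generic illustration on synthetic 2-D features, not the paper's KPCA-mapped measures; the Gaussian kernel, the bandwidth and the synthetic clusters are all assumptions.

```python
import numpy as np

def gaussian_kde_density(train, query, bandwidth=0.5):
    """Nonparametric class-conditional density estimate at each query
    point using an isotropic Gaussian kernel (a simple stand-in for
    the paper's kernel density estimator; bandwidth is a free choice)."""
    train = np.atleast_2d(train)
    query = np.atleast_2d(query)
    diff = query[:, None, :] - train[None, :, :]        # pairwise differences
    sq = np.sum(diff ** 2, axis=-1) / (2.0 * bandwidth ** 2)
    dim = train.shape[1]
    norm = (2.0 * np.pi * bandwidth ** 2) ** (dim / 2.0)
    return np.exp(-sq).sum(axis=1) / (len(train) * norm)

def map_classify(x, class0, class1, prior0=0.5):
    """MAP decision rule: pick the class with the larger
    prior-weighted class-conditional density."""
    p0 = gaussian_kde_density(class0, x) * prior0
    p1 = gaussian_kde_density(class1, x) * (1.0 - prior0)
    return (p1 > p0).astype(int)

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(60, 2))   # synthetic control cluster
patients = rng.normal(3.0, 1.0, size=(60, 2))  # synthetic PD cluster
test_pts = np.array([[0.0, 0.0], [3.0, 3.0]])
print(map_classify(test_pts, healthy, patients))  # → [0 1]
```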

  7. Oxygen Saturation and RR Intervals Feature Selection for Sleep Apnea Detection

    Directory of Open Access Journals (Sweden)

    Antonio G. Ravelo-García

    2015-05-01

    Full Text Available A diagnostic system for sleep apnea based on oxygen saturation and RR intervals obtained from the EKG (electrocardiogram) is proposed, with the goal of detecting and quantifying minute-long segments of sleep with breathing pauses. We measured the discriminative capacity of combinations of features obtained from the RR series and oximetry to evaluate improvements in performance compared to oximetry-based features alone. Time- and frequency-domain variables derived from oxygen saturation (SpO2), as well as linear and non-linear variables describing the RR series, were explored in recordings from 70 patients with suspected sleep apnea. We applied forward feature selection in order to select a minimal set of variables able to locate patterns indicating respiratory pauses. Linear discriminant analysis (LDA) was used to classify the presence of apnea during specific segments. The system finally provides a global score indicating the presence of clinically significant apnea by integrating the segment-based apnea detection. LDA results in an accuracy of 87%, a sensitivity of 76% and a specificity of 91% (AUC = 0.90), with a global classification rate of 97%, when only oxygen saturation is used. When features from the RR series are additionally included, the system performance improves to an accuracy of 87%, a sensitivity of 73% and a specificity of 92% (AUC = 0.92), with a global classification rate of 100%.
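
    The segment classifier at the core of the system above is a two-class LDA. A minimal Fisher LDA on synthetic per-minute features can be sketched as follows; the feature names and class separations are invented for illustration, and equal class priors are assumed for the threshold.

```python
import numpy as np

def fisher_lda_fit(X0, X1):
    """Two-class Fisher LDA: w = Sw^{-1} (mu1 - mu0), with the decision
    threshold at the midpoint of the projected class means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thresh

def fisher_lda_predict(X, w, thresh):
    return (X @ w > thresh).astype(int)  # 1 = apnoeic segment

# synthetic per-minute features, e.g. [SpO2 desaturation index, RR variability]
rng = np.random.default_rng(1)
normal = rng.normal([1.0, 0.5], 0.3, size=(80, 2))
apnea = rng.normal([2.0, 1.5], 0.3, size=(80, 2))
w, t = fisher_lda_fit(normal, apnea)
pred = fisher_lda_predict(np.vstack([normal, apnea]), w, t)
acc = (pred == np.r_[np.zeros(80), np.ones(80)]).mean()
print(f"training accuracy: {acc:.2f}")
```

    The study's global apnea score would then integrate such per-segment decisions over the whole recording.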

  8. Multi-channels statistical and morphological features based mitosis detection in breast cancer histopathology.

    Science.gov (United States)

    Irshad, Humayun; Roux, Ludovic; Racoceanu, Daniel

    2013-01-01

    Accurate counting of mitoses in breast cancer histopathology plays a critical role in the grading process. Manual counting of mitoses is tedious and subject to considerable inter- and intra-reader variation. This work aims at improving the accuracy of mitosis detection by selecting the color channels that better capture the statistical and morphological features that discriminate mitoses from other objects. The proposed framework includes a comprehensive analysis of first- and second-order statistical features together with morphological features in selected color channels, and a study on balancing the skewed dataset using the SMOTE method to increase the predictive accuracy of mitosis classification. The proposed framework was evaluated on the MITOS data set during the ICPR 2012 contest and ranked second among 17 finalists. It achieved a 74% detection rate, 70% precision and 72% F-measure. In future work, we plan to apply our mitosis detection tool to images produced by different types of slide scanners, including multi-spectral and multi-focal microscopy.

  9. Robust detection of premature ventricular contractions using sparse signal decomposition and temporal features.

    Science.gov (United States)

    Manikandan, M Sabarimalai; Ramkumar, Barathram; Deshpande, Pranav S; Choudhary, Tilendra

    2015-12-01

    An automated noise-robust premature ventricular contraction (PVC) detection method is proposed based on sparse signal decomposition, temporal features, and decision rules. In this Letter, the authors exploit sparse expansion of electrocardiogram (ECG) signals on mixed dictionaries to simultaneously enhance the QRS complex and reduce the influence of tall P and T waves, baseline wander, and muscle artefacts. They further investigate a set of ten generalised temporal features combined with a decision-rule-based detection algorithm for discriminating PVC beats from non-PVC beats. The accuracy and robustness of the proposed method are evaluated using 47 ECG recordings from the MIT/BIH arrhythmia database. Evaluation results show that the proposed method achieves an average sensitivity of 89.69% and a specificity of 99.63%. Results further show that the proposed decision-rule-based algorithm with ten generalised features can accurately detect different patterns of PVC beats (uniform and multiform, couplets, triplets, and ventricular tachycardia) in the presence of other normal and abnormal heartbeats.

  10. Spectrum and Image Texture Features Analysis for Early Blight Disease Detection on Eggplant Leaves

    Directory of Open Access Journals (Sweden)

    Chuanqi Xie

    2016-05-01

    Full Text Available This study investigated both spectrum and texture features for detecting early blight disease on eggplant leaves. Hyperspectral images of healthy and diseased samples were acquired covering the wavelengths from 380 to 1023 nm. Four gray images were identified according to the effective wavelengths (408, 535, 624 and 703 nm). The hyperspectral images were then converted into RGB, HSV and HLS images. Finally, eight texture features (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation) based on the gray level co-occurrence matrix (GLCM) were extracted from the gray, RGB, HSV and HLS images, respectively. The dependent variables for healthy and diseased samples were set as 0 and 1. K-Nearest Neighbor (KNN) and AdaBoost classification models were established for detecting healthy and infected samples. All models obtained good results, with classification rates (CRs) over 88.46% in the testing sets. The results demonstrate that spectrum and texture features are effective for early blight disease detection on eggplant leaves.
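
    The GLCM texture features above can be computed from a gray image patch as a co-occurrence table followed by scalar statistics. The sketch below shows one offset and three of the eight listed features; the quantization level and offset are illustrative choices, not the study's parameters.

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset, normalized
    to a joint probability table (symmetry not enforced here)."""
    q = np.minimum((gray * levels).astype(int), levels - 1)  # quantize [0, 1)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Contrast, homogeneity and entropy, three of the eight texture
    features the study extracts from each channel."""
    i, j = np.indices(P.shape)
    contrast = float(np.sum(P * (i - j) ** 2))
    homogeneity = float(np.sum(P / (1.0 + np.abs(i - j))))
    nz = P[P > 0]
    entropy = float(-np.sum(nz * np.log2(nz)))
    return contrast, homogeneity, entropy

flat = np.full((16, 16), 0.5)                      # uniform patch: no texture
noisy = np.random.default_rng(2).random((16, 16))  # rough patch
c_flat, h_flat, _ = glcm_features(glcm(flat))
c_noisy, h_noisy, _ = glcm_features(glcm(noisy))
print(c_flat, c_noisy)  # contrast is zero for the flat patch
```

    A uniform patch yields zero contrast and homogeneity 1, while a rough patch scores higher contrast, which is exactly the kind of separation the disease classifier relies on.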

  11. Regions of micro-calcifications clusters detection based on new features from imbalance data in mammograms

    Science.gov (United States)

    Wang, Keju; Dong, Min; Yang, Zhen; Guo, Yanan; Ma, Yide

    2017-02-01

    Breast cancer is the most common cancer among women. Micro-calcification clusters on X-ray mammograms are among the most important abnormalities, and they are effective for early cancer detection. The Surrounding Region Dependence Method (SRDM), a statistical texture analysis method, is applied for detecting Regions of Interest (ROIs) containing microcalcifications. Inspired by the SRDM, we present a method that extracts gray-level and other features which are effective for predicting the positive and negative regions of micro-calcification clusters in mammograms. By constructing a set of artificial images containing only micro-calcifications, we locate the suspicious calcification pixels of an SRDM matrix in the original image map. Features are extracted based on these pixels for the imbalanced data, and then the repeated random subsampling method and a Random Forest (RF) classifier are used for classification. The True Positive (TP) and False Positive (FP) rates reflect the quality of the detection. The TP rate is 90% and the FP rate is 88.8% when the threshold q is 10. We draw the Receiver Operating Characteristic (ROC) curve, and the Area Under the ROC Curve (AUC) value reaches 0.9224. The experiment indicates that our method is effective. A novel method for detecting regions of micro-calcification clusters is developed, based on new features for imbalanced data in mammography, and it can be considered to help improve the accuracy of computer-aided diagnosis of breast cancer.

  12. Detection of sub-kilometer craters in high resolution planetary images using shape and texture features

    Science.gov (United States)

    Bandeira, Lourenço; Ding, Wei; Stepinski, Tomasz F.

    2012-01-01

    Counting craters is a paramount tool of planetary analysis because it provides relative dating of planetary surfaces. Dating surfaces with high spatial resolution requires counting a very large number of small, sub-kilometer size craters. Exhaustive manual surveys of such craters over extensive regions are impractical, sparking interest in designing crater detection algorithms (CDAs). As part of our effort to design a CDA that is robust and practical for planetary research analysis, we propose a crater detection approach that utilizes both shape and texture features to efficiently identify sub-kilometer craters in high resolution panchromatic images. First, a mathematical morphology-based shape analysis is used to identify regions in an image that may contain craters; only those regions - crater candidates - are the subject of further processing. Second, image texture features in combination with the boosting ensemble supervised learning algorithm are used to accurately classify previously identified candidates into craters and non-craters. The design of the proposed CDA is described and its performance is evaluated using a high resolution image of Mars for which sub-kilometer craters have been manually identified. The overall detection rate of the proposed CDA is 81%, the branching factor is 0.14, and the overall quality factor is 72%. This performance is a significant improvement over a previous CDA based exclusively on shape features. The combination of performance level and computational efficiency offered by this CDA makes it attractive for practical application.

  13. Spike detection, characterization, and discrimination using feature analysis software written in LabVIEW.

    Science.gov (United States)

    Stewart, C M; Newlands, S D; Perachio, A A

    2004-12-01

    Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was entirely written in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high-speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
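
    The trigger-based event detection and waveform-feature extraction described above can be sketched outside LabVIEW in a few lines. The window length, the synthetic record and the two features chosen are illustrative assumptions, not the tool's actual nine time/voltage measures.

```python
import numpy as np

def detect_spikes(signal, trigger, window=30):
    """Scan a digital record for rising threshold crossings and cut a
    waveform window at each event, mirroring the user-adjustable
    trigger in the described tool (window length is arbitrary here)."""
    above = signal > trigger
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising crossings
    events = [signal[i:i + window] for i in onsets
              if i + window <= len(signal)]
    return onsets, np.array(events)

def waveform_features(event):
    """Two example time/voltage features for cluster plots:
    peak amplitude and peak-to-trough voltage difference."""
    return float(event.max()), float(event.max() - event.min())

# synthetic record: low-amplitude noise plus three injected spikes
rng = np.random.default_rng(3)
rec = rng.normal(0.0, 0.05, 3000)
for k in (500, 1500, 2500):
    rec[k:k + 5] += [0.3, 1.0, 0.6, -0.4, -0.1]
onsets, events = detect_spikes(rec, trigger=0.5)
print(len(onsets), waveform_features(events[0]))
```

    Plotting algebraic combinations of such features against each other yields the 2-D cluster plots in which the user draws unit boundaries.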

  14. Method for inshore ship detection based on feature recognition and adaptive background window

    Science.gov (United States)

    Zhao, Hongyu; Wang, Quan; Huang, Jingjian; Wu, Weiwei; Yuan, Naichang

    2014-01-01

    Inshore ship detection in synthetic aperture radar (SAR) images is a challenging task. We present an inshore ship detection method based on the characteristics of inshore ships. We first use the Markov random field (MRF) method to segment water and land, and then extract the feature points of inshore ships using polygonal approximation. Following this, we propose new rules for inshore ship extraction and use these rules to separate inshore ships from the land in binary images. Finally, we utilize an adaptive background window (ABW) to compute the clutter statistics and detect inshore ships using a constant false alarm rate (CFAR) detector with the ABW and a G0 distribution. Experimental results on SAR images show that our method is more accurate than traditional CFAR detection based on the K-distribution (K-CFAR) at the same false alarm rate, and that the quality of the image obtained through our method is higher than that of the traditional K-CFAR detection method by a factor of 0.165. Our method accurately locates and detects inshore ships in complicated environments and thus is more practical for inshore ship detection.
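
    The CFAR thresholding idea underlying the detector can be illustrated with a plain 1-D cell-averaging CFAR. The paper's detector uses an adaptive background window and a G0 clutter model; the sketch below substitutes a fixed training window and exponential clutter, so all parameters here are assumptions.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR on a 1-D power profile: estimate local
    clutter from training cells on either side of the cell under test
    (guard cells excluded) and flag cells exceeding scale x estimate."""
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()  # local clutter estimate
        hits[i] = power[i] > scale * noise
    return hits

rng = np.random.default_rng(4)
profile = rng.exponential(1.0, 200)  # synthetic sea-clutter power profile
profile[100] = 40.0                  # a bright ship-like target
hits = ca_cfar(profile)
print(np.flatnonzero(hits))
```

    Because the threshold adapts to the local clutter estimate, the false alarm rate stays roughly constant across clutter levels, which is the property the adaptive background window generalizes to 2-D image neighborhoods.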

  15. Obscenity Detection Using Haar-Like Features and Gentle Adaboost Classifier

    Directory of Open Access Journals (Sweden)

    Rashed Mustafa

    2014-01-01

    Full Text Available Large exposure of skin area in an image is considered obscene. This fact alone may lead to many false detections on images containing skin-like objects, and may miss images that have only partially exposed skin but show erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image content. The nipple is treated as an erotogenic organ for identifying pornographic content in images. In this research, a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features were used to ensure detection accuracy. A skin filter applied prior to detection made the system more robust. The experiments showed that, considering accuracy, the haar-cascade classifier performs well, but to satisfy detection time requirements, the train-cascade classifier is suitable. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier.

  16. Obscenity detection using haar-like features and Gentle Adaboost classifier.

    Science.gov (United States)

    Mustafa, Rashed; Min, Yang; Zhu, Dingju

    2014-01-01

    Large exposure of skin area in an image is considered obscene. This fact alone may lead to many false detections on images containing skin-like objects, and may miss images that have only partially exposed skin but show erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image content. The nipple is treated as an erotogenic organ for identifying pornographic content in images. In this research, a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features were used to ensure detection accuracy. A skin filter applied prior to detection made the system more robust. The experiments showed that, considering accuracy, the haar-cascade classifier performs well, but to satisfy detection time requirements, the train-cascade classifier is suitable. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier.

  17. A simple optimization can improve the performance of single feature polymorphism detection by Affymetrix expression arrays

    Directory of Open Access Journals (Sweden)

    Fujisawa Hironori

    2010-05-01

    Full Text Available Abstract Background High-density oligonucleotide arrays are effective tools for genotyping numerous loci simultaneously. In small genome species (genome size: Results We compared the single feature polymorphism (SFP) detection performance of whole-genome and transcript hybridizations using the Affymetrix GeneChip® Rice Genome Array, using the rice cultivars with full genome sequence, japonica cultivar Nipponbare and indica cultivar 93-11. Both genomes were surveyed for all probe target sequences. Only completely matched 25-mer single copy probes of the Nipponbare genome were extracted, and SFPs between them and 93-11 sequences were predicted. We investigated optimum conditions for SFP detection in both whole-genome and transcript hybridization using differences between perfect match and mismatch probe intensities of non-polymorphic targets, assuming that these differences are representative of those between mismatch and perfect targets. Several statistical methods of SFP detection by whole-genome hybridization were compared under the optimized conditions. Causes of false positives and negatives in SFP detection in both types of hybridization were investigated. Conclusions The optimizations allowed a more than 20% increase in true SFP detection in whole-genome hybridization and a large improvement of SFP detection performance in transcript hybridization. Significance analysis of the microarray for log-transformed raw intensities of PM probes gave the best performance in whole-genome hybridization, in which 22,936 true SFPs were detected with 23.58% false positives. For transcript hybridization, stable SFP detection was achieved for highly expressed genes, and about 3,500 SFPs were detected at high sensitivity (> 50%) in both shoot and young panicle transcripts. The high SFP detection performance of both genome and transcript hybridizations indicated that microarrays of a complex genome (e.g., of Oryza sativa can be

  18. 2006 NOAA Bathymetric Lidar: Puerto Rico (Southwest)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set (Project Number OPR-I305-KRL-06) depicts depth values (mean 5 meter gridded) collected using LiDAR (Light Detection & Ranging) from the shoreline...

  19. Thermography based breast cancer detection using texture features and minimum variance quantization

    Science.gov (United States)

    Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar

    2014-01-01

    In this paper, we present a system based on feature extraction and image segmentation techniques for detecting and diagnosing abnormal patterns in breast thermograms. The proposed system consists of three major steps: feature extraction, classification into normal and abnormal patterns, and segmentation of the abnormal pattern. Computed features based on gray-level co-occurrence matrices (GLCMs) are used to evaluate the effectiveness of the textural information possessed by mass regions. A total of 20 GLCM features are extracted from the thermograms. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a Support Vector Machine classifier, a Naive Bayes classifier and a K-Nearest Neighbor classifier. To evaluate the classification performance, five-fold cross validation and receiver operating characteristic analysis were performed. The verification results show that the proposed algorithm gives the best classification results with the K-Nearest Neighbor classifier, with an accuracy of 92.5%. Image segmentation techniques can play an important role in segmenting and extracting suspected hot regions of interest in breast infrared images. Three image segmentation techniques are discussed: minimum variance quantization, dilation of the image and erosion of the image. The hottest regions of the thermal breast images are extracted and compared to the original images. According to the results, the proposed method has the potential to extract an almost exact shape of the tumors. PMID:26417334

  20. EEG-based Drowsiness Detection for Safe Driving Using Chaotic Features and Statistical Tests.

    Science.gov (United States)

    Mardi, Zahra; Ashtiani, Seyedeh Naghmeh Miri; Mikaili, Mohammad

    2011-05-01

    Electroencephalography (EEG) is one of the most reliable sources for detecting sleep onset while driving. In this study, we have tried to demonstrate that sleepiness and alertness signals are separable with an appropriate margin by extracting suitable features. First, we recorded EEG signals from 10 volunteers, who were obliged to avoid sleeping for about 20 hours before the test. We recorded the signals while the subjects played a virtual driving game, trying to pass barriers shown on the monitor. Recording ended after 45 minutes. After preprocessing the recorded signals, we labeled them as drowsy or alert using the times at which subjects passed or crashed into the barriers. We then extracted chaotic features (including Higuchi's fractal dimension and Petrosian's fractal dimension) and the logarithm of the signal energy. By applying a two-tailed t-test, we have shown that these features differ between drowsiness and alertness at the 95% significance level in each EEG channel. The ability of each feature was evaluated with an artificial neural network; classification accuracy with all features was about 83.3%, obtained without performing any optimization of the classifier.
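
    Higuchi's fractal dimension, one of the chaotic features used above, has a compact definition that can be implemented directly. The sketch below follows the standard formulation; the kmax value and the test signals are illustrative choices.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension: build coarse-grained curve lengths
    L(k) for k = 1..kmax and fit the slope of log L(k) against
    log(1/k). A slope near 1 indicates a smooth signal, near 2 a very
    irregular one."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_l, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):  # one sub-curve per starting offset
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            raw = np.abs(np.diff(x[idx])).sum()
            # normalize for the number of intervals and the scale k
            lengths.append(raw * (n - 1) / ((len(idx) - 1) * k) / k)
        log_l.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_l, 1)
    return float(slope)

rng = np.random.default_rng(6)
print(higuchi_fd(rng.normal(size=1000)))                     # irregular: close to 2
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1000))))   # smooth: close to 1
```

    The drowsiness study's premise is that such complexity measures shift as the EEG becomes more regular near sleep onset, which the t-test then quantifies per channel.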

  1. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel

    Science.gov (United States)

    2017-01-01

    Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared using ten state-of-the-art classifiers. Results indicate that the optimal single-channel performance is achieved using a combination of channel CP4, feature FE, and the Random Forest (RF) classifier. The highest accuracy reaches 96.6%, which is sufficient for real applications. The best combination of channel, features and classifier is subject-specific. In this work, the accuracy with FE as the feature is far greater than that of the other features. The accuracy of the RF classifier is the best, while that of the SVM classifier with a linear kernel is the worst. Channel selection has a large impact on accuracy, and performance varies considerably across channels.

  2. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel

    Directory of Open Access Journals (Sweden)

    Jianfeng Hu

    2017-01-01

    Full Text Available Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared using ten state-of-the-art classifiers. Results indicate that the optimal single-channel performance is achieved using a combination of channel CP4, feature FE, and the Random Forest (RF) classifier. The highest accuracy reaches 96.6%, which is sufficient for real applications. The best combination of channel, features and classifier is subject-specific. In this work, the accuracy with FE as the feature is far greater than that of the other features. The accuracy of the RF classifier is the best, while that of the SVM classifier with a linear kernel is the worst. Channel selection has a large impact on accuracy, and performance varies considerably across channels.
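
    Sample entropy, one of the four entropy measures compared, can be implemented compactly. The sketch below uses the common m = 2, r = 0.2 x std parameterization as an assumption; fuzzy, approximate and spectral entropy follow similar template-matching or spectral recipes not shown here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r): the negative logarithm of the
    conditional probability that two sequences matching for m points
    (Chebyshev distance <= r, self-matches excluded) also match for
    m + 1 points. r defaults to the common 0.2 * std choice."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return (d <= r).sum() - len(templ)  # drop self-matches

    b = match_count(m)
    a = match_count(m + 1)
    return -np.log(a / b)

t = np.linspace(0, 10 * np.pi, 300)
print(sample_entropy(np.sin(t)))                                   # regular: low
print(sample_entropy(np.random.default_rng(7).normal(size=300)))   # irregular: higher
```

    A fatigued, more regular EEG segment would score lower on such measures than an alert one, which is the separation the ten classifiers exploit.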

  3. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) requires no assumptions about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose the optimal threshold for each norm, ten thresholds of ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
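
    The recurrence plots underlying RQA can be built from a time-delay embedding and a distance threshold ε. The sketch below uses the maximum norm (one of the five norms compared) and computes the recurrence rate, the simplest RQA measure; the embedding dimension, delay and ε are illustrative assumptions.

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, eps=0.5, norm=np.inf):
    """Recurrence plot of a time-delay embedded signal: R[i, j] = 1
    when embedded points i and j are closer than eps under the chosen
    norm (maximum norm by default)."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], ord=norm, axis=-1)
    return (d <= eps).astype(int)

def recurrence_rate(R):
    """Recurrence rate: density of recurrent points off the main
    diagonal."""
    n = len(R)
    return (R.sum() - n) / (n * (n - 1))

t = np.linspace(0, 8 * np.pi, 400)
rr_periodic = recurrence_rate(recurrence_matrix(np.sin(t)))
rr_noise = recurrence_rate(
    recurrence_matrix(np.random.default_rng(5).normal(size=400)))
print(rr_periodic, rr_noise)
```

    A periodic signal revisits the same embedded states and scores a much higher recurrence rate than noise; the study's genetic algorithm searches over such RQA measures (and thresholds ε) for the most discriminative feature space.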

  4. Impulse feature extraction method for machinery fault detection using fusion sparse coding and online dictionary learning

    Directory of Open Access Journals (Sweden)

    Deng Sen

    2015-04-01

    Full Text Available Impulse components in vibration signals are important fault features of complex machines. The sparse coding (SC) algorithm has been introduced as an impulse feature extraction method, but it cannot guarantee satisfactory performance in processing vibration signals with heavy background noise. In this paper, a method based on fusion sparse coding (FSC) and online dictionary learning is proposed to extract impulses efficiently. Firstly, a fusion scheme of different sparse coding algorithms is presented to ensure higher reconstruction accuracy. Then, an improved online dictionary learning method using the FSC scheme is established to obtain a redundant dictionary that can capture specific features of training samples and reconstruct a sparse approximation of the vibration signals. Simulations show that this method performs well in solving for sparse coefficients and training the redundant dictionary compared with other methods. Lastly, the proposed method is applied to processing aircraft engine rotor vibration signals. Compared with other feature extraction approaches, our method can extract impulse features accurately and efficiently from heavily noisy vibration signals, which provides significant support for machinery fault detection and diagnosis.
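
    The sparse coding step can be illustrated with plain greedy matching pursuit over a redundant dictionary. This stands in for the paper's fused sparse coding; the online dictionary learning itself is not reproduced, and the impulse-plus-smooth dictionary below is an invented toy.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy sparse coding: repeatedly project the residual onto the
    (unit-norm) dictionary atoms and subtract the best-matching one."""
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        corr = dictionary @ residual           # correlation with each atom
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual = residual - corr[k] * dictionary[k]
    return coeffs, residual

# toy dictionary: 64 shifted unit impulses plus one smooth Gaussian atom
n = 64
idx = np.arange(n)
smooth = np.exp(-0.5 * ((idx - 32) / 8.0) ** 2)
D = np.vstack([np.eye(n), smooth / np.linalg.norm(smooth)])

# signal: an impulse "fault signature" riding on a smooth component
sig = 1.0 * D[-1] + 2.0 * D[10]
coeffs, residual = matching_pursuit(sig, D)
print(np.flatnonzero(np.abs(coeffs) > 0.1))  # → [10 64]
```

    The decomposition recovers the impulse atom and the smooth atom separately, which is the sense in which sparse coding isolates impulse fault features from the rest of the vibration signal.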

  5. Driver Fatigue Detection System Using Electroencephalography Signals Based on Combined Entropy Features

    Directory of Open Access Journals (Sweden)

    Zhendong Mu

    2017-02-01

    Full Text Available Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, may therefore be more appropriate. This study evaluates a combined entropy-based processing method for EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, undergoing driving training in a virtual environment under the instruction of an operator. Four types of entropies (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for driver fatigue detection. An electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that features extracted from electrodes T5, TP7, TP8 and FP1 may yield better performance. The SVM classification algorithm using a radial basis function as the kernel obtained better results. The combined entropy-based method demonstrates good classification performance for driver fatigue detection.

  6. Improved bathymetric datasets for the shallow water regions in the Indian Ocean

    Indian Academy of Sciences (India)

    B Sindhu; I Suresh; A S Unnikrishnan; N V Bhatkar; S Neetu; G S Michael

    2007-06-01

    Ocean modellers use bathymetric datasets like ETOPO5 and ETOPO2 to represent the ocean bottom topography. The former dataset is based on digitization of depth contours greater than 200 m, and the latter is based on satellite altimetry; hence, they are not always reliable in shallow regions. An improved shelf bathymetry for the Indian Ocean region (20°E to 112°E and 38°S to 32°N) is derived by digitizing the depth contours and sounding depths less than 200 m from the hydrographic charts published by the National Hydrographic Office, India. The digitized data are then gridded and used to modify the existing ETOPO5 and ETOPO2 datasets for depths less than 200 m. In combining the digitized data with the original ETOPO datasets, we apply an appropriate blending technique near the 200 m contour to ensure smooth merging of the datasets. Using the modified ETOPO5, we demonstrate that the original ETOPO5 is indeed inaccurate at depths of less than 200 m and has features that are not actually present on the ocean bottom. Though the present version of ETOPO2 (ETOPO2v2) is a better bathymetry than its earlier versions, there are still differences between ETOPO2v2 and the modified ETOPO2. We assess the improvements in these bathymetric grids through the performance of existing models of tidal circulation and tsunami propagation.
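    The blending near the 200 m contour can be sketched as a linear ramp between the two grids. The linear form and the ramp width here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def blend_bathymetry(digitized, etopo, shallow=150.0, deep=200.0):
    """Merge a digitized shallow-water grid into an ETOPO-style grid.

    Depths are positive downward (metres). Shallower than `shallow` the
    digitized chart values are used; deeper than `deep` the original
    ETOPO values are kept; in between, the two are linearly ramped so
    the grids merge smoothly near the 200 m contour.
    """
    etopo = np.asarray(etopo, dtype=float)
    digitized = np.asarray(digitized, dtype=float)
    w = np.clip((etopo - shallow) / (deep - shallow), 0.0, 1.0)
    return w * etopo + (1.0 - w) * digitized
```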

  7. Bathymetric controls on sediment transport in the Hudson River estuary: Lateral asymmetry and frontal trapping

    Science.gov (United States)

    Ralston, David K.; Geyer, W. Rockwell; Warner, John C.

    2012-01-01

    Analyses of field observations and numerical model results have identified that sediment transport in the Hudson River estuary is laterally segregated between channel and shoals, features frontal trapping at multiple locations along the estuary, and varies significantly over the spring-neap tidal cycle. Lateral gradients in depth, and therefore baroclinic pressure gradient and stratification, control the lateral distribution of sediment transport. Within the saline estuary, sediment fluxes are strongly landward in the channel and seaward on the shoals. At multiple locations, bottom salinity fronts form at bathymetric transitions in width or depth. Sediment convergences near the fronts create local maxima in suspended-sediment concentration and deposition, providing a general mechanism for creation of secondary estuarine turbidity maxima at bathymetric transitions. The lateral bathymetry also affects the spring-neap cycle of sediment suspension and deposition. In regions with broad, shallow shoals, the shoals are erosional and the channel is depositional during neap tides, with the opposite pattern during spring tides. Narrower, deeper shoals are depositional during neaps and erosional during springs. In each case, the lateral transfer is from regions of higher to lower bed stress, and depends on the elevation of the pycnocline relative to the bed. Collectively, the results indicate that lateral and along-channel gradients in bathymetry and thus stratification, bed stress, and sediment flux lead to an unsteady, heterogeneous distribution of sediment transport and trapping along the estuary rather than trapping solely at a turbidity maximum at the limit of the salinity intrusion.

  8. Multiple instance feature learning for landmine detection in ground-penetrating radar data

    Science.gov (United States)

    Bolton, Jeremy; Gader, Paul; Frigui, Hichem

    2010-04-01

    Multiple instance learning (MIL) is a technique used for identifying a target pattern within sets of data. In MIL, a learner is presented with sets of samples, whereas in standard techniques a learner is presented with individual samples. The MI scenario is encountered given the nature of landmine detection in GPR data, and therefore landmine detection results should benefit from the use of multiple instance techniques. Previously, a random set framework for multiple instance learning (RSF-MIL) was proposed which utilizes random sets and fuzzy measures to model the MIL problem. An improved version, C-RSF-MIL, was recently developed, showing an increase in learning and classification performance. This new approach is used to learn and characterize features of landmines within GPR imagery for the purposes of classification. Experimental results show the benefits of using C-RSF-MIL for landmine detection in GPR imagery.

  9. Boosting multi-features with prior knowledge for mini unmanned helicopter landmark detection

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Without sufficient real training data, data-driven classification algorithms based on boosting cannot by themselves be applied to tasks such as mini unmanned helicopter landmark image detection. In this paper, we propose an approach that combines a boosting algorithm with prior knowledge for mini unmanned helicopter landmark image detection. The forward stagewise additive model of boosting is analyzed, and an approach for combining it with the prior knowledge model is presented. The approach is then applied to landmark image detection, where multiple features are boosted to address a series of problems such as rotation and noise. Results of real flight experiments demonstrate that, with few training examples, the boosted learning system using prior knowledge is dramatically better than one driven by data only.

  10. Efficient Fine Arrhythmia Detection Based on DCG P-T Features.

    Science.gov (United States)

    Bie, Rongfang; Xu, Shuaijing; Zhang, Guangzhi; Zhang, Meng; Ma, Xianlin; Zhang, Xialin

    2016-07-01

    Due to the high mortality associated with heart disease, there is an urgent demand for advanced detection of abnormal heart beats. The dynamic electrocardiogram (DCG) provides a useful indicator of heart condition from the long-term monitoring techniques commonly used in the clinic. However, accurately distinguishing sparse abnormal heart beats from large DCG data sets remains difficult. Herein, we propose an efficient fine-grained solution based on 11 geometrical features of the DCG PQRST (P-T) waves and an improved hierarchical clustering method for arrhythmia detection. Data sets selected from MIT-BIH are used to validate the effectiveness of this approach. Experimental results show that the arrhythmia detection procedure is fast and yields accurate clustering.

  11. Origins and features of oil slicks in the Bohai Sea detected from satellite SAR images.

    Science.gov (United States)

    Ding, Yi; Cao, Conghua; Huang, Juan; Song, Yan; Liu, Guiyan; Wu, Lingjuan; Wan, Zhenwen

    2016-05-15

    Oil slicks were detected using satellite Synthetic Aperture Radar (SAR) images in 2011. We investigated potential origins and regional and seasonal features of oil slicks in the Bohai Sea. The distance between oil slicks and potential origins (ships, seaports, and oil exploitation platforms) and the angle at which oil slicks move relative to potential driving forces were evaluated. Most oil slicks were detected along main ship routes rather than around seaports and oil exploitation platforms; few were detected within 20 km of seaports. The directions of oil slick movement were much more strongly correlated with the directions of ship routes than with those of winds and currents. These findings support the premise that oil slicks in the Bohai Sea most likely originate from illegal disposal of oil-polluted wastes from ships. Seasonal variation of oil slicks followed an annual cycle, with a peak in August and a trough in December.
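    The angle comparison between slick movement and candidate driving directions reduces to a circular bearing difference, sketched here with a hypothetical helper (not from the paper): the smaller the difference, the stronger the candidate driving force.

```python
def bearing_difference(a_deg, b_deg):
    """Smallest absolute difference between two compass bearings, in degrees.

    Hypothetical helper illustrating the angle analysis: a slick's drift
    direction is compared against the direction of a ship route, the
    wind, or the current.
    """
    d = abs(a_deg - b_deg) % 360.0
    return 360.0 - d if d > 180.0 else d
```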

  12. A two-view ultrasound CAD system for spina bifida detection using Zernike features

    Science.gov (United States)

    Konur, Umut; Gürgen, Fikret; Varol, Füsun

    2011-03-01

    In this work, we address a very specific CAD (Computer Aided Detection/Diagnosis) problem: detecting one of the relatively common birth defects, spina bifida, in the prenatal period. Fetal ultrasound images are used as the input imaging modality, which is the most convenient so far. Our approach is to decide using two particular views of the fetal neural tube: transcerebellar head (i.e. brain) and transverse (axial) spine images are processed to extract features, which are then used to classify cases as healthy (normal), suspicious (probably defective) or non-decidable. Decisions raised by the two independent classifiers may be treated individually, or, if desired and data for both views are available, combined for greater reliability. Even more reliability can be attained by using more than two views and basing the final decision on all those potential classifiers. Our current system relies on feature extraction from the images of particular patients. The first step is image preprocessing and segmentation to discard useless image pixels and represent the input in a more compact domain, which is hopefully more representative for good classification performance. Next, features are extracted using Zernike moments computed on either B/W or gray-scale image segments. The aim here is to obtain values for indicative markers that signal the presence of spina bifida; the markers differ depending on the view being used, and either shape or texture information captured by the moments may yield useful features. Finally, SVM is used to train classifiers that act as decision makers. Our experimental results show that a promising CAD system can be realized for this specific purpose. On the other hand, the performance of such a system depends highly on the quality of image preprocessing, segmentation, feature extraction and the comprehensiveness of the image data.

  13. Improving Bee Algorithm Based Feature Selection in Intrusion Detection System Using Membrane Computing

    Directory of Open Access Journals (Sweden)

    Kazeem I. Rufai

    2014-03-01

    Full Text Available Despite the great benefits accruing from the debut of the computer and the internet, efforts are constantly being made by fraudulent and mischievous individuals to compromise the integrity, confidentiality or availability of electronic information systems. In cyber-security parlance, this is termed ‘intrusion’. Hence, this has necessitated the introduction of Intrusion Detection Systems (IDS) to help detect and curb different types of attack. However, given the high volume of data traffic involved in a network system, the effects of redundant and irrelevant data should be minimized if a qualitative intrusion detection mechanism is genuinely desired. Several attempts, especially feature subset selection approaches using the Bee Algorithm (BA), Linear Genetic Programming (LGP), Support Vector Decision Function Ranking (SVDF), Rough, Rough-DPSO, and Multivariate Regression Splines (MARS), have been advanced in the past to measure the dependability and quality of a typical IDS. The observed problem among these approaches has to do with their general performance, which has motivated this research work. We propose a new, robust membrane algorithm to improve the Bee Algorithm based feature subset selection technique; the membrane computing paradigm is a class of parallel computing devices. Data used were taken from the KDD-Cup 99 dataset, the accepted standard benchmark for intrusion detection. When the final results were compared to those of the existing approaches using the three standard IDS measurements (attack detection, false alarm and classification accuracy rates), it was discovered that the Bee Algorithm-Membrane Computing (BA-MC) approach is the better technique, as it produced a very high attack detection rate of 89.11%, a classification accuracy of 95.60%, and a reasonable decrease in false alarm rate to 0.004. The Receiver Operating Characteristic (ROC) curve was used for results
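    The three standard IDS measurements quoted above can be computed from confusion counts as follows. This is a generic sketch of the formulas, not the BA-MC implementation.

```python
def ids_metrics(tp, fp, tn, fn):
    """The three standard IDS measures from confusion counts, with
    attacks as the positive class."""
    detection_rate = tp / (tp + fn)        # attack detection rate
    false_alarm_rate = fp / (fp + tn)      # normal traffic flagged as attack
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return detection_rate, false_alarm_rate, accuracy
```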

  14. A method for detecting and correcting feature misidentification on expression microarrays

    Science.gov (United States)

    Tu, I-Ping; Schaner, Marci; Diehn, Maximilian; Sikic, Branimir I; Brown, Patrick O; Botstein, David; Fero, Michael J

    2004-01-01

    Background Much of the microarray data published at Stanford is based on mouse and human arrays produced under controlled and monitored conditions at the Brown and Botstein laboratories and at the Stanford Functional Genomics Facility (SFGF). Nevertheless, as large datasets based on the Stanford Human array began to accumulate, a small but significant number of discrepancies were detected that required a serious attempt to track down the original source of error. Due to a controlled process environment, sufficient data was available to accurately track the entire process leading up to the final expression data. In this paper, we describe our statistical methods to detect the inconsistencies in microarray data that arise from process errors, and discuss our technique to locate and fix these errors. Results To date, the Brown and Botstein laboratories and the Stanford Functional Genomics Facility have together produced 40,000 large-scale (10–50,000 feature) cDNA microarrays. By applying the heuristic described here, we have been able to check most of these arrays for misidentified features, and have been able to confidently apply fixes to the data where needed. Out of the 265 million features checked in our database, problems were detected and corrected on 1.3 million of them. Conclusion Process errors in any genome scale high throughput production regime can lead to subsequent errors in data analysis. We show the value of tracking multi-step high throughput operations by using this knowledge to detect and correct misidentified data on gene expression microarrays. PMID:15357875

  15. A method for detecting and correcting feature misidentification on expression microarrays

    Directory of Open Access Journals (Sweden)

    Brown Patrick O

    2004-09-01

    Full Text Available Abstract Background Much of the microarray data published at Stanford is based on mouse and human arrays produced under controlled and monitored conditions at the Brown and Botstein laboratories and at the Stanford Functional Genomics Facility (SFGF). Nevertheless, as large datasets based on the Stanford Human array began to accumulate, a small but significant number of discrepancies were detected that required a serious attempt to track down the original source of error. Due to a controlled process environment, sufficient data was available to accurately track the entire process leading up to the final expression data. In this paper, we describe our statistical methods to detect the inconsistencies in microarray data that arise from process errors, and discuss our technique to locate and fix these errors. Results To date, the Brown and Botstein laboratories and the Stanford Functional Genomics Facility have together produced 40,000 large-scale (10–50,000 feature) cDNA microarrays. By applying the heuristic described here, we have been able to check most of these arrays for misidentified features, and have been able to confidently apply fixes to the data where needed. Out of the 265 million features checked in our database, problems were detected and corrected on 1.3 million of them. Conclusion Process errors in any genome scale high throughput production regime can lead to subsequent errors in data analysis. We show the value of tracking multi-step high throughput operations by using this knowledge to detect and correct misidentified data on gene expression microarrays.

  16. LMD based features for the automatic seizure detection of EEG signals using SVM.

    Science.gov (United States)

    Zhang, Tao; Chen, Wanzhong

    2016-09-20

    Achieving the goal of detecting seizure activity automatically using electroencephalogram (EEG) signals is of great importance and significance for the treatment of epileptic seizures. To realize this aim, a newly-developed time-frequency analytical algorithm, namely local mean decomposition (LMD), is employed in the present study. LMD is able to decompose an arbitrary signal into a series of product functions (PFs). First, the raw EEG signal is decomposed into several PFs, and then the temporal statistical and non-linear features of the first five PFs are calculated. The features of each PF are fed into five classifiers, including back propagation neural network (BPNN), K-nearest neighbor (KNN), linear discriminant analysis (LDA), un-optimized support vector machine (SVM) and SVM optimized by genetic algorithm (GA-SVM), for five classification cases, respectively. Confluent features of all PFs are further passed into the high-performance GA-SVM for the same classification tasks. Experimental results on the public international Bonn epilepsy EEG dataset show that the average classification accuracy of the presented approach is equal to or higher than 98.10% in all five cases, indicating the effectiveness of the proposed approach for automated seizure detection.
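    The temporal statistical features computed per product function might look like the following sketch. Mean, standard deviation, skewness, kurtosis and energy are common choices for such feature vectors; the paper's exact feature list may differ.

```python
import numpy as np

def pf_features(pf):
    """Temporal statistical features of one product function (PF):
    mean, standard deviation, skewness, kurtosis and energy."""
    x = np.asarray(pf, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma                   # standardized samples
    return np.array([mu, sigma, (z ** 3).mean(), (z ** 4).mean(), (x ** 2).sum()])
```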

  17. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells, so it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor has been applied with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared across various sample sizes by Support Vector Machines using the k-fold cross-validation method. The results show that separation accuracy on mitotic and non-mitotic cellular pixels improves with increasing spatial window size.

  18. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    Science.gov (United States)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    An image forming a specific texture can be distinguished manually by eye. However, this is sometimes difficult when the textures are quite similar. Wood is a natural material that forms a unique texture, and experts can distinguish the quality of wood based on the texture observed in certain parts of the wood. In this study, texture features have been extracted from wood images that can be used to identify the characteristics of wood digitally by computer. Feature extraction was carried out using Gray Level Co-occurrence Matrices (GLCM) built on images produced by several edge detection methods applied to the wood image. The edge detection methods used include Roberts, Sobel, Prewitt, Canny and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Université de Bourgogne, from wood samples in France that were grouped by quality by experts and divided into four quality types. We obtained statistics that illustrate the distribution of texture feature values for each wood type, compared according to the edge operator used and the selected GLCM parameters.
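    A minimal GLCM feature computation in the spirit of the pipeline above, using a horizontal offset only; the edge detection preprocessing and the other GLCM parameters (distance, angle, symmetry) are omitted for brevity.

```python
import numpy as np

def glcm_features(img, levels=4):
    """Normalized GLCM at offset (0, 1) plus two Haralick features
    (contrast and energy). `img` holds integer gray levels < `levels`."""
    glcm = np.zeros((levels, levels), dtype=float)
    for row in img:
        for a, b in zip(row[:-1], row[1:]):   # horizontally adjacent pixels
            glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy
```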

  19. Object detection via feature synthesis using MDL-based genetic programming.

    Science.gov (United States)

    Lin, Yingqiang; Bhanu, Bir

    2005-06-01

    In this paper, we use genetic programming (GP) to synthesize composite operators and composite features from combinations of primitive operations and primitive features for object detection. The motivation for using GP is to overcome human experts' tendency to focus only on conventional combinations of primitive image processing operations in feature synthesis; GP attempts many unconventional combinations that in some cases yield exceptionally good results. To improve the efficiency of GP and prevent its well-known code bloat problem without imposing severe restrictions on the GP search, we design a new fitness function based on the minimum description length principle that incorporates both the pixel labeling error and the size of a composite operator into the fitness evaluation. To further improve efficiency, smart crossover, smart mutation and public library ideas are incorporated to identify and keep the effective components of composite operators. Our experiments, which are performed on selected training regions of a training image to reduce training time, show that compared to normal GP, our GP algorithm finds effective composite operators more quickly, and the learned composite operators can be applied to the whole training image and to other, similar testing images. Also, compared to a traditional region-of-interest extraction algorithm, the composite operators learned by GP are more effective and efficient for object detection.
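    The MDL-based fitness amounts to penalizing labeling error plus operator size, for example as below; the weight `lam` is a placeholder, not the paper's value.

```python
def mdl_fitness(error_rate, operator_size, lam=0.01):
    """MDL-style fitness for a GP composite operator (lower is better):
    pixel labeling error plus a size penalty that discourages code bloat."""
    return error_rate + lam * operator_size
```

    A bloated operator with the same labeling error then always loses to a compact one.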

  20. Comparative Evaluation of Hyperspectral Imaging and Bathymetric Lidar for Measuring Channel Morphology Across a Range of River Environments

    Science.gov (United States)

    Legleiter, C. J.; Overstreet, B. T.; Glennie, C. L.; Pan, Z.; Fernandez-Diaz, J. C.; Singhania, A.

    2014-12-01

    Reliable topographic information is critical to many applications in the riverine sciences. Quantifying morphologic change, modeling flow and sediment transport, and assessing aquatic habitat all require accurate, spatially distributed measurements of bed elevation. Remote sensing has emerged as a powerful tool for acquiring such data, but the capabilities and limitations associated with various remote sensing techniques must be evaluated systematically. In this study, we assessed the potential of hyperspectral imaging and bathymetric LiDAR for measuring channel morphology across a range of conditions in two distinct field sites: the clear-flowing Snake River in Grand Teton National Park and the confluence of the Blue and Colorado Rivers in north-central Colorado, USA. Field measurements of water column optical properties highlighted differences among these streams, including the highly turbid Muddy Creek also entering the Colorado, and enabled theoretical calculations of bathymetric precision (smallest detectable change in depth) and dynamic range (maximum detectable depth). Hyperspectral imaging can yield more precise depth estimates in shallow, clear water but bathymetric LiDAR could provide more consistent performance across a broader range of depths. Spectrally-based depth retrieval was highly accurate on the Snake River but less reliable in the more complex confluence setting. Stratification of the Blue/Colorado site into clear and turbid subsets did not improve depth retrieval performance. To obtain bed elevations, image-derived depth estimates were subtracted from water surface elevations derived from near-infrared LiDAR acquired at the same time as the hyperspectral images. For the water-penetrating green LiDAR, bed elevations were inferred from laser waveforms. On the Snake River, hyperspectral imaging resulted in smaller mean and root mean square errors than bathymetric LiDAR, but at the Blue/Colorado site the optical approach was subject to a shallow
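    Spectrally based depth retrieval of the kind evaluated above is commonly formulated as a calibrated band-ratio relation; the sketch below assumes that form, with placeholder coefficients that would be calibrated against field depth measurements.

```python
import numpy as np

def log_ratio_depth(band1, band2, b0=0.0, b1=5.0):
    """Depth from a band-ratio relation, d = b0 + b1 * ln(R1 / R2),
    where R1 and R2 are reflectances in two spectral bands. The
    coefficients b0 and b1 are illustrative placeholders."""
    r1 = np.asarray(band1, dtype=float)
    r2 = np.asarray(band2, dtype=float)
    return b0 + b1 * np.log(r1 / r2)
```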

  1. Physics-based features for identifying contextual factors affecting landmine detection with ground-penetrating radar

    Science.gov (United States)

    Ratto, Christopher R.; Morton, Kenneth D., Jr.; Collins, Leslie M.; Torrione, Peter A.

    2011-06-01

    It has been established throughout the ground-penetrating radar (GPR) literature that environmental factors can severely impact the performance of GPR sensors in landmine detection applications. Over the years, electromagnetic inversion techniques have been proposed for determining these factors with the goal of mitigating performance losses. However, these techniques are often computationally expensive and require models and responses from canonical targets, and therefore may not be appropriate for real-time route-clearance applications. An alternative technique for mitigating performance changes due to environmental factors is context-dependent classification, in which decision rules are adjusted based on contextual shifts identified from the GPR data. However, analysis of the performance of context-dependent learning has been limited to qualitative comparisons of contextually-similar GPR signatures and quantitative improvement to the ROC curve, while the actual information extracted regarding soils has not been investigated thoroughly. In this work, physics-based features of GPR data used in previous context-dependent approaches were extracted from simulated GPR data generated through Finite-Difference Time-Domain (FDTD) modeling. Statistical techniques were then used to predict several potential contextual factors, including soil dielectric constant, surface roughness, amount of subsurface clutter, and the existence of subsurface layering, based on the features. Results suggest that physics-based features of the GPR background may contain information regarding physical properties of the environment, and context-dependent classification based on these features can exploit information regarding these potentially important environmental factors.

  2. Advanced signal processing method for ground penetrating radar feature detection and enhancement

    Science.gov (United States)

    Zhang, Yu; Venkatachalam, Anbu Selvam; Huston, Dryver; Xia, Tian

    2014-03-01

    This paper focuses on new signal processing algorithms customized for an air-coupled Ultra-Wideband (UWB) Ground Penetrating Radar (GPR) system targeting highway pavement and bridge deck inspections. The GPR hardware consists of a high-voltage pulse generator, a high-speed 8 GSps real-time data acquisition unit, and a customized field-programmable gate array (FPGA) control element. In comparison to most existing GPR systems with low survey speeds, this system can survey at normal highway speed (60 mph) with a high horizontal resolution of up to 10 scans per centimeter. Due to the complexity and uncertainty of subsurface media, GPR signal processing is important but challenging. In this GPR system, an adaptive signal processing algorithm using the Curvelet Transform, 2D high-pass filtering and exponential scaling is proposed to alleviate noise and clutter while preserving and enhancing subsurface features. First, the Curvelet Transform is used to remove environmental and systematic noise while maintaining the range resolution of the B-scan image. Then, mathematical models for cylinder-shaped objects and clutter are built; a two-dimensional (2D) filter based on these models removes clutter and enhances the hyperbola feature in a B-scan image. Finally, an exponential scaling method is applied to compensate for signal attenuation in subsurface materials and to improve the desired signal features. For performance testing and validation, rebar detection experiments and subsurface feature inspections in laboratory and field configurations are performed.
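    Two of the processing steps, background removal and attenuation compensation, can be sketched simply. Mean-trace subtraction stands in for the Curvelet/2D-filter chain of the paper, and the gain constant is a placeholder.

```python
import numpy as np

def preprocess_bscan(bscan, gain=0.01):
    """Simple B-scan cleanup: mean-trace background subtraction followed
    by exponential time-gain. Rows are time samples, columns are scan
    positions."""
    data = np.asarray(bscan, dtype=float)
    # Remove horizontal banding (direct wave, system ringing) by
    # subtracting the average trace from every scan position.
    cleaned = data - data.mean(axis=1, keepdims=True)
    # Amplify late (deep) samples to compensate subsurface attenuation.
    t = np.arange(data.shape[0])[:, None]
    return cleaned * np.exp(gain * t)
```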

  3. Pulmonary embolism detection using localized vessel-based features in dual energy CT

    Science.gov (United States)

    Dicente Cid, Yashin; Depeursinge, Adrien; Foncubierta Rodríguez, Antonio; Platon, Alexandra; Poletti, Pierre-Alexandre; Müller, Henning

    2015-03-01

    Pulmonary embolism (PE) affects up to 600,000 patients and contributes to at least 100,000 deaths every year in the United States alone. Diagnosis of PE can be difficult as most symptoms are unspecific, and early diagnosis is essential for successful treatment. Computed Tomography (CT) images can show morphological anomalies that suggest the existence of PE. Various image-based procedures have been proposed for improving computer-aided diagnosis of PE. We propose a novel method for detecting PE based on localized vessel-based features computed in Dual Energy CT (DECT) images. DECT provides 4D data indexed by the three spatial coordinates and the energy level. The proposed features encode the variation of the Hounsfield Units across the different energy levels and the CT attenuation related to the amount of iodine contrast in each vessel. A local classification of the vessels is obtained through the classification of these features. Moreover, the localization of the vessel in the lung provides better comparison between patients. Results show that the simple features designed are able to classify pulmonary embolism patients with an AUC (area under the receiver operating characteristic curve) of 0.71 on a lobe basis. Prior segmentation of the lung lobes is not necessary because an automatic atlas-based segmentation obtains similar AUC levels (0.65) for the same dataset. The automatic atlas reaches 0.80 AUC in a larger dataset with more control cases.

  4. Land Cover Change Detection Based on Genetically Feature Selection and Image Algebra Using Hyperion Hyperspectral Imagery

    Science.gov (United States)

    Seydi, S. T.; Hasanlou, M.

    2015-12-01

    The Earth has always been under the influence of population growth and human activities, a process that causes changes in land use. Thus, for optimal management of resources, it is necessary to be aware of these changes. Satellite remote sensing has several advantages for monitoring land use/cover resources, especially over large geographic areas. Change detection and attribution of cultivated area over time present additional challenges for correctly analyzing remote sensing imagery. In this regard, to better identify change in multi-temporal images, we use hyperspectral images. Owing to their high spectral resolution, hyperspectral images have found a special place in many fields. Nevertheless, selecting suitable and adequate features/bands from these data is crucial for any analysis, and especially for change detection algorithms. This research introduces automatic feature selection for detecting land use changes. In this study, we select the optimal bands from Hyperion hyperspectral images using genetic algorithms and band ratios. The results reveal the superiority of the implemented method, which extracts a change map with an overall accuracy of nearly 79% using multi-temporal hyperspectral imagery.

  5. Graph clustering for weapon discharge event detection and tracking in infrared imagery using deep features

    Science.gov (United States)

    Bhattacharjee, Sreyasee Das; Talukder, Ashit

    2017-05-01

    This paper addresses the problem of detecting and tracking weapon discharge events in an infrared imagery collection. While most prior work in related domains exploits the vast amount of complementary information available from both visible-band (EO) and infrared (IR) images (or video sequences), we handle the problem of recognizing human pose and activity detection exclusively in thermal (IR) images or videos. The task is primarily two-fold: 1) locating the individual in the scene from IR imagery, and 2) identifying the correct pose of the human individual (i.e. presence or absence of weapon discharge activity or intent). An efficient graph-based shortlisting strategy for identifying candidate regions of interest in the IR image utilizes both image saliency and mutual similarities from the initial list of the top-scored proposals of a given query frame, which ensures improved performance for both detection and recognition simultaneously and reduces false alarms. The proposed search strategy offers an efficient feature extraction scheme that can capture the maximum amount of object structural information by defining a region-based deep shape descriptor representing each object of interest present in the scene. Therefore, our solution is capable of handling the fundamental incompleteness of IR imagery, for which conventional deep features optimized on the natural color images in ImageNet are not quite suitable. Our preliminary experiments on the OSU weapon dataset demonstrate significant success in automated recognition of weapon discharge events from IR imagery.

  6. Load-differential features for automated detection of fatigue cracks using guided waves

    Science.gov (United States)

    Chen, Xin; Lee, Sang Jun; Michaels, Jennifer E.; Michaels, Thomas E.

    2012-05-01

    Guided wave structural health monitoring (SHM) is being considered to assess the integrity of plate-like structures for many applications. Prior research has investigated how guided wave propagation is affected by applied loads, which induce anisotropic changes in both dimensions and phase velocity. In addition, it is well-known that applied tensile loads open fatigue cracks and thus enhance their detectability using ultrasonic methods. Here we describe load-differential methods in which signals recorded from different loads at the same damage state are compared without using previously obtained damage-free data. Changes in delay-and-sum images are considered as a function of differential loads and damage state. Load-differential features are extracted from these images that capture the effects of loading as fatigue cracks are opened. Damage detection thresholds are adaptively set based upon the load-differential behavior of the various features, which enables implementation of an automated fatigue crack detection process. The efficacy of the proposed approach is examined using data from a fatigue test performed on an aluminum plate specimen that is instrumented with a sparse array of surface-mounted ultrasonic guided wave transducers.

  7. LAND COVER CHANGE DETECTION BASED ON GENETIC FEATURE SELECTION AND IMAGE ALGEBRA USING HYPERION HYPERSPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    S. T. Seydi

    2015-12-01

    Full Text Available The Earth has always been under the influence of population growth and human activities, processes that cause changes in land use. Thus, for optimal management of resources, it is necessary to be aware of these changes. Satellite remote sensing has several advantages for monitoring land use/cover, especially over large geographic areas. Change detection and attribution of cultivation area over time present additional challenges for correctly analyzing remote sensing imagery. In this regard, we use hyperspectral images to better identify change in multi-temporal data; owing to their high spectral resolution, hyperspectral images have found a special place in many fields. Nevertheless, selecting suitable and adequate features/bands from these data is crucial for any analysis, and especially for change detection algorithms. This research introduces an automatic feature selection approach for detecting land use changes. In this study, the optimal bands of Hyperion hyperspectral images are selected using genetic algorithms and band ratios. The results reveal the superiority of the implemented method, which extracts a change map with an overall accuracy of nearly 79% from multi-temporal hyperspectral imagery.

  8. Traffic Sign Detection and Recognition using Features Combination and Random Forests

    Directory of Open Access Journals (Sweden)

    Ayoub ELLAHYANI

    2016-01-01

    Full Text Available In this paper, we present a computer vision based system for fast and robust Traffic Sign Detection and Recognition (TSDR), consisting of three steps. The first step consists of image enhancement and thresholding using the three components of the Hue, Saturation and Value (HSV) space. Then we use the distance-to-border feature and a Random Forests classifier to detect circular, triangular and rectangular shapes in the segmented images. The last step consists of identifying the information contained in the detected traffic signs. We compare four feature descriptors: Histogram of Oriented Gradients (HOG), Gabor, Local Binary Pattern (LBP), and Local Self-Similarity (LSS), as well as their different combinations. For the classifiers, we have carried out a comparison between Random Forests and Support Vector Machines (SVMs). The best results are given by the combination of HOG with LSS together with the Random Forest classifier. The proposed method has been tested on the Swedish Traffic Signs Dataset and gives satisfactory results.
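
    As a rough illustration of the HSV thresholding step described above, the sketch below flags "red sign" candidate pixels using Python's standard colorsys module. The threshold values are assumptions chosen for illustration, not the paper's settings.

    ```python
    import colorsys

    def red_sign_mask(rgb_pixels, hue_tol=0.06, s_min=0.4, v_min=0.2):
        """Flag pixels whose HSV values fall in a nominal 'red sign' range.

        rgb_pixels: list of (r, g, b) tuples with channels in [0, 1].
        The thresholds here are illustrative, not the paper's values.
        """
        mask = []
        for r, g, b in rgb_pixels:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            # Red hue wraps around 0/1, so accept both ends of the hue circle.
            is_red_hue = h <= hue_tol or h >= 1.0 - hue_tol
            mask.append(is_red_hue and s >= s_min and v >= v_min)
        return mask

    pixels = [(0.9, 0.1, 0.1),   # saturated red
              (0.5, 0.5, 0.5),   # grey
              (0.1, 0.1, 0.9)]   # blue
    print(red_sign_mask(pixels))  # [True, False, False]
    ```

    A real system would apply this per-pixel test to a whole image and pass the resulting binary mask to the shape detection stage.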

  9. Real-Time Lane Region Detection Using a Combination of Geometrical and Image Features

    Science.gov (United States)

    Cáceres Hernández, Danilo; Kurnianggoro, Laksono; Filonenko, Alexander; Jo, Kang Hyun

    2016-01-01

    Over the past few decades, pavement markings have played a key role in intelligent vehicle applications such as guidance, navigation, and control. However, there are still serious issues facing the problem of lane marking detection. For example, problems include excessive processing time and false detection due to similarities in color and edges between traffic signs (channeling lines, stop lines, crosswalk, arrows, etc.). This paper proposes a strategy to extract the lane marking information taking into consideration its features such as color, edge, and width, as well as the vehicle speed. Firstly, defining the region of interest is a critical task to achieve real-time performance. In this sense, the region of interest is dependent on vehicle speed. Secondly, the lane markings are detected by using a hybrid color-edge feature method along with a probabilistic method, based on distance-color dependence and a hierarchical fitting model. Thirdly, the following lane marking information is extracted: the number of lane markings to both sides of the vehicle, the respective fitting model, and the centroid information of the lane. Using these parameters, the region is computed by using a road geometric model. To evaluate the proposed method, a set of consecutive frames was used in order to validate the performance. PMID:27869657

  10. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    Science.gov (United States)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially on dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm against various crime scene backgrounds: pure samples contained in petri dishes with various thicknesses, mixed samples on fabrics of different colors and materials, and mixed samples on wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood among non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined, and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of the "depth" minus the "peak" over the "depth" plus the "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
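
    The first index has the familiar normalized-difference form and can be sketched in a few lines; the reflectance values below are hypothetical stand-ins, not measured blood spectra.

    ```python
    def normalized_index(depth, peak):
        """Normalized difference of an absorption 'depth' and a reflectance 'peak'.

        Mirrors the (depth - peak) / (depth + peak) form described above; the
        input values used below are made up for illustration.
        """
        return (depth - peak) / (depth + peak)

    # Hypothetical reflectance samples at a 'depth' and a 'peak' wavelength.
    blood_like = normalized_index(depth=0.45, peak=0.15)   # 0.5
    background = normalized_index(depth=0.30, peak=0.28)   # ~0.03
    print(blood_like > background)  # True: larger contrast for the blood-like spectrum
    ```

    Thresholding such an index per pixel is one simple way to turn the selected spectral features into a detection map.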

  11. Bathymetric estimation using MERIS images in coastal sea waters

    OpenAIRE

    Minghelli Roman, Audrey; Polidori, Laurent; Mathieu-blanc, Sandrine; Loubersac, Lionel; Cauneau, François

    2007-01-01

    Bathymetric estimation using remote sensing images has previously been applied to high spatial resolution imagery such as CASI, Ikonos, or SPOT, but not to medium spatial resolution images (i.e., MERIS). The latter choice can be justified when there is a need to map the bathymetry over large areas. In this letter, we present the results of bathymetry estimation over a large known area, the Gulf of Lion (France), extending over 270 x 180 km.

  12. Bright Retinal Lesions Detection using Colour Fundus Images Containing Reflective Features

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Chaum, Edward [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK)

    2009-01-01

    In recent years the research community has developed many techniques to detect and diagnose diabetic retinopathy with retinal fundus images. This is a necessary step for the implementation of a large scale screening effort in rural areas where ophthalmologists are not available. In the United States of America, the incidence of diabetes is worryingly increasing among the young population. Retinal fundus images of patients younger than 20 years old present a high amount of reflection due to the Nerve Fibre Layer (NFL); the younger the patient, the more visible these reflections are. To our knowledge, no existing algorithms explicitly deal with this type of reflection artefact. This paper presents a technique to detect bright lesions even in patients with a high degree of reflective NFL. First, the candidate bright lesions are detected using image equalization and relatively simple histogram analysis. Then, a classifier is trained using a texture descriptor (multi-scale Local Binary Patterns) and other features in order to remove the false positives in the lesion detection. Finally, the area of the lesions is used to diagnose diabetic retinopathy. Our database consists of 33 images from a telemedicine network currently under development. When determining moderate to high diabetic retinopathy using the detected bright lesions, the algorithm achieves a sensitivity of 100% at a specificity of 100% using leave-one-out testing.
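
    The Local Binary Pattern descriptor mentioned above can be sketched at a single scale as follows; the paper uses a multi-scale variant, while this minimal version only computes basic 3x3 codes on a toy image.

    ```python
    import numpy as np

    def lbp_codes(img):
        """Basic 3x3 local binary pattern sketch: each interior pixel gets an
        8-bit code by thresholding its 8 neighbours against the centre pixel."""
        h, w = img.shape
        # Neighbour offsets in a fixed clockwise order, one bit per neighbour.
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
        centre = img[1:h - 1, 1:w - 1]
        for bit, (dy, dx) in enumerate(offs):
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            codes |= (neigh >= centre).astype(np.uint8) << bit
        return codes

    img = np.array([[5, 5, 5],
                    [5, 1, 5],
                    [5, 5, 5]], dtype=float)
    print(lbp_codes(img))  # [[255]]: every neighbour exceeds the centre
    ```

    A histogram of these codes over an image region is the texture feature typically fed to a classifier.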

  13. Enhanced retinal modeling for face recognition and facial feature point detection under complex illumination conditions

    Science.gov (United States)

    Cheng, Yong; Li, Zuoyong; Jiao, Liangbao; Lu, Hong; Cao, Xuehong

    2016-07-01

    We improved classic retinal modeling to alleviate the adverse effect of complex illumination on face recognition and extracted robust image features. Our improvements on classic retinal modeling included three aspects. First, a combined filtering scheme was applied to simulate functions of horizontal and amacrine cells for accurate local illumination estimation. Second, we developed an optimal threshold method for illumination classification. Finally, we proposed an adaptive factor acquisition model based on the arctangent function. Experimental results on the combined Yale B; the Carnegie Mellon University poses, illumination, and expression; and the Labeled Face Parts in the Wild databases show that the proposed method can effectively alleviate illumination difference of images under complex illumination conditions, which is helpful for improving the accuracy of face recognition and that of facial feature point detection.

  14. Habitat Classification of Temperate Marine Macroalgal Communities Using Bathymetric LiDAR

    Directory of Open Access Journals (Sweden)

    Richard Zavalas

    2014-03-01

    Full Text Available Here, we evaluated the potential of using bathymetric Light Detection and Ranging (LiDAR) to characterise shallow water (<30 m) benthic habitats of high energy subtidal coastal environments. Habitat classification, quantifying benthic substrata and macroalgal communities, was achieved in this study with the application of LiDAR and underwater video ground-truth data using automated classification techniques. Bathymetry and reflectance datasets were used to produce secondary terrain derivative surfaces (e.g., rugosity, aspect) that were assumed to influence the benthic patterns observed. An automated decision tree classification approach using the Quick Unbiased Efficient Statistical Tree (QUEST) was applied to produce substrata, biological and canopy structure habitat maps of the study area. Error assessment indicated that the habitat maps produced were largely accurate (>70%), with varying results for the classification of individual habitat classes; for instance, producer accuracy for mixed brown algae and sediment substrata was 74% and 93%, respectively. LiDAR was also successful at differentiating the canopy structure of macroalgae communities (i.e., canopy structure classification), such as canopy-forming kelp versus erect fine branching algae. In conclusion, habitat characterisation using bathymetric LiDAR provides a unique potential to collect baseline information about biological assemblages and, hence, potential reef connectivity over large areas beyond the range of direct observation. This research contributes a new perspective for assessing the structure of subtidal coastal ecosystems, providing a novel tool for the research and management of such highly dynamic marine environments.

  15. Global Bathymetric Prediction For Ocean Modeling and Marine Geophysics

    Science.gov (United States)

    Sandwell, David T.; Smith, Walter H. F.; Sichoix, Lydie; Frey, Herbert V. (Technical Monitor)

    2001-01-01

    We proposed to construct a complete bathymetric map of the oceans at a 3-10 km resolution by combining all of the available depth soundings collected over the past 30 years with high resolution marine gravity information provided by the Geosat, ERS-1/2, and Topex/Poseidon altimeters. Detailed bathymetry is essential for understanding physical oceanography and marine geophysics. Currents and tides are controlled by the overall shapes of the ocean basins as well as the smaller sharp ocean ridges and seamounts. Because erosion rates are low in the deep oceans, detailed bathymetry reveals the mantle convection patterns, the plate boundaries, the cooling/subsidence of the oceanic lithosphere, the oceanic plateaus, and the distribution of off-ridge volcanoes. We proposed to: (1) Accumulate all available depth soundings collected over the past 30 years; (2) Use the short wavelength (< 160 km) satellite gravity information to interpolate between sparse ship soundings; (3) Improve the resolution of the marine gravity field using enhanced estimates along repeat altimeter profiles together with the dense altimeter measurements; (4) Refine/improve bathymetric predictions using the improved resolution gravity field and also by investigating computer-intensive methods for bathymetric prediction such as inverse theory; and (5) Produce a 'Globe of the Earth' similar to the globe of Venus prepared by the NASA Magellan investigation. This will also include the best available digital land data.

  16. Application of next generation sequencing to human gene fusion detection: computational tools, features and perspectives.

    Science.gov (United States)

    Wang, Qingguo; Xia, Junfeng; Jia, Peilin; Pao, William; Zhao, Zhongming

    2013-07-01

    Gene fusions are important genomic events in human cancer because their fusion gene products can drive the development of cancer and thus are potential prognostic tools or therapeutic targets in anti-cancer treatment. Major advancements have been made in computational approaches for fusion gene discovery over the past 3 years due to improvements and widespread applications of high-throughput next generation sequencing (NGS) technologies. To identify fusions from NGS data, existing methods typically leverage the strengths of both sequencing technologies and computational strategies. In this article, we review the NGS and computational features of existing methods for fusion gene detection and suggest directions for future development.

  17. Mean shift texture surface detection based on WT and COM feature image selection

    Institute of Scientific and Technical Information of China (English)

    HAN Yan-fang; SHI Peng-fei

    2006-01-01

    Mean shift is a widely used clustering algorithm in image segmentation. However, the segmentation results are not as good as expected when dealing with textured surfaces, due to the influence of the textures. Therefore, an approach based on the wavelet transform (WT), the co-occurrence matrix (COM) and mean shift is proposed in this paper. First, WT and COM are employed to extract the optimal resolution approximation of the original image as a feature image. Then, mean shift is applied to this feature image to obtain better detection results. Finally, experiments are presented to show that this approach is effective.
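
    A minimal 1-D sketch of the mean shift step, assuming a Gaussian kernel (the paper applies mean shift to feature images rather than raw 1-D samples):

    ```python
    import numpy as np

    def mean_shift_1d(points, bandwidth=1.0, iters=50):
        """Minimal 1-D mean shift: move each point to the weighted mean of all
        data points under a Gaussian kernel until it settles at a density mode.
        Illustrative only; a real implementation would also merge nearby modes."""
        x = np.asarray(points, dtype=float).copy()
        data = np.asarray(points, dtype=float)
        for _ in range(iters):
            # Gaussian weight of every data point relative to each current position.
            w = np.exp(-0.5 * ((x[:, None] - data[None, :]) / bandwidth) ** 2)
            x = (w * data).sum(axis=1) / w.sum(axis=1)
        return x

    pts = [1.0, 1.2, 0.8, 7.9, 8.1, 8.0]
    modes = mean_shift_1d(pts, bandwidth=0.8)
    print(np.round(modes, 1))  # first three points settle near 1.0, last three near 8.0
    ```

    Points that converge to the same mode form one cluster, which is how mean shift segments a feature image.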

  18. A Widely Applicable Silver Sol for TLC Detection with Rich and Stable SERS Features

    Science.gov (United States)

    Zhu, Qingxia; Li, Hao; Lu, Feng; Chai, Yifeng; Yuan, Yongfang

    2016-04-01

    Thin-layer chromatography (TLC) coupled with surface-enhanced Raman spectroscopy (SERS) has gained tremendous popularity in the study of various complex systems. However, the detection of hydrophobic analytes is difficult, and the specificity still needs to be improved. In this study, a SERS-active non-aqueous silver sol which could activate the analytes to produce rich and stable spectral features was rapidly synthesized. Then, the optimized silver nanoparticles (AgNPs)-DMF sol was employed for TLC-SERS detection of hydrophobic (and also hydrophilic) analytes. SERS performance of this sol was superior to that of traditional Lee-Meisel AgNPs due to its high specificity, acceptable stability, and wide applicability. The non-aqueous AgNPs would be suitable for the TLC-SERS method, which shows great promise for applications in food safety assurance, environmental monitoring, medical diagnoses, and many other fields.

  19. Acoustic Longitudinal Field NIF Optic Feature Detection Map Using Time-Reversal & MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S K

    2006-02-09

    We developed an ultrasonic longitudinal field time-reversal and MUltiple SIgnal Classification (MUSIC) based detection algorithm for identifying and mapping flaws in fused silica NIF optics. The algorithm requires a fully multistatic data set, that is, one with multiple, independently operated, spatially diverse transducers, each transmitter of which, in succession, launches a pulse into the optic while the scattered signal is measured and recorded at every receiver. We have successfully localized engineered "defects" larger than 1 mm in an optic. We confirmed detection and localization of 3 mm and 5 mm features in experimental data, and of a 0.5 mm feature in simulated data with a sufficiently high signal-to-noise ratio. We present the theory, experimental results, and simulated results.
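
    The MUSIC step can be illustrated on a toy narrowband array-processing problem: eigendecompose a sample covariance, keep the noise subspace, and scan a steering vector for pseudospectrum peaks. The uniform linear array and single source below are assumptions for demonstration, not the NIF inspection configuration.

    ```python
    import numpy as np

    # Toy narrowband MUSIC on an 8-element uniform linear array, one source at 20 deg.
    rng = np.random.default_rng(0)
    n_sensors, n_snapshots, d = 8, 200, 0.5  # d = element spacing in wavelengths
    true_deg = 20.0

    def steering(deg):
        k = 2 * np.pi * d * np.sin(np.deg2rad(deg))
        return np.exp(1j * k * np.arange(n_sensors))

    s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
    noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
                   + 1j * rng.standard_normal((n_sensors, n_snapshots)))
    X = np.outer(steering(true_deg), s) + noise

    R = X @ X.conj().T / n_snapshots       # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
    En = eigvecs[:, :-1]                   # noise subspace (one source assumed)

    angles = np.arange(-90, 90.5, 0.5)
    spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
    print(angles[int(np.argmax(spectrum))])  # peak lands near 20 degrees
    ```

    In the paper's setting, the steering model describes propagation from candidate flaw locations to the transducer array instead of plane-wave arrival angles, but the subspace machinery is the same.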

  20. Feature-based fusion of infrared and visible dynamic images using target detection

    Institute of Scientific and Technical Information of China (English)

    Congyi Liu; Zhongliang Jing; Gang Xiao; Bo Yang

    2007-01-01

    We employ target detection to improve the performance of feature-based fusion of infrared and visible dynamic images, forming a novel fusion scheme. First, target detection is used to segment the source image sequences into target and background regions. Then, the dual-tree complex wavelet transform (DT-CWT) is used to decompose all the source image sequences. Different fusion rules are applied in the target and background regions, respectively, to preserve as much target information as possible. Real-world infrared and visible image sequences are used to validate the performance of the proposed scheme. Compared with previous fusion approaches for image sequences, improvements in shift invariance, temporal stability and consistency, and computational cost are all achieved.

  1. On the use of feature selection to improve the detection of sea oil spills in SAR images

    Science.gov (United States)

    Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo

    2017-03-01

    Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. Feature extraction is a critical and computationally intensive phase in which each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved to be very useful in real domains for enhancing the generalization capabilities of classifiers while discarding irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve oil spill detection systems. We have compared five FS methods: Correlation-based Feature Selection (CFS), Consistency-based filter, Information Gain, ReliefF and Recursive Feature Elimination for Support Vector Machines (SVM-RFE). They were applied to a 141-input vector composed of features from a collection of outstanding studies. The selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06%, respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. This finding allows the feature extraction phase to be sped up without reducing classifier accuracy.
Experiments also confirmed the significance of the geometrical features since 75.0% of the different features selected by the applied FS methods as well as 66.67% of the proposed 6-input feature vector belong to
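
    The recursive feature elimination idea behind SVM-RFE can be sketched as follows, with an ordinary least-squares fit standing in for the SVM weight vector and synthetic data standing in for the 141-feature vectors:

    ```python
    import numpy as np

    def rfe_rank(X, y, n_keep):
        """Recursive feature elimination sketch: repeatedly fit a linear model
        and drop the feature with the smallest absolute weight. A least-squares
        fit stands in here for the SVM used in SVM-RFE."""
        remaining = list(range(X.shape[1]))
        while len(remaining) > n_keep:
            w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
            remaining.pop(int(np.argmin(np.abs(w))))
        return remaining

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 6))
    y = 3.0 * X[:, 2] + 0.5 * X[:, 4] + 0.1 * rng.standard_normal(200)
    print(rfe_rank(X, y, n_keep=2))  # keeps the two informative features, 2 and 4
    ```

    Each elimination round re-fits the model, so a feature that looks weak only in the presence of a stronger correlate can still survive later rounds.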

  2. Application of Geologic Mapping Techniques and Autonomous Feature Detection to Future Exploration of Europa

    Science.gov (United States)

    Bunte, M. K.; Tanaka, K. L.; Doggett, T.; Figueredo, P. H.; Lin, Y.; Greeley, R.; Saripalli, S.; Bell, J. F.

    2013-12-01

    Europa's extremely young surface age, evidence for extensive resurfacing, and indications of a sub-surface ocean elevate its astrobiological potential for habitable environments and make it a compelling focus for study. Knowledge of the global distribution and timing of Europan geologic units is a key step in understanding the history of the satellite and for identifying areas relevant for exploration. I have produced a 1:15M scale global geologic map of Europa which represents a proportionate distribution of four unit types and associated features: plains, linea, chaos, and crater materials. Mapping techniques differ somewhat from other planetary maps but do provide a method to establish stratigraphic markers and to illustrate the surface history through four periods of formation as a function of framework lineament cross-cutting relationships. Correlations of observed features on Europa with Earth analogs enforce a multi-process theory for formation rather than the typical reliance on the principle of parsimony. Lenticulae and microchaos are genetically similar and most likely form by diapirism. Platy and blocky chaos units, endmembers of archetypical chaos, are best explained by brine mobilization. Ridges account for the majority of lineaments and may form by a number of methods indicative of local conditions; most form by either tidal pumping or shear heating. The variety of morphologies exhibited by bands indicates that multiple formation mechanisms apply once fracturing of the brittle surface over a ductile subsurface is initiated. Mapping results support the interpretation that Europa's shell has thickened over time resulting in changes in the style and intensity of deformation. Mapping serves as an index for change detection and classification, aids in pre-encounter targeting, and supports the selection of potential landing sites. 
Highest priority target areas are those which indicate geophysical activity by the presence of volcanic plumes, outgassing, or

  3. Road detection in arid environments using uniformly distributed random based features

    Science.gov (United States)

    Plodpradista, P.; Keller, J. M.; Popescu, M.

    2016-05-01

    The capability of detecting an unpaved road in arid environments can greatly enhance an explosive hazard detection system. One approach is to segment out the off-road area and the area above the horizon, which are considered irrelevant for the task at hand. Segmenting out irrelevant areas, such as the region above the horizon, allows the explosive hazard detection system to process a smaller region of a scene, enabling a more computationally complex approach. In this paper, we propose a novel approach for speeding up the detection algorithms based on random projection and random selection. Both methods have a low computational cost and reduce the dimensionality of the data while approximately preserving, with a certain probability, the pairwise point distances. Dimensionality reduction allows any classifier employed in our proposed algorithm to consume fewer computational resources. Furthermore, by applying the random projections directly to image intensity patches, no feature extraction is needed. The data used in our proposed algorithms were obtained from sensors on board a U.S. Army countermine vehicle. We tested the proposed algorithms on data obtained from several runs on an arid climate road. In our experiments, we compare our algorithms based on random projection and random selection to Principal Component Analysis (PCA), a popular dimensionality reduction method.
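
    The random projection idea can be sketched in a few lines: a Gaussian matrix maps high-dimensional intensity patches to a low-dimensional space while roughly preserving pairwise distances (the Johnson-Lindenstrauss property). The dimensions and data below are illustrative, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    patches = rng.standard_normal((50, 1024))          # flattened intensity patches
    P = rng.standard_normal((1024, 32)) / np.sqrt(32)  # scaled Gaussian projection
    low = patches @ P                                  # 1024-D -> 32-D

    d_hi = np.linalg.norm(patches[0] - patches[1])
    d_lo = np.linalg.norm(low[0] - low[1])
    print(round(d_lo / d_hi, 2))  # ratio close to 1: distance roughly preserved
    ```

    Random selection is even cheaper: instead of multiplying by P, simply keep a random subset of the patch coordinates.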

  4. [Spectral features analysis of Pinus massoniana with pest of Dendrolimus punctatus Walker and levels detection].

    Science.gov (United States)

    Xu, Zhang-Hua; Liu, Jian; Yu, Kun-Yong; Gong, Cong-Hong; Xie, Wan-Jun; Tang, Meng-Ya; Lai, Ri-Wen; Li, Zeng-Lu

    2013-02-01

    Taking 51 field-measured hyperspectral data sets with different pest levels in Yanping, Fujian Province as objects, the spectral reflectance and first-derivative features of 4 levels (healthy, mild, moderate and severe insect pest) were analyzed. On the basis of the construction of 7 detection parameters, pest level detection models were built. The results showed that (1) the spectral reflectance of Pinus massoniana with pests was significantly lower than that in the healthy state, and the higher the pest level, the lower the reflectance; (2) with the increase in pest level, the "green peak" and "red valley" of the spectral reflectance curves of Pinus massoniana gradually disappeared, and the red edge was leveled; (3) the pest led to a "green peak" red shift and a red edge position blue shift, but the changes in the "red valley" and near-infrared positions were complicated; (4) CARI, RES, REA and REDVI were highly correlated with pest levels, while the correlations between REP, RERVI, RENDVI and pest level were weak; (5) the multiple linear regression model with the 7 detection parameters as variables could effectively detect the pest levels of Dendrolimus punctatus Walker, with both the estimation rate and accuracy above 0.85.
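
    The multiple linear regression step can be sketched with an ordinary least-squares fit; the synthetic inputs below merely stand in for the seven measured spectral parameters (CARI, RES, REA, REDVI, etc.) and the pest level.

    ```python
    import numpy as np

    # Synthetic stand-in: 51 samples with 7 detection parameters each.
    rng = np.random.default_rng(7)
    params = rng.standard_normal((51, 7))
    true_w = np.array([0.8, 0.0, 1.5, 0.0, 0.0, -0.6, 0.2])
    level = params @ true_w + 0.05 * rng.standard_normal(51)

    A = np.column_stack([np.ones(51), params])   # add an intercept column
    coef, *_ = np.linalg.lstsq(A, level, rcond=None)
    pred = A @ coef
    r2 = 1 - ((level - pred) ** 2).sum() / ((level - level.mean()) ** 2).sum()
    print(r2 > 0.85)  # True: the fit recovers the synthetic relationship
    ```

    In the paper's setting, `level` would be the observed pest level and the columns of `params` the seven measured parameters.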

  5. Joint spatial-spectral feature space clustering for speech activity detection from ECoG signals.

    Science.gov (United States)

    Kanas, Vasileios G; Mporas, Iosif; Benz, Heather L; Sgarbas, Kyriakos N; Bezerianos, Anastasios; Crone, Nathan E

    2014-04-01

    Brain-machine interfaces for speech restoration have been extensively studied for more than two decades. The success of such a system will depend in part on selecting the best brain recording sites and signal features corresponding to speech production. The purpose of this study was to detect speech activity automatically from electrocorticographic signals based on joint spatial-frequency clustering of the ECoG feature space. For this study, the ECoG signals were recorded while a subject performed two different syllable repetition tasks. We found that the optimal frequency resolution to detect speech activity from ECoG signals was 8 Hz, achieving 98.8% accuracy by employing support vector machines as a classifier. We also defined the cortical areas that held the most information about the discrimination of speech and nonspeech time intervals. Additionally, the results shed light on the distinct cortical areas associated with the two syllables repetition tasks and may contribute to the development of portable ECoG-based communication.

  6. Spinal focal lesion detection in multiple myeloma using multimodal image features

    Science.gov (United States)

    Fränzle, Andrea; Hillengass, Jens; Bendl, Rolf

    2015-03-01

    Multiple myeloma is a tumor disease of the bone marrow that affects the skeleton systemically, i.e. multiple lesions can occur at different sites in the skeleton. To quantify overall tumor mass for determining the degree of disease and for analysis of therapy response, volumetry of all lesions is needed. Since the large number of lesions in one patient impedes manual segmentation of all lesions, quantification of overall tumor volume has not been possible until now. Therefore, the development of automatic lesion detection and segmentation methods is necessary. Since focal tumors in multiple myeloma show different characteristics in different modalities (changes in bone structure in CT images, hypointensity in T1-weighted MR images and hyperintensity in T2-weighted MR images), multimodal image analysis is necessary for the detection of focal tumors. In this paper, a pattern recognition approach is presented that identifies focal lesions in lumbar vertebrae based on features from T1- and T2-weighted MR images. Image voxels within bone are classified using random forests based on plain intensities and intensity-derived features (maximum, minimum, mean, median) in a 5 x 5 neighborhood around each voxel in both T1- and T2-weighted MR images. A test data sample of lesions in 8 lumbar vertebrae from 4 multiple myeloma patients can be classified at an accuracy of 95% (using a leave-one-patient-out test). The approach provides a reasonable delineation of the example lesions. This is an important step towards automatic tumor volume quantification in multiple myeloma.
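
    The per-voxel neighbourhood features (maximum, minimum, mean, median in a 5 x 5 window) can be sketched as follows, on a toy image in place of an MR slice:

    ```python
    import numpy as np

    def patch_features(img, y, x, r=2):
        """Per-voxel feature sketch: intensity statistics in a (2r+1) x (2r+1)
        neighbourhood, i.e. the 5x5 windows described above for r=2."""
        patch = img[y - r:y + r + 1, x - r:x + r + 1]
        return [float(patch.max()), float(patch.min()),
                float(patch.mean()), float(np.median(patch))]

    img = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 'slice'
    print(patch_features(img, 2, 2))  # [24.0, 0.0, 12.0, 12.0]
    ```

    Stacking these statistics from both the T1- and T2-weighted images for each voxel yields the feature vector fed to the random forest.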

  7. A feature matching and fusion-based positive obstacle detection algorithm for field autonomous land vehicles

    Directory of Open Access Journals (Sweden)

    Tao Wu

    2017-03-01

    Full Text Available Positive obstacles can damage field robots traveling in the field, and the field autonomous land vehicle is a typical field robot. This article presents a feature matching and fusion-based algorithm to detect obstacles using LiDARs for field autonomous land vehicles. There are three main contributions: (1) A novel setup method for a compact LiDAR is introduced. This method improves the LiDAR data density and reduces the blind region of the LiDAR sensor. (2) A mathematical model is deduced for this new setup method, and the ideal scan line is generated using the deduced model. (3) Based on the proposed mathematical model, a feature matching and fusion (FMAF) based algorithm is presented, which is employed to detect obstacles. Experimental results show that the performance of the proposed algorithm is robust and stable, and that its computing time is reduced by two orders of magnitude compared with other existing algorithms. The algorithm has been successfully applied to our autonomous land vehicle, which won the championship in the Chinese "Overcome Danger 2014" ground unmanned vehicle challenge.

  8. Context-dependent feature selection using unsupervised contexts applied to GPR-based landmine detection

    Science.gov (United States)

    Ratto, Christopher R.; Torrione, Peter A.; Collins, Leslie M.

    2010-04-01

    Context-dependent classification techniques applied to landmine detection with ground-penetrating radar (GPR) have demonstrated substantial performance improvements over conventional classification algorithms. Context-dependent algorithms compute a decision statistic by integrating over uncertainty in the unknown, but probabilistically inferable, context of the observation. When applied to GPR, contexts may be defined by differences in the electromagnetic properties of the subsurface environment, which are due to discrepancies in soil composition, moisture levels, and surface texture. Context-Dependent Feature Selection (CDFS) is a technique developed for selecting a unique subset of features for classifying landmines from clutter in different environmental contexts. In past work, context definitions were assumed to be soil moisture conditions which were known during training. However, knowledge of environmental conditions can be difficult to obtain in the field. In this paper, we utilize an unsupervised learning algorithm for defining contexts which are unknown a priori. Our method performs unsupervised context identification based on similarities in physics-based and statistical features that characterize the subsurface environment in the raw GPR data. Results indicate that utilizing this contextual information improves classification performance relative to non-context-dependent approaches. On-line context identification is suggested as a possible avenue for future work.
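    The two-stage idea, unsupervised context identification followed by per-context feature selection, might be sketched as below. The Gaussian-mixture clustering and ANOVA-based selector are stand-ins chosen for illustration, not the CDFS specifics, and all data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
# synthetic "environment" descriptors (stand-ins for subsurface statistics)
env = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(5, 1, (100, 4))])
# synthetic per-observation target/clutter features and labels
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, 200)

# step 1: discover contexts without supervision
contexts = GaussianMixture(n_components=2, random_state=1).fit_predict(env)

# step 2: select a separate feature subset within each discovered context
per_context_features = {}
for c in np.unique(contexts):
    idx = contexts == c
    selector = SelectKBest(f_classif, k=3).fit(X[idx], y[idx])
    per_context_features[c] = np.flatnonzero(selector.get_support())
```

    At test time, an observation would first be assigned a context (or a posterior over contexts) and then classified with that context's feature subset.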

  9. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce their weight while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected offline using an X-ray technique that is time consuming and very expensive. In addition, X-ray inspection can misinterpret multiple defects as single defects. To address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA), and their classification performance is compared. This investigation establishes experimentally that PCA can be used effectively as a feature selection method that provides superior results for classifying various defects, in comparison with the X-ray technique.
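    The subband-feature idea can be illustrated in miniature: compute the energy in a few frequency bands of a test signal and project the resulting feature vectors with PCA. The band partition, signal frequencies, and FFT-based decomposition below are illustrative assumptions (the paper uses wavelet subband coding).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
fs = 1000

def subband_energies(sig, n_bands=4):
    """Energy in n_bands equal slices of the power spectrum."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    return [band.sum() for band in np.array_split(spec, n_bands)]

# synthetic "clean" vs "defect" echoes differing in frequency content
signals = [np.sin(2 * np.pi * f * np.arange(256) / fs)
           + 0.1 * rng.normal(size=256)
           for f in [50] * 10 + [200] * 10]
X = np.array([subband_energies(s) for s in signals])
X_reduced = PCA(n_components=2).fit_transform(X)   # feature selection stage
```

    The reduced vectors would then feed a supervised ANN or an unsupervised clustering classifier, as compared in the paper.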

  10. Detection of microsleep events in a car driving simulation study using electrocardiographic features

    Directory of Open Access Journals (Sweden)

    Lenis Gustavo

    2016-09-01

    Full Text Available Microsleep events (MSE) are short intrusions of sleep under the demand of sustained attention. They pose a major threat to safety while driving a car and are considered one of the most significant causes of traffic accidents. Driver fatigue and MSE account for up to 20% of all car crashes in Europe and at least 100,000 accidents in the US every year. Unfortunately, no standardized test has been developed to quantify the degree of vigilance of a driver. To address this problem, different approaches based on biosignal analysis have been studied in the past. In this paper, we investigate an electrocardiographic-based detection of MSE using morphological and rhythmical features. 14 records from a car driving simulation study with a high incidence of MSE were analyzed, and the behavior of the ECG features before and after an MSE in relation to reference baseline values (without drowsiness) was investigated. The results show that MSE cannot be detected (or predicted) using only the ECG. However, in the presence of MSE, the rhythmical and morphological features were significantly different from those calculated for the reference signal without sleepiness. In particular, when MSE were present, the heart rate diminished while the heart rate variability increased. The time intervals between the P wave and the R peak, and between the R peak and the T wave, as well as their dispersion, also increased. This demonstrates a noticeable change in the autonomic regulation of the heart. In the future, these ECG parameters could be used as a surrogate measure of fatigue.
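    The rhythmical features discussed above can be illustrated with a minimal sketch. The RR-interval values are synthetic, and the feature definitions (mean heart rate, RMSSD) are common HRV conventions rather than the paper's exact parameter set.

```python
import numpy as np

def heart_rate(rr_s):
    """Mean heart rate in beats per minute from RR intervals in seconds."""
    return 60.0 / np.mean(rr_s)

def rmssd(rr_s):
    """Root mean square of successive RR differences in ms, a standard HRV index."""
    return np.sqrt(np.mean(np.diff(rr_s * 1000.0) ** 2))

alert = np.array([0.80, 0.82, 0.79, 0.81, 0.80])    # ~75 bpm, low variability
drowsy = np.array([0.95, 1.05, 0.90, 1.10, 0.98])   # slower, more variable

hr_drop = heart_rate(alert) - heart_rate(drowsy)    # heart rate diminishes
hrv_rise = rmssd(drowsy) - rmssd(alert)             # variability increases
```

    Both quantities are positive for these illustrative intervals, mirroring the direction of change the paper reports around microsleep events.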

  11. Detection of impact crater in 3D mesh by extraction of feature lines

    Science.gov (United States)

    Jorda, L.; Mari, J.-L.; Viseur, S.; Bouley, S.

    2013-09-01

    Impact craters are observed at the surface of most solar system bodies: terrestrial planets, satellites and asteroids. The measurement of their size-frequency distribution (SFD) is the only method available to estimate the age of the observed geological units, assuming a rate and velocity distribution of impactors and a crater scaling law. The age of the geological units is fundamental to establish a chronology of events explaining the global evolution of the surface. In addition, the detailed characterization of crater properties (depth-to-diameter ratio and radial profile) yields a better understanding of the geological processes which altered the observed surfaces. Crater detection is usually performed manually, directly from the acquired images. However, this method can become prohibitive when dealing with small craters extracted from very large data sets. A large number of solar system objects have been mapped at very high spatial resolution by space probes over the past few decades, emphasizing the need for new automatic methods of crater detection. Powerful computers are now available to produce and analyze huge 3D models of the surface in the form of 3D meshes containing tens to hundreds of billions of facets. This motivates the development of a new family of automatic crater detection algorithms (CDAs). The automatic CDAs developed so far were mainly based on morphological analyses and pattern recognition techniques on 2D images (e.g., Bandeira et al., 2012). In recent years, new CDAs based on 3D models have been developed (see, e.g., Salamuniccar and Loncaric, 2010). Our objective is to develop, and test against existing methods, an automatic CDA using a new approach based on the discrete differential properties of 3D meshes. The method (Kudelski et al., 2010, 2011a,b) produces the feature lines (the crest and the ravine lines) lying on the surface. It is based on a two-step algorithm: first, the regions of interest are flagged according to curvature

  12. Improved Feature Extraction, Feature Selection, and Identification Techniques That Create a Fast Unsupervised Hyperspectral Target Detection Algorithm

    Science.gov (United States)

    2008-03-01

    According to Stein, Beaven, Hoff, Winter, Schaum, and Stocker (2002:62), the local Gaussian model may not be valid for hyperspectral data if relatively...David W.J., Scott G. Beaven, Lawrence E. Hoff, Edwin M. Winter, Alan P. Schaum and Alan D. Stocker. “Anomaly Detection for Hyperspectral Imagery

  13. Depth-based human fall detection via shape features and improved extreme learning machine.

    Science.gov (United States)

    Ma, Xin; Wang, Haibo; Xue, Bingxia; Zhou, Mingang; Ji, Bing; Li, Yibin

    2014-11-01

    Falls are one of the major causes of injury to elderly people. Using wearable devices for fall detection has a high cost and may cause inconvenience in the daily lives of the elderly. In this paper, we present an automated fall detection approach that requires only a low-cost depth camera. Our approach combines two computer vision techniques: shape-based fall characterization and a learning-based classifier to distinguish falls from other daily actions. Given a fall video clip, we extract curvature scale space (CSS) features of human silhouettes at each frame and represent the action by a bag of CSS words (BoCSS). Then, we utilize the extreme learning machine (ELM) classifier to identify the BoCSS representation of a fall from those of other actions. In order to eliminate the sensitivity of ELM to its hyperparameters, we present a variable-length particle swarm optimization algorithm to optimize the number of hidden neurons and the corresponding input weights and biases of ELM. Using a low-cost Kinect depth camera, we build an action dataset that consists of six types of actions (falling, bending, sitting, squatting, walking, and lying) from ten subjects. Experiments with the dataset show that our approach can achieve up to 91.15% sensitivity, 77.14% specificity, and 86.83% accuracy. On a public dataset, our approach performs comparably to state-of-the-art fall detection methods that need multiple cameras.
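    The ELM classifier at the core of this approach admits a compact sketch: a random hidden layer followed by closed-form least-squares output weights. The toy data below are not CSS features, and the paper's variable-length PSO tuning step is omitted.

```python
import numpy as np

class TinyELM:
    """Minimal extreme learning machine for binary classification."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # hidden weights and biases are random and never trained
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # output weights come from a single least-squares solve
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta > 0.5).astype(int)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (40, 2)), rng.normal(1, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
acc = (TinyELM().fit(X, y).predict(X) == y).mean()
```

    The sensitivity to the random hidden layer is exactly what motivates the paper's hyperparameter optimization of hidden-neuron count, input weights, and biases.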

  14. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    Directory of Open Access Journals (Sweden)

    P. Amudha

    2015-01-01

    Full Text Available Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup’99 intrusion detection benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with that of other machine learning algorithms and found to be significantly different.

  15. Boolean map saliency combined with motion feature used for dim and small target detection in infrared video sequences

    Science.gov (United States)

    Wang, Xiaoyang; Peng, Zhenming; Zhang, Ping

    2016-10-01

    Infrared dim and small target detection plays an important role in infrared search and tracking systems. In this paper, a novel infrared dim and small target detection method based on Boolean map saliency and a motion feature is proposed. Infrared targets are the most salient parts of the image, with high gray level and a continuous moving trajectory. Exploiting this property, we build a feature space containing a gray level feature and a motion feature. The gray level feature is the intensity of the input images, while the motion feature is obtained from motion charge in consecutive frames. In the second step, the Boolean map saliency approach is applied to the gray level feature and the motion feature to obtain the gray saliency map and the motion saliency map. In the third step, the two saliency maps are combined to produce the final result. Numerical experiments have verified the effectiveness of the proposed method: it not only produces accurate detections but also yields fewer false alarms, making it suitable for practical use.
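    A simplified version of the Boolean map saliency step can be sketched by thresholding a frame at several levels and averaging the binary maps, so that a small bright target accumulates high saliency. The motion channel and the fusion step of the paper are omitted here, and the frame is synthetic.

```python
import numpy as np

def boolean_map_saliency(img, n_levels=8):
    """Average of Boolean maps obtained at evenly spaced thresholds."""
    thresholds = np.linspace(img.min(), img.max(), n_levels + 2)[1:-1]
    maps = [(img > t).astype(float) for t in thresholds]
    return np.mean(maps, axis=0)

frame = np.full((16, 16), 0.2)
frame[7, 7] = 1.0                      # dim point target on a flat background
sal = boolean_map_saliency(frame)
target_is_most_salient = sal.argmax() == 7 * 16 + 7
```

    The target pixel exceeds every threshold while the background exceeds none, so it dominates the averaged map; the paper applies the same machinery to a motion channel and fuses the two saliency maps.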

  16. Evidence for bathymetric control on the distribution of body wave microseism sources from temporary seismic arrays in Africa

    Science.gov (United States)

    Euler, Garrett G.; Wiens, Douglas A.; Nyblade, Andrew A.

    2014-06-01

    Microseisms are the background seismic vibrations mostly driven by the interaction of ocean waves with the solid Earth. Locating the sources of microseisms improves our understanding of the range of conditions under which they are generated and has potential applications to seismic tomography and climate research. In this study, we detect persistent source locations of P-wave microseisms at periods of 5-10 s (0.1-0.2 Hz) using broad-band array noise correlation techniques and frequency-slowness analysis. Data include vertical component records from four temporary seismic arrays in equatorial and southern Africa with a total of 163 broad-band stations and deployed over a span of 13 yr (1994-2007). While none of the arrays were deployed contemporaneously, we find that the recorded microseismic P waves originate from common, distant oceanic bathymetric features with amplitudes that vary seasonally in proportion with extratropical cyclone activity. Our results show that the majority of the persistent microseismic P-wave source locations are within the 30-60° latitude belts of the Northern and Southern hemispheres while a substantially reduced number are found at lower latitudes. Variations in source location with frequency are also observed and indicate tomographic studies including microseismic body wave sources will benefit from analysing multiple frequency bands. We show that the distribution of these source regions in the North Atlantic as well as in the Southern Ocean correlate with variations in bathymetry and ocean wave heights and corroborate current theory on double-frequency microseism generation. The stability of the source locations over the 13-yr time span of our investigation suggests that the long-term body wave microseism source distribution is governed by variations in the bathymetry and ocean wave heights while the interaction of ocean waves has a less apparent influence.

  17. Hyperspectral Feature Detection Onboard the Earth Observing One Spacecraft using Superpixel Segmentation and Endmember Extraction

    Science.gov (United States)

    Thompson, David R.; Bornstein, Benjamin; Bue, Brian D.; Tran, Daniel Q.; Chien, Steve A.; Castano, Rebecca

    2012-01-01

    We present a demonstration of onboard hyperspectral image processing with the potential to reduce mission downlink requirements. The system detects spectral endmembers and then uses them to map units of surface material. This summarizes the content of the scene, reveals spectral anomalies warranting fast response, and reduces data volume by two orders of magnitude. We have integrated this system into the Autonomous Sciencecraft Experiment for operational use onboard the Earth Observing One (EO-1) spacecraft. The system does not require prior knowledge about spectra of interest. We report on a series of trial overflights in which identical spacecraft commands are effective for autonomous spectral discovery and mapping for varied target features, scenes and imaging conditions.

  18. Detection of Sharp Symmetric Features in the Circumbinary Disk Around AK Sco

    CERN Document Server

    Janson, Markus; Boccaletti, Anthony; Maire, Anne-Lise; Zurlo, Alice; Marzari, Francesco; Meyer, Michael R.; Carson, Joseph C.; Augereau, Jean-Charles; Garufi, Antonio; Henning, Thomas; Desidera, Silvano; Asensio-Torres, Ruben; Pohl, Adriana

    2015-01-01

    The Search for Planets Orbiting Two Stars (SPOTS) survey aims to study the formation and distribution of planets in binary systems by detecting and characterizing circumbinary planets and their formation environments through direct imaging. With the SPHERE Extreme Adaptive Optics instrument, a good contrast can be achieved even at small (<300 mas) separations from bright stars, which enables studies of planets and disks in a separation range that was previously inaccessible. Here, we report the discovery of resolved scattered light emission from the circumbinary disk around the well-studied young double star AK Sco, at projected separations in the ~13--40 AU range. The sharp morphology of the imaged feature is surprising, given the smooth appearance of the disk in its spectral energy distribution. We show that the observed morphology can be represented either as a highly eccentric ring around AK Sco, or as two separate spiral arms in the disk, wound in opposite directions. The relative merits of these inte...

  19. Nonlinear features identified by Volterra series for damage detection in a buckled beam

    Directory of Open Access Journals (Sweden)

    Shiki S. B.

    2014-01-01

    Full Text Available The present paper proposes a new index for damage detection based on nonlinear features extracted from prediction errors computed by multiple convolutions using the discrete-time Volterra series. A reference Volterra model is identified with data from the healthy condition and used for monitoring the system operating with linear or nonlinear behavior. When the system undergoes a structural change, possibly associated with damage, the computed index can raise an alert, separate the linear and nonlinear contributions, and provide a diagnosis of the structural state. To show the applicability of the method, an experimental test is performed using nonlinear vibration signals measured in a clamped buckled beam subjected to different levels of applied force, with damage simulated through discontinuities inserted in the beam surface.

  20. AN ANN APPROACH FOR NETWORK INTRUSION DETECTION USING ENTROPY BASED FEATURE SELECTION

    Directory of Open Access Journals (Sweden)

    Ashalata Panigrahi

    2015-06-01

    Full Text Available With the increase in Internet users, the number of malicious users is also growing day by day, posing a serious problem in distinguishing between the normal and abnormal behavior of users in the network. This has led to the research area of intrusion detection, which essentially analyzes the network traffic and tries to determine normal and abnormal patterns of behavior. In this paper, we have analyzed the standard NSL-KDD intrusion dataset using neural-network-based techniques for predicting possible intrusions. Four effective classification methods, namely Radial Basis Function Network, Self-Organizing Map, Sequential Minimal Optimization, and Projective Adaptive Resonance Theory, have been applied. In order to enhance the performance of the classifiers, three entropy-based feature selection methods have been applied as preprocessing of the data. The performances of different combinations of classifiers and attribute reduction methods have also been compared.
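    One common entropy-based selection criterion of the kind used for such preprocessing is information gain. The sketch below assumes discretized features and synthetic labels, and is not tied to the paper's three specific methods.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """Reduction in label entropy from conditioning on a discrete feature."""
    gain = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
informative = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # mirrors the label
noisy = np.array([0, 1, 0, 1, 0, 1, 0, 1])         # independent of the label
gain_informative = information_gain(informative, y)
gain_noisy = information_gain(noisy, y)
```

    Ranking features by this score and keeping the top-scoring subset is the kind of attribute reduction the paper applies before training its classifiers.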

  1. Pulmonary Nodule Detection Model Based on SVM and CT Image Feature-Level Fusion with Rough Sets

    Science.gov (United States)

    Lu, Huiling; Zhang, Junjie; Shi, Hongbin

    2016-01-01

    In order to improve the detection accuracy of pulmonary nodules in CT images, and to address two problems of pulmonary nodule detection models, namely unreasonable feature structure and loose feature representation, a pulmonary nodule detection algorithm is proposed based on SVM and CT image feature-level fusion with rough sets. Firstly, CT images of pulmonary nodules are analyzed, and 42-dimensional feature components are extracted, including six new 3-dimensional features proposed in this paper as well as other 2-dimensional and 3-dimensional features. Secondly, these features are reduced five times with rough sets based on feature-level fusion. Thirdly, a grid optimization model is used to optimize the kernel function of the support vector machine (SVM), which is used as a classifier to identify pulmonary nodules. Finally, lung CT images of 70 patients with pulmonary nodules are collected as the original samples, which are used to verify the effectiveness and stability of the proposed model in four groups of comparative experiments. The experimental results show that the effectiveness and stability of the proposed model based on rough set feature-level fusion are improved to some degree.
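    The grid optimization step for the SVM kernel can be sketched with a standard cross-validated grid search; scikit-learn stands in for the paper's implementation, and the data are synthetic rather than CT-derived features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# synthetic stand-in for the reduced nodule feature vectors
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# exhaustive search over an RBF-kernel parameter grid, scored by 5-fold CV
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                    cv=5).fit(X, y)
best_params = grid.best_params_
cv_score = grid.best_score_
```

    The grid values here are illustrative; in practice the grid would cover the ranges the paper tuned over, after the rough-set reduction stage.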

  2. A MapReduce scheme for image feature extraction and its application to man-made object detection

    Science.gov (United States)

    Cai, Fei; Chen, Honghui

    2013-07-01

    A fundamental challenge in image engineering is how to locate objects of interest in high-resolution images with efficient detection performance. Several man-made object detection approaches have been proposed, but most of these methods are not truly time-saving and suffer from low detection precision. To address this issue, we propose a novel approach for man-made object detection in aerial images that uses a MapReduce scheme for large-scale image analysis to support image feature extraction, which can be widely applied to compute-intensive tasks in a highly parallel way, together with texture feature extraction and clustering. Comprehensive experiments show that the parallel framework saves considerable time in feature extraction while achieving satisfactory object detection performance.
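    The map/reduce pattern for feature extraction can be sketched in miniature: a "map" step computes a texture-like statistic (here, local variance) per image tile in parallel, and a "reduce" step aggregates the results. Thread-based parallelism stands in for the paper's cluster-scale MapReduce, and the statistic is an illustrative placeholder.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def extract_feature(tile):
    # "map" step: a stand-in texture statistic for one tile
    return float(np.var(tile))

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))
tiles = [image[i:i + 16, j:j + 16]
         for i in range(0, 64, 16) for j in range(0, 64, 16)]

with ThreadPoolExecutor(max_workers=4) as pool:
    features = list(pool.map(extract_feature, tiles))   # parallel map

# "reduce" step: aggregate tile-level features
summary = {"n_tiles": len(features), "mean_var": float(np.mean(features))}
```

    In a real deployment each mapper would process a shard of image tiles on a separate node, and the reducer would feed the aggregated texture features to the clustering stage.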

  3. Rip current evidence by hydrodynamic simulations, bathymetric surveys and UAV observation

    Science.gov (United States)

    Benassai, Guido; Aucelli, Pietro; Budillon, Giorgio; De Stefano, Massimo; Di Luccio, Diana; Di Paola, Gianluigi; Montella, Raffaele; Mucerino, Luigi; Sica, Mario; Pennetta, Micla

    2017-09-01

    The prediction of the formation, spacing and location of rip currents is a scientific challenge that can be addressed by means of different complementary methods. In this paper the analysis of numerical and experimental data, including RPAS (remotely piloted aircraft systems) observations, allowed us to detect the presence of rip currents and rip channels at the mouth of the Sele River, in the Gulf of Salerno, southern Italy. The dataset used to analyze these phenomena consisted of two different bathymetric surveys, a detailed sediment analysis and a set of high-resolution wave numerical simulations, complemented with Google Earth™ images and RPAS observations. The grain size trend analysis and the numerical simulations allowed us to identify the occurrence of rip currents, forced by topographically constrained channels incised on the seabed, which were compared with observations.

  4. Multi-feature classifiers for burst detection in single EEG channels from preterm infants

    Science.gov (United States)

    Navarro, X.; Porée, F.; Kuchenbuch, M.; Chavez, M.; Beuchée, Alain; Carrault, G.

    2017-08-01

    Objective. The study of electroencephalographic (EEG) bursts in preterm infants provides valuable information about maturation or prognostication after perinatal asphyxia. Over the last two decades, a number of works proposed algorithms to automatically detect EEG bursts in preterm infants, but they were designed for populations under 35 weeks of post-menstrual age (PMA). However, as brain activity evolves rapidly during postnatal life, these solutions might under-perform with increasing PMA. In this work we focused on preterm infants reaching term age (PMA ⩾36 weeks) using multi-feature classification on a single EEG channel. Approach. Five EEG burst detectors relying on different machine learning approaches were compared: logistic regression (LR), linear discriminant analysis (LDA), k-nearest neighbors (kNN), support vector machines (SVM) and thresholding (Th). Classifiers were trained on visually labeled EEG recordings from 14 very preterm infants (born after 28 weeks of gestation) with 36-41 weeks PMA. Main results. The best-performing classifiers reached about 95% accuracy (kNN, SVM and LR), whereas Th obtained 84%. In terms of human-automatic agreement, LR provided the highest score (Cohen’s kappa  =  0.71) using only three EEG features. Applying this classifier to an unlabeled database of 21 infants  ⩾36 weeks PMA, we found that long EEG bursts and short inter-burst periods are characteristic of infants with the highest PMA and weights. Significance. In view of these results, LR-based burst detection could be a suitable tool to study maturation in monitoring or portable devices using a single EEG channel.

  5. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  6. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  7. Bathymetric Inversion of South China Sea from Satellite Altimetry Data

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper focuses on ocean bathymetric inversion from satellite altimeter data using the FFT technique. In this study, the free-air gravity anomalies over the South China Sea are determined from the satellite altimeter data of GEOSAT, ERS-1, ERS-2 and T/P, and the 2.5′×2.5′ bathymetry model of the South China Sea is calculated from the gravity anomalies with the given inversion model. From the analysis of the inversion and the comparison of the results, some conclusions are drawn.

  8. Machine Fault Detection Based on Filter Bank Similarity Features Using Acoustic and Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Mauricio Holguín-Londoño

    2016-01-01

    Full Text Available Vibration and acoustic analysis actively support the nondestructive and noninvasive fault diagnostics of rotating machines at early stages. Nonetheless, the acoustic signal is less used because of its vulnerability to external interferences, hindering an efficient and robust analysis for condition monitoring (CM). This paper presents a novel methodology to characterize different failure signatures from rotating machines using either acoustic or vibration signals. Firstly, the signal is decomposed into several narrow-band spectral components by applying different filter bank methods such as empirical mode decomposition, wavelet packet transform, and Fourier-based filtering. Secondly, a feature set is built using a proposed similarity measure, termed the cumulative spectral density index, which estimates the mutual statistical dependence between each bandwidth-limited component and the raw signal. Finally, a classification scheme is carried out to distinguish the different types of faults. The methodology is tested in two laboratory experiments, including turbine blade degradation and rolling element bearing faults. The robustness of our approach is validated by contaminating the signal with several levels of additive white Gaussian noise, obtaining high-performance outcomes that make vibration, acoustic, and vibroacoustic measurements comparably usable in different applications. As a result, the proposed fault detection based on filter bank similarity features is a promising methodology for implementation in the CM of rotating machinery, even using measurements with a low signal-to-noise ratio.
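    The filter-bank similarity idea can be sketched by decomposing a signal into band-limited components and scoring each band by its correlation with the raw signal. This correlation is a simple stand-in for the paper's cumulative spectral density index, and the test signal, band edges, and filter choice are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
# synthetic "machine" signal: a 50 Hz component buried in mild noise
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)

bands = [(10, 30), (40, 60), (70, 90)]
similarity = []
for lo, hi in bands:
    # band-limited component from a zero-phase Butterworth bandpass filter
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    comp = sosfiltfilt(sos, signal)
    # score the band by its statistical dependence on the raw signal
    similarity.append(abs(np.corrcoef(comp, signal)[0, 1]))

dominant_band = bands[int(np.argmax(similarity))]
```

    The per-band scores form the feature vector that a classifier would then use to separate fault types; here the band containing the 50 Hz component dominates.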

  9. DETECTION OF SHARP SYMMETRIC FEATURES IN THE CIRCUMBINARY DISK AROUND AK Sco

    Energy Technology Data Exchange (ETDEWEB)

    Janson, Markus; Asensio-Torres, Ruben [Department of Astronomy, Stockholm University, AlbaNova University Center, SE-106 91 Stockholm (Sweden); Thalmann, Christian; Meyer, Michael R.; Garufi, Antonio [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich (Switzerland); Boccaletti, Anthony [LESIA, Observatoire de Paris—Meudon, CNRS, Université Pierre et Marie Curie, Université Paris Diderot, 5 Place Jules Janssen, F-92195 Meudon (France); Maire, Anne-Lise; Henning, Thomas; Pohl, Adriana [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Zurlo, Alice [Núcleo de Astronomía, Facultad de Ingeniería, Universidad Diego Portales, Av. Ejercito 441, Santiago (Chile); Marzari, Francesco [Dipartimento di Fisica, University of Padova, Via Marzolo 8, I-35131 Padova (Italy); Carson, Joseph C. [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Augereau, Jean-Charles [Université Grenoble Alpes, IPAG, F-38000 Grenoble (France); Desidera, Silvano [INAF—Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, I-35122 Padova (Italy)

    2016-01-01

    The Search for Planets Orbiting Two Stars survey aims to study the formation and distribution of planets in binary systems by detecting and characterizing circumbinary planets and their formation environments through direct imaging. With the SPHERE Extreme Adaptive Optics instrument, a good contrast can be achieved even at small (<300 mas) separations from bright stars, which enables studies of planets and disks in a separation range that was previously inaccessible. Here, we report the discovery of resolved scattered light emission from the circumbinary disk around the well-studied young double star AK Sco, at projected separations in the ∼13–40 AU range. The sharp morphology of the imaged feature is surprising, given the smooth appearance of the disk in its spectral energy distribution. We show that the observed morphology can be represented either as a highly eccentric ring around AK Sco, or as two separate spiral arms in the disk, wound in opposite directions. The relative merits of these interpretations are discussed, as well as whether these features may have been caused by one or several circumbinary planets interacting with the disk.

  10. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    Science.gov (United States)

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924
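
The peak-scoring step can be illustrated with a small sketch. The denoising kernel (a moving average) and the exact amplitude/distance weighting below are assumptions, not the authors' formulas; only the overall recipe (convolve, find local maxima, score each against the main maximum) follows the abstract.

```python
import numpy as np

def cdp_coefficients(sig, kernel_width=5, n_coef=3):
    """Denoise by convolution, find local maxima, and score each maximum
    by its amplitude damped by its distance to the main maximum
    (hypothetical weighting)."""
    k = np.ones(kernel_width) / kernel_width        # moving-average denoising
    s = np.convolve(sig, k, mode="same")
    # interior points larger than both neighbours are local maxima
    idx = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    main = idx[np.argmax(s[idx])]                   # the most important maximum
    scores = s[idx] / (1.0 + np.abs(idx - main) / len(s))
    order = np.argsort(scores)[::-1][:n_coef]       # keep the top coefficients
    return scores[order]

# A synthetic CDP-like trace with a main peak and a smaller secondary peak.
t = np.linspace(0, 1, 500)
sig = np.exp(-((t - 0.3) ** 2) / 0.001) + 0.5 * np.exp(-((t - 0.7) ** 2) / 0.001)
coefs = cdp_coefficients(sig)
```

The resulting coefficient vector would be the input to the gradient boosting classifier mentioned above.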

  11. GPS Signal Feature Analysis to Detect Volcanic Plume on Mount Etna

    Science.gov (United States)

Cannavò, Flavio; Aranzulla, Massimo; Scollo, Simona; Puglisi, Giuseppe; Immè, Giuseppina

    2014-05-01

Volcanic ash produced during explosive eruptions can cause disruptions to aviation operations and to populations living around active volcanoes. Detection of the volcanic plume is therefore a crucial issue in reducing the problems connected with its presence. Nowadays, volcanic plume detection is carried out using different approaches such as satellites, radars and lidars. Recently, the capability of GPS to retrieve volcanic plumes has also been investigated, and tests applied to the explosive activity of Etna have demonstrated that GPS too may give useful information. In this work, we use the permanent and continuous GPS network of the Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo (Italy), which consists of 35 stations located all around the volcano flanks. Data are processed with the GAMIT package developed by the Massachusetts Institute of Technology. Here we investigate the possibility of quantifying the volcanic plume through GPS signal features and of estimating its spatial distribution by means of a tomographic inversion algorithm. The method is tested on volcanic plumes produced during the lava fountain of 4-5 September 2007, already used to test whether weak explosive activity affects the GPS signals.
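
The tomographic inversion step can be illustrated with a toy linear inversion: integrated signal delays along many station-to-satellite rays constrain a per-cell plume density. The cell/ray geometry and noise level below are invented for illustration; the real GAMIT-based processing and ray modelling are far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_rays = 16, 80
# A[i, j] = path length of ray i inside grid cell j (random toy geometry).
A = rng.uniform(0, 1, (n_rays, n_cells))
x_true = np.zeros(n_cells)
x_true[5] = 2.0                                # one anomalous "plume" cell
d = A @ x_true + rng.normal(0, 1e-3, n_rays)   # observed integrated delays
# Least-squares inversion recovers the per-cell anomaly map.
x_est, *_ = np.linalg.lstsq(A, d, rcond=None)
```

With more rays than cells and low noise, the anomalous cell is recovered cleanly; real geometries are ill-posed and need regularization.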

  12. Improving sleep/wake detection via boundary adaptation for respiratory spectral features.

    Science.gov (United States)

    Long, Xi; Haakma, Reinder; Rolink, Jerome; Fonseca, Pedro; Aarts, Ronald M

    2015-01-01

In previous work, respiratory spectral features have been successfully used for sleep/wake detection. They are usually extracted from several frequency bands. However, these traditional bands with fixed frequency boundaries might not be the most appropriate for separating sleep and wake. This is caused by between-subject variability in physiology, or more specifically, in respiration during sleep. Since the optimal boundaries may relate to the mean respiratory frequency over the entire night, we propose to adapt these boundaries for each subject in terms of his/her mean respiratory frequency. The adaptive boundaries were taken as those that maximize the separation between sleep and wake states in terms of their overnight mean power spectral density (PSD) curves. Linear regression models were used to capture the association between the adaptive boundaries and mean respiratory frequency in the training data; this was then used to estimate the adaptive boundaries of each test subject. Experiments were conducted on data from 15 healthy subjects using a linear discriminant classifier with leave-one-subject-out cross-validation. We show that spectral boundary adaptation can help improve the performance of sleep/wake detection when actigraphy is absent.
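
The regression step of the boundary adaptation can be sketched directly. The training values below (mean respiratory frequencies and per-subject optimal boundaries) are synthetic, not from the paper; only the fit-then-predict structure follows the abstract.

```python
import numpy as np

# Synthetic training subjects: mean breathing frequency (Hz) and the
# spectral-band boundary (Hz) that best separated their sleep/wake PSDs.
mean_resp_freq = np.array([0.20, 0.22, 0.25, 0.27, 0.30])
best_boundary = np.array([0.31, 0.33, 0.37, 0.40, 0.44])
slope, intercept = np.polyfit(mean_resp_freq, best_boundary, 1)

def adapted_boundary(f_resp):
    """Estimate a new subject's band boundary from mean breathing rate."""
    return slope * f_resp + intercept

b = adapted_boundary(0.24)   # boundary for an unseen test subject
```

Each test subject's spectral features are then recomputed with his/her own boundary instead of fixed band edges.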

  13. A new feature extraction method for signal classification applied to cord dorsum potential detection

    Science.gov (United States)

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.

  14. Computing network-based features from physiological time series: application to sepsis detection.

    Science.gov (United States)

    Santaniello, Sabato; Granite, Stephen J; Sarma, Sridevi V; Winslow, Raimond L

    2014-01-01

Sepsis is a systemic deleterious host response to infection. It is a major healthcare problem that affects millions of patients every year in intensive care units (ICUs) worldwide. Despite the fact that ICU patients are heavily instrumented with physiological sensors, early sepsis detection remains challenging, perhaps because clinicians identify sepsis using static scores derived from bedside measurements individually, i.e., without systematically accounting for potential interactions between these signals and their dynamics. In this study, we apply network-based data analysis to take into account interactions between bedside physiological time series (PTS) data collected in ICU patients, and we investigate features to distinguish between sepsis and non-sepsis conditions. We treated each PTS source as a node on a graph and retrieved the graph connectivity matrix over time by tracking the correlation between each pair of sources' signals over consecutive time windows. Then, for each connectivity matrix, we computed the eigenvalue decomposition. We found that, even though raw PTS measurements may have indistinguishable distributions in non-sepsis and early sepsis states, the median of the eigenvalues computed from the same data is statistically different between the two states, suggesting that these network-based features may be useful for early sepsis detection.
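
The windowed connectivity-eigenvalue feature described above can be sketched as follows; the channel counts, window length and coupling model are synthetic stand-ins for real bedside signals.

```python
import numpy as np

def median_eigen_features(signals, win):
    """Per-window network feature: correlate every pair of channels,
    eigendecompose the correlation (connectivity) matrix, and keep the
    median eigenvalue."""
    n_ch, n_t = signals.shape
    feats = []
    for start in range(0, n_t - win + 1, win):
        w = signals[:, start:start + win]
        c = np.corrcoef(w)                         # channel-by-channel graph
        feats.append(np.median(np.linalg.eigvalsh(c)))
    return np.array(feats)

rng = np.random.default_rng(1)
indep = rng.normal(size=(4, 400))                  # uncoupled channels
shared = rng.normal(size=400)
coupled = shared + 0.1 * rng.normal(size=(4, 400)) # strongly coupled channels
f_indep = median_eigen_features(indep, 200)
f_coupled = median_eigen_features(coupled, 200)
```

Strong coupling concentrates the spectrum into one large eigenvalue, so the median eigenvalue drops sharply even when each raw channel looks statistically unremarkable, which is the effect the study exploits.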

  15. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.
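
The quantize-then-describe pipeline can be sketched with plain k-means as a stand-in for the paper's Fibonacci lattice-quantization; a real SIFT implementation (e.g., OpenCV) would then be run on the resulting index image. Everything below is illustrative, not the authors' code.

```python
import numpy as np

def quantize_colors(img, n_colors=8, iters=10, seed=0):
    """Reduce an RGB image to a small number of color indices
    (plain k-means here, not Fibonacci lattice-quantization)."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(float)
    centers = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(iters):
        # assign every pixel to its nearest palette color
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_colors):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(img.shape[:2])

img = np.random.default_rng(2).uniform(0, 255, (32, 32, 3)).astype(np.uint8)
index_img = quantize_colors(img, n_colors=8)   # SIFT would run on this
```

The index image preserves coarse color structure in a single channel, which is what lets a grayscale descriptor like SIFT "see" color.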

  16. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  17. High-resolution multispectral satellite imagery for extracting bathymetric information of Antarctic shallow lakes

    Science.gov (United States)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-05-01

High-resolution pansharpened images from WorldView-2 were used for bathymetric mapping around the Larsemann Hills and Schirmacher oasis, east Antarctica. We digitized the lake features, manually extracting all the lakes in both study areas. To extract bathymetry values from the multispectral imagery, we used two different models: (a) the Stumpf model and (b) the Lyzenga model. Multiband image combinations were used to improve the results of bathymetric information extraction. The derived depths were validated against in-situ measurements and the root mean square error (RMSE) was computed. We also quantified the error between in-situ and satellite-estimated lake depth values. Our results indicated a high correlation (R = 0.60-0.80) between estimated depth and in-situ depth measurements, with RMSE ranging from 0.10 to 1.30 m. This study suggests that the coastal blue band of the WV-2 imagery retrieves more accurate bathymetry information than the other bands. To test the effect of lake size and depth on bathymetry retrieval, we grouped all the lakes by size and depth (reference data), as some of the lakes were open, some semi-frozen and others completely frozen. Several tests were performed on open lakes on the basis of size and depth. Based on depth, very shallow lakes provided better correlation (≈ 0.89) than shallow (≈ 0.67) and deep lakes (≈ 0.48). Based on size, large lakes yielded better correlation than medium and small lakes.
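
The Stumpf model mentioned above regresses depth on the log-ratio of two bands, depth ≈ m1·ln(n·R_blue)/ln(n·R_green) − m0, with m1 and m0 calibrated against in-situ depths. The reflectance model and attenuation coefficients below are synthetic, for illustration only.

```python
import numpy as np

n = 1000.0
rng = np.random.default_rng(3)
depth_true = rng.uniform(0.5, 5.0, 50)            # in-situ lake depths (m)
# Synthetic water-leaving reflectances: blue attenuates less than green.
r_blue = 0.10 * np.exp(-0.08 * depth_true)
r_green = 0.10 * np.exp(-0.15 * depth_true)
ratio = np.log(n * r_blue) / np.log(n * r_green)  # Stumpf ratio transform
m1, m0_neg = np.polyfit(ratio, depth_true, 1)     # calibrate m1 and -m0
depth_est = m1 * ratio + m0_neg
rmse = np.sqrt(np.mean((depth_est - depth_true) ** 2))
```

The ratio transform is attractive for shallow Antarctic lakes because it largely cancels bottom-albedo variation; only two coefficients need tuning against reference depths.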

  18. Correction for depth biases to shallow water multibeam bathymetric data

    Science.gov (United States)

    Yang, Fan-lin; Li, Jia-biao; Liu, Zhi-min; Han, Li-tao

    2013-04-01

Vertical errors are often present in multibeam swath bathymetric data. They mainly arise from sound refraction, internal wave disturbance, imperfect tide correction, transducer mounting, long-period heave, static draft change, dynamic squat and dynamic motion residuals, etc. Although they can be partly removed or reduced by specific algorithms, the combined depth biases are unavoidable and sometimes have an important influence on high-precision use of the final bathymetric data. In order to confidently identify decimeter-level changes in seabed morphology by MBES, we must remove or weaken the depth biases and further improve the precision of multibeam bathymetry. Fixed-interval profiles perpendicular to the vessel track are generated to adjust the depth biases between swaths. We present a post-processing method that minimizes the depth biases using the histogram of cumulative depth biases: the datum line in each profile is obtained from the maximum of the histogram, the corrections of the depth biases are calculated according to the datum line, and the quality of the final bathymetry is then improved by the corrections. The method is verified by a field test.
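
The histogram-based datum estimate can be sketched as follows. The bin width and the synthetic bias sample (a systematic swath offset plus outliers) are assumptions; only the take-the-fullest-bin idea follows the abstract.

```python
import numpy as np

def datum_correction(biases, bin_width=0.05):
    """Histogram the cumulative depth biases along a profile and take the
    fullest bin's centre as the datum, i.e. the correction to subtract."""
    edges = np.arange(biases.min(), biases.max() + bin_width, bin_width)
    hist, edges = np.histogram(biases, bins=edges)
    i = hist.argmax()
    return 0.5 * (edges[i] + edges[i + 1])

rng = np.random.default_rng(4)
biases = np.concatenate([
    rng.normal(0.30, 0.02, 200),      # consistent swath-to-swath offset
    rng.uniform(-0.5, 0.5, 40),       # scattered outliers (noise, slopes)
])
corr = datum_correction(biases)       # ≈ the 0.30 m systematic bias
```

Using the histogram mode rather than the mean keeps the datum robust against the outliers, which is why the profile-wise correction survives rough seabed patches.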

  19. Correction for Depth Biases to Shallow Water Multibeam Bathymetric Data

    Institute of Scientific and Technical Information of China (English)

    YANG Fan-lin; LI Jia-biao; LIU Zhi-min; HAN Li-tao

    2013-01-01

Vertical errors are often present in multibeam swath bathymetric data. They mainly arise from sound refraction, internal wave disturbance, imperfect tide correction, transducer mounting, long-period heave, static draft change, dynamic squat and dynamic motion residuals, etc. Although they can be partly removed or reduced by specific algorithms, the combined depth biases are unavoidable and sometimes have an important influence on high-precision use of the final bathymetric data. In order to confidently identify decimeter-level changes in seabed morphology by MBES, we must remove or weaken the depth biases and further improve the precision of multibeam bathymetry. Fixed-interval profiles perpendicular to the vessel track are generated to adjust the depth biases between swaths. We present a post-processing method that minimizes the depth biases using the histogram of cumulative depth biases: the datum line in each profile is obtained from the maximum of the histogram, the corrections of the depth biases are calculated according to the datum line, and the quality of the final bathymetry is then improved by the corrections. The method is verified by a field test.

  20. Bathymetric Contours - LAKE_BATHYMETRY_IDNR_IN: Bathymetric Contours for Selected Lakes in Indiana (Indiana Department of Natural Resources, Polygon Shapefile)

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — LAKE_BATHYMETRY_IDNR_IN.SHP provides bathymetric contours for the following 85 lakes in Indiana, with depths calculated from the average shoreline of each lake:...

  1. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alteration detection model. Two experimental cases on WorldView-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis shows that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect it. The comparison experiments reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.
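
A minimal binary PSO with a GA-style bit-flip mutation illustrates the hybrid GPSO idea. The fitness below is a Fisher-style separability score on synthetic two-class object features, standing in for the paper's RMV criterion; swarm sizes and coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic two-class object features: only the first 3 of 10 are informative.
X0 = rng.normal(0.0, 1.0, (60, 10))
X1 = rng.normal(0.0, 1.0, (60, 10))
X1[:, :3] += 3.0

def fitness(mask):
    """Separability of the selected subset (stand-in for the RMV fitness)."""
    if mask.sum() == 0:
        return 0.0
    d = (X0[:, mask].mean(0) - X1[:, mask].mean(0)) ** 2
    v = X0[:, mask].var(0) + X1[:, mask].var(0)
    return float(np.mean(d / v))

n_part, n_feat = 20, 10
pos = rng.random((n_part, n_feat)) < 0.5        # binary feature masks
vel = rng.normal(0.0, 1.0, (n_part, n_feat))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
for _ in range(40):
    r1, r2 = rng.random((2, n_part, n_feat))
    vel = (0.7 * vel + 1.5 * r1 * (pbest.astype(float) - pos)
           + 1.5 * r2 * (gbest.astype(float) - pos))
    pos = rng.random((n_part, n_feat)) < 1.0 / (1.0 + np.exp(-vel))  # sigmoid
    pos ^= rng.random((n_part, n_feat)) < 0.02  # GA-style bit-flip mutation
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()
```

The mutation operator is what distinguishes GPSO from plain binary PSO: it keeps injecting diversity, which is how premature convergence is avoided.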

  2. Estimating sea floor dynamics in the Southern North Sea to improve bathymetric survey planning

    NARCIS (Netherlands)

    Dorst, Leendert Louis

    2009-01-01

    Safe nautical charts require a carefully designed bathymetric survey policy, especially in shallow sandy seas that potentially have dynamic sea floor patterns. Bathymetric resurveying at sea is a costly process with limited resources, though. A pattern on the sea floor known as tidal sand waves is c

  3. Detection Rate, Distribution, Clinical and Pathological Features of Colorectal Serrated Polyps

    Institute of Scientific and Technical Information of China (English)

    Hai-Long Cao; Xue Chen; Shao-Chun Du; Wen-Jing Song; Wei-Qiang Wang; Meng-Que Xu; Si-Nan Wang

    2016-01-01

Background: Colorectal serrated polyps are considered histologically heterogeneous lesions with malignant potential in western countries. However, few Asian studies have investigated the comprehensive clinical features of serrated polyps in symptomatic populations. The aim of the study was to evaluate the features of colorectal serrated polyps in a Chinese symptomatic population. Methods: Data from all consecutive symptomatic patients were documented from a large colonoscopy database and analyzed. The Chi-square test or Fisher's exact test and logistic regression analysis were used for data processing. Results: A total of 9191 (31.7%) patients were found to have at least one colorectal polyp. The prevalence of serrated polyps was 0.53% (153/28,981). The proportions of hyperplastic polyp (HP), sessile serrated adenoma/polyp (SSA/P), and traditional serrated adenoma (TSA) among all serrated polyps were 41.2%, 7.2%, and 51.6%, respectively, showing a lower proportion of HP and SSA/P and a higher proportion of TSA. Serrated polyps appeared more often in males and elderly patients, while there was no significant difference in subtype distribution by gender or age. The proportions of large and proximal serrated polyps were 13.7% (21/153) and 46.4% (71/153), respectively. In total, 98.9% (89/90) of serrated adenomas were found with dysplasia. Moreover, 14 patients with serrated polyps were found with synchronous advanced colorectal neoplasia, and large serrated polyps (LSPs) (odds ratio: 3.446, 95% confidence interval: 1.010-11.750, P < 0.05), especially large HPs, might have an association with synchronous advanced neoplasia (AN). Conclusions: The overall detection rate of colorectal serrated polyps in this Chinese symptomatic patient population was low, and the distribution pattern of the three subtypes differs from previous reports. Moreover, LSPs, especially large HPs, might be associated with an increased risk of synchronous AN.

  4. A quick eye to anger: An investigation of a differential effect of facial features in detecting angry and happy expressions.

    Science.gov (United States)

    Lo, L Y; Cheng, M Y

    2015-08-11

Detection of angry and happy faces is generally found to be easier and faster than that of faces expressing emotions other than anger or happiness. This can be explained by the threatening account and the feature account. Few empirical studies have explored the interaction between these two accounts, which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features facilitate the detection of both angry and happy expressions, yet that the detection of happy faces benefits more from prominent features than that of angry faces. Results confirmed the hypotheses: participants reacted faster to emotional expressions with prominent features (Study 1), and the detection of happy faces was facilitated more by prominent features than that of angry faces (Study 2). The findings are compatible with evolutionary speculation which suggests that the angry expression is an alarming signal of potential threats to survival. Compared to angry faces, happy faces need more salient physical features to reach a similar level of processing efficiency.

  5. Facial/License Plate Detection Using a Two-Level Cascade Classifier and a Single Convolutional Feature Map

    Directory of Open Access Journals (Sweden)

    Ying-Nong Chen

    2015-12-01

In this paper, an object detector is proposed based on a convolution/subsampling feature map and a two-level cascade classifier. First, a convolution/subsampling operation alleviates illumination, rotation and noise variances. Then, two classifiers are concatenated to check a large number of windows using a coarse-to-fine strategy. Since the subsampled feature map with enhanced pixels is fed into the coarse-level classifier, the windows to be checked are drastically reduced to a quarter of the original image. The few remaining windows showing detailed data are further checked using a fine-level classifier. In addition to improving the detection process, the proposed mechanism also speeds up the training process. Some features generated from the prototypes within the small window are selected and trained to obtain the coarse-level classifier. Moreover, a feature ranking algorithm reduces the large feature pool to a small set, thus speeding up the training process without losing detection performance. The contribution of this paper is twofold: first, the coarse-to-fine scheme shortens both the training and detection processes; second, the feature ranking algorithm reduces training time. Finally, experimental results are given for evaluation. From the results, the proposed method outperforms the fast-performing AdaBoost as well as forward feature selection methods.
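
The coarse-to-fine cascade can be illustrated with a toy two-stage filter over candidate windows: a cheap score discards most windows, and only the survivors pay for the expensive check. The score distributions and thresholds below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# 1000 candidate windows; ~5% actually contain the target object.
labels = rng.random(1000) < 0.05
scores_coarse = rng.normal(0, 1.0, 1000) + 2.5 * labels  # cheap feature response
scores_fine = rng.normal(0, 0.5, 1000) + 3.0 * labels    # expensive response

coarse_pass = scores_coarse > 0.5                  # level 1: cheap rejection
fine_pass = coarse_pass & (scores_fine > 1.5)      # level 2: survivors only
```

Because the fine classifier only ever sees the windows the coarse stage kept, overall detection cost is dominated by the cheap stage, which is the point of the cascade.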

  6. A Detection Method of Artificial Areas from High Resolution Remote Sensing Images Based on Multi-Scale and Multi-Feature Fusion

    Science.gov (United States)

    Li, P.; Hu, X.; Hu, Y.; Ding, Y.; Wang, L.; Li, L.

    2017-05-01

In order to solve the problem of automatic detection of artificial objects in high resolution remote sensing images, a method for detecting artificial areas based on multi-scale and multi-feature fusion is proposed. Firstly, geometric features such as corners, straight lines and right angles are extracted at the original resolution, and pseudo corner points, pseudo linear features and pseudo orthogonal angles are filtered out through the self-constraints and mutual constraints between them. Then the radiation intensity map of the image regions with strong geometric characteristics is obtained by the linear inverse distance weighted method. Secondly, the original image is reduced to multiple scales, and the visual saliency image at each scale is obtained by adaptive weighting of the orthogonal saliency and the local brightness and contrast computed at the corresponding scale. The final visual saliency image is obtained by fusing the visual saliency images of all scales. Thirdly, the visual saliency image of artificial areas based on multiple scales and multiple features is obtained by fusing, at the decision level, the geometric feature energy intensity map and the visual saliency image. Finally, the artificial areas are segmented by the OTSU method. Experiments show that the method not only detects large artificial areas such as cities and residential districts, but also correctly detects single-family houses in the countryside. The detection rate of artificial areas reached 92%.
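
The multi-scale saliency fusion step can be sketched with a toy pipeline: compute a per-scale saliency map on downsampled copies of the image, upsample each back, and average. The saliency measure (deviation from the global mean) and block-mean pyramid are simplifications of the paper's adaptive weighting.

```python
import numpy as np

def block_mean(img, k):
    """Downsample by factor k via block averaging (toy image pyramid)."""
    h, w = img.shape
    return img[:h // k * k, :w // k * k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def local_contrast(img):
    """Toy per-scale saliency: absolute deviation from the global mean."""
    return np.abs(img - img.mean())

def fused_saliency(img, scales=(1, 2, 4)):
    """Average per-scale saliency maps, upsampled back by pixel repetition."""
    h, w = img.shape
    acc = np.zeros((h, w))
    for k in scales:
        s = local_contrast(block_mean(img, k))
        acc += np.kron(s, np.ones((k, k)))[:h, :w]
    return acc / len(scales)

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0           # a bright "artificial" block on dark ground
sal = fused_saliency(img)
```

Fusing across scales keeps both large compact structures and small isolated ones salient, which mirrors why the method catches cities and single houses alike.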

  7. Bathymetrical distribution and size structure of cold-water coral populations in the Cap de Creus and Lacaze-Duthiers canyons (northwestern Mediterranean)

    Science.gov (United States)

    Gori, A.; Orejas, C.; Madurell, T.; Bramanti, L.; Martins, M.; Quintanilla, E.; Marti-Puig, P.; Lo Iacono, C.; Puig, P.; Requena, S.; Greenacre, M.; Gili, J. M.

    2013-03-01

    Submarine canyons are known as one of the seafloor morphological features where living cold-water coral (CWC) communities develop in the Mediterranean Sea. We investigated the CWC community of the two westernmost submarine canyons of the Gulf of Lions canyon system: the Cap de Creus Canyon (CCC) and Lacaze-Duthiers Canyon (LDC). Coral associations have been studied through video material recorded by means of a manned submersible and a remotely operated vehicle. Video transects have been conducted and analyzed in order to obtain information on (1) coral bathymetric distribution and density patterns, (2) size structure of coral populations, and (3) coral colony position with respect to the substrate. Madrepora oculata was the most abundant CWC in both canyons, while Lophelia pertusa and Dendrophyllia cornigera mostly occurred as isolated colonies or in small patches. An important exception was detected in a vertical cliff in LDC where a large L. pertusa framework was documented. This is the first record of such an extended L. pertusa framework in the Mediterranean Sea. In both canyons coral populations were dominated by medium and large colonies, but the frequent presence of small-sized colonies also indicate active recruitment. The predominant coral orientation (90° and 135°) is probably driven by the current regime as well as by the sediment load transported by the current flows. In general, no clear differences were observed in the abundance and in the size structure of the CWC populations between CCC and LDC, despite large differences in particulate matter between canyons.

  8. First Simultaneous Detection of Moving Magnetic Features in Photospheric Intensity and Magnetic Field Data

    CERN Document Server

    Lim, Eun-Kyung; Goode, Philip

    2012-01-01

The formation and the temporal evolution of a bipolar moving magnetic feature (MMF) were studied with high spatial and temporal resolution. The photometric properties were observed with the New Solar Telescope at Big Bear Solar Observatory using a broadband TiO filter (705.7 nm), while the magnetic field was analyzed using spectropolarimetric data obtained by Hinode. For the first time, we observed a bipolar MMF simultaneously in intensity images and magnetic field data, and studied the details of its structure. The vector magnetic field and the Doppler velocity of the MMF were also studied. A bipolar MMF with its positive polarity closer to the negative penumbra formed, accompanied by a bright, filamentary structure in the TiO data connecting the MMF and a dark penumbral filament. A fast downflow (<2 km/s) was detected at the positive polarity. The vector magnetic field obtained from the full Stokes inversion revealed that a bipolar MMF has a U-shaped magnetic field configuration. Our observation...

  9. Microarray-based large scale detection of single feature polymorphism in Gossypium hirsutum L.

    Indian Academy of Sciences (India)

    Anukool Srivastava; Samir V. Sawant; Satya Narayan Jena

    2015-12-01

Microarrays offer an opportunity to explore the functional sequence polymorphism among different cultivars of many crop plants. The Affymetrix microarray expression data of five genotypes of Gossypium hirsutum L. at six different fibre developmental stages was used to identify single feature polymorphisms (SFPs). The background-corrected and quantile-normalized log2 intensity values of all probes of the triplicate data of each cotton variety were subjected to SFP calling using the SAM procedure in the R language. We detected a total of 37,473 SFPs among six pairwise genotype combinations of two superior (JKC777 and JKC725) and three inferior (JKC703, JKC737 and JKC783) genotypes using the expression data. A set of 224 SFPs covering 51 genes was randomly selected from the dataset of all six fibre developmental stages of JKC777 and JKC703 for validation by sequencing on a capillary sequencer. Of these 224 SFPs, 132 were found to be polymorphic and 92 monomorphic, which indicates that approximately 58.92% of the SFPs predicted from the expression data were true SFPs. We further identified that most of the SFPs are associated with genes involved in fatty acid, flavonoid and auxin biosynthesis, indicating that these pathways are significantly involved in fibre development.

  10. Single Tree Stem Profile Detection Using Terrestrial Laser Scanner Data, Flatness Saliency Features and Curvature Properties

    Directory of Open Access Journals (Sweden)

    Kenneth Olofsson

    2016-09-01

A method for automatic stem detection and stem profile estimation based on terrestrial laser scanning (TLS) was validated. The root-mean-square error was approximately 1 cm for stem diameter estimations. The method contains a new way of extracting the flatness saliency feature, using the centroid of a subset of a point cloud within a voxel cell to approximate the point-by-point calculations. The loss of accuracy is outweighed by a much higher computational speed, making it possible to cover large datasets. The algorithm introduces a new way to connect surface patches belonging to a stem and investigates whether they belong to curved surfaces; thereby, cylindrical objects, like stems, are found in the pre-filtering stage. The algorithm uses a new cylinder fitting method that estimates the axis direction by transforming the TLS points into a radial-angular coordinate system and evaluates the deviations with a moving-window convex hull algorithm. Once the axis direction is found, the cylinder center is chosen as the position with the smallest radial deviations. The cylinder fitting method works on a point cloud in both the single-scan and multiple-scan setups of a TLS system.
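
The diameter-estimation core can be illustrated with an algebraic (Kåsa) circle fit to one stem cross-section once the axis is known; the paper's radial-angular axis search and convex-hull evaluation are not reproduced, and the stem radius and noise level below are synthetic.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kåsa) least-squares circle fit: solve
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic cross-section: a 14 cm-radius stem at (1.5, 2.0) with 5 mm noise.
rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([1.5 + 0.14 * np.cos(theta),
                       2.0 + 0.14 * np.sin(theta)]) + rng.normal(0, 0.005, (200, 2))
cx, cy, r = fit_circle(pts)
```

Stacking such per-slice fits along the estimated axis yields the stem profile; the centimeter-level RMSE reported above is consistent with what a noisy circle fit like this achieves.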

  11. Imaging features, follow-up, and management of incidentally detected renal lesions

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, A.J., E-mail: Alison.Bradley@uhsm.nhs.uk [Department of Radiology, University Hospital of South Manchester, Southmoor Road, Wythenshawe, Manchester M23 9LT (United Kingdom); Lim, Y.Y.; Singh, F.M. [Department of Radiology, University Hospital of South Manchester, Southmoor Road, Wythenshawe, Manchester M23 9LT (United Kingdom)

    2011-12-15

    Incidental renal masses are common findings on cross-sectional imaging. Most will be readily identified as simple cysts, but with an inexorable rise in abdominal imaging, [particularly computed tomography (CT)], coupled with a rise in the incidence of renal cancer, the likelihood of detecting a malignant mass is increasing. This review informs the radiologist which lesions can be safely ignored, which will require further imaging for accurate categorization, and which require referral for consideration of treatment. For the small proportion of lesions that are indeterminate, careful attention to imaging technique, and the use of unenhanced and contrast-enhanced CT or magnetic resonance imaging (MRI) in all but a few specific instances will accurately characterize such lesions. The figures have been chosen to illustrate specific imaging features of common renal lesions. Management options for malignant, or presumed malignant, renal masses include active surveillance, percutaneous ablation, laparoscopic or open, partial or total nephrectomy. Biopsy has a role in determining the nature of masses that remain indeterminate on cross-sectional imaging, prior to definitive treatment. Common pitfalls in assessing incidental renal lesions are emphasized; some of these are due to sub-optimal imaging techniques and others to errors in interpretation.

  12. Hevea Leaves Boundary Identification based on Morphological Transformation and Edge Detection Features

    Directory of Open Access Journals (Sweden)

    Sule Tekkesinoglu

    2014-03-01

    Full Text Available The goal of this study is to present a concept to identify overlapping rubber tree (Hevea brasiliensis-scientific name leaf boundaries. Basically rubber tree leaves show similarity to each other and they may contain similar information such as color, texture or shape of leaves. In fact rubber tree leaves are naturally in class of palmate leaves, it means that numbers of leaves are joining at their base. So it reflects the information of the position of the leaves whether the leaves are overlapped or separated. Therefore, this unique feature could be used to distinguish particular leaves from others clone to identify the type of trees. This study addresses the problem of identifying the overlapped leaves with complex background. The morphological transformation is often applied in order to obtain the foreground object and the background location as well. However, it does not yield satisfactory results in order to get boundaries information. This study, presents on improved approach to identify boundary of rubber tree leaves based on morphological operation and edge detection methods. The outcome of this fused algorithm exhibits promising results for identifying the leaf boundaries of rubber trees.

  13. FIRST SIMULTANEOUS DETECTION OF MOVING MAGNETIC FEATURES IN PHOTOSPHERIC INTENSITY AND MAGNETIC FIELD DATA

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Eun-Kyung; Yurchyshyn, Vasyl; Goode, Philip, E-mail: eklim@bbso.njit.edu [Big Bear Solar Observatory, New Jersey Institute of Technology, 40386 North Shore Lane, Big Bear City, CA 92314-9672 (United States)

    2012-07-01

    The formation and the temporal evolution of a bipolar moving magnetic feature (MMF) was studied with high-spatial and temporal resolution. The photometric properties were observed with the New Solar Telescope at Big Bear Solar Observatory using a broadband TiO filter (705.7 nm), while the magnetic field was analyzed using the spectropolarimetric data obtained by Hinode. For the first time, we observed a bipolar MMF simultaneously in intensity images and magnetic field data, and studied the details of its structure. The vector magnetic field and the Doppler velocity of the MMF were also studied. A bipolar MMF with its positive polarity closer to the negative penumbra formed, accompanied by a bright, filamentary structure in the TiO data connecting the MMF and a dark penumbral filament. A fast downflow (≤2 km s⁻¹) was detected at the positive polarity. The vector magnetic field obtained from the full Stokes inversion revealed that a bipolar MMF has a U-shaped magnetic field configuration. Our observations provide a clear intensity counterpart of the observed MMF in the photosphere, and strong evidence of the connection between the MMF and the penumbral filament as a serpentine field.

  14. Feature selection for anomaly–based network intrusion detection using cluster validity indices

    CSIR Research Space (South Africa)

    Naidoo, Tyrone

    2015-09-01

    Full Text Available data, which is rarely available in operational networks. It uses normalized cluster validity indices as an objective function that is optimized over the search space of candidate feature subsets via a genetic algorithm. Feature sets produced...
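The idea of scoring candidate feature subsets by a cluster validity index can be sketched as follows. This toy version swaps the genetic algorithm for an exhaustive search over a four-feature space and uses a simple between/within variance ratio as the (unnormalized) validity index, so the data and all names are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
# toy "traffic" records: features 0 and 1 separate two behaviors,
# features 2 and 3 are irrelevant noise
X = np.vstack([rng.normal([0, 0, 0, 0], 1, (100, 4)),
               rng.normal([6, 6, 0, 0], 1, (100, 4))])

def kmeans2(X, iters=20):
    """Minimal 2-means; returns cluster labels."""
    c = X[[0, -1]].astype(float)       # crude initialization
    for _ in range(iters):
        d = ((X[:, None, :] - c[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for k in (0, 1):
            if (lab == k).any():
                c[k] = X[lab == k].mean(0)
    return lab

def validity(X):
    """Between/within variance ratio: higher = better-separated clusters."""
    lab = kmeans2(X)
    mu = X.mean(0)
    within = sum(((X[lab == k] - X[lab == k].mean(0)) ** 2).sum() for k in (0, 1))
    between = sum((lab == k).sum() * ((X[lab == k].mean(0) - mu) ** 2).sum()
                  for k in (0, 1))
    return between / within

# exhaustive search over 2-feature subsets stands in for the GA
best = max(combinations(range(4), 2), key=lambda s: validity(X[:, s]))
```

The search selects the informative features because clustering on them yields the best-separated partition, without ever needing labeled traffic.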

  15. Subtidal Bathymetric Changes by Shoreline Armoring Removal and Restoration Projects

    Science.gov (United States)

    Wallace, J.

    2016-12-01

    The Salish Sea, a region with a diverse coastline, is altered by anthropogenic shoreline modifications such as seawalls. In recent years, local organizations have moved to restore these shorelines. Current research monitors the changes restoration projects bring to the upper beach, lower beach, and intertidal zone; however, little research exists to record possible negative effects on the subtidal. The purpose of this research is to utilize multibeam sonar bathymetric data to analyze possible changes to the seafloor structure of the subtidal in response to shoreline modification and to investigate potential ecosystem consequences of shoreline alteration. The subtidal is home to several species including eelgrass (Zostera marina). Eelgrass is an important species in Puget Sound as it provides many key ecosystem functions, including providing habitat for a wide variety of organisms, affecting wave physics, and influencing sediment transport in the subtidal. Thus, bathymetric changes could impact eelgrass growth and reduce its ability to provide crucial ecosystem services. Three Washington state study sites of completed shoreline restoration projects, spanning varied topographic classifications, were used to generate data: Seahurst Park in Burien, the Snohomish County Nearshore Restoration Project in Everett, and Cornet Bay State Park on Whidbey Island. Multibeam sonar data were acquired using a Kongsberg EM 2040 system and post-processed in CARIS HIPS to generate a base surface of one-meter resolution. The surface was then imported into the ArcGIS software suite for the generation of spatial metrics. Measurements of change were calculated through a comparison of historical and generated data. Descriptive metrics generated included total elevation change, percent area changed, and a transition matrix of positive and negative change. Additionally, pattern metrics such as surface roughness and Bathymetric Position Index (BPI) were calculated.
The comparison of historical data to new data
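The descriptive metrics named above (total elevation change, percent area changed, and a positive/negative transition matrix) can be sketched on a pair of toy rasters; the raster values and the 0.05 m change-detection threshold are hypothetical:

```python
import numpy as np

# hypothetical 1 m resolution bathymetric rasters (elevations in metres)
old = np.array([[-5.0, -5.2, -5.1],
                [-6.0, -6.1, -6.3],
                [-7.0, -7.2, -7.1]])
new = np.array([[-5.0, -4.9, -5.1],
                [-6.4, -6.1, -6.3],
                [-7.5, -7.2, -6.8]])

diff = new - old
total_change = np.abs(diff).sum()          # total elevation change (m)
changed = np.abs(diff) > 0.05              # cells exceeding the threshold
pct_area_changed = 100.0 * changed.mean()
# transition counts of positive (accretion) vs negative (erosion) change
transitions = {"positive": int((diff > 0.05).sum()),
               "negative": int((diff < -0.05).sum())}
```

On real data the same differencing would be applied cell-by-cell to the historical and newly generated base surfaces.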

  16. Erythrocyte Features for Malaria Parasite Detection in Microscopic Images of Thin Blood Smear: A Review

    Directory of Open Access Journals (Sweden)

    Salam Shuleenda Devi

    2016-12-01

    Full Text Available Microscopic image analysis of blood smears plays a very important role in the characterization of erythrocytes for the screening of malaria parasites. The characteristic features of an erythrocyte change due to malaria parasite infection. The microscopic features of the erythrocyte include morphology, intensity and texture. In this paper, the different features used to differentiate non-infected and malaria-infected erythrocytes are reviewed.

  17. Sea-bed biogeochemistry and benthic foraminiferal bathymetric zonation on the slope of the northwest Gulf of Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Loubere, P. (Northern Illinois Univ., DeKalb, IL (United States)); Gary, A. (Unocal Science and Technology Division, Brea, CA (United States)); Lagoe, M. (Univ. of Texas, Austin, TX (United States))

    1993-10-01

    The bathymetric zonation of benthic foraminiferal taxa in the northwest Gulf of Mexico is summarized and compared to several important environmental parameters measured in boxcores collected along a depth transect. The parameters are bottom water temperature, organic carbon flux, bottom water oxygen content, biogeochemical gradients within the sediments and sedimentation regime. The prominent foraminiferal boundary between 170 and 200 m is associated with the position of the mudline in the northwest Gulf. Below this, assemblage changes are more gradational with water depth and, between 200 and 600 m, appear to be related to gradients in temperature, oxygen supply and organic carbon flux. Between 600 and 2000 m, bathymetric zonation correlates with the organic carbon flux profile. An analysis of sediment pore-water geochemistry and sedimentary features in the boxcores shows that there is a progressive change in the vertical distribution and character of potential microhabitats within the sediments down the slope of the northwest Gulf. From 250 to about 700 m water depth, the biogenic structures observed in the sediments are abundant and complex, and the biogeochemical gradients in the sediments are steep. The visible complexity and chemical gradients gradually decrease with increasing water depth, so that by 1000 m the anoxic boundary is deeper than 7 cm in our boxcores. At water depths greater than 1200 m the sediments are homogeneous, oxidized hemipelagites. The published foraminiferal bathymetric zonation of the N.W. Gulf appears to follow this gradient in sedimentary environments, which must influence the generation of benthic foraminiferal assemblages. The gradient is largely controlled by the organic carbon flux to the sea-bed. 42 refs., 8 figs., 3 tabs.

  18. Early breast cancer detection with digital mammograms using Haar-like features and AdaBoost algorithm

    Science.gov (United States)

    Zheng, Yufeng; Yang, Clifford; Merkulov, Alex; Bandari, Malavika

    2016-05-01

    The current computer-aided detection (CAD) methods are not sufficiently accurate in detecting masses, especially in dense breasts and/or small masses (typically at their early stages). A mass may not be perceived when it is small and/or homogeneous with surrounding tissues. Possible reasons for the limited performance of existing CAD methods are the lack of multiscale analysis and of unification of variant masses. The speed of CAD analysis is important for field applications. We propose a new CAD model for mass detection, which extracts simple Haar-like features for fast detection, uses the AdaBoost approach for feature selection and classifier training, applies cascading classifiers to reduce false positives, and utilizes multiscale detection for variant sizes of masses. In addition to Haar features, local binary patterns (LBP) and histograms of oriented gradients (HOG) are extracted and applied to mass detection. The performance of a CAD system can be measured by the true positive rate (TPR) and false positives per image (FPI). We are collecting our own digital mammograms for the proposed research. The proposed CAD model will initially be demonstrated with mass detection, including architectural distortion.
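Haar-like features owe their speed to the integral image, which reduces any rectangle sum to four table lookups; a minimal sketch (not the authors' implementation, and the synthetic patch is illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def region_sum(ii, r, c, h, w):
    """Sum of the h x w window with top-left corner (r, c), in O(1)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Left-minus-right two-rectangle Haar feature (w must be even)."""
    return (region_sum(ii, r, c, h, w // 2)
            - region_sum(ii, r, c + w // 2, h, w // 2))

# a synthetic patch: bright left half, dark right half
patch = np.zeros((8, 8))
patch[:, :4] = 1.0
ii = integral_image(patch)
feat = haar_two_rect_vertical(ii, 0, 0, 8, 8)
```

Because every rectangle sum is constant-time regardless of window size, thousands of such features can be evaluated per candidate window, which is what makes the AdaBoost cascade fast enough for screening.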

  19. Modeling dune response using measured and equilibrium bathymetric profiles

    Science.gov (United States)

    Fauver, Laura A.; Thompson, David M.; Sallenger, Asbury H.

    2007-01-01

    Coastal engineers typically use numerical models such as SBEACH to predict coastal change due to extreme storms. SBEACH model inputs include pre-storm profiles, wave heights and periods, and water levels. This study focuses on the sensitivity of SBEACH to the details of pre-storm bathymetry. The SBEACH model is tested with two initial conditions for bathymetry, including (1) measured bathymetry from lidar, and (2) calculated equilibrium profiles. Results show that longshore variability in the predicted erosion signal is greater over measured bathymetric profiles, due to longshore variations in initial surf zone bathymetry. Additionally, patterns in predicted erosion can be partially explained by the configuration of the inner surf zone from the shoreline to the trough, with surf zone slope accounting for 67% of the variability in predicted erosion volumes.
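An equilibrium profile of the kind used as the second initial condition is commonly computed from the Dean form h = A·x^(2/3); the study does not state its exact formulation, so both the form and the value of A below are assumptions for illustration:

```python
import numpy as np

def dean_profile(x, A=0.1):
    """Equilibrium depth h (m) at cross-shore distance x (m): h = A * x^(2/3).

    A (units m^(1/3)) is a sediment-scale parameter, larger for coarser
    sand; the value 0.1 used here is purely illustrative.
    """
    return A * np.asarray(x, float) ** (2.0 / 3.0)

x = np.linspace(0, 500, 501)   # cross-shore distance from the shoreline (m)
h = dean_profile(x)            # smooth, monotonically deepening profile
```

Unlike a measured lidar profile, this idealized profile has no bars or troughs, which is why the two initial conditions produce different longshore variability in the predicted erosion.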

  20. Detection of linear features in synthetic-aperture radar images by use of the localized Radon transform and prior information.

    Science.gov (United States)

    Onana, Vincent-de-Paul; Trouvé, Emmanuel; Mauris, Gilles; Rudant, Jean-Paul; Tonyé, Emmanuel

    2004-01-10

    A new linear-feature detection method is proposed for extracting straight edges and lines in synthetic-aperture radar images. This method is based on the localized Radon transform, which produces geometrical integrals along straight lines. In the transformed domain, linear features have a specific signature: they appear as strongly contrasted structures, which are easier to extract with the conventional ratio edge detector. The proposed method is dedicated to applications such as geographical map updating for which prior information (approximate length and orientation of features) is available. Experimental results show the method's robustness with respect to poor radiometric contrast and hidden parts and its complementarity to conventional pixel-by-pixel approaches.
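The core idea, that a straight line collapses to a sharp peak under line-integral projections, can be sketched with a global (not localized) discrete Radon-style transform; the nearest-neighbour rotation below is a simplification of the authors' method:

```python
import numpy as np

def radon_projections(img, angles_deg):
    """Line-integral projections of a square image at the given angles.

    Each projection rotates the sampling grid (nearest neighbour) and sums
    along columns, so a straight bright line produces a sharp peak at the
    angle that aligns it with the integration direction. A real localized
    Radon transform restricts the integration to a sliding window.
    """
    n = img.shape[0]
    c = (n - 1) / 2.0
    rows, cols = np.mgrid[0:n, 0:n]
    out = {}
    for a in angles_deg:
        t = np.deg2rad(a)
        xs = np.round(c + (cols - c) * np.cos(t) - (rows - c) * np.sin(t)).astype(int)
        ys = np.round(c + (cols - c) * np.sin(t) + (rows - c) * np.cos(t)).astype(int)
        ok = (xs >= 0) & (xs < n) & (ys >= 0) & (ys < n)
        rot = np.where(ok, img[np.clip(ys, 0, n - 1), np.clip(xs, 0, n - 1)], 0)
        out[a] = rot.sum(axis=0)   # integrate along each column
    return out

img = np.zeros((31, 31))
img[:, 15] = 1.0                   # a vertical bright line
proj = radon_projections(img, [0, 45, 90])
```

At the matching angle (0°) the line's energy concentrates in one strongly contrasted bin, which is exactly the signature that the ratio edge detector then extracts in the transformed domain.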

  1. Investigation of context, soft spatial, and spatial frequency domain features for buried explosive hazard detection in FL-LWIR

    Science.gov (United States)

    Price, Stanton R.; Anderson, Derek T.; Stone, Kevin; Keller, James M.

    2014-05-01

    It is well-known that a pattern recognition system is only as good as the features it is built upon. In the fields of image processing and computer vision, we have numerous spatial domain and spatial-frequency domain features to extract characteristics of imagery according to its color, shape and texture. However, these approaches extract information across a local neighborhood, or region of interest, which for target detection contains both object(s) of interest and background (surrounding context). A goal of this research is to filter out as much task irrelevant information as possible, e.g., tire tracks, surface texture, etc., to allow a system to place more emphasis on image features in spatial regions that likely belong to the object(s) of interest. Herein, we outline a procedure coined soft feature extraction to refine the focus of spatial domain features. This idea is demonstrated in the context of an explosive hazards detection system using forward looking infrared imagery. We also investigate different ways to spatially contextualize and calculate mathematical features from shearlet filtered candidate image chips. Furthermore, we investigate localization strategies in relation to different ways of grouping image features to reduce the false alarm rate. Performance is explored in the context of receiver operating characteristic curves on data from a U.S. Army test site that contains multiple target and clutter types, burial depths, and times of day.

  2. Bathymetric survey of Rock Run Rookery Lake, Will County, Illinois.

    Science.gov (United States)

    Duncker, James J.; Sharpe, Jennifer B.

    2017-01-01

    The bathymetric data set was collected in Rock Run on Dec. 10, 2015 by USGS ILWSC staff Clayton Bosch and Louis Pappas. The bathymetric data were collected with an RD Instruments 1200 kHz ADCP (S/N 8617) and Trimble Ag 162 GPS mounted on the M/V La Moine. A temporary reference point (TRP) was established on the north side of the footbridge over the connecting channel to the Des Plaines River. The mean water surface elevation during the survey (504.97 feet, WGS 84) was established from the temporary reference point, whose elevation was later established by GPS survey. The measured depths were then converted to lake-bed elevations. The location and depth data were compiled into a bathymetry dataset (Rock Run Bathymetry Data.csv). The dataset was imported as a shapefile into ArcMap (ArcGIS software 10.3.1). A shapefile of lake boundary elevation was developed based on imagery from September 16, 2015 (U.S. Department of Agriculture Farm Services Agency National Agriculture Imagery Program (NAIP)); the point data can be found in Rock Run Lake Boundary.csv. This shapefile was merged with the elevation shapefile to enforce the lake and island edges in the final bathymetry. The merged shapefile was then contoured using Geostatistical Analyst/Deterministic methods/Radial Basis Functions with Completely Regularized Spline (defaults were used except Sector type: 4 Sectors, Angle: 42, Major semiaxis: 800, Minor semiaxis: 500). The raster was then exported to a GeoTIFF file with a resulting raster cell size of 1 foot.
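The depth-to-elevation conversion used in the survey reduces to subtracting each measured depth from the mean water-surface elevation; the depth values below are hypothetical:

```python
# bed elevation = water-surface elevation - measured depth, as in the
# Rock Run survey (504.97 ft, WGS 84, during the survey)
WSE_FT = 504.97

def bed_elevation(depth_ft):
    """Convert an ADCP-measured depth (ft) to a lake-bed elevation (ft)."""
    return WSE_FT - depth_ft

depths = [2.3, 5.1, 7.8]               # hypothetical ADCP depths (ft)
elevs = [bed_elevation(d) for d in depths]
```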

  3. Cloud detection in all-sky images via multi-scale neighborhood features and multiple supervised learning techniques

    Science.gov (United States)

    Cheng, Hsu-Yung; Lin, Chih-Lung

    2017-01-01

    Cloud detection is important for providing necessary information such as cloud cover in many applications. Existing cloud detection methods include red-to-blue ratio thresholding and other classification-based techniques. In this paper, we propose to perform cloud detection using supervised learning techniques with multi-resolution features. One of the major contributions of this work is that the features are extracted from local image patches with different sizes to include local structure and multi-resolution information. The cloud models are learned through the training process. We consider classifiers including random forest, support vector machine, and Bayesian classifier. To take advantage of the clues provided by multiple classifiers and various levels of patch sizes, we employ a voting scheme to combine the results to further increase the detection accuracy. In the experiments, we have shown that the proposed method can distinguish cloud and non-cloud pixels more accurately compared with existing works.
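The voting scheme that fuses the outputs of the multiple classifiers and patch sizes can be sketched as a simple per-pixel majority vote (the classifier names and predictions below are illustrative):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-pixel labels from several classifier/patch-size runs.

    predictions: list of equal-length label lists, one per classifier.
    Ties are broken by first-counted label; a minimal sketch of the
    voting scheme, not the authors' exact rule.
    """
    n = len(predictions[0])
    return [Counter(p[i] for p in predictions).most_common(1)[0][0]
            for i in range(n)]

# three classifiers (e.g. random forest, SVM, Bayesian) on five pixels
rf  = ["cloud", "sky",   "cloud", "sky", "cloud"]
svm = ["cloud", "cloud", "cloud", "sky", "sky"]
bay = ["sky",   "sky",   "cloud", "sky", "cloud"]
fused = majority_vote([rf, svm, bay])
```

Combining the votes suppresses the individual classifiers' disagreements, which is how the ensemble raises detection accuracy over any single model.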

  4. Restricted Bipartite Graphs Based Target Detection for Hyperspectral Image Classification with GFA-LFDA Multi Feature Selection

    Directory of Open Access Journals (Sweden)

    T. Karthikeyan

    2015-06-01

    Full Text Available Hyperspectral imaging has recently become one of the most active research areas in remote sensing. Hyperspectral imagery possesses more spectral information than multispectral imagery because the number of spectral bands is in the hundreds rather than the tens. However, the high dimensionality of hyperspectral images causes redundancy in the spatial-spectral feature domain, and classifiers that consider only spectral and spatial features struggle to excel when training HSI images are limited. Without suitable algorithms, target detection and classification of hyperspectral image data become difficult. It is therefore essential to consider multiple features and to determine an exact target detection rate in order to improve classification. To address this problem, this study presents a novel classification framework for hyperspectral data. The proposed system uses a graph-based representation, Restricted Bipartite Graphs (RBG), for exact detection of class values. First, features of the HSI images are selected using the Gaussian Firefly Algorithm (GFA) for multiple feature selection, and Local Fisher's Discriminant Analysis (LFDA) based feature projection is performed in the raw spectral-spatial feature space for effective dimensionality reduction. An RBG is then used to represent the reduced features in graphical form and to solve the exact target class matching problem in hyperspectral imagery. Classification is performed using a Hybrid Genetic Fuzzy Neural Network (HGFNN), in which a genetic algorithm optimizes the weights of the fuzzifier and defuzzifier for labeled and unlabeled data samples. Experimental results show that the proposed GFA-LFDA-RBG-HGFNN method outperforms traditional methods in classification accuracy with fewer misclassifications.

  5. Extracting Information from Conventional AE Features for Fatigue Onset Damage Detection in Carbon Fiber Composites

    DEFF Research Database (Denmark)

    Unnthorsson, Runar; Pontoppidan, Niels Henrik Bohl; Jonsson, Magnus Thor

    2005-01-01

    We have analyzed simple data fusion and preprocessing methods on Acoustic Emission measurements of prosthetic feet made of carbon fiber reinforced composites. This paper presents the initial research steps; aiming at reducing the time spent on the fatigue test. With a simple single feature probab...... approaches can readily be investigated using the improved features, possibly improving the performance using multiple feature classifiers, e.g., Voting systems; Support Vector Machines and Gaussian Mixtures....

  6. Predicting species diversity of benthic communities within turbid nearshore using full-waveform bathymetric LiDAR and machine learners.

    Directory of Open Access Journals (Sweden)

    Antoine Collin

    Full Text Available Epi-macrobenthic species richness, abundance and composition are linked with the type, assemblage and structural complexity of seabed habitat within coastal ecosystems. However, the evaluation of these habitats is highly hindered by limitations related to both waterborne surveys (slow acquisition, shallow water and low reactivity) and water clarity (turbid for most coastal areas). Substratum type/diversity and bathymetric features were elucidated using a supervised method applied to airborne bathymetric LiDAR waveforms over Saint-Siméon-Bonaventure's nearshore area (Gulf of Saint-Lawrence, Québec, Canada). High-resolution underwater photographs were taken at three hundred stations across an 8-km² study area. Seven models based upon state-of-the-art machine learning techniques such as Naïve Bayes, Regression Tree, Classification Tree, C4.5, Random Forest, Support Vector Machine, and CN2 learners were tested for predicting eight epi-macrobenthic species diversity metrics as a function of the class number. The Random Forest outperformed other models with a three-discretized Simpson index applied to epi-macrobenthic communities, explaining 69% (Classification Accuracy) of its variability by mean bathymetry, time range and skewness derived from the LiDAR waveform. Corroborating marine ecological theory, areas with low Simpson epi-macrobenthic diversity corresponded to low water depths, high skewness and time range, whereas higher Simpson diversity relied upon deeper bottoms (correlated with stronger hydrodynamics) and low skewness and time range. The degree of species heterogeneity was therefore positively linked with the degree of structural complexity of the benthic cover. This work underpins that fully exploited bathymetric LiDAR (not only bathymetrically derived by-products), coupled with a proficient machine learner, is able to rapidly predict habitat characteristics at a spatial resolution relevant to epi-macrobenthos diversity, ranging from clear to
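The three-discretized Simpson index used as the response variable can be computed from species abundances as follows; the discretization cut points below are hypothetical, as the abstract does not state them:

```python
def simpson_index(abundances):
    """Simpson diversity D = 1 - sum(p_i^2); higher = more heterogeneous."""
    n = sum(abundances)
    return 1.0 - sum((a / n) ** 2 for a in abundances)

def discretize(d, edges=(0.33, 0.66)):
    """Three-class discretization (low/medium/high); cut points illustrative."""
    return "low" if d < edges[0] else "medium" if d < edges[1] else "high"

even   = simpson_index([25, 25, 25, 25])   # four equally abundant species
skewed = simpson_index([97, 1, 1, 1])      # one strongly dominant species
```

An evenly distributed community scores high (heterogeneous benthic cover), while a community dominated by one species scores low, matching the depth/skewness gradient described above.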

  7. Characteristics and short-term changes of the Po Delta seafloor morphology through high-resolution bathymetric and backscatter data

    Science.gov (United States)

    Madricardo, Fantina; Bosman, Alessandro; Kruss, Aleksandra; Remia, Alessandro; Correggiari, Anna; Fogarin, Stefano; Romagnoli, Claudia; Moscon, Giorgia

    2016-04-01

    River deltas are highly dynamic and valuable environments and often undergo strong natural and human-induced pressures that need constant monitoring. Whereas remote sensing observations of the sub-aerial part of the delta are very important for the assessment of morphological changes over long time scales (years to decades), the short time-scale evolution of the submerged part of the system often remains undetermined. In particular, the shallow-water submarine pro-delta front is commonly characterized by active depositional and erosional processes. This area is crucial for the understanding of fluvial and coastal dynamics. In this study, we applied geophysical investigations to characterize the very shallow-water area of the Po river delta in the northern Adriatic Sea. The modern Po delta is the result of increased sediment flux derived from both climate change (the Little Ice Age) and human impact (deforestation, river diversion and construction of artificial levees), and in recent years it has been suffering erosion. Here, we present the results of two high-resolution multibeam echosounder surveys carried out in June 2013 and September 2014 on the Po river mouth and delta front in the framework of the Ritmare Project. The Po delta front, like other modern deltas, has a complicated morphology, consisting of multiple terminal distributary channels, subaqueous levee deposits, and mouth bars. The high-resolution bathymetric data show that the prodelta slope has a curved shape, with an overall southward asymmetry of the submerged delta due to prevalent longshore currents. The 2013 bathymetric map highlights a number of sedimentary features, such as depositional bars radiating in the prodelta slope with an asymmetric section and a steeper southward lee side. The new bathymetric map collected in 2014 shows impressive changes: in correspondence with the depositional lobes, we observed extensive collapse depressions with bathymetric changes of over 1 m in 15 months and widespread

  8. Feature selection and classification methodology for the detection of knee-joint disorders.

    Science.gov (United States)

    Nalband, Saif; Sundar, Aditya; Prince, A Amalin; Agarwal, Anita

    2016-04-01

    Vibroarthrographic (VAG) signals emitted from the knee joint provide an early diagnostic tool for knee-joint disorders. The nonstationary and nonlinear nature of the VAG signal makes feature extraction an important aspect. In this work, we investigate VAG signals by proposing a wavelet-based decomposition. The VAG signals are decomposed into sub-band signals of different frequencies. Nonlinear features such as recurrence quantification analysis (RQA), approximate entropy (ApEn) and sample entropy (SampEn) are extracted as features of the VAG signal. A total of twenty-four features form a vector to characterize a VAG signal. Two feature selection (FS) techniques, the apriori algorithm and a genetic algorithm (GA), selected six and four features, respectively, as the most significant. Least squares support vector machines (LS-SVM) and random forest are proposed as classifiers to evaluate the performance of the FS techniques. Results indicate that classification accuracy was higher with features selected by the FS algorithms. LS-SVM using the apriori algorithm gives the highest accuracy of 94.31% with a false discovery rate (FDR) of 0.0892. The proposed work also provided better classification accuracy than previous studies, which reported an accuracy of 88%. This work can enhance the performance of existing technology for accurately distinguishing normal and abnormal VAG signals, and the proposed methodology could provide an effective non-invasive diagnostic tool for knee-joint disorders.
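Of the nonlinear features listed, sample entropy is straightforward to sketch; this minimal O(n²) version follows the conventional SampEn definition (tolerance r = 0.2·std) rather than the authors' exact implementation, and the signals are synthetic:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (within Chebyshev tolerance r) also
    match for m + 1 points. Low values indicate a regular signal.
    """
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)
        return (d <= r).sum() - len(emb)      # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # predictable signal
noisy = rng.normal(size=400)                        # irregular signal
```

A crepitus-like irregular VAG segment would score markedly higher than a smooth one, which is why SampEn is discriminative for abnormal joints.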

  9. Building an intrusion detection system using a filter-based feature selection algorithm

    NARCIS (Netherlands)

    Ambusaidi, Mohammed A.; He, Xiangjian; Nanda, Priyadarsi; Tan, Zhiyuan

    2016-01-01

    Redundant and irrelevant features in data have caused a long-term problem in network traffic classification. These features not only slow down the process of classification but also prevent a classifier from making accurate decisions, especially when coping with big data. In this paper, we propose a

  10. Testing of Haar-Like Feature in Region of Interest Detection for Automated Target Recognition (ATR) System

    Science.gov (United States)

    Zhang, Yuhan; Lu, Dr. Thomas

    2010-01-01

    The objectives of this project were to develop a ROI (Region of Interest) detector using Haar-like features, similar to the face detection in Intel's OpenCV library, implement it in Matlab code, and test the performance of the new ROI detector against the existing ROI detector, which uses the Optimal Trade-off Maximum Average Correlation Height (OTMACH) filter. The ROI detector comprised three parts: (1) automated Haar-like feature selection, finding a small set of the most relevant Haar-like features for detecting ROIs that contain a target; (2) given that small set of Haar-like features, training a neural network to recognize ROIs with targets by taking the Haar-like features as inputs; and (3) using the trained neural network, developing a filtering method to process the neural network responses into a small set of regions of interest. All three parts needed to be coded in Matlab. The parameters in the detector were trained by machine learning and tested with specific datasets. Since the OpenCV library and Haar-like features were not available in Matlab, the Haar-like feature calculation had to be implemented in Matlab. Code for Adaptive Boosting and max/min filters in Matlab could be found on the Internet but had to be integrated to serve the purpose of this project. The performance of the new detector was tested by comparing its accuracy and speed against the existing OTMACH detector. Speed referred to the average time to find the regions of interest in an image. Accuracy was measured by the number of false positives (false alarms) at the same detection rate between the two detectors.
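The third part, filtering neural-network responses into a small set of ROIs, can be sketched in one dimension as greedy non-maximum suppression (written here in Python rather than the project's Matlab; the threshold, spacing and response values are hypothetical):

```python
def responses_to_rois(resp, threshold=0.5, min_gap=3):
    """Turn per-position detector responses into a small set of ROIs.

    Greedy non-maximum suppression: repeatedly keep the strongest
    response above the threshold and suppress neighbours within
    min_gap positions. A sketch of the max/min filtering step.
    """
    order = sorted(range(len(resp)), key=lambda i: resp[i], reverse=True)
    kept = []
    for i in order:
        if resp[i] < threshold:
            break                      # remaining responses are weaker still
        if all(abs(i - j) > min_gap for j in kept):
            kept.append(i)
    return sorted(kept)

resp = [0.1, 0.2, 0.9, 0.8, 0.1, 0.0, 0.6, 0.7, 0.2, 0.1]
rois = responses_to_rois(resp)
```

Clustered strong responses collapse to one ROI each, so the detector reports a handful of candidate regions instead of one hit per pixel.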

  11. Using GeoMapApp to access and interpret high-resolution bathymetric data collected with deep submergence vehicles

    Science.gov (United States)

    Ferrini, V.; Carbotte, S. M.; Ryan, W. B.; O'hara, S. H.; Bonczkowski, J.; Arko, R. A.

    2011-12-01

    Bathymetric data products generated with deep submergence technology can be of meter to sub-meter resolution, providing an unprecedented view of seafloor features. Data at this resolution provide near photo-quality information that can be used to not only quantify morphologic features and create geologic maps, but can also be used to develop and refine remote seafloor characterization techniques. Nesting these data within regional lower resolution data, and supplementing them with ground-truth photos and observations from seafloor samples is often the key to understanding the geologic features revealed in bathymetric data. Through the efforts of the Ridge 2000 Program and the Ridge 2000 Data Portal, many of these data have been acquired, assembled, spatially co-registered, and made directly accessible through a variety of programmatic interfaces. The Data Portal provides direct downloads of raw and processed data files with full attribution to contributing scientists and links to publications. In addition, GeoMapApp, a free Java-based visualization and analysis tool, provides quantitative access to several high-resolution bathymetry data sets within the context of important complementary data. The default basemap in GeoMapApp is the Global Multi-Resolution Topography (GMRT) Synthesis, which includes ship-based bathymetry data from over 500 research cruises as well as contributed regional grids, providing resolution of 100 m or better in many areas. GeoMapApp provides quantitative access to the GMRT, which can be used to understand the regional context of localized high-resolution bathymetry data. Other related data directly accessible through GeoMapApp include sidescan data, sample positions and descriptions, and bottom photos acquired with National Deep Submergence Facility assets and the WHOI TowCam System. Quantitative tools for interrogating and interpreting the data (e.g. profiling, changing sun-illumination and color scale) are provided within GeoMapApp to

  12. On the Putative Detection of z > 0 X-Ray Absorption Features in the Spectrum of Mrk 421

    Science.gov (United States)

    Rasmussen, Andrew P.; Kahn, Steven M.; Paerels, Frits; Herder, Jan Willem den; Kaastra, Jelle; de Vries, Cor

    2007-02-01

    In a series of papers, Nicastro et al. have reported the detection of z > 0 O VII absorption features in the spectrum of Mrk 421 obtained with the Chandra Low Energy Transmission Grating Spectrometer (LETGS). We evaluate this result in the context of a high-quality spectrum of the same source obtained with the Reflection Grating Spectrometer (RGS) on XMM-Newton. The data comprise over 955 ks of usable exposure time and more than 2.6×10⁴ counts per 50 mÅ at 21.6 Å. We concentrate on the spectrally clean region (21.3 Å < λ < 22.5 Å), where sharp features due to the astrophysically abundant O VII may reveal an intervening, warm-hot intergalactic medium (WHIM). We do not confirm detection of any of the intervening systems claimed to date. Rather, we detect only three unsurprising, astrophysically expected features down to the log(N_i) ≈ 14.6 (3σ) sensitivity level. Each of the two purported WHIM features is rejected with a statistical confidence that exceeds that reported for its initial detection. While we cannot rule out the existence of fainter, WHIM-related features in these spectra, we suggest that previous discovery claims were premature. A more recent paper by Williams et al. claims to have demonstrated that the RGS data we analyze here do not have the resolution or statistical quality required to confirm or deny the LETGS detections. We show that our analysis resolves the issues encountered by Williams et al. and recovers the full resolution and statistical quality of the RGS data. We highlight the differences between our analysis and those published by Williams et al., as this may explain our disparate conclusions.

  13. On the Putative Detection of Z>0 X-Ray Absorption Features in the Spectrum of Mrk 421

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Andrew P. (SLAC/KIPAC, Menlo Park); Kahn, Steven M. (SLAC/KIPAC, Menlo Park; Stanford U., Phys. Dept.); Paerels, Frits (Columbia U., Astron. Astrophys.); Herder, Jan Willem den; Kaastra, Jelle; de Vries, Cor (SRON, Utrecht)

    2006-04-28

    In a series of papers, Nicastro et al. have claimed the detection of z > 0 O VII absorption features in the spectrum of Mrk 421 obtained with the Chandra Low Energy Transmission Grating Spectrometer (LETGS). We evaluate those claims in the context of a high-quality spectrum of the same source obtained with the Reflection Grating Spectrometer (RGS) on XMM-Newton. The data comprise over 955 ks of usable exposure time and more than 2.6×10⁴ counts per 50 mÅ at 21.6 Å. We concentrate on the spectrally clean region (21.3 Å < λ < 22.5 Å) where sharp features due to the astrophysically abundant O VII may reveal an intervening, warm-hot intergalactic medium (WHIM). In spite of the fact that the sensitivity of the RGS data is higher than that of the original LETGS data presented by Nicastro et al., we do not confirm detection of any of the intervening systems claimed to date. Rather, we detect only three unsurprising, astrophysically expected features down to the log(N_i) ≈ 14.6 (3σ) sensitivity level. Each of the two purported WHIM features is rejected with a statistical confidence that exceeds that reported for its initial detection. While we cannot rule out the existence of fainter, WHIM-related features in these spectra, we suggest that previous discovery claims were premature. A more recent paper by Williams et al. claims to have demonstrated that the RGS data we analyze here do not have the resolution or statistical quality required to confirm or deny the LETGS detections. We show that the Williams et al. reduction of the RGS data was highly flawed, leading to an artificial and spurious degradation of the instrument response. We carefully highlight the differences between our analysis presented here and those published by Williams et al.

  14. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    Science.gov (United States)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can have a significant impact in reducing the data that need to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which results in the system being able to process up to fifty 1024×768-pixel images per second with a significantly reduced number of false positives.
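
    As a software analogy of the cascade idea described above, the sketch below gates fixed-size windows through a cheap motion test and then an edge test, so that only the surviving windows would reach an expensive classifier. This is plain NumPy, not the authors' FPGA datapath; the window size and thresholds are illustrative assumptions.

```python
import numpy as np

def cascade_candidate_windows(prev, curr, win=16, motion_thr=10.0, edge_thr=20.0):
    """Return origins of windows that pass cheap motion and edge tests.

    Only these windows would be forwarded to the (expensive) object
    classifier; the rest of the frame is skipped entirely.
    """
    h, w = curr.shape
    diff = np.abs(curr.astype(float) - prev.astype(float))  # frame differencing
    gy, gx = np.gradient(curr.astype(float))
    grad = np.hypot(gx, gy)                                 # gradient magnitude
    candidates = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            # Stage 1: motion test -- cheapest rejection first.
            if diff[y:y + win, x:x + win].mean() < motion_thr:
                continue
            # Stage 2: edge test -- textured regions only.
            if grad[y:y + win, x:x + win].mean() < edge_thr:
                continue
            candidates.append((y, x))
    return candidates
```

Running the full classifier only on the returned windows is what produces the data-search reduction the abstract reports.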

  15. Intra- and Inter-database Study for Arabic, English, and German Databases: Do Conventional Speech Features Detect Voice Pathology?

    Science.gov (United States)

    Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H

    2016-10-10

    A large population around the world has voice complications. Various approaches for subjective and objective evaluation have been suggested in the literature. The subjective approach strongly depends on the experience and area of expertise of the clinician, and human error cannot be neglected. The objective or automatic approach, on the other hand, is noninvasive. Automatic systems can provide complementary information that may be helpful for a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas, where a general practitioner can use them and may refer the patient to a specialist to avoid complications that may be life threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features, such as linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCCs was developed, and three different voice disorder databases were used. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database: the detection rate for the intra-database experiments ranges from 72% to 95%, and that for the inter-database experiments from 47% to 82%. The results lead to the conclusion that conventional speech features are not correlated with voice quality, and hence are not reliable for pathology detection.
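
    For concreteness, here is a minimal MFCC extractor showing the textbook pipeline behind such "conventional speech features" (pre-emphasis, framing, windowing, mel filterbank, DCT). The frame sizes and filter counts are common defaults, not necessarily those used in the study.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13,
         frame_len=0.025, frame_step=0.010):
    """Minimal MFCC extraction: returns (n_frames, n_ceps) coefficients."""
    # Pre-emphasis boosts the high frequencies that carry consonant energy.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    flen, fstep = int(sr * frame_len), int(sr * frame_step)
    n_frames = 1 + max(0, (len(sig) - flen) // fstep)
    idx = np.arange(flen)[None, :] + fstep * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(flen)
    # Power spectrum of each frame.
    pspec = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    mel = lambda hz: 2595.0 * np.log10(1.0 + hz / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    feat = np.log(pspec @ fbank.T + 1e-10)
    # The DCT decorrelates the filterbank energies; keep the first n_ceps.
    return dct(feat, type=2, axis=1, norm='ortho')[:, :n_ceps]
```

A disorder-detection system of the kind described above would feed these per-frame vectors to a classifier trained on pathological and normal voices.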

  16. Detection of acute lymphocyte leukemia using k-nearest neighbor algorithm based on shape and histogram features

    Science.gov (United States)

    Purwanti, Endah; Calista, Evelyn

    2017-05-01

    Leukemia is a type of cancer caused by malignant neoplasms of leukocyte cells. Acute lymphocytic leukemia (ALL) is a type of leukemia that can cause death quickly. In this study, we propose automatic detection of lymphocytic leukemia through classification of lymphocyte cell images obtained from single-cell peripheral blood smears. There are two main objectives. The first is to extract features from the cell images. The second is to classify the lymphocyte cells into two classes, normal and abnormal lymphocytes. We use a combination of shape features and histogram features, and the classification algorithm is k-nearest neighbor with k varied over 1, 3, 5, 7, 9, 11, 13, and 15. The best accuracy, sensitivity, and specificity in this study are all 90%, obtained with the combined features area-perimeter-mean-standard deviation and k=7.
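
    A minimal NumPy version of the two steps. The feature set mirrors the area-perimeter-mean-standard-deviation combination named above; the boundary-pixel perimeter estimate and the Euclidean distance metric are simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def cell_features(img, mask):
    """Shape + histogram features of one segmented cell:
    [area, perimeter proxy, mean intensity, intensity std]."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    pad = np.pad(mask, 1)
    # Perimeter proxy: foreground pixels with at least one background
    # 4-neighbour (i.e. pixels that are not fully interior).
    interior = (pad[1:-1, 1:-1] & pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])
    perimeter = area - int(interior.sum())
    vals = img[mask]
    return np.array([area, perimeter, vals.mean(), vals.std()])

def knn_classify(train_X, train_y, x, k=7):
    """Majority vote among the k training cells nearest to feature vector x."""
    order = np.argsort(np.linalg.norm(train_X - x, axis=1))
    return np.bincount(train_y[order[:k]]).argmax()
```

With labeled normal/abnormal training cells, `knn_classify(train_X, train_y, cell_features(img, mask))` yields the predicted class.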

  17. Detection of Spectral Features of Anomalous Vegetation From Reflectance Spectroscopy Related to Pipeline Leakages

    Science.gov (United States)

    van der Meijde, M.; van der Werff, H. M.; Kooistra, J. F.

    2004-12-01

    Underground pipeline leakage inspection is an open problem with large economic and environmental impact. Traditional methods for investigating leakage and pollution, like drilling, are time-consuming, destructive and expensive. A non-destructive and more economical exploration method would be a valuable complement to sub-surface investigative methods. Reflectance spectroscopy (or hyperspectral remote sensing) has proved to be a tool that offers a non-destructive investigative method to identify anomalous spectral features in vegetation. One of the major environmental problems related to pipelines is the leakage of hydrocarbons into the environment. Hydrocarbons can establish locally anomalous zones that favor the development of a diverse array of chemical and mineralogical changes. Any vegetation present in these zones is likely to be influenced by the hostile and polluted environment. Geobotanical anomalies occur as a result of the effect of hydrocarbons on the growth of vegetation. The most likely changes in the vegetation are expected to occur in the chlorophyll concentrations, which are an indicator of health state. This is the main conclusion of an extensive field campaign in May 2004 in Holland investigating a 1 km trajectory of a 21 km long pipeline. The pipeline is `sweating' benzene condensates at approximately 50% of the connection points between the 9 meter segments of the pipeline. Spectral measurements were conducted at four different test locations in the 1 km trajectory. The test locations were covered by long grass; one of the fields had recently been mown. Using different survey designs we can confirm the presence of geobotanical anomalies in different locations using various spectral interpretation techniques such as linear red edge shifts, Carter stress indices, the normalized difference vegetation index and the yellowness index. After the interpretation of the geobotanical anomalies, derived from hyperspectral measurements, we compared the findings with
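
    Two of the stress indicators mentioned above can be computed directly from a reflectance spectrum. The sketch below uses typical literature band windows in nanometres, not necessarily the exact ones used in the campaign: a red-edge shift toward shorter wavelengths and a depressed NDVI are both classic chlorophyll-stress signatures.

```python
import numpy as np

def ndvi(refl, wl):
    """Normalized difference vegetation index from a reflectance spectrum
    sampled at wavelengths wl (nm)."""
    red = refl[(wl >= 650) & (wl <= 680)].mean()
    nir = refl[(wl >= 780) & (wl <= 900)].mean()
    return (nir - red) / (nir + red)

def red_edge_position(refl, wl):
    """Wavelength of maximum slope between 680 and 750 nm; a shift of this
    position toward shorter wavelengths is a common vegetation-stress sign."""
    sel = (wl >= 680) & (wl <= 750)
    slope = np.gradient(refl[sel], wl[sel])
    return wl[sel][np.argmax(slope)]
```

Comparing these values between spectra taken near and far from the leaking connection points is the kind of interpretation the abstract describes.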

  18. Influence of Confocal Scanning Laser Microscopy specific acquisition parameters on the detection and matching of Speeded-Up Robust Features

    Energy Technology Data Exchange (ETDEWEB)

    Stanciu, Stefan G., E-mail: sgstanciu@gmail.com [Center for Microscopy-Microanalysis and Information Processing, University Politehnica Bucharest, Splaiul Independentei 313, sector 6, Bucharest (Romania); Hristu, Radu; Stanciu, George A. [Center for Microscopy-Microanalysis and Information Processing, University Politehnica Bucharest, Splaiul Independentei 313, sector 6, Bucharest (Romania)

    2011-04-15

    The robustness and distinctiveness of local features to various object or scene deformations and to modifications of the acquisition parameters play key roles in the design of many computer vision applications. In this paper we present the results of our experiments on the behavior of a recently developed technique for local feature detection and description, Speeded-Up Robust Features (SURF), regarding image modifications specific to Confocal Scanning Laser Microscopy (CSLM). We analyze the repeatability of detected SURF keypoints and the precision-recall of their matching under modifications of three important CSLM parameters: pinhole aperture, photomultiplier (PMT) gain and laser beam power. During any investigation by CSLM these three parameters have to be modified, individually or together, in order to optimize the contrast and the signal-to-noise ratio (SNR); they are also inherently modified when changing the microscope objective. Our experiments show that a substantial number of SURF features can be detected at the same physical locations in images collected at different values of the pinhole aperture, PMT gain and laser beam power, and can subsequently be matched successfully based on their descriptors. In the final part, we illustrate the potential of SURF in CSLM imaging by presenting a SURF-based computer vision application that deals with the mosaicing of images collected by this technique. Research highlights: influence of pinhole aperture modifications on SURF detection and matching in CSLM images; influence of photomultiplier gain modifications on SURF detection and matching in CSLM images; influence of laser beam power modifications on SURF detection and matching in CSLM images; SURF-based automated mosaicing of CSLM images.
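
    Descriptor matching of the kind evaluated above is typically a nearest-neighbour search with a distance-ratio test. The generic NumPy sketch below illustrates only that matching step (not the SURF detector itself, which would require an image-processing library); the ratio threshold is a conventional choice, not one taken from the paper.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass the distance-ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, j2 = np.argsort(dist)[:2]
        # Accept only if the best match is clearly better than the runner-up.
        if dist[j] < ratio * dist[j2]:
            matches.append((i, int(j)))
    return matches
```

Precision-recall curves like those in the paper are obtained by sweeping `ratio` and comparing the accepted matches against ground-truth keypoint correspondences.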

  19. Skin Detection Based on Color Model and Low Level Features Combined with Explicit Region and Parametric Approaches

    Directory of Open Access Journals (Sweden)

    HARPREET KAUR SAINI

    2014-10-01

    Skin detection is an active research area in the field of computer vision that can be applied to face detection, eye detection, and similar tasks, which in turn support applications such as driver fatigue monitoring and surveillance systems. In computer vision applications, the color model and the representation of the human image in that color model form one of the major modules used to detect skin pixels. The mainstream approach is based on individual pixels and the selection of pixels that constitute the skin part of the whole image. In this work, we present a novel technique for skin color detection that combines an explicit region-based approach with a parametric approach, giving better efficiency and performance in terms of skin detection in human images. Color models and an image quantization technique are used to extract regions of the images and to represent the image in a particular color model such as RGB or HSV; the parametric approach is then applied by selecting low-level skin features to separate the skin and non-skin pixels of the images. In the first step, our technique uses the state-of-the-art non-parametric approach, which we call the template-based or explicitly-defined-skin-regions technique. Then low-level features of human skin, such as edges and corners, are extracted, which is also known as the parametric method. The experimental results show an improvement in the detection rate of skin pixels with this novel approach.
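
    As an example of the "explicitly defined skin regions" idea, here is one widely cited explicit RGB rule (the uniform-daylight thresholds); it is offered as a representative rule of this family, since the exact thresholds used in the work are not given in the abstract.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin pixels via an explicit RGB region rule."""
    # Work in int to avoid uint8 overflow in the differences below.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    return ((r > 95) & (g > 40) & (b > 20)     # bright enough channels
            & (mx - mn > 15)                   # sufficient colour spread
            & (np.abs(r - g) > 15)             # red dominates green...
            & (r > g) & (r > b))               # ...and blue
```

In a combined scheme like the one described, this explicit mask would pre-select candidate regions before the parametric, low-level-feature stage refines them.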

  20. Real-Time Hand Motion Parameter Estimation with Feature Point Detection Using Kinect

    Institute of Scientific and Technical Information of China (English)

    Chun-Ming Chang; Che-Hao Chang; Chung-Lin Huang

    2014-01-01

    This paper presents a real-time Kinect-based hand pose estimation method. Different from model-based and appearance-based approaches, our approach retrieves continuous hand motion parameters in real time. First, the hand region is segmented from the depth image. Then, some specific feature points on the hand are located by the random forest classifier, and the relative displacements of these feature points are transformed to a rotation invariant feature vector. Finally, the system retrieves the hand joint parameters by applying the regression functions on the feature vectors. Experimental results are compared with the ground truth dataset obtained by a data glove to show the effectiveness of our approach. The effects of different distances and different rotation angles for the estimation accuracy are also evaluated.

  1. CRED Fagatele Bay National Marine Sanctuary Bathymetric Position Index Habitat Structures 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetric Position Index (BPI) Structures are derived from derivatives of Simrad EM-3000 multibeam bathymetry (1 m and 3 m resolution). BPI structures are...
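
    The derivation behind BPI layers like the one above can be sketched simply: BPI is each cell's depth minus the mean depth of its neighbourhood, so crests come out positive and depressions negative. Real BPI tools use an annulus at fine and broad scales; a square window is used here purely for brevity.

```python
import numpy as np

def bpi(depth, radius=3):
    """Bathymetric Position Index on a gridded surface.

    Positive -> cell stands above its surroundings (crest/structure);
    negative -> depression; near zero -> flat or constant slope.
    """
    pad = np.pad(depth, radius, mode='edge')
    out = np.zeros_like(depth, dtype=float)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = depth[i, j] - win.mean()
    return out
```

Thresholding and combining BPI at two scales, together with slope, is how BPI "zones" and "structures" are typically classified.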

  2. CRED Fagatele Bay National Marine Sanctuary Bathymetric Position Index Habitat Zones 2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetric Position Index (BPI) Zones derived from derivatives of Simrad EM-3000 multibeam bathymetry (3 m resolution). BPI zones are surficial characteristics of...

  3. SIM2012-3213 Bathymetric Contours of Breckenridge Reservoir, Quantico, Virginia

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bathymetric data were collected using a boat-mounted Wide Area Augmentation System (WAAS), a type of differential global positioning system, echo depth-sounding...

  4. NY_GOME_CONTOURS: New York Bight and Gulf of Maine bathymetric contours

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This bathymetric shapefile contains 10 m contours for the continental shelf and 100 m beyond the 200 m shelf edge. The contours have been derived from the National...

  5. Using a personal watercraft for monitoring bathymetric changes at storm scale

    NARCIS (Netherlands)

    Van Son, S.T.J.; Lindenbergh, R.C.; De Schipper, M.A.; De Vries, S.; Duijnmayer, K.

    2009-01-01

    Monitoring and understanding coastal processes is important for the Netherlands since the most densely populated areas are situated directly behind the coastal defense. Traditionally, bathymetric changes are monitored at annual intervals, although nowadays it is understood that most dramatic changes


  7. USGS Small-scale Dataset - Bathymetric Shaded Relief of North America 200509 GeoTIFF

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Bathymetric Shaded Relief of North America map layer shows depth ranges using colors, with relief enhanced by shading. The image was derived from the National...

  8. CCALBATC - bathymetric contours for the central California region between Point Arena and Point Sur.

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — CCALBATC consists of bathymetric contours at 10-m and 50-m intervals for the area offshore of central California between Point Arena to the north and Point Sur to...

  9. Stellwagen Bank bathymetry - Percent slope derived from 5-meter bathymetric contour lines

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Percent slope of Stellwagen Bank bathymetry. Raster derived from 5-meter bathymetric contour lines (Quads 1-18). Collected on surveys carried out in 4 cruises 1994 -...

  10. Stellwagen Bank bathymetry - Degree slope derived from 5-meter bathymetric contour lines

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Degree slope of Stellwagen Bank bathymetry. Raster derived from 5-meter bathymetric contour lines (Quads 1-18). Collected on surveys carried out in 4 cruises 1994 -...
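
    The percent-slope and degree-slope records above are the same surface derivative expressed in two units: percent slope is 100 × rise/run, degree slope is atan(rise/run). A minimal sketch for a gridded surface (the cell size is an assumed parameter):

```python
import numpy as np

def slope_grids(z, cell=5.0):
    """Percent and degree slope from a gridded elevation/bathymetry surface
    with square cells of size `cell` (same units as z)."""
    dzdy, dzdx = np.gradient(z, cell)           # partial derivatives
    rise_over_run = np.hypot(dzdx, dzdy)        # gradient magnitude
    return 100.0 * rise_over_run, np.degrees(np.arctan(rise_over_run))
```

A 45° surface thus has a percent slope of 100, which is why the two layers look different even though they derive from the same bathymetry.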

  11. Bathymetric measurements of Little Holland Tract, Sacramento-San Joaquin Delta, California, 2015

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bathymetric data were collected by the U.S. Geological Survey (USGS) in 2015 for the Little Holland Tract in the Sacramento-San Joaquin River Delta, California. The...

  12. Bathymetric measurements of Little Holland Tract, Sacramento-San Joaquin Delta, California, 2015, from personal watercraft

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Bathymetric data were collected by the U.S. Geological Survey (USGS) in 2015 for Little Holland Tract in the Sacramento-San Joaquin River Delta, California. The data...

  13. International Bathymetric Chart of the Arctic Ocean, Version 2.23

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The goal of this initiative is to develop a digital data base that contains all available bathymetric data north of 64 degrees North, for use by mapmakers,...

  14. International Bathymetric Chart of the Arctic Ocean, Version 1.0

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The goal of this initiative is to develop a digital data base that contains all available bathymetric data north of 64 degrees North, for use by mapmakers,...

  15. Comparison of spatial frequency domain features for the detection of side attack explosive ballistics in synthetic aperture acoustics

    Science.gov (United States)

    Dowdy, Josh; Anderson, Derek T.; Luke, Robert H.; Ball, John E.; Keller, James M.; Havens, Timothy C.

    2016-05-01

    Explosive hazards in current and former conflict zones are a threat to both military and civilian personnel. As a result, much effort has been dedicated to identifying automated algorithms and systems to detect these threats. However, robust detection is complicated due to factors like the varied composition and anatomy of such hazards. In order to solve this challenge, a number of platforms (vehicle-based, handheld, etc.) and sensors (infrared, ground penetrating radar, acoustics, etc.) are being explored. In this article, we investigate the detection of side attack explosive ballistics via a vehicle-mounted acoustic sensor. In particular, we explore three acoustic features, one in the time domain and two on synthetic aperture acoustic (SAA) beamformed imagery. The idea is to exploit the varying acoustic frequency profile of a target due to its unique geometry and material composition with respect to different viewing angles. The first two features build their angle specific frequency information using a highly constrained subset of the signal data and the last feature builds its frequency profile using all available signal data for a given region of interest (centered on the candidate target location). Performance is assessed in the context of receiver operating characteristic (ROC) curves on cross-validation experiments for data collected at a U.S. Army test site on different days with multiple target types and clutter. Our preliminary results are encouraging and indicate that the top performing feature is the unrolled two dimensional discrete Fourier transform (DFT) of SAA beamformed imagery.
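
    The top-performing feature, an unrolled 2-D DFT of a beamformed image chip, can be sketched as below. The block size and low-frequency cropping are illustrative choices; the magnitude spectrum is used so the feature is insensitive to where the target sits within the region of interest.

```python
import numpy as np

def dft_feature(roi, keep=8):
    """Unrolled 2-D DFT magnitude feature for a region of interest.

    Keeping only the central keep x keep block of the shifted magnitude
    spectrum gives a compact descriptor of the chip's coarse frequency
    profile, suitable as classifier input.
    """
    spec = np.abs(np.fft.fft2(roi))       # magnitude: shift-invariant
    spec = np.fft.fftshift(spec)          # centre the zero frequency
    c0, c1 = spec.shape[0] // 2, spec.shape[1] // 2
    h = keep // 2
    block = spec[c0 - h:c0 + h, c1 - h:c1 + h]
    return block.ravel()                  # the "unrolled" feature vector
```

Such vectors, computed per candidate location, would then be scored in the cross-validation/ROC framework the abstract describes.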

  16. An Energy efficient application specific integrated circuit for electrocardiogram feature detection and its potential for ambulatory cardiovascular disease detection.

    Science.gov (United States)

    Jain, Sanjeev Kumar; Bhaumik, Basabi

    2016-03-01

    A novel algorithm based on forward search is developed for real-time electrocardiogram (ECG) signal processing and implemented in an application specific integrated circuit (ASIC) for QRS complex related cardiovascular disease diagnosis. The authors have evaluated their algorithm using the MIT-BIH database and achieve a sensitivity of 99.86% and a specificity of 99.93% for QRS complex peak detection. In this Letter, the Physionet PTB diagnostic ECG database is used for QRS complex related disease detection. An ASIC for cardiovascular disease detection is fabricated using 130-nm CMOS high-speed process technology. The area of the ASIC is 0.5 mm². The power dissipation is 1.73 μW at an operating frequency of 1 kHz with a supply voltage of 0.6 V. The output from the ASIC is fed to their Android application, which generates a diagnostic report that can be sent to a cardiologist through email. Their ASIC result shows an average failed detection rate of 0.16% for six-lead data of 290 patients in the PTB diagnostic ECG database. They have also implemented a low-leakage version of their ASIC, which dissipates only 45 pJ with a supply voltage of 0.9 V. Their proposed ASIC is most suitable for an energy efficient telemetry cardiovascular disease detection system.
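
    For context, a generic amplitude-threshold R-peak detector in Python. This is a common software baseline, not the forward-search algorithm implemented in the ASIC, whose details the abstract does not reproduce; the threshold factor and refractory period are conventional assumptions.

```python
import numpy as np

def detect_r_peaks(ecg, fs=360, refractory=0.2, k=2.5):
    """Detect R-peak indices: local maxima above mean + k*std, separated
    by at least a physiological refractory period."""
    thr = ecg.mean() + k * ecg.std()
    min_gap = int(refractory * fs)      # samples between admissible peaks
    peaks, last = [], -min_gap
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
    return peaks
```

R-R intervals from the detected peaks are the starting point for the QRS-related disease measures the Letter targets.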

  17. Helicobacter Pylori infection detection from gastric X-ray images based on feature fusion and decision fusion.

    Science.gov (United States)

    Ishihara, Kenta; Ogawa, Takahiro; Haseyama, Miki

    2017-05-01

    In this paper, a fully automatic method for detection of Helicobacter pylori (H. pylori) infection is presented with the aim of constructing a computer-aided diagnosis (CAD) system. In order to realize a CAD system with good performance for detection of H. pylori infection, we focus on the following characteristic of stomach X-ray examination. The accuracy of X-ray examination differs depending on the symptom of H. pylori infection that is focused on and the position from which X-ray images are taken. Therefore, doctors have to comprehensively assess the symptoms and positions. In order to introduce the idea of doctors' assessment into the CAD system, we newly propose a method for detection of H. pylori infection based on the combined use of feature fusion and decision fusion. As a feature fusion scheme, we adopt Multiple Kernel Learning (MKL). Since MKL can combine several features with determination of their weights, it can represent the differences in symptoms. By constructing an MKL classifier for each position, we can obtain several detection results. Furthermore, we introduce confidence-based decision fusion, which can consider the relationship between the classifier's performance and the detection results. Consequently, accurate detection of H. pylori infection becomes possible by the proposed method. Experimental results obtained by applying the proposed method to real X-ray images show that our method has good performance, close to the results of detection by specialists, and indicate that the realization of a CAD system for determining the risk of H. pylori infection is possible. Copyright © 2017 Elsevier Ltd. All rights reserved.
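
    The confidence-based decision fusion step can be sketched as a weighted vote, with each position-specific classifier weighted by an estimate of its reliability (e.g. validation accuracy). This weighting scheme is a simplification standing in for the paper's method, which is not fully specified in the abstract.

```python
import numpy as np

def fuse_decisions(scores, confidences):
    """Confidence-weighted fusion of per-position detector outputs.

    scores: probability of H. pylori infection from each position's classifier.
    confidences: reliability estimate per classifier (e.g. validation
    accuracy); more reliable classifiers get more say in the final decision.
    """
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                      # normalise weights to sum to 1
    return float(np.dot(w, scores))      # fused probability in [0, 1]
```

The fused probability can then be thresholded to produce the final infected/uninfected decision.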

  18. Near-Surface Crevasse Detection in Ice Sheets using Feature-Based Machine Learning

    Science.gov (United States)

    Ray, L.; Walker, B.; Lever, J.; Arcone, S. A.

    2015-12-01

    In 2014, a team of Dartmouth, CRREL, and University of Maine researchers conducted the first of three annual ground-penetrating radar surveys of the McMurdo Shear Zone using robot-towed instruments. This survey provides over 100 transects of a 5.7 km x 5.0 km grid spanning the width of the shear zone at spacing of approximately 50 m. Transect direction was orthogonal to ice flow. Additionally, a dense 200 m x 200 m grid was surveyed at 10 m spacing in both the N-S and W-E directions. Radar settings provided 20 traces/sec, which combined with an average robot speed of 1.52 m/s, provides a trace every 7.6 cm. The robot towed two antenna units at 400 MHz and 200 MHz center frequencies, with the former penetrating to approximately 19 m. We establish boundaries for the shear zone over the region surveyed using the 400 MHz antenna data, and we geo-locate crevasses using feature-based machine learning classification of GPR traces into one of three classes - 1) firn, 2) distinct crevasses, and 3) less distinct or deeper features originating within the 19 m penetration depth. Distinct crevasses feature wide, hyperbolic reflections with strike angles of 35-40° to transect direction and clear voids. Less distinct or deeper features range from broad diffraction patterns with no clear void to overlapping diffractions extending tens of meters in width with or without a clear void. The classification is derived from statistical features of unprocessed traces and thus provides a computationally efficient means for eventual real-time classification of GPR traces. Feature-based classification is shown to be insensitive to artifacts related to rolling or pitching motion of the instrument sled and also provides a means of assessing crevasse width and depth. In subsequent years, we will use feature-based classification to estimate ice flow and evolution of individual crevasses.

  19. The role of subducting bathymetric highs on the oceanic crust to deformation of accretionary wedge and earthquake segmentation in the Java forearc

    Science.gov (United States)

    Singh, S. C.; Mukti, M.; Deighton, I.

    2014-12-01

    Stratigraphic and structural observations from newly acquired seismic reflection data offshore south Java reveal the structural style of deformation along the forearc and the role of subducting bathymetric highs in the morphology of the forearc region. The forearc region can be divided into two major structural units, the accretionary wedge and the forearc basin, where a backthrust marks the boundary between the accretionary wedge and the forearc basin sediments. Continuous compression in the subduction zone has induced younger landward-vergent folds and thrusts within the seaward margin of the forearc basin sediments, which together with the backthrust are referred to as the Offshore South Java Fault Zone (OSJFZ), representing the growth of the accretionary wedge farther landward. Seaward-vergent imbricated thrusts have deformed the sediments in the accretionary wedge, younging seaward, and have developed fold-thrust belts in the accretionary wedge toward the trench. Together with the backthrusts, these seaward-vergent thrusts characterize the growth of the accretionary wedge south of the Java trench. Based on these new results, we suggest that accretionary wedge mechanics is not the first-order factor shaping the morphology of the accretionary wedge complex. Instead, the subducting bathymetric highs play the main role in shaping the forearc, manifested in the uplift of the forearc high and intense deformation along the OSJFZ. These subducting highs also induce compression within the accretionary sediments, evident from landward deflection of the subduction front at the trench and of the inner part of the accretionary wedge at the seaward margin of the forearc basin. Intense deformation is also observed on the seaward portion of the accretionary wedge where the bathymetric highs have subducted. We suggest that these subducted bathymetric features define the segment boundaries for megathrust earthquakes, hence reducing the maximum size of the earthquakes in the

  20. New insights from high resolution bathymetric surveys in the Panarea volcanic complex (Aeolian Islands, Italy)

    Science.gov (United States)

    Anzidei, M.; Esposito, A.

    2003-04-01

    During November 2002 the portion of the Panarea volcanic complex (Aeolian Islands, Italy) that includes the islets of Dattilo, Panarelli, Lisca Bianca, Bottaro and Lisca Nera experienced an intense submarine gaseous exhalation that produced a spectacular submarine fumarolic field. The submarine volcanic activity of the Aeolian area was already known in historical times to Tito Livio, Strabone and Plinio (SGA, 1996), who reported exhalation episodes and submarine eruptions. During the last decade, geological, structural, geochemical and volcanological studies performed on the Panarea volcanic complex evidenced a positive gravimetric anomaly, tectonic discontinuities and several centres of geothermal fluid emission (Barberi et al., 1974; Lanzafame and Rossi, 1984; Bellia et al., 1986; Gabianelli et al., 1990; Italiano and Nuccio, 1991; Calanchi et al., 1995, 1999). With the aim of estimating the crustal deformation of the submarine area of the archipelago connected with the exhalation activity, we produced a detailed Marine Digital Terrain Model (MDTM) of the seafloor by means of a high resolution bathymetric survey. We used the multibeam technique coupled with GPS positioning in RTK mode and obtained a MDTM with an average pixel size of 0.5 m. The MDTM allowed us to estimate the location, depth, shape and size of the exhalation centres and of seafloor morphological-structural features, opening new questions for the evaluation of the volcanic hazard of the Panarea area, which to date is still debated.

  1. Less is more: Avoiding the LIBS dimensionality curse through judicious feature selection for explosive detection

    Science.gov (United States)

    Kumar Myakalwar, Ashwin; Spegazzini, Nicolas; Zhang, Chi; Kumar Anubham, Siva; Dasari, Ramachandra R.; Barman, Ishan; Kumar Gundawar, Manoj

    2015-08-01

    Despite its intrinsic advantages, translation of laser-induced breakdown spectroscopy for material identification has often been impeded by the lack of robustness of the developed classification models, frequently due to the presence of spurious correlations. While a number of classifiers exhibiting high discriminatory power have been reported, efforts to establish the subset of relevant spectral features that enable a fundamental interpretation of the segmentation capability and avoid the ‘curse of dimensionality’ have been lacking. Using LIBS data acquired from a set of secondary explosives, we investigate judicious feature selection approaches and architect two different chemometric classifiers, based on feature selection through prerequisite knowledge of the sample composition and through a genetic algorithm, respectively. While the full spectral input results in a classification rate of ca. 92%, selection of only the carbon-to-hydrogen spectral window results in near-identical performance. Importantly, the genetic-algorithm-derived classifier shows a statistically significant improvement to ca. 94% accuracy for prospective classification, even though the number of features used is an order of magnitude smaller. Our findings demonstrate the impact of rigorous feature selection in LIBS and also hint at the feasibility of using a discrete filter-based detector, thereby enabling a cheaper and more compact system more amenable to field operations.

  2. The Relationship of Forest Fires Detected by MODIS and SRTM Derived Topographic Features in Central Siberia

    Science.gov (United States)

    Ranson, Jon K.; Kovacs, Katalin; Kharuk, Viatcheslav; Burke, Erin

    2006-01-01

    Fires are a common occurrence in the Siberian boreal forest. The MOD14 thermal anomalies product of the Terra MODIS (Moderate Resolution Imaging Spectroradiometer) product set is designed to detect thermal anomalies (i.e., hotspots or fires) on the Earth's surface. Recent field studies showed a dependence of fire occurrence on topography. In this study, MODIS thermal anomaly data and SRTM topography data were merged and analyzed to evaluate whether forest fires are more likely to occur at certain combinations of elevation, slope and aspect. Using satellite data over a large area can lead to a better understanding of how topography and forest fires are related. The study area covers a 2.5 million km² portion of the Central Siberian southern taiga from 72 deg to 110 deg East and from 50 deg to 60 deg North. About 57% of the study area is forested and 80% of the forest grows between 200 and 1000 m. Forests with pine (Pinus sylvestris), larch (Larix sibirica, L. gmelinii), Siberian pine (Pinus sibirica), spruce (Picea obovata) and fir (Abies sibirica) cover most of the landscape. Deciduous stands with birch (Betula pendula, B. pubescens) and aspen (Populus tremula) cover the areas of lower elevation in this region. The climate of this area is distinctly continental with long, cold winters and short, hot summers. The tree line in this part of the world is around 1500 m in elevation, with alpine tundra, snow and ice fields and rock outcrops extending up to over 3800 m. A 500 m resolution landcover map was developed using 2001 MODIS MOD13 Normalized Difference Vegetation Index (NDVI) and Middle Infrared (MIR) products for seven 16-day periods. The classification accuracy was over 87%. The SRTM version 2 data, which are distributed in 1 degree by 1 degree tiles, were mosaicked using the ENVI software. In this study, only those MODIS pixels were used that were flagged as "nominal or high confidence fire" by the MODIS fire product team. Using MODIS data from the years 2000 to 2005 along with the
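
    As background to the merge described above, per-pixel slope and aspect can be derived from an SRTM elevation grid with finite differences. The sketch below is a simplification of the usual Horn method (plain central differences via `np.gradient`), with an illustrative tilted-plane DEM rather than real SRTM data:

```python
import numpy as np

def slope_aspect(dem, cell_size=90.0):
    """Per-pixel slope (degrees) and aspect (degrees clockwise from north,
    pointing downslope) from a DEM array, using central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # axis 0 -> y (rows), axis 1 -> x (cols)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# Plane rising 1 m per cell toward the east at 90 m spacing:
# gentle slope (~0.64 deg), west-facing aspect (270 deg).
dem = np.tile(np.arange(5, dtype=float), (5, 1))
slope, aspect = slope_aspect(dem, cell_size=90.0)
```

    A real analysis would read the mosaicked SRTM tiles into `dem` and sample `slope`/`aspect` at the fire-flagged MODIS pixel locations.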

  3. Application of IRS-1D data in water erosion features detection (case study: Nour roud catchment, Iran).

    Science.gov (United States)

    Solaimani, K; Amri, M A Hadian

    2008-08-01

    The aim of this study was to assess the capability of Indian Remote Sensing (IRS) 1D data to detect erosion features created by run-off. The ability of PAN digital data from the IRS-1D satellite to extract erosion features was evaluated in the Nour-roud catchment, located in Mazandaran province, Iran, using GIS techniques. The research method was based on supervised digital classification using the MLC algorithm and on visual interpretation using PMU analysis; the two approaches were then evaluated and compared. Results indicated that, in contrast to digital classification, whose overall accuracy of 40.02% and kappa coefficient of 31.35% reflect the low spectral resolution of the data, visual interpretation and classification benefited from the high spatial resolution (5.8 m) and allowed erosion features to be classified; these features corresponded so closely with the lithology, slope and hydrographic lines in GIS that their boundaries can be considered overlapping. Field control also showed that the data are relatively well suited to this method for investigating erosion features and, especially, for identifying large erosion features.

  4. How do 2D fingerprints detect structurally diverse active compounds? Revealing compound subset-specific fingerprint features through systematic selection.

    Science.gov (United States)

    Heikamp, Kathrin; Bajorath, Jürgen

    2011-09-26

    In independent studies it has previously been demonstrated that two-dimensional (2D) fingerprints have scaffold hopping ability in virtual screening, although these descriptors primarily emphasize structural and/or topological resemblance of reference and database compounds. However, the mechanism by which such fingerprints enrich structurally diverse molecules in database selection sets is currently little understood. In order to address this question, similarity search calculations on 120 compound activity classes of varying structural diversity were carried out using atom environment fingerprints. Two feature selection methods, Kullback-Leibler divergence and gain ratio analysis, were applied to systematically reduce these fingerprints and generate alternative versions for searching. Gain ratio is a feature selection method from information theory that has thus far not been considered in fingerprint analysis. However, it is shown here to be an effective fingerprint feature selection approach. Following comparative feature selection and similarity searching, the compound recall characteristics of original and reduced fingerprint versions were analyzed in detail. Small sets of fingerprint features were found to distinguish subsets of active compounds from other database molecules. The compound recall of fingerprint similarity searching often resulted from a cumulative detection of distinct compound subsets by different fingerprint features, which provided a rationale for the scaffold hopping potential of these 2D fingerprints.
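
    One of the two selection criteria named above, Kullback-Leibler divergence, can be sketched for binary fingerprint bits as the symmetrized divergence between the per-bit Bernoulli distributions of the two compound classes. The data and function names below are illustrative, not the study's fingerprints:

```python
import numpy as np

def kl_feature_scores(fp_active, fp_inactive, eps=1e-6):
    """Score each binary fingerprint bit by the symmetrized Kullback-Leibler
    divergence between its set-bit frequency in actives vs. inactives."""
    p = np.clip(fp_active.mean(axis=0), eps, 1 - eps)    # P(bit=1 | active)
    q = np.clip(fp_inactive.mean(axis=0), eps, 1 - eps)  # P(bit=1 | inactive)
    # KL between the two Bernoulli distributions, symmetrized per bit
    kl_pq = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    kl_qp = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
    return kl_pq + kl_qp

# Toy fingerprints: actives set most bits, inactives set few
rng = np.random.default_rng(0)
actives = (rng.random((50, 8)) < 0.9).astype(float)
inactives = (rng.random((40, 8)) < 0.1).astype(float)
scores = kl_feature_scores(actives, inactives)
top_bits = np.argsort(scores)[::-1]  # most class-discriminative bits first
```

    Keeping only the top-scoring bits yields the kind of reduced fingerprint versions whose recall behaviour the study compares against the full fingerprint.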

  5. Real-Time Detection and Measurement of Eye Features from Color Images

    Directory of Open Access Journals (Sweden)

    Diana Borza

    2016-07-01

    Full Text Available The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. In the first stage, the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections; in the second stage, the external shape of the eye (of the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly-available database.

  6. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature spaces, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods that can take advantage of additional textural or other parameters need to be explored.

  7. Structural details of the Orion Nebula - Detection of a network of stringlike ionized features

    Science.gov (United States)

    Yusef-Zadeh, F.

    1990-09-01

    Continuum observations of the Orion Nebula, obtained at 20 cm using the A, B, C, and D configurations of the VLA during 1986-1987, are reported. Radio images of resolution 1.8 x 1.6 arcsec are presented and analyzed, with a focus on (1) the complex cone structure of M 42 and (2) an extended network of bright stringlike features concentrated near the Trapezium cluster. Possible theoretical explanations of these features are explored, starting from the blister model of H II regions developed by Tenorio-Tagle (1979).

  8. Detecting Structural Features in Metallic Glass via Synchrotron Radiation Experiments Combined with Simulations

    Directory of Open Access Journals (Sweden)

    Gu-Qing Guo

    2015-11-01

    Full Text Available Revealing the essential structural features of metallic glasses (MGs) will enhance the understanding of glass-forming mechanisms. In this work, a feasible scheme is provided in which we performed state-of-the-art synchrotron-radiation-based experiments combined with simulations to investigate the microstructures of ZrCu amorphous compositions. It is revealed that, in order to stabilize the amorphous state and optimize the topological and chemical distribution, besides the icosahedral or icosahedral-like clusters, other types of clusters also participate in the formation of the microstructure in MGs. This cluster-level co-existence feature may be common in this class of glassy materials.

  9. A robust segmentation approach based on analysis of features for defect detection in X-ray images of aluminium castings

    DEFF Research Database (Denmark)

    Lecomte, G.; Kaftandjian, V.; Cendre, Emmanuelle

    2007-01-01

    A robust image processing algorithm has been developed for detection of small and low contrasted defects, adapted to X-ray images of castings having a non-uniform background. The sensitivity to small defects is obtained at the expense of a high false alarm rate. We present in this paper a feature...... three parameters and taking into account the fact that X-ray grey-levels follow a statistical normal law. Results are shown on a set of 684 images, involving 59 defects, on which we obtained a 100% detection rate without any false alarm....

  10. Automatic lumbar vertebrae detection based on feature fusion deep learning for partial occluded C-arm X-ray images.

    Science.gov (United States)

    Li, Yang; Liang, Wei; Zhang, Yinlong; An, Haibo; Tan, Jindong

    2016-08-01

    Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by digitally reconstructed radiographs (DRRs), and automatic segmentation of the region of interest (ROI) reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.

  11. Analyzing bursty features for event detection

    Institute of Scientific and Technical Information of China (English)

    Chen Hong; Chen Wei

    2011-01-01

    This paper proposes an event detection method for news streams based on analyzing bursty features. An event is a minimal set of bursty features that occur together in a certain time window with strong document support in the text stream; such features typically exhibit a burst when the event happens. An elastic burst detection algorithm is introduced to identify multi-scale bursty features, and an affinity propagation clustering algorithm then groups together bursty features with high document overlap and identical distributions in their bursty time windows to construct events. Experiments on real-life data, the Reuters Corpus Volume 1, with over 800 thousand news reports across one year, show that the proposed algorithm accurately identifies bursts of features at various time scales and detects events effectively.
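
    A much-simplified sketch of flagging bursty time windows for a single feature (this is not the elastic multi-scale algorithm of the paper; the threshold rule and daily counts are illustrative):

```python
import numpy as np

def bursty_windows(counts, window=3, k=2.0):
    """Flag time steps whose smoothed count exceeds the global mean
    by k standard deviations (a crude single-scale burst test)."""
    counts = np.asarray(counts, dtype=float)
    # moving average over `window` time steps (zero-padded at edges)
    smoothed = np.convolve(counts, np.ones(window) / window, mode="same")
    mu, sigma = counts.mean(), counts.std()
    return np.flatnonzero(smoothed > mu + k * sigma)

# Daily mentions of a term, with a burst around days 6-8
daily = [1, 2, 1, 0, 2, 1, 15, 18, 16, 2, 1, 0]
burst_days = bursty_windows(daily, window=3, k=1.0)
```

    The elastic approach in the paper would instead test windows of many lengths, so bursts of different spans are all detected.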

  12. Cardiovascular magnetic resonance myocardial feature tracking detects quantitative wall motion during dobutamine stress.

    NARCIS (Netherlands)

    Schuster, A.; Kutty, S.; Padiyath, A.; Parish, V.; Gribben, P.; Danford, D.A.; Makowski, M.R.; Bigalke, B.; Beerbaum, P.B.J.; Nagel, E.

    2011-01-01

    BACKGROUND: Dobutamine stress cardiovascular magnetic resonance (DS-CMR) is an established tool to assess hibernating myocardium and ischemia. Analysis is typically based on visual assessment with considerable operator dependency. CMR myocardial feature tracking (CMR-FT) is a recently introduced tec

  13. Geomorphological change detection using object-based feature extraction from multi-temporal LIDAR data

    NARCIS (Netherlands)

    Seijmonsbergen, A.C.; Anders, N.S.; Bouten, W.; Feitosa, R.Q.; da Costa, G.A.O.P.; de Almeida, C.M.; Fonseca, L.M.G.; Kux, H.J.H.

    2012-01-01

    Multi-temporal LiDAR DTMs are used for the development and testing of a method for geomorphological change analysis in western Austria. Our test area is located on a mountain slope in the Gargellen Valley in western Austria. Six geomorphological features were mapped by using stratified Object-Based

  14. On-Line Fault Detection in Wind Turbine Transmission System using Adaptive Filter and Robust Statistical Features

    Directory of Open Access Journals (Sweden)

    Mark Frogley

    2013-01-01

    Full Text Available To reduce maintenance cost, avoid catastrophic failure, and improve wind transmission system reliability, an online condition monitoring system is critically important. In real applications, many rotating mechanical faults, such as bearing surface defects, gear tooth cracks, and chipped gear teeth, generate impulsive signals. When these types of faults develop inside rotating machinery, each time the rotating components pass over the damage point an impact force can be generated. The impact force causes ringing of the support structure at its structural natural frequency. By effectively detecting those periodic impulse signals, one group of rotating machine faults can be detected and diagnosed. However, in real wind turbine operations, impulsive fault signals are usually weak relative to the background noise and vibration signals generated by other, healthy components, such as shafts, blades, and gears. Moreover, wind turbine transmission systems work under dynamic operating conditions, which further increases the difficulty of fault detection and diagnostics. Therefore, advanced signal processing methods to enhance the impulsive signals are greatly needed. In this paper, an adaptive filtering technique is applied to enhance the fault impulse signal-to-noise ratio in wind turbine gear transmission systems. Multiple statistical features designed to quantify the impulsive content of the processed signal are extracted for bearing fault detection. The multi-dimensional features are then transformed into a one-dimensional feature, and a minimum error rate classifier is designed based on the compressed feature to identify gear transmission systems with defects. Real wind turbine vibration signals are used to demonstrate the effectiveness of the presented methodology.
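
    The abstract does not name its exact statistical features, but two commonly used measures of impulsiveness, kurtosis and crest factor, illustrate the idea: both rise sharply when periodic impacts ride on broadband background vibration. A minimal sketch on synthetic data:

```python
import numpy as np

def impulse_features(signal):
    """Statistical features that grow when periodic impacts are present:
    kurtosis (tail heaviness; ~3 for Gaussian noise) and crest factor
    (peak-to-RMS ratio)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean(x ** 4) / rms ** 4
    crest = np.max(np.abs(x)) / rms
    return kurtosis, crest

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 4096)   # broadband background only
faulty = healthy.copy()
faulty[::512] += 12.0                  # periodic impacts from a simulated defect
k_healthy, _ = impulse_features(healthy)
k_faulty, _ = impulse_features(faulty)  # kurtosis rises sharply
```

    In practice the adaptive filter would be applied first, so the impacts stand out from the other components before these features are computed.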

  15. Multiple kernel based feature and decision level fusion of iECO individuals for explosive hazard detection in FLIR imagery

    Science.gov (United States)

    Price, Stanton R.; Murray, Bryce; Hu, Lequn; Anderson, Derek T.; Havens, Timothy C.; Luke, Robert H.; Keller, James M.

    2016-05-01

    A serious threat to civilians and soldiers is buried and above-ground explosive hazards. The automatic detection of such threats is highly desired. Many methods exist for explosive hazard detection, e.g., hand-held sensors, downward and forward looking vehicle mounted platforms, etc. In addition, multiple sensors are used to tackle this extreme problem, such as radar and infrared (IR) imagery. In this article, we explore the utility of feature and decision level fusion of learned features for forward looking explosive hazard detection in IR imagery. Specifically, we investigate different ways to fuse learned iECO features pre and post multiple kernel (MK) support vector machine (SVM) based classification. Three MK strategies are explored: fixed rule, heuristics and optimization-based. Performance is assessed in the context of receiver operating characteristic (ROC) curves on data from a U.S. Army test site that contains multiple target and clutter types, burial depths and times of day. The results reveal two interesting things. First, the different MK strategies appear to indicate that the different iECO individuals are all more-or-less important and there is not a dominant feature. This is reinforcing, as our hypothesis was that iECO provides different ways to approach target detection. Second, we observe that while optimization-based MK is mathematically appealing, i.e., it connects the learning of the fusion to the underlying classification problem we are trying to solve, it appears to be highly susceptible to overfitting, and simpler approaches, e.g., fixed rule and heuristics, help us realize more generalizable iECO solutions.

  16. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    Science.gov (United States)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  17. Integrated Circuit Wear out Prediction and Recycling Detection using Radio Frequency Distinct Native Attribute Features

    Science.gov (United States)

    2016-12-22

    This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States. AFIT-ENG-DS-16-D-002...has been leveraged to support physical layer cyber-security applications by identifying altered components, detecting Trojan hardware, and detecting...semiconductor device during manufacturing. However, the internal components (transistors, etc.) in modern IC devices do not maintain consistent performance

  18. Comparison of feature extraction methods within a spatio-temporal land cover change detection framework

    CSIR Research Space (South Africa)

    Kleynhans, W

    2011-07-01

    Full Text Available value yields a change or no-change decision [2]. The objective of this paper is to compare the EKF-derived parameter sequence with a sliding window Fast Fourier Transform (FFT) alternative [3] within the aforementioned spatio-temporal change... detection framework. When considering the sliding window FFT approach in the context of the aforementioned spatio-temporal change detection framework, the underlying idea is that a sliding window FFT is computed for the entire time series...

  19. On the Putative Detection of z>0 X-ray Absorption Features in the Spectrum of Markarian 421

    CERN Document Server

    Rasmussen, A P; Den Herder, J W A; Kaastra, J; Kahn, S M; Paerels, F; Herder, Jan Willem den; Kaastra, Jelle; Kahn, Steven M.; Paerels, Frits; Rasmussen, Andrew P.; Vries, Cor de

    2006-01-01

    In a series of papers, Nicastro et al. have claimed the detection of z>0 O VII absorption features in the spectrum of Mrk 421 obtained with the Chandra Low Energy Transmission Grating Spectrometer (LETGS). We evaluate those claims in the context of a high quality spectrum of the same source obtained with the Reflection Grating Spectrometer (RGS) on XMM-Newton. The data comprise over 955 ksec of usable exposure time and more than 26000 counts per 50 milliAngstroms at 21.6 Angstroms. We concentrate on the spectrally clean region (21.3 < lambda < 22.5 Angstrom) where sharp features due to the astrophysically abundant O VII may reveal an intervening, warm-hot intergalactic medium (WHIM). In spite of the fact that the sensitivity of the RGS data is higher than that of the original LETGS data presented by Nicastro et al., we do not confirm detection of any of the intervening systems claimed to date. Rather, we detect only three unsurprising, astrophysically expected features down to the Log(N)~14.6 (3sigma) s...

  20. Pattern recognition of spectral entropy features for detection of alcoholic and control visual ERP's in multichannel EEGs.

    Science.gov (United States)

    Padma Shri, T K; Sriraam, N

    2017-01-21

    This paper presents a novel ranking method to select spectral entropy (SE) features that discriminate alcoholic and control visual event-related potentials (ERPs) in the gamma sub-band (30-55 Hz) derived from a 64-channel electroencephalogram (EEG) recording. The ranking is based on a t test statistic that rejects the null hypothesis that the group means of SE values in alcoholics and controls are identical. SE features with high ranks indicate maximal separation between the group means. Various sizes of top-ranked feature subsets are evaluated by applying principal component analysis (PCA) and k-nearest neighbor (k-NN) classification. Even though ranking does not significantly influence classifier performance when all 61 active channels are selected, classification efficiency is directly proportional to the number of principal components (PCs). The effect of ranking and PCA on classification is predominantly observed with reduced subsets of the top N = 25 and N = 15 ranked features. Results indicate that for N = 25, the proposed ranking method improves k-NN classification accuracy from 91 to 93.87% as the number of PCs increases from 5 to 25. With the same number of PCs, the k-NN classifier achieves accuracies of 84.42-91.54% with non-ranked features. Similarly, for N = 15 and 5 to 15 PCs, ranking enhances k-NN detection accuracies from 88.9 to 93.08%, compared with 86.75-91.96% without ranking. Detection accuracy is thus increased by 6.5 and 2.8%, respectively, for N = 25, and by 2.2 and 1%, respectively, for N = 15, in comparison with non-ranked features. In the proposed t test ranking method for feature selection, the PCs of only the top-ranked feature candidates take part in the classification process and hence provide better generalization.
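
    The pipeline described here (t-statistic feature ranking, PCA projection of the retained features, then k-NN classification) can be sketched end to end. All names and the synthetic "spectral entropy" data below are illustrative, and a 1-NN rule stands in for the paper's k-NN classifier:

```python
import numpy as np

def t_rank(X, y):
    """Rank features by |two-sample t statistic| between classes 0 and 1."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    t = (a.mean(axis=0) - b.mean(axis=0)) / se
    return np.argsort(-np.abs(t))  # best-separating features first

def pca_knn_accuracy(X_tr, y_tr, X_te, y_te, n_top=10, n_pc=5):
    """Keep the n_top t-ranked features, project onto n_pc principal
    components fit on the training set, then classify with 1-NN."""
    keep = t_rank(X_tr, y_tr)[:n_top]
    A, B = X_tr[:, keep], X_te[:, keep]
    mu = A.mean(axis=0)
    _, _, Vt = np.linalg.svd(A - mu, full_matrices=False)  # PCA via SVD
    P_tr, P_te = (A - mu) @ Vt[:n_pc].T, (B - mu) @ Vt[:n_pc].T
    d = ((P_te[:, None, :] - P_tr[None, :, :]) ** 2).sum(-1)
    pred = y_tr[d.argmin(axis=1)]  # label of nearest training point
    return (pred == y_te).mean()

# Synthetic features: 4 informative channels out of 30
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 120)
X = rng.normal(0.0, 1.0, (120, 30))
X[:, :4] += y[:, None] * 2.0  # class-dependent shift in informative channels
acc = pca_knn_accuracy(X[:80], y[:80], X[80:], y[80:], n_top=8, n_pc=4)
```

    With informative channels present, ranking ensures that only discriminative features feed the PCA, which is the generalization benefit the paper reports.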

  1. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters

    Science.gov (United States)

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-01-01

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Position System (GPS) system and single beam echo sounder; inertial platform for attitude control; ultrasound obstacle-detection system with temperature control system; emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments’ performance and survey accuracy. PMID:26729117

  2. Integrating Sensors into a Marine Drone for Bathymetric 3D Surveys in Shallow Waters.

    Science.gov (United States)

    Giordano, Francesco; Mattei, Gaia; Parente, Claudio; Peluso, Francesco; Santamaria, Raffaele

    2015-12-29

    This paper demonstrates that accurate data concerning bathymetry as well as environmental conditions in shallow waters can be acquired using sensors that are integrated into the same marine vehicle. An open prototype of an unmanned surface vessel (USV) named MicroVeGA is described. The focus is on the main instruments installed on-board: a differential Global Position System (GPS) system and single beam echo sounder; inertial platform for attitude control; ultrasound obstacle-detection system with temperature control system; emerged and submerged video acquisition system. The results of two case studies are presented, both concerning areas (Sorrento Marina Grande and Marechiaro Harbour, both in the Gulf of Naples) characterized by a coastal physiography that impedes the execution of a bathymetric survey with traditional boats. In addition, those areas are critical because of the presence of submerged archaeological remains that produce rapid changes in depth values. The experiments confirm that the integration of the sensors improves the instruments' performance and survey accuracy.

  3. Detection of Small-Scaled Features Using Landsat and Sentinel-2 Data Sets

    Science.gov (United States)

    Steensen, Torge; Muller, Sonke; Dresen, Boris; Buscher, Olaf

    2016-08-01

    In an era of renewable energies, attention must also turn to secondary resources that can be utilised to enhance our independence from fossil fuels. In terms of biomass, this focus lies on small-scaled features like vegetation units alongside roads or hedges between agricultural fields. Currently there is no easily accessible inventory, if any at all, outlining the growth and re-growth patterns of such vegetation. Since these features are trimmed at least annually to allow the passing of traffic, the cuttings could, theoretically, be harvested and converted into energy. This, however, requires a map outlining vegetation growth and the potential energy amount at different locations, as well as adequate transport routes and potential processing plant locations. With the help of Landsat and Sentinel-2 data sets, we explore the possibilities of creating such a map. Additional data are provided in the form of regularly acquired airborne orthophotos and GIS-based infrastructure data.

  4. Clinical and microbiologic features of Shigella and enteroinvasive Escherichia coli infections detected by DNA hybridization.

    OpenAIRE

    Taylor, D N; Echeverria, P.; Sethabutr, O.; Pitarangsi, C; Leksomboon, U; Blacklow, N R; Rowe, B.; R. Gross; Cross, J.

    1988-01-01

    To determine the clinical and microbiologic features of Shigella and enteroinvasive Escherichia coli (EIEC) infections, we investigated 410 children with diarrhea and 410 control children without diarrhea who were seen at Children's Hospital, Bangkok, Thailand, from January to June 1985. Shigella spp. were isolated from 96 (23%) and EIEC were isolated from 17 (4%) of 410 children with diarrhea and from 12 (3%) and 6 (1%) of 410 control children, respectively. The isolation rates of both patho...

  5. A General Purpose Feature Extractor for Light Detection and Ranging Data

    Science.gov (United States)

    2010-11-17

    an impediment to robust feature-based systems. The alternative LIDAR approach, scan matching, directly matches point clouds. This approach dispenses...4528. 11. Rönnholm, P.; Hyyppä, H.; Hyyppä, J.; Haggrén, H. Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks...building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008, 8, 7323-7343. 14. Dellaert, F. Square

  6. Detecting features of human personality based on handwriting using learning algorithms

    Directory of Open Access Journals (Sweden)

    Behnam Fallah

    2015-11-01

    Full Text Available Handwriting analysis is useful for understanding personality characteristics through the patterns created by handwriting and can reveal features such as mental and emotional instability. On the other hand, it is difficult to determine personality, especially in legal contexts, because there is no threshold or scale capable of producing detailed analysis results. This thesis aims to provide an automated solution for recognizing the personality of the author by combining image processing and pattern recognition techniques. The proposed personality recognition system is composed of two main parts: training and testing. In the training part, after feature extraction from all image patterns of the input text, a corresponding output is created through the MMPI personality test, and these input-output pairs are presented to a neural network as training patterns. As a result of this training, a comprehensive database is formed. In the testing part, the database is used as the main comparison reference: after feature extraction, the input text image is compared with all patterns in the database to find the closest image to the input text image. Finally, the MMPI personality test output for the matched text image is returned as the output personality parameters.

  7. Face liveness detection for face recognition based on cardiac features of skin color image

    Science.gov (United States)

    Suh, Kun Ha; Lee, Eui Chul

    2016-07-01

    With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios in face recognition include the printing attack, replay attack, and 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on the cardiac signal extracted from the face is presented. The key point of the proposed method is that a cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way of detecting printing attacks or 3D mask attacks.
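The abstract does not give implementation details, but the underlying idea - a live face shows a periodic cardiac (remote photoplethysmography) component while a printed or masked face does not - can be sketched as follows. The function names and the 0.7-4 Hz heart-rate band are illustrative assumptions, not the authors' code.

```python
import cmath
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC DFT bin."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(centered[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fs / n

def looks_live(signal, fs, band=(0.7, 4.0)):
    """Crude liveness check: dominant pulsation inside the heart-rate band.
    The band limits are an assumption for illustration."""
    f = dominant_frequency(signal, fs)
    return band[0] <= f <= band[1]
```

In practice the input trace would be, e.g., the mean green-channel intensity of the face region over time; a flat or aperiodic trace fails the band check.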

  8. Bathymetric survey of the Brandon Road Dam Spillway, Joliet, Illinois

    Science.gov (United States)

    Engel, Frank; Krahulik, Justin

    2016-01-01

    Bathymetric survey data of the Brandon Road Dam spillway were collected on May 27 and May 28, 2015 by the US Geological Survey (USGS) using Trimble Real-Time Kinematic Global Positioning System (RTK-GPS) equipment. The base station was set up over a temporarily installed survey pin on both days. This pin was surveyed into an existing NGS benchmark (PID: BBCN12) within the Brandon Road Lock property. In wadeable sections, a GPS rover with a 2.0 meter range pole and flat foot was deployed. In sections that could not be waded, a 2.0 meter range pole was fix-mounted to a jon boat, and a boat-mounted Acoustic Doppler Current Profiler (ADCP) was used to collect the depth data. ADCP depth data were reviewed in the WinRiver II software and exported for processing with the Velocity Mapping Toolbox (Parsons and others, 2013). The RTK-GPS survey points of the water surface elevations were used to convert ADCP-measured depths into bed elevations. An In-Situ Level Troll collected 1-minute water level data throughout the two-day survey. These data were used to verify that a flat-pool assumption was reasonable for the conversion of the ADCP data to bed elevations given the measurement precision of the ADCP. An OPUS solution was acquired for each survey day. Parsons, D. R., Jackson, P. R., Czuba, J. A., Engel, F. L., Rhoads, B. L., Oberg, K. A., Best, J. L., Mueller, D. S., Johnson, K. K. and Riley, J. D. (2013), Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements. Earth Surf. Process. Landforms, 38: 1244–1260. doi: 10.1002/esp.3367
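Under the flat-pool assumption described above, the depth-to-elevation conversion reduces to subtracting each ADCP depth (plus the transducer draft) from the RTK-GPS water-surface elevation. A minimal sketch with hypothetical names, units in meters:

```python
def bed_elevations(water_surface_elev, depths, transducer_draft=0.0):
    """Convert ADCP-measured depths to bed elevations under a flat-pool
    assumption: bed = water surface - (draft + measured depth).

    water_surface_elev : RTK-GPS water-surface elevation (m)
    depths             : depths below the transducer face (m)
    transducer_draft   : transducer depth below the water surface (m)
    """
    return [water_surface_elev - (transducer_draft + d) for d in depths]
```

The 1-minute Level Troll record is what justifies using a single water-surface elevation for a whole transect.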

  9. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    Science.gov (United States)

    García, A.; Romano, H.; Laciar, E.; Correa, R.

    2011-12-01

    In this work a detection and classification algorithm for heartbeats analysis in Holter records was developed. First, a QRS complexes detector was implemented and their temporal and morphological characteristics were extracted. A vector was built with these features; this vector is the input of the classification module, based on discriminant analysis. The beats were classified in three groups: Premature Ventricular Contraction beat (PVC), Atrial Premature Contraction beat (APC) and Normal Beat (NB). These beat categories represent the most important groups of commercial Holter systems. The developed algorithms were evaluated in 76 ECG records of two validated open-access databases "arrhythmias MIT BIH database" and "MIT BIH supraventricular arrhythmias database". A total of 166343 beats were detected and analyzed, where the QRS detection algorithm provides a sensitivity of 99.69 % and a positive predictive value of 99.84 %. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PCV and 92.78% for APC.
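The reported sensitivity and positive predictive value follow from the standard definitions; a minimal sketch (hypothetical function name):

```python
def detector_metrics(true_positives, false_negatives, false_positives):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    return sensitivity, ppv
```

With counts on the order of the 166343 analyzed beats, a sensitivity of 99.69% means roughly 3 missed beats per 1000 true beats.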

  10. Investigating tectonic and bathymetric features of the Indian Ocean using MAGSAT magnetic anomaly data

    Science.gov (United States)

    Sailor, R. V.; Lazarewicz, A. R. (Principal Investigator)

    1982-01-01

    An equivalent source anomaly map and a map of the relative magnetization for the investigation region were produced. Gravimetry, bathymetry, and MAGSAT anomaly maps were contoured in pseudocolor displays. Finally, an autoregressive spectrum estimation technique was verified with synthetic data and shown to be capable of resolving exponential power spectra using small samples of data. Interpretations were made regarding the relationship between MAGSAT data spectra and crustal anomaly spectra.

  11. Part-based Pedestrian Detection and Feature-based Tracking for Driver Assistance

    DEFF Research Database (Denmark)

    Prioletti, Antonio; Møgelmose, Andreas; Grislieri, Paolo

    2013-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusion, and high-speed vehicle motion. Much research has been focused on this problem in the last ten years and detectors based on classifiers have ga...

  12. Detection Rate, Distribution, Clinical and Pathological Features of Colorectal Serrated Polyps

    Directory of Open Access Journals (Sweden)

    Hai-Long Cao

    2016-01-01

    Conclusions: The overall detection rate of colorectal serrated polyps in a Chinese symptomatic patient population was low, and the distribution pattern of the three subtypes differed from previous reports. Moreover, LSPs, especially large HPs, might be associated with an increased risk of synchronous AN.

  13. Real time non-rigid surface detection based on binary robust independent elementary features

    Directory of Open Access Journals (Sweden)

    Chuin-Mu Wang

    2015-04-01

    Full Text Available Surface deformation detection has become a popular research topic in recent years. Human vision can easily locate a target and cope with the deformation of its surface caused by rotation, scale change, or a change of viewpoint, but for computer vision this remains a challenge. Against this background, we propose a framework for surface deformation detection based on BRIEF (Binary Robust Independent Elementary Features). Because the BRIEF descriptor is not invariant to rotation and scale changes, we also propose a practical method to overcome this limitation, and experiments confirm that it does so effectively. The average processing time per frame in a continuous image sequence is 50-80 ms on a 2.5 GHz computer. Looking back at related surface deformation estimation techniques, few projects reported in the literature have been similarly successful at surface deformation detection.

  14. Ship detection in South African oceans using SAR, CFAR and a Haar-like feature classifier

    CSIR Research Space (South Africa)

    Schwegmann, CP

    2014-07-01

    Full Text Available Synthetic Aperture Radar imaging is a proven technology that can be used to detect ships at sea which have no active transponders (commonly referred to as dark targets). Various methods have been proposed that process SAR images to monitor...

  15. Multiresolution Analysis Techniques to Isolate, Detect and Characterize Morphologically Diverse Features of Structured ICF Capsule Implosions

    CERN Document Server

    Afeyan, Bedros; Jones, Peter; Starck, Jean Luc; Herrmann, Mark

    2012-01-01

    In order to capture just how nonuniform and degraded the symmetry of an imploding inertial confinement fusion capsule may become, one may resort to the analysis of radiographs generated by high-energy X-ray point-projection backlighting. Here we show new results for such images, obtained using methods of modern harmonic analysis involving different families of wavelets, curvelets and WaSP (wavelet square partition) functions from geometric measure theory. Three different methods of isolating morphologically diverse features are suggested, together with statistical means of quantifying their content for the purposes of comparing the same implosion at different times, to simulations, and to different implosion images.

  16. Application of an Orbital GPR Model to Detecting Martian Polar Subsurface Features

    Science.gov (United States)

    Xu, Y.; Cummer, S. A.; Farrell, W. M.

    2005-01-01

    There are numerous challenges in successfully implementing and interpreting planetary ground penetrating radar (GPR) measurements. Many are due to substantial uncertainties in the target ground parameters and the intervening medium (i.e., the ionosphere). These uncertainties generate a compelling need for meaningful quantitative simulation of the planetary GPR problem. An accurate numerical model would enable realistic numerical GPR simulations using parameter regimes much broader than are possible in laboratory or field experiments. Parameters such as source bandwidth and power, surface and subsurface features, and ionospheric profiles could be rapidly iterated to understand their impact on GPR performance and the reliable interpretation of GPR data.

  17. Change detection studies in coastal zone features of Goa, India by remote sensing

    Digital Repository Service at National Institute of Oceanography (India)

    ManiMurali, R.; Vethamony, P.; Saran, A.K.; Jayakumar, S.

    identifying different features for training the computer. The MLC was applied on all the scenes by using PCIWORKS Ver 7.0 software. Parameters for the classification are in-built algorithms of the PCIWORKS software. A detailed description is available in the PCIWORKS on-line manual. Six classes, viz. water bodies, vegetation, barren land, mangroves, urban land and sandy beaches, were identified in all the scenes used in the study. The MLC has been used to classify satellite images...

  18. Textural Feature Selection for Enhanced Detection of Stationary Humans in Through the Wall Radar Imagery

    Science.gov (United States)

    2014-05-02

    Figure 2. (a) Through-the-wall MIMO system. (b) Building used for through-the-wall measurements (the dashed square indicates the...number of training samples at the parent node t and Q_i is the number of training samples associated with the child node v_i. The Gini index is a...also be determined by using the weighted average of the Gini index. The training samples are first sorted based on the values they take for the feature
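The fragment above describes scoring a candidate split by the weighted average of the child nodes' Gini indices. A generic sketch of that computation (an illustration, not the authors' code):

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class fractions."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

def split_gini(values, labels, threshold):
    """Weighted Gini of the two children produced by the test value <= threshold;
    lower is a better split for the candidate feature."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

Sorting the samples by feature value, as the text describes, lets every midpoint between consecutive values be tried as a threshold.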

  19. Detecting Combustion and Flow Features In Situ Using Principal Component Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Grout, Ray W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Fabian, Nathan D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2009-03-01

    This report presents progress on identifying and classifying features involving combustion in turbulent flow using principal component analysis (PCA) and k-means clustering within an in situ analysis framework. We describe a process for extracting temporally and spatially varying information from the simulation, classifying the information, and then applying the classification algorithm either to other portions of the simulation not used for training the classifier or to further simulations. Because the regions classified as being of interest take up a small portion of the overall simulation domain, it will consume fewer resources to perform further analysis or to save these regions at a higher fidelity than previously possible. The implementation of this process is partially complete, and results obtained from PCA of test data are presented that indicate the process may have merit: the basis vectors that PCA provides are significantly different in regions where combustion is occurring, and even when all 21 species of a lifted flame simulation are correlated, the computational cost of PCA is minimal. What remains to be determined is whether k-means (or other) clustering techniques will be able to identify combined combustion and flow features with an accuracy that makes further characterization of these regions feasible and meaningful.
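As an illustration of the PCA step, the first principal direction of two correlated variables can be obtained in closed form from their 2x2 covariance matrix; the report's pipeline operates on all 21 species at once, but the 2-D case (hypothetical names, a sketch only) shows the idea of basis vectors tracking correlation structure:

```python
import math

def principal_axis(xs, ys):
    """First principal component (unit vector) of 2-D samples, via the
    closed-form eigen-decomposition of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    if abs(sxy) < 1e-12:  # axis-aligned case: pick the larger-variance axis
        return (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    vx, vy = sxy, lam - sxx  # eigenvector of the dominant eigenvalue
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)
```

Regions where combustion changes the inter-species correlations would produce visibly different principal directions, which is what the classification step exploits.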

  20. Threshold Prediction of a Cyclostationary Feature Detection Process using an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    PRINCE ANAND A

    2013-04-01

    Full Text Available Sensing of spectrum holes in a frequency spectrum is one of the important concepts in implementing a cognitive radio system. Cognitive radio provides a way to use the bandwidth effectively and efficiently by identifying the spectrum holes in a particular spectrum. The presence of cyclostationary features indicates the absence or presence of primary users. The presence of signal or noise can be determined by calculating the threshold of a signal using the cyclic cross-periodogram matrix of the corresponding signal. To circumvent the difficulty of estimating an accurate threshold (statistical techniques were used by other researchers), an artificial neural network has been trained with cyclostationary feature vectors extracted by the FFT accumulation method. 70% of the extracted data was used for training and the remaining 30% for testing the efficiency of the network in achieving 99% accurate prediction of the threshold. The regression plot clearly indicates the superiority of the proposed scheme in estimating the threshold. Similar threshold samples derived from other data have also been tested in this scheme, providing consistently good results with reduced MSE.

  1. Automated detection of breast tumor in MRI and comparison of kinetic features for assessing tumor response to chemotherapy

    Science.gov (United States)

    Aghaei, Faranak; Tan, Maxine; Zheng, Bin

    2015-03-01

    Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) is used increasingly in the diagnosis of breast cancer and assessment of treatment efficacy in current clinical practice. The purpose of this preliminary study is to develop and test a new quantitative kinetic image feature analysis method and biomarker to predict the response of breast cancer patients to neoadjuvant chemotherapy using breast MR images acquired before chemotherapy. For this purpose, we developed a computer-aided detection scheme to automatically segment breast areas and tumors depicted on the sequentially scanned breast MR images. From a contrast-enhancement map generated by subtraction of two image sets scanned pre- and post-injection of contrast agent, our scheme computed 38 morphological and kinetic image features from both tumor and background parenchymal regions. We applied a number of statistical data analysis methods to identify image features effective in predicting the response of the patients to the chemotherapy. Based on the performance assessment of individual features and their correlations, we applied a fusion method to generate a final image biomarker. A breast MR image dataset involving 68 patients was used in this study. Among them, 25 had complete response and 43 had partial response to the chemotherapy based on the RECIST guideline. Using this image-feature-fusion-based biomarker, the area under the receiver operating characteristic curve is AUC = 0.850±0.047. This study demonstrated that a biomarker developed from the fusion of kinetic image features computed from breast MR images acquired pre-chemotherapy has potentially high discriminatory power in predicting the response of the patients to the chemotherapy.
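The reported AUC can be computed nonparametrically via the Mann-Whitney statistic: the probability that a randomly chosen responder's biomarker score exceeds a non-responder's, with ties counting half. A minimal sketch (hypothetical function name):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    P(score of a positive case > score of a negative case), ties count 1/2."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

For the 25 complete responders vs. 43 partial responders above, AUC = 0.850 means the fused biomarker ranks a responder above a non-responder 85% of the time.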

  2. [Particular features of detection of patients with urogenital tuberculosis and their management].

    Science.gov (United States)

    Zhuravlev, V N; Golubev, D N; Novikov, B I; Skorniakov, S N; Medvinskiĭ, I D; Arkanov, L V; Cherniaev, I A; Borodin, É P; Verbetskiĭ, A F; Bobykin, E N

    2012-01-01

    The rate of and trends in extrapulmonary tuberculosis (TB) incidence, including urogenital tuberculosis (UTB), were estimated in the population of the Sverdlovsk region over the last 25 years. Long-term results of treatment of 591 patients with different forms of UTB (renal parenchymal TB, tuberculous papillitis, monocavernous and polycavernous renal TB, male genital TB) were studied. The ureter was involved in the tuberculous process in 24.7% of UTB cases and the urinary bladder in 20.1%; renal TB was combined with male genital TB. The incidence of early (non-destructive) forms increased 2.8-fold while the incidence of advanced forms decreased 1.7-fold, indicating an improved level of detection. The total number of patients operated on in state hospitals with undetected, mostly complicated, male urogenital tuberculosis remains high: from 7.3 to 16% of all newly detected patients.

  3. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    OpenAIRE

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the n...

  4. Feature Detection and Curve Fitting Using Fast Walsh Transforms for Shock Tracking: Applications

    Science.gov (United States)

    Gnoffo, Peter A.

    2017-01-01

    Walsh functions form an orthonormal basis set consisting of square waves. Square waves make the system well suited for detecting and representing functions with discontinuities. Given a uniform distribution of 2^p cells on a one-dimensional element, it has been proven that the inner product of the Walsh Root function for group p with every polynomial of degree application to the Riemann problem, in which a contact discontinuity and shock wave form after the diaphragm bursts.
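Walsh functions on 2^p cells can be generated, in Hadamard (natural) ordering, from sign patterns determined by bit parity; the sketch below (an illustration, not the paper's formulation) verifies the orthogonality that makes this square-wave basis suited to representing discontinuities:

```python
def walsh(k, n):
    """k-th Walsh function (Hadamard/natural ordering) sampled on n = 2^p cells:
    w_k[i] = (-1)^popcount(k & i), a +/-1 square wave."""
    assert n & (n - 1) == 0 and n > 0, "n must be a power of two"
    return [(-1) ** bin(k & i).count("1") for i in range(n)]

def inner(u, v):
    """Discrete inner product; distinct Walsh functions are orthogonal."""
    return sum(a * b for a, b in zip(u, v))
```

Because each basis function is piecewise constant, a step discontinuity projects strongly onto a small number of Walsh modes instead of ringing across the whole spectrum as with smooth bases.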

  5. Facilitation of dragonfly target-detecting neurons by slow moving features on continuous paths.

    Science.gov (United States)

    Dunbier, James R; Wiederman, Steven D; Shoemaker, Patrick A; O'Carroll, David C

    2012-01-01

    Dragonflies detect and pursue targets such as other insects for feeding and conspecific interaction. They have a class of neurons highly specialized for this task in their lobula, the "small target motion detecting" (STMD) neurons. One such neuron, CSTMD1, reaches maximum response slowly over hundreds of milliseconds of target motion. Recording the intracellular response from CSTMD1 and a second neuron in this system, BSTMD1, we determined that for the neurons to reach maximum response levels, target motion must produce sequential local activation of elementary motion detecting elements. This facilitation effect is most pronounced when targets move at velocities slower than what was previously thought to be optimal. It is completely disrupted if targets are instantaneously displaced a few degrees from their current location. Additionally, we utilize a simple computational model to discount the parsimonious hypothesis that CSTMD1's slow build-up to maximum response is due to it incorporating a sluggish neural delay filter. Whilst the observed facilitation may be too slow to play a role in prey pursuit flights, which are typically rapidly resolved, we hypothesize that it helps maintain elevated sensitivity during prolonged, aerobatically intricate conspecific pursuits. Since the effect seems to be localized, it most likely enhances the relative salience of the most recently "seen" locations during such pursuit flights.

  6. Bathymetrical distribution and size structure of cold-water coral populations in the Cap de Creus and Lacaze-Duthiers canyons (northwestern Mediterranean)

    Directory of Open Access Journals (Sweden)

    A. Gori

    2012-12-01

    Full Text Available Submarine canyons are known as one of the seafloor morphological features where living cold-water coral (CWC) communities develop in the Mediterranean Sea. We investigated the CWC community of the two westernmost submarine canyons of the Gulf of Lions canyon system: the Cap de Creus Canyon (CCC) and the Lacaze-Duthiers Canyon (LDC). Coral associations have been studied through video material recorded by means of a manned submersible and a remotely operated vehicle. Video transects were conducted and analyzed in order to obtain information on (1) coral bathymetric distribution and density patterns, (2) size structure of coral populations, and (3) coral colony orientation with respect to the substrate. Madrepora oculata was the most abundant CWC in both canyons, while Lophelia pertusa and Dendrophyllia cornigera mostly occurred as isolated colonies or in small patches. An important exception was detected on a vertical cliff in LDC, where a large Lophelia pertusa framework was documented. This is the first record of such an extended L. pertusa framework in the Mediterranean Sea. In both canyons coral populations were dominated by medium and large colonies, but the frequent presence of small-sized colonies also indicates active recruitment. The predominant coral orientation with respect to the substrate (90° and 135°) is probably driven by the current regime as well as by the sediment load transported by the current flows. In general, no clear differences were observed between the CWC populations from CCC and LDC, despite large differences in particulate matter between the canyons.

  7. Bathymetrical distribution and size structure of cold-water coral populations in the Cap de Creus and Lacaze-Duthiers canyons (northwestern Mediterranean)

    Directory of Open Access Journals (Sweden)

    A. Gori

    2013-03-01

    Full Text Available Submarine canyons are known as one of the seafloor morphological features where living cold-water coral (CWC) communities develop in the Mediterranean Sea. We investigated the CWC community of the two westernmost submarine canyons of the Gulf of Lions canyon system: the Cap de Creus Canyon (CCC) and the Lacaze-Duthiers Canyon (LDC). Coral associations have been studied through video material recorded by means of a manned submersible and a remotely operated vehicle. Video transects were conducted and analyzed in order to obtain information on (1) coral bathymetric distribution and density patterns, (2) size structure of coral populations, and (3) coral colony position with respect to the substrate. Madrepora oculata was the most abundant CWC in both canyons, while Lophelia pertusa and Dendrophyllia cornigera mostly occurred as isolated colonies or in small patches. An important exception was detected on a vertical cliff in LDC, where a large L. pertusa framework was documented. This is the first record of such an extended L. pertusa framework in the Mediterranean Sea. In both canyons coral populations were dominated by medium and large colonies, but the frequent presence of small-sized colonies also indicates active recruitment. The predominant coral orientation (90° and 135°) is probably driven by the current regime as well as by the sediment load transported by the current flows. In general, no clear differences were observed in the abundance and size structure of the CWC populations between CCC and LDC, despite large differences in particulate matter between the canyons.

  8. More than a century of bathymetric observations and present-day shallow sediment characterization in Belfast Bay, Maine, USA: implications for pockmark field longevity

    Science.gov (United States)

    Brothers, Laura L.; Kelley, Joseph T.; Belknap, Daniel F.; Barnhardt, Walter A.; Andrews, Brian D.; Maynard, Melissa Landon

    2011-01-01

    Mechanisms and timescales responsible for pockmark formation and maintenance remain uncertain, especially in areas lacking extensive thermogenic fluid deposits (e.g., previously glaciated estuaries). This study characterizes seafloor activity in the Belfast Bay, Maine nearshore pockmark field using (1) three swath bathymetry datasets collected between 1999 and 2008, complemented by analyses of shallow box-core samples for radionuclide activity and undrained shear strength, and (2) historical bathymetric data (report and smooth sheets from 1872, 1947, 1948). In addition, because repeat swath bathymetry surveys are an emerging data source, we present a selected literature review of recent studies using such datasets for seafloor change analysis. This study is the first to apply the method to a pockmark field, and characterizes macro-scale (>5 m) evolution of tens of square kilometers of highly irregular seafloor. Presence/absence analysis yielded no change in pockmark frequency or distribution over a 9-year period (1999–2008). In that time pockmarks did not detectably enlarge, truncate, elongate, or combine. Historical data indicate that pockmark chains already existed in the 19th century. Despite the lack of macroscopic changes in the field, near-bed undrained shear-strength values of less than 7 kPa and scattered downcore 137Cs signatures indicate a highly disturbed setting. Integrating these findings with independent geophysical and geochemical observations made in the pockmark field, it can be concluded that (1) large-scale sediment resuspension and dispersion related to pockmark formation and failure do not occur frequently within this field, and (2) pockmarks can persevere in a dynamic estuarine setting that exhibits minimal modern fluid venting. Although pockmarks are conventionally thought to be long-lived features maintained by a combination of fluid venting and minimal sediment accumulation, this suggests that other mechanisms may be equally active in

  9. Direct detection of sharp upper-mantle features with waveform complexity

    Science.gov (United States)

    Sun, D.; Helmberger, D. V.

    2009-12-01

    A recent technique for processing array data in search of multipathing has been applied to USArray data [Sun and Helmberger, 2009]. A record can be decomposed as S(t) + A×S(t+ΔLR), where S(t) is the synthetic for a reference model. The time separation ΔLR and amplitude ratio A are chosen to obtain the best cross-correlation between the simulated waveform and the data. The travel time of the composite waveform relative to the reference-model synthetics is defined as ΔT. A simulated annealing algorithm is used to invert for the parameters ΔLR, ΔT, and A. Whereas conventional tomography yields a travel-time correction (ΔT), our analysis yields the extra parameter ΔLR, which describes the waveform complexity. With the array, we can construct a map of the gradient of ΔLR with complexity patterns. A horizontal structure will introduce waveform complexity along the distance profile (in-plane multipathing). An azimuthally oriented ΔLR pattern indicates a vertical structure with out-of-plane multipathing. Using such maps generated from artificial data we can easily recognize features produced by downwelling (DW) vs. upwelling (UW) and address their scale lengths. In particular, we find a line of DWs along the Rocky Mountain Front with anomalies similar to those found along the La Ristra line. These ΔLR anomalies are up to 8 s, which corresponds to features extending down to the 410 discontinuity with a 6% shear velocity increase. Such features appear to be produced by delamination caused by the sharp lateral temperature gradient [Song and Helmberger, 2007]. The ΔLR patterns for the western US indicate a number of UWs, of which Yellowstone is particularly obvious. Records from events to the southwest and southeast show generally simple waveforms across the Yellowstone-Snake River Plain (SRP). For the event from the northeast, the stations along the western edge of the SRP show strong waveform distortions, which indicate azimuthally
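The decomposition S(t) + A×S(t+ΔLR) can be illustrated by a brute-force grid search over integer sample shifts, solving for the least-squares amplitude at each shift. The abstract uses simulated annealing over (ΔLR, ΔT, A); this sketch (hypothetical names) mirrors only the two-arrival model, not that inversion method:

```python
def fit_multipath(data, ref, max_shift):
    """Grid-search the decomposition data[t] ~ ref[t] + A * ref[t + shift].
    For each integer shift, the least-squares A is <r, data - ref> / <r, r>,
    where r[t] = ref[t + shift]; keep the (shift, A) with smallest residual."""
    n = len(data)
    resid0 = [d - s for d, s in zip(data, ref)]
    best = (0, 0.0, float("inf"))  # (shift, A, squared error)
    for shift in range(-max_shift, max_shift + 1):
        shifted = [ref[t + shift] if 0 <= t + shift < n else 0.0
                   for t in range(n)]
        denom = sum(x * x for x in shifted)
        if denom == 0.0:
            continue
        a = sum(x * y for x, y in zip(shifted, resid0)) / denom
        err = sum((d - s - a * x) ** 2
                  for d, s, x in zip(data, ref, shifted))
        if err < best[2]:
            best = (shift, a, err)
    return best[:2]  # (shift in samples ~ ΔLR, amplitude ratio A)
```

A nonzero recovered shift with appreciable A is the waveform-complexity signature the ΔLR maps are built from.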

  10. Nonlinear Heart Rate Variability features for real-life stress detection. Case study: students under stress due to university examination.

    Science.gov (United States)

    Melillo, Paolo; Bracale, Marcello; Pecchia, Leandro

    2011-11-07

    This study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection. 42 students volunteered to participate in the study about HRV and stress. For each student, two recordings were performed: one during an on-going university examination, assumed as a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed using Poincaré Plot, Approximate Entropy, Correlation dimension, Detrended Fluctuation Analysis, and Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test, and for development of a classifier we adopted Linear Discriminant Analysis (LDA). Almost all HRV features measuring heart rate complexity were significantly decreased in the stress session. LDA generated a simple classifier based on the two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, sensitivity, and specificity of 90%, 86%, and 95%, respectively. The results of the current study suggest that nonlinear HRV analysis using short-term ECG recordings could be effective in automatically detecting real-life stress conditions, such as a university examination.
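Of the nonlinear features listed, the two Poincaré Plot parameters (SD1, SD2) are straightforward to compute from successive RR intervals; a minimal sketch with hypothetical names (the paper's exact preprocessing is not specified here):

```python
import math

def poincare_sd1_sd2(rr):
    """SD1/SD2 of the Poincare plot of successive RR intervals (ms).
    SD1 (short-term variability) is the spread across the identity line,
    SD2 (long-term variability) the spread along it."""
    x, y = rr[:-1], rr[1:]
    d1 = [(b - a) / math.sqrt(2) for a, b in zip(x, y)]  # across the line
    d2 = [(b + a) / math.sqrt(2) for a, b in zip(x, y)]  # along the line

    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((t - m) ** 2 for t in v) / (len(v) - 1))

    return sd(d1), sd(d2)
```

A stress-related drop in heart-rate complexity shows up as a narrower Poincaré cloud, i.e. reduced SD1 relative to SD2.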

  11. Nonlinear Heart Rate Variability features for real-life stress detection. Case study: students under stress due to university examination

    Directory of Open Access Journals (Sweden)

    Melillo Paolo

    2011-11-01

    Full Text Available Abstract Background This study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection. Methods 42 students volunteered to participate in the study about HRV and stress. For each student, two recordings were performed: one during an on-going university examination, assumed as a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed using Poincaré Plot, Approximate Entropy, Correlation dimension, Detrended Fluctuation Analysis, and Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test, and for development of a classifier we adopted Linear Discriminant Analysis (LDA). Results Almost all HRV features measuring heart rate complexity were significantly decreased in the stress session. LDA generated a simple classifier based on the two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, sensitivity, and specificity of 90%, 86%, and 95%, respectively. Conclusions The results of the current study suggest that nonlinear HRV analysis using short-term ECG recordings could be effective in automatically detecting real-life stress conditions, such as a university examination.

  12. Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2017-07-01

    Considering jointly damage sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs and assessing systematically their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of the linear discriminant analysis based on the Fukunaga-Koontz transformation using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on the statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors or their Fukunaga-Koontz transformed versions can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.

  13. Nonlinear features of equatorial baroclinic Rossby waves detected in Topex altimeter observations

    Directory of Open Access Journals (Sweden)

    R. E. Glazman

    1996-01-01

    Full Text Available Using a recently proposed technique for statistical analysis of non-gridded satellite altimeter data, the regime of long equatorially-trapped baroclinic Rossby waves is studied. One-dimensional spatial and spatiotemporal autocorrelation functions of sea surface height (SSH) variations yield a broad spectrum of baroclinic Rossby waves and permit determination of their propagation speed. The 1-d wavenumber spectrum of zonal variations is given by a power law k⁻² on scales from about 10³ km to 10⁴ km. We demonstrate that the observed wave regime exhibits features of soliton turbulence developing in the long baroclinic Rossby waves. However, being limited to second statistical moments, the present analysis does not allow us to rule out the possibility of weak wave turbulence.
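
    The reported k⁻² wavenumber spectrum can be illustrated numerically: synthesize a transect whose power spectrum follows k⁻² (random phases, prescribed amplitudes), then recover the power-law slope from a log-log fit of its periodogram. The signal and scales below are illustrative, not altimeter data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
k = np.fft.rfftfreq(n, d=1.0)            # wavenumbers (cycles per sample)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0                  # power ~ k^-2  =>  amplitude ~ k^-1
phase = rng.uniform(0, 2 * np.pi, len(k))
spec = amp * np.exp(1j * phase)
spec[-1] = amp[-1]                       # Nyquist bin must be real for irfft
ssh = np.fft.irfft(spec, n)              # synthetic "SSH" transect

# Periodogram and log-log slope estimate
P = np.abs(np.fft.rfft(ssh)) ** 2
mask = k > 0
slope, _ = np.polyfit(np.log(k[mask]), np.log(P[mask]), 1)
```

    The fitted slope returns the prescribed exponent of −2; with real, non-gridded altimeter tracks the same estimate would of course come with sampling noise, which is why the paper works through autocorrelation functions.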

  14. Rapid Detection Method of Moldy Maize Kernels Based on Color Feature

    Directory of Open Access Journals (Sweden)

    Xuan Chu

    2014-05-01

    Full Text Available In order to find moldy maize kernels quickly, a method based on machine vision was proposed in this paper. Firstly, images of maize kernels were taken by the moldy maize sorting equipment, and three parts of every kernel, that is, moldy plaques, healthy endosperm and healthy embryo, were selected from these images. Then a threshold was set in the R channel by analyzing the color features of those three parts in the RGB model; in this way, moldy plaques can be identified roughly. After that, the location of the moldy plaques on the kernels was studied, and a circle, whose centre was approximately the centroid of a maize kernel and whose diameter was about the width of the embryo, was set to exclude the interference caused by shadow. This method, with an accuracy of 92.1%, laid a good foundation for the further study of moldy maize sorting equipment.
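
    The two-stage idea (R-channel threshold plus a centroid-centred circle to suppress shadow) can be sketched in a few lines of NumPy. The toy image, threshold and radius below are invented for illustration; the paper's actual values are not given in this record.

```python
import numpy as np

def moldy_mask(rgb, r_thresh, radius):
    """Rough mold segmentation: dark R-channel pixels restricted to a circle
    centred on the kernel centroid (mimicking the shadow-exclusion step)."""
    # Kernel region: any non-black pixel (background assumed black)
    kernel = rgb.max(axis=-1) > 0
    ys, xs = np.nonzero(kernel)
    cy, cx = ys.mean(), xs.mean()            # approximate kernel centroid
    h, w = kernel.shape
    yy, xx = np.mgrid[0:h, 0:w]
    circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    # Mold candidates are darker than healthy tissue in the R channel
    return kernel & circle & (rgb[..., 0] < r_thresh)

# Toy 20x20 "kernel": bright body with a dark 2x2 patch near the centre
img = np.zeros((20, 20, 3), dtype=np.uint8)
img[4:16, 4:16] = [200, 180, 120]            # healthy endosperm colour
img[9:11, 9:11] = [60, 50, 40]               # mold plaque
mask = moldy_mask(img, r_thresh=100, radius=6)
```

    Only the four dark pixels inside the circle survive the mask; shadowed background outside the circle would be excluded even if it fell below the R threshold.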

  15. Climatic features of the Mediterranean Sea detected by the analysis of the longwave radiative bulk formulae

    Directory of Open Access Journals (Sweden)

    M. E. Schiano

    Full Text Available Some important climatic features of the Mediterranean Sea stand out from an analysis of the systematic discrepancies between direct measurements of longwave radiation budget and predictions obtained by the most widely used bulk formulae. In particular, under clear-sky conditions the results show that the surface values of both air temperature and humidity over the Mediterranean Sea are larger than those expected over an open ocean with the same amount of net longwave radiation. Furthermore, the twofold climatic regime of the Mediterranean region strongly affects the downwelling clear-sky radiation. This study suggests that a single bulk formula with constant numerical coefficients is unable to reproduce the fluxes at the surface for all the seasons.

    Key words: Meteorology and atmospheric dynamics (radiative processes) – Oceanography: general (marginal and semi-enclosed seas; marine meteorology)

  16. Beyond the average: Detecting global singular nodes from local features in complex networks

    CERN Document Server

    Costa, Luciano da Fontoura; Hilgetag, Claus C; Kaiser, Marcus; 10.1209/0295-5075/87/18008

    2010-01-01

    Deviations from the average can provide valuable insights about the organization of natural systems. The present article extends this important principle to the systematic identification and analysis of singular motifs in complex networks. Six measurements quantifying different and complementary features of the connectivity around each node of a network were calculated, and multivariate statistical methods were applied to identify singular nodes. The potential of the presented concepts and methodology was illustrated with respect to different types of complex real-world networks, namely the US air transportation network, the protein-protein interactions of the yeast Saccharomyces cerevisiae and the Roget thesaurus networks. The obtained singular motifs possessed unique functional roles in the networks. Three classic theoretical network models were also investigated, with the Barabási-Albert model resulting in singular motifs corresponding to hubs, confirming the potential of the approach. Interestingly, the numb...
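
    A stripped-down version of the idea, using only two local measures (degree and local clustering) instead of the paper's six, can be written directly against an adjacency matrix: standardize the per-node feature vectors and rank nodes by their distance from the "average node". This is a sketch of the principle, not the authors' measurement set.

```python
import numpy as np

def singular_nodes(A, n_top=1):
    """Rank nodes by multivariate atypicality of simple local measures."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    # Triangles through each node = diag(A^3)/2; clustering = triangles / pairs
    tri = np.diag(A @ A @ A) / 2.0
    pairs = deg * (deg - 1) / 2.0
    clust = np.divide(tri, pairs, out=np.zeros_like(tri), where=pairs > 0)
    X = np.column_stack([deg, clust])
    Z = (X - X.mean(0)) / (X.std(0) + 1e-12)  # z-score each feature
    d = np.linalg.norm(Z, axis=1)             # distance from the average node
    return np.argsort(d)[::-1][:n_top]

# Ring of 8 nodes plus one hub connected to all of them
n = 9
A = np.zeros((n, n))
for i in range(8):
    A[i, (i + 1) % 8] = A[(i + 1) % 8, i] = 1   # ring edges
    A[i, 8] = A[8, i] = 1                        # hub edges
top = singular_nodes(A, 1)
```

    The hub, being atypical in both degree and clustering, is flagged as the singular node, in line with the Barabási-Albert observation in the abstract.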

  17. Detection on Straight Line Problem in Triangle Geometry Features for Digit Recognition

    Directory of Open Access Journals (Sweden)

    N. A. Arbain

    2016-12-01

    Full Text Available Geometric objects, especially triangle geometry, have been widely used in the digit recognition area. The triangle geometry properties have been implemented as triangle features, which are used to construct the triangle shape. A triangle is formed from three corner points A, B and C. However, a problem occurs when the three corner points lie on a straight line. Thus, an algorithm has been proposed to solve this straight-line problem. Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) classifiers were used to measure performance in terms of classification accuracy. Four datasets were used: HODA, IFCHDB, MNIST and BANGLA. The comparison of classification results demonstrated the effectiveness of the proposed method.
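
    The degenerate case the title refers to is easy to test for: three points are collinear exactly when the signed area of the triangle they span vanishes. The record does not describe the authors' repair algorithm, so the sketch below only shows the detection step.

```python
def is_collinear(a, b, c, eps=1e-9):
    """True when corner points a, b, c lie on one straight line.

    The z-component of the cross product AB x AC equals twice the signed
    triangle area; a (near-)zero value means no valid triangle can be formed.
    """
    area2 = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return abs(area2) < eps

collinear = is_collinear((0, 0), (1, 1), (2, 2))      # degenerate corners
proper = is_collinear((0, 0), (1, 0), (0, 1))         # a genuine triangle
```

    In a recognition pipeline this check would run before computing triangle features, routing degenerate corner triples to whatever fallback the proposed algorithm prescribes.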

  18. IMAGING SPECTROSCOPY AND LIGHT DETECTION AND RANGING DATA FUSION FOR URBAN FEATURES EXTRACTION

    Directory of Open Access Journals (Sweden)

    Mohammed Idrees

    2013-01-01

    Full Text Available This study presents our findings on the fusion of Imaging Spectroscopy (IS) and LiDAR data for urban feature extraction. We carried out the necessary preprocessing of the hyperspectral image. The Minimum Noise Fraction (MNF) transform was used to order the hyperspectral bands according to their noise. Thereafter, we employed the Optimum Index Factor (OIF) to statistically select the most appropriate three-band combination from the MNF result. The composite image was classified using unsupervised classification (the k-means algorithm) and the accuracy of the classification was assessed. A Digital Surface Model (DSM) and a LiDAR intensity image were generated from the LiDAR point cloud, and the LiDAR intensity was filtered to remove noise. The Hue Saturation Intensity (HSI) fusion algorithm was used to fuse the imaging spectroscopy data with the DSM, as well as with the filtered intensity. The fusion of imaging spectroscopy and DSM was found to be quantitatively better than that of imaging spectroscopy and LiDAR intensity. The three datasets (imaging spectroscopy, DSM-fused and intensity-fused data) were classified into four classes: building, pavement, trees and grass, using unsupervised classification, and the accuracy of each classification was assessed. The results of the study show that the fusion of imaging spectroscopy and LiDAR data improved the visual identification of surface features. The classification accuracy improved from an overall accuracy of 84.6% for the imaging spectroscopy data to 90.2% for the DSM-fused data; similarly, the Kappa coefficient increased from 0.71 to 0.82. On the other hand, classification of the fused LiDAR intensity and imaging spectroscopy data performed poorly quantitatively, with an overall accuracy of 27.8% and a Kappa coefficient of 0.0988.
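
    The OIF band-selection step has a simple closed form: for each three-band combination, sum the band standard deviations and divide by the sum of absolute pairwise correlations; the triplet with the highest ratio carries the most variance with the least redundancy. The sketch below applies it to toy bands (one deliberately redundant); it follows the standard OIF definition, not code from this study.

```python
import itertools
import numpy as np

def best_oif_triplet(bands):
    """Pick the three-band combination maximising the Optimum Index Factor:
    OIF = (sum of band std devs) / (sum of absolute pairwise correlations)."""
    flat = np.array([b.ravel() for b in bands], dtype=float)
    std = flat.std(axis=1)
    corr = np.corrcoef(flat)
    best, best_oif = None, -np.inf
    for i, j, k in itertools.combinations(range(len(bands)), 3):
        denom = abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k])
        oif = (std[i] + std[j] + std[k]) / denom
        if oif > best_oif:
            best, best_oif = (i, j, k), oif
    return best, best_oif

# Four toy "bands": three nearly independent, one almost a copy of band 0
rng = np.random.default_rng(2)
b0, b1, b2 = rng.normal(size=(3, 32, 32))
b3 = b0 + 0.01 * rng.normal(size=(32, 32))   # redundant band
triplet, _ = best_oif_triplet([b0, b1, b2, b3])
```

    The near-duplicate pair (bands 0 and 3) is never selected together, since their correlation of nearly 1 inflates the denominator.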

  19. Representation of Block-Based Image Features in a Multi-Scale Framework for Built-Up Area Detection

    Directory of Open Access Journals (Sweden)

    Zhongwen Hu

    2016-02-01

    Full Text Available The accurate extraction and mapping of built-up areas play an important role in many social, economic, and environmental studies. In this paper, we propose a novel approach for built-up area detection from high spatial resolution remote sensing images, using a block-based multi-scale feature representation framework. First, an image is divided into small blocks, in which the spectral, textural, and structural features are extracted and represented using a multi-scale framework; a set of refined Harris corner points is then used to select blocks as training samples; finally, a built-up index image is obtained by minimizing the normalized spectral, textural, and structural distances to the training samples, and a built-up area map is obtained by thresholding the index image. Experiments confirm that the proposed approach is effective for high-resolution optical and synthetic aperture radar images, with different scenes and different spatial resolutions.
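
    The final scoring step (minimizing normalized feature distances to the training samples, then thresholding) can be sketched compactly. The toy 2-D block features and the 0.5 threshold below are illustrative assumptions; the paper uses spectral, textural and structural features at multiple scales.

```python
import numpy as np

def builtup_index(features, train_idx):
    """Per-block index: one minus the min-max-normalised Euclidean distance
    to the mean feature vector of the training (built-up) blocks."""
    mu = features[train_idx].mean(axis=0)
    d = np.linalg.norm(features - mu, axis=1)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)
    return 1.0 - d

# Toy blocks: two clusters of 2-D (e.g. texture, structure) features
feats = np.array([[5.0, 5.0], [5.1, 4.9], [4.9, 5.2],   # "built-up" blocks
                  [0.0, 0.1], [0.2, 0.0], [0.1, 0.2]])  # "non-built-up"
idx = builtup_index(feats, train_idx=[0, 1])
builtup_map = idx > 0.5                                  # threshold the index
```

    Blocks near the training cluster score close to 1 and survive the threshold; in the paper the training blocks come from refined Harris corner points rather than manual labels.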

  20. PROJECTION BASED STATISTICAL FEATURE EXTRACTION WITH MULTISPECTRAL IMAGES AND ITS APPLICATIONS ON THE YELLOW RIVER MAINSTREAM LINE DETECTION

    Institute of Scientific and Technical Information of China (English)

    Zhang Yanning; Zhang Haichao; Duan Feng; Liu Xuegong; Han Lin

    2009-01-01

    The mainstream line is significant for Yellow River situation forecasting and flood control. An effective statistical feature extraction method is proposed in this paper. In this method, a between-class scattering matrix based projection algorithm is performed to maximize between-class differences, obtaining an effective component for classification; then high-order statistics are utilized as features to describe the mainstream line in the principal component obtained. Experiments are performed to verify the applicability of the algorithm. The results on both synthesized and real scenes indicate that this approach can extract the mainstream line of the Yellow River automatically, and has high precision in mainstream line detection.
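
    The core projection step can be sketched as follows: build the between-class scatter matrix from class means and project the data onto its leading eigenvector, which maximizes the separation of class means. The two-class toy data below stand in for water/non-water pixels; this reflects the standard between-class-scatter construction, not the paper's exact algorithm.

```python
import numpy as np

def between_class_projection(X, y):
    """Project samples onto the leading eigenvector of the between-class
    scatter matrix S_b = sum_c n_c (mu_c - mu)(mu_c - mu)^T."""
    mu = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mu)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Leading eigenvector of the symmetric scatter matrix (eigh: ascending)
    w = np.linalg.eigh(Sb)[1][:, -1]
    return X @ w

# Two classes separated along the first feature only
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
               rng.normal([5, 0], 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
proj = between_class_projection(X, y)
```

    The projection collapses the 2-D samples onto the axis along which the class means differ; in the paper, high-order statistics of the resulting component then describe the mainstream line.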