WorldWideScience

Sample records for rapid feature extraction

  1. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    Science.gov (United States)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in the ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. Ortho-rectification was then carried out in ERDAS LPS by incorporating well-distributed GCPs; the root mean square error (RMSE) for the WV-2 imagery was estimated to be 0.25 m using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. Because a bare-earth DEM does not, in most cases, represent the true ground elevation, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data was normalized against the DTM to reduce the effect of undulating terrain: the vegetation point cloud values were normalized by subtracting the ground elevations (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM.
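
The final normalization step (CHM = DSM − DEM) reduces to a raster subtraction followed by height thresholding; a minimal sketch with hypothetical toy grids (not real LiDAR values):

```python
import numpy as np

# Toy 3x3 elevation grids in metres (hypothetical values, not real LiDAR).
dsm = np.array([[12.0, 15.5, 11.0],
                [10.2, 18.0, 10.5],
                [10.0, 10.1, 10.0]])  # digital surface model (tops of canopy/roofs)
dem = np.array([[10.0, 10.5, 10.0],
                [10.0, 10.0, 10.0],
                [10.0, 10.0, 10.0]])  # bare-earth DEM
chm = dsm - dem                       # normalized DSM (nDSM) / canopy height model

# Cells whose height above ground exceeds a threshold are candidate
# tree/building returns (2 m is an illustrative cutoff).
candidates = chm > 2.0
```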

  2. Feature Extraction

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Feature selection and reduction are key to robust multivariate analyses. In this talk I will discuss the pros and cons of various variable selection methods, focusing on those most relevant in the context of HEP.

  3. Extracting Product Features from Chinese Product Reviews

    OpenAIRE

    Yahui Xi

    2013-01-01

    With the great development of e-commerce, the number of product reviews on e-commerce websites grows rapidly. Review mining, which aims to discover valuable information in massive collections of product reviews, has recently received a lot of attention. Product feature extraction is one of the basic tasks of product review mining, and its effectiveness can significantly influence the performance of subsequent tasks. Double Propagation is a state-of-the-art technique in product feature extraction. In...

  4. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    Science.gov (United States)

    2011-06-01

    This report describes an accuracy assessment of extracted features derived from three subsets of Quickbird pan-sharpened high resolution satellite imagery for the area of the Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  5. Feature extraction using fractal codes

    NARCIS (Netherlands)

    B.A.M. Ben Schouten; Paul M. de Zeeuw

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  6. Linguistic feature analysis for protein interaction extraction

    Directory of Open Access Journals (Sweden)

    Cornelis Chris

    2009-11-01

    Background: The rapid growth in the amount of publicly available reports on biomedical experimental results has recently caused a boost of text mining approaches for protein interaction extraction. Most approaches rely implicitly or explicitly on linguistic (i.e., lexical and syntactic) data extracted from text. However, only a few attempts have been made to evaluate the contribution of the different feature types. In this work, we contribute to this evaluation by studying the relative importance of deep syntactic features (i.e., grammatical relations), shallow syntactic features (part-of-speech information), and lexical features. For this purpose, we use a recently proposed approach based on support vector machines with structured kernels. Results: Our results reveal that the contribution of the different feature types varies across the data sets on which the experiments were conducted. The smaller the training corpus compared to the test data, the more important the role of grammatical relations becomes. Moreover, classifiers based on deep syntactic information prove to be more robust on heterogeneous texts where no or only limited common vocabulary is shared. Conclusion: Our findings suggest that grammatical relations play an important role in the interaction extraction task. Moreover, the net advantage of adding lexical and shallow syntactic features is small relative to the number of added features. This implies that efficient classifiers can be built by using only a small fraction of the features typically used in recent approaches.

  7. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as, small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.

  8. FEATURES OF ANTHOCYANIN EXTRACTION WITH ALIPHATIC ALCOHOLS

    Directory of Open Access Journals (Sweden)

    P. N. Savvin

    2015-01-01

    Anthocyanins are red pigments that give color to a wide range of fruits, berries and flowers. In the food industry they are widely known as the dye and food additive E163. Ethanol or acidified water are traditionally used to extract them from natural vegetable raw materials, but in some technologies this is unacceptable. In order to expand the use of anthocyanins as colorants and antioxidants, pigment extraction was explored with alcohols differing in the structure of the carbon skeleton and in the position and number of hydroxyl groups. To isolate the anthocyanins, raw materials were extracted sequentially twice at t = 60 C for 1.5 hours. The extracts were evaluated using classical spectrophotometric methods and modern rapid chromaticity measurements. The color of black currant extracts depends on the length of the carbon skeleton and the position of the hydroxyl group: alcohols of normal structure give a higher optical density and a higher red color component index than alcohols of isomeric structure. This is due to their different ability to form hydrogen bonds when extracting anthocyanins, and to other intermolecular interactions. During storage, blackcurrant extracts undergo significant structural changes of the recovered pigments, which leads to a significant change in color; this variation is stronger the greater the length of the carbon skeleton and the branching of the extractant molecule. Extraction with polyols (ethylene glycol, glycerol) is less effective than with the corresponding monohydric alcohols; however, these extracts keep significantly better because of their reducing ability when interacting with polyphenolic compounds.

  9. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. Hamming ...

  10. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently in biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method that is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF, by itself, as a feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
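
The idea of using the amplitude distribution itself as the feature can be sketched with NumPy (the frame length, sampling rate, and uniform-noise stand-in signal are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def frame_pdf(frame, bins=16, rng=(-1.0, 1.0)):
    """Empirical amplitude distribution (normalized histogram) of one frame."""
    hist, _ = np.histogram(frame, bins=bins, range=rng)
    return hist / hist.sum()

# Stand-in for a 1-second voice recording sampled at 16 kHz.
gen = np.random.default_rng(0)
signal = gen.uniform(-1.0, 1.0, 16000)
frames = signal.reshape(-1, 400)                 # 25 ms frames
features = np.array([frame_pdf(f) for f in frames])  # one PDF vector per frame
```

Each row of `features` is then the per-frame feature vector that would be plotted and compared across speakers.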

  11. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed activities will result in the development of a novel hyperspectral feature-extraction toolkit that will provide a simple, automated, and accurate...

  12. Real-time Network Flow Feature Extraction System Design

    Directory of Open Access Journals (Sweden)

    CHEN Tao

    2017-04-01

    Aiming at the problem that packet sampling techniques have lower flow feature extraction accuracy in high-speed networks, a real-time network flow feature extraction system is implemented on NetFPGA. Making full use of NetFPGA's high running speed and powerful parallel processing ability, the system can support gigabit data throughput. The real-time extraction system consists of two key elements: an address mapping module and a flow table core processing module. The former uses a pipeline technique to index flow records quickly through the Bob Jenkins hash algorithm. The latter can update the flow table rapidly by parallelizing the querying and matching of flow records. Online traffic test results show that the system can achieve real-time flow feature extraction on a 1 Gbps Internet connection.
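
The flow-table logic can be mimicked in software as a sketch (plain Python; a dict stands in for the NetFPGA hash table, and Python's built-in hashing replaces the Bob Jenkins hash — neither reproduces the hardware pipeline):

```python
# Flow table keyed by the 5-tuple; a dict stands in for the hardware hash table.
flow_table = {}

def update_flow(pkt):
    """Insert or update the flow record matching this packet's 5-tuple."""
    key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    rec = flow_table.setdefault(key, {"packets": 0, "bytes": 0})
    rec["packets"] += 1
    rec["bytes"] += pkt["len"]

# Two packets of the same hypothetical TCP flow.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80,
     "proto": 6, "len": 60},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80,
     "proto": 6, "len": 1500},
]
for p in packets:
    update_flow(p)
```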

  13. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    In this paper, a method for localization and extraction of faces and characteristic facial features such as eyes, mouth and face boundaries from color image data is proposed. This approach exploits the color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on the preselected face-candidate regions. Likewise, color information and local contrast around the eyes are used for eye and mouth localization. The ellipse of the face boundary is determined using a gradient image and the Hough transform. The algorithm was tested on the FERET image database.
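
A minimal sketch of the skin-color localization step, using one classic RGB skin heuristic (an assumption for illustration; the paper's exact color model may differ):

```python
import numpy as np

def skin_mask(img):
    """Candidate skin pixels via a classic RGB rule (a common heuristic,
    not necessarily the exact model used in the paper)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    spread = img.max(axis=-1).astype(int) - img.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

img = np.array([[[220, 150, 120],                  # skin-like pixel
                 [ 50, 100, 200]]], dtype=np.uint8)  # sky-like pixel
mask = skin_mask(img)
```

Connected regions of the resulting mask would then serve as the face candidates on which feature extraction runs.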

  14. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output that must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends that may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  15. Feature Extraction in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    Z. Kus

    1999-09-01

    This paper presents experimental results of extracting features in the radar target classification process using a J frequency band pulse radar. The feature extraction is based on frequency analysis methods, the discrete-time Fourier transform (DFT) and Multiple Signal Characterisation (MUSIC), based on the detection of the Doppler effect. The analysis favoured the DFT with a Hanning window function. We aimed to classify target vehicles into two classes: wheeled vehicles and tracked vehicles. The results show that it is possible to classify them only while they are moving; the class feature results from the movement of the moving parts of the vehicle. However, we have not found any feature that classifies wheeled and tracked vehicles while stationary, even with their engines running.
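
The preferred analysis, a DFT with a Hanning window applied to the return signal, can be sketched as follows (the sampling rate and the 120 Hz test tone standing in for a Doppler line are hypothetical):

```python
import numpy as np

fs = 1000.0                               # sampling rate in Hz (illustrative)
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 120.0 * t)         # stand-in for a 120 Hz Doppler line
w = np.hanning(len(x))                    # Hanning window reduces spectral leakage
spectrum = np.abs(np.fft.rfft(x * w))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
dominant = freqs[np.argmax(spectrum)]     # estimated Doppler frequency
```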

  16. Automatic Feature Extraction from Planetary Images

    Science.gov (United States)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary imagery has already been acquired and much more will become available for analysis in the coming years. Because of the huge amount of data, the images need to be analyzed, preferably by automatic processing techniques. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough transform. The method has many applications, among them image registration, and can be applied to arbitrary planetary images.

  17. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    The control of a prosthetic limb would be more effective if it were based on surface electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an autoregressive (AR) model, and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector, which was seen to give a more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO database, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, and thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
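
Two of the features named above, the mean frequency and the EMG histogram, can be sketched as follows (the synthetic sine stand-in and the parameter choices are illustrative, not the NINAPRO setup):

```python
import numpy as np

def mean_frequency(x, fs):
    """Spectral mean frequency: sum(f * P(f)) / sum(P(f))."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return (freqs * power).sum() / power.sum()

def emg_histogram(x, bins=9):
    """EMG histogram feature: normalized amplitude histogram."""
    hist, _ = np.histogram(x, bins=bins, range=(-1.0, 1.0))
    return hist / len(x)

fs = 2000.0
t = np.arange(2000) / fs
semg = 0.5 * np.sin(2 * np.pi * 80.0 * t)   # toy stand-in for an SEMG burst
features = np.concatenate([[mean_frequency(semg, fs)], emg_histogram(semg)])
```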

  18. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  19. Trace Ratio Criterion for Feature Extraction in Classification

    Directory of Open Access Journals (Sweden)

    Guoqi Li

    2014-01-01

    A generalized linear discriminant analysis based on the trace ratio criterion algorithm (GLDA-TRA) is derived to extract features for classification. With the proposed GLDA-TRA, a set of orthogonal features can be extracted in succession. Each newly extracted feature is the optimal feature that maximizes the trace ratio criterion function in the subspace orthogonal to the space spanned by the previously extracted features.
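
The trace ratio criterion itself is straightforward to evaluate for a candidate projection W; a minimal sketch with hypothetical between-class and within-class scatter matrices:

```python
import numpy as np

def trace_ratio(W, Sb, Sw):
    """Trace ratio criterion tr(W' Sb W) / tr(W' Sw W)."""
    return np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W)

# Hypothetical 2-D between-class (Sb) and within-class (Sw) scatter matrices.
Sb = np.diag([4.0, 1.0])
Sw = np.diag([1.0, 2.0])
w_good = np.array([[1.0], [0.0]])  # aligned with the discriminative direction
w_poor = np.array([[0.0], [1.0]])
```

GLDA-TRA would pick the direction maximizing this ratio, then repeat the search in the orthogonal complement.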

  20. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, whereas deep learning can acquire new effective feature representations from training data and so can be aimed at new applications. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.

  1. Extraction and Classification of Human Gait Features

    Science.gov (United States)

    Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee; Abdullah, Junaidi; Komiya, Ryoichi

    In this paper, a new approach is proposed for extracting human gait features from a walking human based on silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying a morphological skeleton to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottom of the right leg and the left leg from the body segment skeletons. The joint angles and step size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
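
The silhouette width/height measurement stage, for example, reduces to a bounding-box computation over the binary mask (toy silhouette for illustration):

```python
import numpy as np

def silhouette_extent(mask):
    """Height and width of the bounding box of a binary silhouette."""
    ys, xs = np.nonzero(mask)
    return ys.max() - ys.min() + 1, xs.max() - xs.min() + 1

# Toy 6x5 silhouette (1 = foreground person, 0 = background).
mask = np.array([[0, 0, 1, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 0, 1, 0, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 0, 1, 0]])
height, width = silhouette_extent(mask)
```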

  2. An Efficient Method of HOG Feature Extraction Using Selective Histogram Bin and PCA Feature Reduction

    National Research Council Canada - National Science Library

    LAI, C. Q; TEOH, S. S

    2016-01-01

    .... In this paper, a time-efficient HOG-based feature extraction method is proposed. The method uses a selective number of histogram bins to perform feature extraction on different regions in the image...

  3. An Extraction Method of Acoustic Features for Music Emotion Classification

    National Research Council Canada - National Science Library

    Jiwei Qin; Liang Xu; Jinsheng Wang; Fei Guo

    2014-01-01

      Taking the user's emotion in music retrieval and recommendation as the application background, this paper presents a method to extract the features associated with music emotion from existing physical features...

  4. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    In this work, local features are used in the feature extraction process for textures in image processing. The local binary pattern feature extraction method for textures is introduced. Filtering is also used during the feature extraction process to obtain discriminative features. To show the effectiveness of the algorithm, three different types of noise are added to both the train and test images before the extraction process. A Wiener filter and a median filter are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier and conduct a comparative analysis on a benchmark dataset with different filters and sizes. Our experiments demonstrate that the feature extraction process combined with filtering gives promising results on noisy images.
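
A minimal sketch of the basic 3x3 local binary pattern code (the paper's exact LBP variant and filtering pipeline are not reproduced):

```python
import numpy as np

def lbp_code(patch):
    """8-bit local binary pattern code for the centre of a 3x3 patch."""
    c = patch[1, 1]
    # Neighbours in clockwise order starting at the top-left corner.
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # Each neighbour >= centre contributes one bit to the code.
    return sum(int(v >= c) << i for i, v in enumerate(neigh))

patch = np.array([[5, 5, 5],
                  [1, 3, 5],
                  [1, 1, 1]])
code = lbp_code(patch)
```

A texture descriptor is then typically the histogram of these codes over the whole image.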

  5. Negative emotion does not modulate rapid feature integration effects

    Directory of Open Access Journals (Sweden)

    Darinka Truebutschek

    2012-04-01

    Emotional arousal at encoding is known to facilitate later memory recall. In the present study, we asked whether this emotion-modulation of episodic memory is also evident at very short time scales, as measured by feature integration effects: the moment-by-moment binding of relevant stimulus and response features in episodic memory. This question was motivated by recent findings that negative emotion appears to potentiate first-order trial sequence effects in classic conflict tasks, which has been attributed to emotion-modulation of conflict-driven cognitive control processes. However, these effects could equally well have been carried by emotion-modulation of mnemonic feature binding processes, which were perfectly confounded with putative control processes in these studies. In the present experiments, we tried to shed light on this question by testing explicitly whether feature integration processes, assessed in isolation from conflict-control, are in fact susceptible to negative emotion-modulation. For this purpose, we adopted a standard protocol for assessing the rapid binding of stimulus and response features in episodic memory (Experiment 1) and paired it with the presentation of either neutral or fearful background face stimuli, shown either at encoding only (Experiment 2) or at both encoding and retrieval (Experiment 3). Whereas reliable feature integration effects were observed in all three experiments, no evidence for emotion-modulation of these effects was detected, in spite of significant effects of emotion on response times. These findings suggest that rapid feature integration of foreground stimulus and response features is not subject to modulation by negative emotional background stimuli, and further suggest that previous reports of emotion-modulated trial-transition effects are likely attributable to the effects of emotion on cognitive control processes.

  6. Feature extraction from scientific datasets using Apache Spark

    Science.gov (United States)

    Paral, J.; Wiltberger, M. J.

    2015-12-01

    We present an example of feature extraction from scientific datasets, such as global numerical models, using Apache Spark. The algorithm uses a simple penalized linear regression technique and a training dataset to learn and extract a similar feature from the rest of the data. Thanks to Apache Spark, the algorithm can scale to a large number of computing nodes.
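
Without Spark, the penalized (ridge) linear regression at the core of the method can be sketched in closed form on synthetic data (the distributed, per-partition training that Spark provides is not reproduced here):

```python
import numpy as np

# Ridge (L2-penalized) regression, closed form: w = (X'X + lam*I)^-1 X'y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # synthetic training inputs
true_w = np.array([2.0, -1.0, 0.5])      # hypothetical ground-truth weights
y = X @ true_w + 0.01 * rng.normal(size=100)

lam = 0.1                                # penalty strength (illustrative)
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
```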

  7. Handwritten Character Classification using the Hotspot Feature Extraction Technique

    NARCIS (Netherlands)

    Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco

    2012-01-01

    Feature extraction techniques can be important in character recognition, because they can enhance the efficacy of recognition in comparison to featureless or pixel-based approaches. This study aims to investigate the novel feature extraction technique called the hotspot technique in order to use it

  8. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    Science.gov (United States)

    Khotimah, C.; Juniati, D.

    2018-01-01

    Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality that captures a photo of the eye pattern. The markings of the iris are so distinctive that they have been proposed as a means of identification instead of fingerprints. Iris recognition was chosen for identification in this research because the iris of every human is different between individuals, and the iris is protected by the cornea so that it keeps a fixed shape. The iris recognition consists of three steps: pre-processing of the data, feature extraction, and feature matching. The Hough transform is used in pre-processing to locate the iris area, and Daugman's rubber sheet model to normalize the iris data set into rectangular blocks. To characterize the iris, the box-counting method was used to obtain the fractal dimension value of the iris. Tests were carried out using the k-fold cross-validation method with k = 5. Each test used 10 different values of K for the K-Nearest Neighbor (KNN) classifier. The best iris recognition accuracy obtained was 92.63%, for K = 3 in the KNN method.
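
The box-counting dimension estimate can be sketched directly (a filled square, whose dimension is 2, serves as a sanity check; the iris segmentation and KNN stages are not reproduced):

```python
import numpy as np

def box_count(mask, size):
    """Number of size x size boxes containing at least one foreground pixel."""
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting dimension: slope of log(count) versus log(1/size)."""
    counts = [box_count(mask, s) for s in sizes]
    coeffs = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

filled = np.ones((16, 16), dtype=bool)   # a filled square has dimension 2
dim = fractal_dimension(filled)
```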

  9. Automated Feature Extraction from Hyperspectral Imagery Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA Topic S7.01, Visual Learning Systems, Inc. (VLS) will develop a novel hyperspectral plug-in toolkit for its award winning Feature AnalystREG...

  10. An Efficient Method of HOG Feature Extraction Using Selective Histogram Bin and PCA Feature Reduction

    OpenAIRE

    Lai, C.Q.; TEOH, S. S.

    2016-01-01

    Histogram of Oriented Gradient (HOG) is a popular image feature for human detection. It presents high detection accuracy and therefore has been widely used in vision-based surveillance and pedestrian detection systems. However, the main drawback of this feature is that it has a large feature size. The extraction algorithm is also computationally intensive and requires long processing time. In this paper, a time-efficient HOG-based feature extraction method is proposed. The method ...
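
The per-cell orientation histogram underlying HOG, together with the "selective bins" idea of keeping only the most energetic bins, can be sketched as follows (the cell size and k are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def cell_hog(cell, bins=9):
    """Magnitude-weighted orientation histogram (unsigned, 0-180 deg) of one cell."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist

cell = np.tile(np.arange(8.0), (8, 1))   # horizontal intensity ramp
hist = cell_hog(cell)

# "Selective bins": keep only the k most energetic bins to shrink the feature.
k = 3
selected = np.sort(np.argsort(hist)[-k:])
```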

  11. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  12. Extracting Conceptual Feature Structures from Text

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Lassen, Tine

    2011-01-01

    This paper describes an approach to indexing texts by their conceptual content using ontologies along with lexico-syntactic information and semantic role assignment provided by lexical resources. The conceptual content of meaningful chunks of text is transformed into conceptual feature structures...... and mapped into concepts in a generative ontology. Synonymous but linguistically quite distinct expressions are mapped to the same concept in the ontology. This allows us to perform a content-based search which will retrieve relevant documents independently of the linguistic form of the query as well...

  13. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    Science.gov (United States)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, supporting modeling from low scale to high scale. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because there are issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques to analyze further in recent developments of feature extraction and classification.

  14. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between a salient feature and the scale of interest can be established straightforwardly: detailed features appear on small scales, and features carrying more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can be further used for several applications, such as feature classification and viewpoint selection. Experiments show that our method, as a multi-scale analysis tool, is very helpful for studying 3D shapes. © 2012 Springer-Verlag.

  15. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    A scale-space extreme point extraction method for a binary multiscale and rotation-invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast method for local image feature description. Classic local feature description algorithms often select neighborhood information of feature points that are extrema of the image scale space, obtained by constructing an image pyramid using some signal transform method. But building the image pyramid always consumes a large amount of computing and storage resources, which is not conducive to practical application development. This paper presents a dual multiscale FAST algorithm that can extract scale-extreme feature points quickly without building the image pyramid. Feature points extracted by the proposed method are multiscale and rotation invariant and are well suited to constructing the local feature descriptor.

  16. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.
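    Linear feature extraction of the kind surveyed above can be sketched with a standard discriminant transform. The snippet below uses scikit-learn's LDA, a related linear extractor (not DBFE itself), to project the bundled iris data onto a lower-dimensional discriminant subspace:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Load a small labeled dataset: 150 samples, 4 features, 3 classes.
X, y = load_iris(return_X_y=True)

# Linear feature extraction: learn a linear transform that keeps only
# directions useful for discriminating among the classes. LDA yields
# at most (n_classes - 1) = 2 components here.
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)
print(Z.shape)  # (150, 2)
```

    DBFE differs in that it derives the informative directions from the trained classifier's decision boundary rather than from class scatter matrices.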

  17. Fingerprint Identification - Feature Extraction, Matching and Database Search

    NARCIS (Netherlands)

    Bazen, A.M.

    2002-01-01

    Presents an overview of state-of-the-art fingerprint recognition technology for identification and verification purposes. Three principal challenges in fingerprint recognition are identified: extracting robust features from low-quality fingerprints, matching elastically deformed fingerprints and

  18. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. The transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space to represent phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
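    A minimal sketch of the PCA-based transform stage, using random data as a stand-in for log mel-filter-bank feature vectors (the frame count, channel count, and component count here are illustrative assumptions, not values from the paper):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for 500 frames of 40-channel log mel-filter-bank features;
# a real front end (e.g. librosa) would produce these from audio.
log_mel = rng.normal(size=(500, 40))

# Data-driven linear transformation: project each frame onto the top
# principal components of the feature distribution.
pca = PCA(n_components=13)
projected = pca.fit_transform(log_mel)
print(projected.shape)  # (500, 13)
```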

  19. Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys

    Directory of Open Access Journals (Sweden)

    Jun Huang

    2017-08-01

    Full Text Available Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision that starts by extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model provides a unified account for a vast set of perception experiments, but it fails to account for experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying this "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying the differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background that differed from the distractors (white circles) in color (e.g., a red circle target), local features (e.g., a white square target), a global feature (e.g., a white ring with a hole as target), or their combinations (e.g., a red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., a white ring with a squared hole). These results suggest that monkey ON

  20. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voronoi...

  1. Object Recognition by Using Multi-level Feature Point Extraction

    OpenAIRE

    Cheng, Yang; Dubois, Timeo

    2017-01-01

    In this paper, we present a novel approach for object recognition in real-time by employing multilevel feature analysis and demonstrate the practicality of adapting feature extraction into a Naive Bayesian classification framework that enables simple, efficient, and robust performance. We also show the proposed method scales well as the number of level-classes grows. To effectively understand the patches surrounding a keypoint, the trained classifier uses hundreds of simple binary features an...

  2. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
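    The extraction-plus-classification pipeline can be sketched with scikit-learn: a PCA front end feeding an SVM, run on synthetic data (dimensions, component count, and data are illustrative assumptions, not the ATR system's actual values):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for candidate-target feature vectors.
X, y = make_classification(n_samples=200, n_features=64,
                           n_informative=10, random_state=0)

# PCA reduces each vector to 10 components; the SVM classifies in the
# reduced space, mirroring the extraction and classification stages.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.shape)  # (5,)
```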

  3. Statistical Feature Extraction and Recognition of Beverages Using Electronic Tongue

    Directory of Open Access Journals (Sweden)

    P. C. PANCHARIYA

    2010-01-01

    Full Text Available This paper describes an approach for extracting features from data generated by an electronic tongue based on large-amplitude pulse voltammetry. In this approach, statistical features of meaningfully selected variables from the current response signals are extracted and used for recognition of beverage samples. The proposed feature extraction approach not only reduces the computational complexity but also reduces the computation time and the data storage requirements for developing an e-tongue for field applications. With the reduced information, a probabilistic neural network (PNN) was trained for qualitative analysis of different beverages. Before the qualitative analysis of the beverages, the methodology was tested on the basic artificial taste solutions, i.e., sweet, sour, salt, bitter, and umami. The proposed procedure was compared with the more conventional linear feature extraction technique employing principal component analysis combined with a PNN. Using the extracted feature vectors, highly accurate classification by the PNN was achieved for eight types of juices and six types of soft drinks. The results indicated that the electronic tongue based on large-amplitude pulse voltammetry with the reduced feature set was capable of discriminating not only basic artificial taste solutions but also various sorts of the same type of natural beverages (fruit juices, vegetable juices, soft drinks, etc.).
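    The statistical-feature idea can be sketched as follows; the particular statistics chosen here (mean, spread, shape, peak-to-peak) are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np
from scipy import stats

def statistical_features(signal):
    # Compress a current-response signal into a short statistical
    # descriptor instead of keeping every sample.
    return np.array([
        signal.mean(),
        signal.std(),
        stats.skew(signal),
        stats.kurtosis(signal),
        signal.max() - signal.min(),  # peak-to-peak amplitude
    ])

rng = np.random.default_rng(1)
sig = rng.normal(size=1024)  # stand-in for one voltammetry response
print(statistical_features(sig).shape)  # (5,)
```

    The reduced vector, rather than the raw 1024-sample signal, is what a classifier such as a PNN would be trained on.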

  4. Research of facial feature extraction based on MMC

    Science.gov (United States)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.

  5. THE IDENTIFICATION OF PILL USING FEATURE EXTRACTION IN IMAGE MINING

    Directory of Open Access Journals (Sweden)

    A. Hema

    2015-02-01

    Full Text Available With the help of image mining techniques, an automatic pill identification system was investigated in this study for matching images of pills based on several features: imprint, color, size, and shape. Image mining is an interdisciplinary task requiring expertise from fields such as computer vision, image retrieval, image matching, and pattern recognition. Image mining is a method in which unusual patterns are detected so that only hidden and useful image data are stored in a large database. It involves two different approaches for image matching. This research presents drug identification, registration, detection and matching, and text, color, and shape extraction with image mining concepts to identify legal and illegal pills with greater accuracy. Initially, preprocessing is carried out using a novel interpolation algorithm whose main aim is to reduce the artifacts, blurring, and jagged edges introduced during up-sampling. Then the registration process is proposed with two modules: feature extraction and corner detection. In feature extraction, noisy high-frequency edges are discarded and relevant high-frequency edges are selected. The corner detection approach detects the high-frequency pixels at the intersection points, thereby improving the overall performance. The dataset must be segregated into groups based on the query image's size, shape, color, text, etc.; this process of segregating the required information is called feature extraction. The feature extraction is done using a geometrical gradient feature transformation. Finally, color and shape feature extraction are performed using a color histogram and a geometrical gradient vector. Simulation results show that the proposed techniques provide accurate retrieval results in terms of both time and accuracy when compared to conventional approaches.
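    The color-histogram feature mentioned above can be sketched in a few lines (bin count and image size are illustrative assumptions):

```python
import numpy as np

def color_histogram(rgb, bins=8):
    # Concatenate one intensity histogram per color channel into a
    # single fixed-length feature vector.
    return np.concatenate([
        np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ])

rng = np.random.default_rng(0)
pill = rng.integers(0, 256, size=(64, 64, 3))  # stand-in pill image
feat = color_histogram(pill)
print(feat.shape, feat.sum())  # (24,) 12288 (each pixel counted once per channel)
```

    Query and database images can then be compared by a distance between their histogram vectors, independent of image size once the histograms are normalized.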

  6. Adaptive spectral window sizes for feature extraction from optical spectra

    Science.gov (United States)

    Kan, Chih-Wen; Lee, Andy Y.; Pham, Nhi; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2008-02-01

    We propose an approach to adaptively adjust the spectral window size used to extract features from optical spectra. Previous studies have employed spectral features extracted by dividing the spectra into several spectral windows of a fixed width; however, the choice of spectral window size was arbitrary. We hypothesize that by adaptively adjusting the spectral window sizes, the trends in the data will be captured more accurately. Our method was tested on a diffuse reflectance spectroscopy dataset obtained in a study of oblique polarization reflectance spectroscopy of oral mucosa lesions. The diagnostic task is to classify lesions into one of four histopathology groups: normal, benign, mild dysplasia, or severe dysplasia (including carcinoma). Nine features were extracted from each of the spectral windows. We computed the area under the receiver operating characteristic curve (AUC) to select the most discriminatory wavelength intervals. We performed pairwise classifications using linear discriminant analysis (LDA) with leave-one-out cross-validation. The results showed that for discriminating benign lesions from mild or severe dysplasia, the adaptive spectral window size features achieved an AUC of 0.84, while a fixed spectral window size of 20 nm achieved an AUC of 0.71 and a large window containing all wavelengths achieved an AUC of 0.64. The AUCs of all feature combinations were also calculated. These results suggest that the new adaptive spectral window size method effectively extracts features that enable accurate classification of oral mucosa lesions.
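    The AUC-based window selection step can be sketched on synthetic spectra: candidate windows of several widths are scanned, and the interval whose mean intensity best separates the two classes is kept (the data, widths, and scan stride are illustrative assumptions, not the study's actual values):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 120
spectra = rng.normal(size=(n_samples, n_wavelengths))
labels = rng.integers(0, 2, size=n_samples)
spectra[labels == 1, 40:60] += 0.8  # make one band weakly informative

def window_auc(start, width):
    # Mean intensity over a candidate spectral window as a 1-D feature.
    feature = spectra[:, start:start + width].mean(axis=1)
    return roc_auc_score(labels, feature)

# Scan several window widths and keep the most discriminatory interval.
candidates = [(s, w, window_auc(s, w))
              for w in (10, 20, 40)
              for s in range(0, n_wavelengths - w + 1, 10)]
best = max(candidates, key=lambda c: c[2])
print(best)  # (start, width, AUC) of the best window
```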

  7. Hierarchical Feature Extraction With Local Neural Response for Image Recognition.

    Science.gov (United States)

    Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P

    2013-04-01

    In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.

  8. Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum

    Science.gov (United States)

    Guan, Shan; Song, Weijie; Pang, Hongyang

    2017-09-01

    In the metal cutting process, the signal contains a wealth of tool wear state information. A tool wear signal analysis and feature extraction method based on the Hilbert marginal spectrum is proposed. First, the tool wear signal is decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions containing the main information are screened out using the correlation coefficient and the variance contribution rate. Second, the Hilbert transform is applied to the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes are extracted on the basis of the Hilbert marginal spectrum and used to construct the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
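    The Hilbert-transform stage can be sketched with SciPy. The EMD step is omitted here: a synthetic amplitude-modulated tone stands in for one screened intrinsic mode function (a real pipeline would obtain IMFs from an EMD implementation such as the PyEMD package):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000  # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
# Stand-in for one intrinsic mode function: a 50 Hz tone with a
# slowly varying amplitude envelope.
imf = (1 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 50 * t)

# Analytic signal -> instantaneous amplitude and frequency.
analytic = hilbert(imf)
amplitude = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

# Hilbert marginal spectrum: accumulate amplitude over frequency bins.
bins = np.linspace(0, 100, 51)
marginal, _ = np.histogram(inst_freq, bins=bins, weights=amplitude[:-1])
print(bins[np.argmax(marginal)])  # the peak lies near the 50 Hz carrier
```

    Amplitude-domain statistics of `marginal` (peak value, mean, variance, and so on) would then form the recognition feature vector.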

  9. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies on moving object detection are typically chosen to improve the detection rate rather than to provide fast and computationally less complex feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels, and motion estimation is a measurement of pixel intensity in motion, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
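    The connection between moments and pixel intensity can be illustrated with raw image moments: the moment-derived centroid of a bright region shifts between frames when the region moves, giving a cheap motion cue (this illustrates the moment idea only, not the MFEA algorithm itself):

```python
import numpy as np

def raw_moment(image, p, q):
    # Raw moment m_pq = sum over pixels of x^p * y^q * I(x, y),
    # a coherent measure of pixel intensity over the image.
    y, x = np.mgrid[:image.shape[0], :image.shape[1]]
    return float((x**p * y**q * image).sum())

# Two frames with the same bright block shifted 4 pixels to the right.
frame_a = np.zeros((32, 32)); frame_a[8:16, 8:16] = 1.0
frame_b = np.zeros((32, 32)); frame_b[8:16, 12:20] = 1.0

# Moment-based centroid: m10 / m00 gives the x-coordinate.
cx_a = raw_moment(frame_a, 1, 0) / raw_moment(frame_a, 0, 0)
cx_b = raw_moment(frame_b, 1, 0) / raw_moment(frame_b, 0, 0)
print(cx_b - cx_a)  # 4.0 (the horizontal displacement)
```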

  10. Towards Home-Made Dictionaries for Musical Feature Extraction

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour

    2003-01-01

    The majority of musical feature extraction applications are based on the Fourier transform in various disguises. This is despite the fact that this transform is subject to a series of restrictions, which admittedly ease the computation and interpretation of transform coefficients, but also imposes...... arguably unnecessary limitations on the ability of the transform to extract and identify features. However, replacing the nicely structured dictionary of the Fourier transform (or indeed other nice transform such as the wavelet transform) with a home-made dictionary is a dangerous task, since even the most...

  11. Feature extraction of the wafer probe marks in IC packaging

    Science.gov (United States)

    Tsai, Cheng-Yu; Lin, Chia-Te; Kao, Chen-Ting; Wang, Chau-Shing

    2017-12-01

    This paper presents an image processing approach to extract six features of the probe mark on semiconductor wafer pads. The electrical characteristics of the chip pad must be tested using a probing needle before wire-bonding to the wafer. However, this test leaves probe marks on the pad. A large probe mark area results in poor adhesion forces at the bond ball of the pad, thus leading to undesirable products. In this paper, we present a method to extract six features of the wafer probe marks in IC packaging for further digital image processing.

  12. Feature extraction from multiple data sources using genetic programming.

    Energy Technology Data Exchange (ETDEWEB)

    Szymanski, J. J. (John J.); Brumby, Steven P.; Pope, P. A. (Paul A.); Eads, D. R. (Damian R.); Galassi, M. C. (Mark C.); Harvey, N. R. (Neal R.); Perkins, S. J. (Simon J.); Porter, R. B. (Reid B.); Theiler, J. P. (James P.); Young, A. C. (Aaron Cody); Bloch, J. J. (Jeffrey J.); David, N. A. (Nancy A.); Esch-Mosher, D. M. (Diana M.)

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often performs when combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  13. Surrogate-assisted feature extraction for high-throughput phenotyping.

    Science.gov (United States)

    Yu, Sheng; Chakrabortty, Abhishek; Liao, Katherine P; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2017-04-01

    Phenotyping algorithms are capable of accurately identifying patients with specific phenotypes from within electronic medical records systems. However, developing phenotyping algorithms in a scalable way remains a challenge due to the extensive human resources required. This paper introduces a high-throughput unsupervised feature selection method, which improves the robustness and scalability of electronic medical record phenotyping without compromising its accuracy. The proposed Surrogate-Assisted Feature Extraction (SAFE) method selects candidate features from a pool of comprehensive medical concepts found in publicly available knowledge sources. The target phenotype's International Classification of Diseases, Ninth Revision and natural language processing counts, acting as noisy surrogates for the gold-standard labels, are used to create silver-standard labels. Candidate features highly predictive of the silver-standard labels are selected as the final features. Algorithms were trained to identify patients with coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis using various numbers of labels to compare the performance of features selected by SAFE, a previously published automated feature extraction for phenotyping procedure, and domain experts. The out-of-sample area under the receiver operating characteristic curve and F-score from SAFE algorithms were remarkably higher than those from the other two, especially at small label sizes. SAFE advances high-throughput phenotyping methods by automatically selecting a succinct set of informative features for algorithm training, which in turn reduces overfitting and the number of gold-standard labels needed. SAFE also potentially identifies important features missed by automated feature extraction for phenotyping or by domain experts.
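    The silver-standard mechanism can be sketched on synthetic data: a noisy surrogate label is built from two genuinely informative features, and a simple correlation screen then ranks candidate features against it. The screen here is a simplified stand-in for SAFE's actual selection procedure, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_features = 500, 30
# Stand-in for counts of candidate medical concepts per patient.
features = rng.poisson(2, size=(n_patients, n_features)).astype(float)

# Silver-standard label: a noisy surrogate (in SAFE, built from billing
# code and NLP mention counts) driven here by features 0 and 1 only.
silver = (features[:, 0] + features[:, 1]
          + rng.normal(0, 2, n_patients) > 5).astype(float)

# Screen: keep the features most correlated with the silver labels.
corr = np.array([abs(np.corrcoef(features[:, j], silver)[0, 1])
                 for j in range(n_features)])
selected = np.argsort(corr)[::-1][:5]
print(sorted(int(j) for j in selected))
```

    The informative features rank highest even though no gold-standard labels were used, which is the property SAFE exploits to cut labeling cost.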

  14. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    Saraç, Esra; ÖZEL, Selma Ayşe

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person carried out via the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. To perform the exper...

  15. Rapid Column Extraction method for Soil

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod, L. III; Culligan, Brian K.

    2005-11-07

    The analysis of actinides in environmental soil and sediment samples is very important for environmental monitoring as well as for emergency preparedness. A new, rapid actinide separation method has been developed and implemented that provides total dissolution of large soil samples, high chemical recoveries and effective removal of matrix interferences. This method uses stacked TEVA Resin®, TRU Resin® and DGA-Resin® cartridges from Eichrom Technologies (Darien, IL, USA), which allow the rapid separation of plutonium (Pu), neptunium (Np), uranium (U), americium (Am), and curium (Cm) using a single multi-stage column combined with alpha spectrometry. The method combines a rapid fusion step for total dissolution of refractory analytes with matrix removal using cerium fluoride precipitation to eliminate the difficult soil matrix. By using vacuum box cartridge technology with rapid flow rates, sample preparation time is minimized.

  16. Spatial-Temporal Feature Analysis on Single-Trial Event Related Potential for Rapid Face Identification.

    Science.gov (United States)

    Jiang, Lei; Wang, Yun; Cai, Bangyu; Wang, Yueming; Wang, Yiwen

    2017-01-01

    The event-related potential (ERP) is the brain response measured in electroencephalography (EEG), which reflects the process of human cognitive activity. ERP has been introduced into brain-computer interfaces (BCIs) to communicate the subject's intention to the computer. Due to the low signal-to-noise ratio of EEG, most ERP studies are based on grand averaging over many trials. Recently, single-trial ERP detection has attracted more attention, as it enables real-time processing tasks such as rapid face identification. All the targets to be retrieved may appear only once, and there is no knowledge of target labels for averaging. More interestingly, how features contribute temporally and spatially to single-trial ERP detection has not been fully investigated. In this paper, we propose a local-learning-based (LLB) feature extraction method to investigate the importance of spatial-temporal components of the ERP in a task of rapid face identification using single-trial detection. Compared to previous methods, the LLB method preserves the nonlinear structure of the EEG signal distribution and analyzes the importance of the original spatial-temporal components via optimization in feature space. As a data-driven method, the weighting of the spatial-temporal components does not depend on the ERP detection method. The importance weights are optimized by making the targets more distinct from non-targets in feature space, and a regularization penalty is introduced in the optimization to obtain sparse weights. This spatial-temporal feature extraction method is evaluated on EEG data from 15 participants performing a face identification task using a rapid serial visual presentation paradigm. Compared with other methods, the proposed spatial-temporal analysis method uses sparser features (only 10% of the total) and achieves comparable single-trial ERP detection performance (98%) across different detection methods. The interesting finding is that the N250 is

  17. An Efficient Method of HOG Feature Extraction Using Selective Histogram Bin and PCA Feature Reduction

    Directory of Open Access Journals (Sweden)

    LAI, C. Q.

    2016-11-01

    Full Text Available The Histogram of Oriented Gradients (HOG) is a popular image feature for human detection. It provides high detection accuracy and has therefore been widely used in vision-based surveillance and pedestrian detection systems. However, the main drawback of this feature is its large feature size; the extraction algorithm is also computationally intensive and requires long processing time. In this paper, a time-efficient HOG-based feature extraction method is proposed. The method uses a selective number of histogram bins to perform feature extraction on different regions of the image. A higher number of histogram bins, which can capture more detailed information, is used on regions of the image that may belong to part of a human figure, while a lower number of histogram bins is used on the rest of the image. To further reduce the feature size, Principal Component Analysis (PCA) is used to rank the features and remove the less important ones. The performance of the proposed method was evaluated using the INRIA human dataset on a linear Support Vector Machine (SVM) classifier. The results showed that the processing speed of the proposed method is 2.6 times faster than the original HOG and 7 times faster than the LBP method, while providing comparable detection performance.
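    The selective-bin idea can be sketched with a minimal cell-histogram HOG: more orientation bins for image regions likely to contain a person, fewer elsewhere. This toy version omits block normalization and the PCA stage, and all sizes are illustrative assumptions:

```python
import numpy as np

def cell_histograms(image, cell=8, bins=9):
    # Per-cell histograms of gradient orientation, weighted by
    # gradient magnitude (the core of a HOG descriptor).
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    feats = []
    h, w = image.shape
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ang[y:y + cell, x:x + cell],
                                   bins=bins, range=(0, 180),
                                   weights=mag[y:y + cell, x:x + cell])
            feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
fine = cell_histograms(img, bins=9)    # candidate-person region
coarse = cell_histograms(img, bins=4)  # background region
print(fine.size, coarse.size)  # 576 256 (fewer bins give a smaller feature)
```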

  18. Rapid Statistical Learning Supporting Word Extraction From Continuous Speech.

    Science.gov (United States)

    Batterink, Laura J

    2017-07-01

    The identification of words in continuous speech, known as speech segmentation, is a critical early step in language acquisition. This process is partially supported by statistical learning, the ability to extract patterns from the environment. Given that speech segmentation represents a potential bottleneck for language acquisition, patterns in speech may be extracted very rapidly, without extensive exposure. This hypothesis was examined by exposing participants to continuous speech streams composed of novel repeating nonsense words. Learning was measured on-line using a reaction time task. After merely one exposure to an embedded novel word, learners demonstrated significant learning effects, as revealed by faster responses to predictable than to unpredictable syllables. These results demonstrate that learners gained sensitivity to the statistical structure of unfamiliar speech on a very rapid timescale. This ability may play an essential role in early stages of language acquisition, allowing learners to rapidly identify word candidates and "break in" to an unfamiliar language.

  19. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we present a novel multivariate analysis method for large-scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints in the solution to improve scalability. The algorithm is te...

  20. Block truncation coding with color clumps: A novel feature extraction ...

    Indian Academy of Sciences (India)

    Block truncation coding with color clumps:A novel feature extraction technique for content based image classification ... Department of Information Technology, Xavier Institute of Social Service, Ranchi, Jharkhand 834001, India; A.K. Choudhury School of Information Technology, University of Calcutta, Kolkata 700 009, India ...

  1. Feature extraction using regular expression in detecting proper ...

    African Journals Online (AJOL)

    Feature extraction using regular expression in detecting proper noun for Malay news articles based on KNN algorithm. S Sulaiman, R.A. Wahid, F Morsidi. Abstract. No Abstract. Keywords: data mining; named entity recognition; regular expression; natural language processing. Full Text: EMAIL FREE FULL TEXT EMAIL ...

  2. Features extraction in anterior and posterior cruciate ligaments analysis.

    Science.gov (United States)

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structure and make the ligaments easier to diagnose. The feature vectors are obtained by analysis of both the anterior and posterior cruciate ligaments, performed after the extraction process of both ligaments. In the first stage, in order to reduce the area of analysis, a region of interest including the cruciate ligaments (CL) is outlined. Here, the fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the region of interest (ROI), the fuzzy connectedness procedure is performed, which permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted structures, 3-dimensional models of the anterior and posterior cruciate ligaments are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK). Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    Several fingerprint matching algorithms have been developed for minutiae or template matching of fingerprint templates. The efficiency of these fingerprint matching algorithms depends on the success of the image processing and features extraction steps employed. Fingerprint image processing and analysis is hence an ...

  4. Feature-extraction algorithms for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Loehner, H.; Poelman, T. P.; Tambave, G.; Yu, B

    2009-01-01

    The feature-extraction algorithms are discussed which have been developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility. Performance parameters have been derived in test measurements with cosmic rays, particle and photon

  5. Compact and Hybrid Feature Description for Building Extraction

    Science.gov (United States)

    Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.

    2017-05-01

    Building extraction in aerial orthophotos is crucial for various applications. Currently, deep learning has been shown to be successful in addressing building extraction with high accuracy and high robustness. However, a large number of samples is required to train a classifier when using a deep learning model. In order to realize accurate and semi-interactive labelling, the performance of the feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we put forward a compact, hybrid feature description method in order to guarantee desirable classification accuracy for the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from binary description and making full use of the color channels, this descriptor is not only computationally frugal but also more accurate than SURF for building extraction.
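The binary-intensity-test idea behind such a descriptor can be sketched as follows. Here the test locations are drawn at random (not the paper's 4 learned sets), the patch values are illustrative, and a single grey channel stands in for the colour channels:

```python
import random

def binary_descriptor(patch, n_tests=32, seed=0):
    """Pack the outcomes of pairwise intensity comparisons into a bit string."""
    h, w = len(patch), len(patch[0])
    rng = random.Random(seed)  # fixed seed -> same test pattern on every call
    bits = 0
    for _ in range(n_tests):
        y1, x1 = rng.randrange(h), rng.randrange(w)
        y2, x2 = rng.randrange(h), rng.randrange(w)
        bits = (bits << 1) | (1 if patch[y1][x1] < patch[y2][x2] else 0)
    return bits

def hamming(a, b):
    """Descriptor distance = number of differing bits."""
    return bin(a ^ b).count("1")

patch = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
d1 = binary_descriptor(patch)
d2 = binary_descriptor([[v + 5 for v in row] for row in patch])  # brightness shift
```

Because only intensity *comparisons* are stored, a uniform brightness shift leaves the descriptor unchanged, and matching reduces to cheap Hamming distances; this is the source of the "computationally frugal" property the record mentions.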

  6. Genetic programming approach to extracting features from remotely sensed imagery

    Energy Technology Data Exchange (ETDEWEB)

    Theiler, J. P. (James P.); Perkins, S. J. (Simon J.); Harvey, N. R. (Neal R.); Szymanski, J. J. (John J.); Brumby, Steven P.

    2001-01-01

    Multi-instrument data sets present an interesting challenge to feature extraction algorithm developers. Beyond the immediate problems of spatial co-registration, the remote sensing scientist must explore a complex algorithm space in which both spatial and spectral signatures may be required to identify a feature of interest. We describe a genetic programming/supervised classifier software system, called Genie, which evolves and combines spatio-spectral image processing tools for remotely sensed imagery. We describe our representation of candidate image processing pipelines, and discuss our set of primitive image operators. Our primary application has been in the field of geospatial feature extraction, including wildfire scars and general land-cover classes, using publicly available multi-spectral imagery (MSI) and hyper-spectral imagery (HSI). Here, we demonstrate our system on Landsat 7 Enhanced Thematic Mapper (ETM+) MSI. We exhibit an evolved pipeline, and discuss its operation and performance.

  7. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments...... difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training...... the complexity of hand-crafting feature extractors that combine information across dissimilar modalities of input. Frequent sequence mining is presented as a method to learn feature extractors that fuse physiological and contextual information. This method is evaluated in a game-based dataset and compared...

  8. Diagnostic features of Alzheimer's disease extracted from PET sinograms

    Science.gov (United States)

    Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.

    2002-01-01

    Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task, due to the poor signal to noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results, when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with sensitivity of 94% and specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al (1992), using regional metabolic activity.

  9. A rapid DNA extraction method suitable for human papillomavirus detection.

    Science.gov (United States)

    Brestovac, Brian; Wong, Michelle E; Costantino, Paul S; Groth, David

    2014-04-01

    Infection with oncogenic human papillomavirus (HPV) genotypes is necessary for the development of cervical cancer. Testing for HPV DNA from liquid-based cervical samples can be used as an adjunct to traditional cytological screening. In addition, there are ongoing viral load, genotyping, and prevalence studies. Therefore, a sensitive DNA extraction method is needed to maximize the efficiency of HPV DNA detection. The XytXtract Tissue kit is a rapid DNA extraction kit and so could be useful for HPV testing, particularly in screening protocols. This study was undertaken to determine the suitability of this method for HPV detection. DNA extracted from HeLa and Caski cell lines, containing HPV 18 and 16 respectively, together with DNA from five liquid-based cervical samples, was used in an HPV PCR assay. DNA was also extracted using the QIAamp DNA mini kit (Qiagen, Hilden, Germany) as a comparison. DNA extracts were serially diluted and assayed. HPV DNA was successfully detected in cell lines and cervical samples using the XytXtract Tissue kit. In addition, the XytXtract method was found to be more sensitive than the QIAamp method, as determined by a dilution series of the extracted DNA. While the XytXtract method is a closed system, the QIAamp method uses a spin column, with possible loss of DNA through DNA-binding competition of the matrix, which could impact the final extraction efficiency. The XytXtract is a cheap, rapid and efficient method for extracting HPV DNA from both cell lines and liquid-based cervical samples. © 2014 Wiley Periodicals, Inc.

  10. Rapid extraction of aflatoxin from creamy and crunchy peanut butter.

    Science.gov (United States)

    Vega, Victor A

    2005-01-01

    A rapid extraction technique was developed for the isolation and subsequent liquid chromatographic determination of aflatoxins B1, B2, G1, and G2 in creamy and crunchy peanut butter. Peanut butter samples were extracted with a methanol-15% sodium chloride (7 + 3) solution followed by a second extraction with methanol. The extract was subjected to cleanup using a Vicam Aflatest immunoaffinity column. Control samples for both smooth and crunchy peanut butter were fortified at 4 different levels for aflatoxins B1, B2, G1, and G2. The average aflatoxin B1, B2, G1, and G2 recoveries from smooth peanut butter were 95.2, 89.9, 94.1, and 62.4%, respectively, and 92.4, 84.3, 85.5, and 53.7%, respectively, from crunchy peanut butter. This extraction method and the official AOAC Method 991.31 produced comparable results for peanut butter samples. This method provides a rapid, specific, and easily controlled assay for the analysis of aflatoxins in peanut butter with minimal solvent usage. Organic solvent consumption was decreased by 85% and hazardous waste production by 80% in comparison with the AOAC method. Along with the decreased solvent consumption, significant savings in time were observed.

  11. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
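The scoring procedure this record summarizes (candidate phrases split at stopwords and delimiters, word scores from co-occurrence degree and frequency, keyword score as the sum of member word scores) can be sketched as follows; the stopword list and input text are illustrative, and this is not the patented implementation:

```python
import re

STOPWORDS = {"for", "and", "of", "in", "the", "a", "an", "is", "to"}

def rake_keywords(text):
    # 1. Split into candidate phrases at stopwords and non-letter delimiters.
    words = re.split(r"[^a-zA-Z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if not w or w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    # 2. Word score = degree / frequency, where a word's degree grows with
    #    the length of every candidate phrase it appears in.
    freq, degree = {}, {}
    for phrase in phrases:
        for w in phrase:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(phrase)

    # 3. Keyword score = sum of member word scores; highest-scoring first.
    scored = {" ".join(p): sum(degree[w] / freq[w] for w in p) for p in phrases}
    return sorted(scored, key=scored.get, reverse=True)

keywords = rake_keywords(
    "rapid automatic keyword extraction for information retrieval and analysis")
```

Longer multi-word phrases accumulate both degree and summed word scores, so they rank above isolated terms, which matches the behaviour described in the patent abstract.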

  12. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by a PCNN effectively describes image feature information such as edges and regional distribution, so the BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using the BMS considered neither the correlation of the binary sequences in the BMS nor the spatial structure of each map. By further processing the BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in the BMS, a method is put forward to transform the BMS into a frequency map series (FMS); this lessens the influence of non-continuous feature regions in the binary images on the OTS-BMS. Then, by computing the 2D entropy of every map in the FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for the facial image and contains the spatial structure information of the image. Finally, in analyzing the OTS-FMS, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of the OTS-FMS in facial recognition, and it shows better recognition performance than other feature extraction methods.
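The entropy step that collapses each map into one time-series sample can be illustrated with the plain Shannon entropy of a binary map. This is a simplification: the record's 2D entropy also accounts for spatial neighbourhood structure, which a global bit-frequency entropy ignores:

```python
import math

def map_entropy(binary_map):
    """Shannon entropy of the 0/1 distribution in a binary map (in bits)."""
    flat = [v for row in binary_map for v in row]
    p1 = sum(flat) / len(flat)           # fraction of firing (1) pixels
    entropy = 0.0
    for p in (p1, 1 - p1):
        if p > 0:
            entropy -= p * math.log2(p)  # contribution of each symbol
    return entropy

uniform = map_entropy([[1, 0], [0, 1]])  # half ones -> maximum entropy
skewed = map_entropy([[1, 1], [1, 1]])   # all ones  -> zero entropy
```

Evaluating such a scalar for every map in the series is what turns the 3D stack into the 1D signature that is then compared by Euclidean distance.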

  13. Forged Signature Distinction Using Convolutional Neural Network for Feature Extraction

    Directory of Open Access Journals (Sweden)

    Seungsoo Nam

    2018-01-01

    This paper proposes a dynamic verification scheme for finger-drawn signatures in smartphones. As a dynamic feature, the movement of the smartphone is recorded with its accelerometer sensors, in addition to the moving coordinates of the signature. To extract high-level longitudinal and topological features, the proposed scheme uses a convolutional neural network (CNN) for feature extraction, not as a conventional classifier. We assume that a CNN trained with forged signatures can extract effective features (called the S-vector), which are common in forging activities such as hesitation and delay before drawing the complicated part. The proposed scheme also exploits an autoencoder (AE) as a classifier, and the S-vector is used as the input vector to the AE. An AE has high accuracy for one-class distinction problems such as signature verification, and is also greatly dependent on the accuracy of the input data. The S-vector is valuable as the input of the AE and, consequently, leads to improved verification accuracy, especially for distinguishing forged signatures. Compared to the previous work, i.e., the MLP-based finger-drawn signature verification scheme, the proposed scheme decreases the equal error rate by 13.7 percentage points, specifically from 18.1% to 4.4%, for discriminating forged signatures.

  14. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. To perform the experiments, the FormSpring.me dataset is used, and the effects of preprocessing methods, several classifiers (C4.5, Naïve Bayes, kNN, and SVM), and the information gain and chi-square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbully detection performance. When the classifiers are compared, C4.5 performs best on the dataset used.
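The chi-square feature selection the study investigates scores each term by its 2×2 contingency with the class label, after the alphabetic tokenization the study found to work best. A toy sketch (the corpus, labels, and terms are invented for illustration):

```python
import re

def alphabetic_tokens(text):
    """Alphabetic tokenization: keep only runs of letters."""
    return re.findall(r"[a-z]+", text.lower())

def chi_square(term, docs, labels):
    """Chi-square statistic of term presence vs. the positive class."""
    a = b = c = d = 0  # a: present+pos, b: present+neg, c: absent+pos, d: absent+neg
    for doc, y in zip(docs, labels):
        present = term in alphabetic_tokens(doc)
        if present and y:
            a += 1
        elif present:
            b += 1
        elif y:
            c += 1
        else:
            d += 1
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

docs = ["you are stupid", "stupid idiot", "have a nice day", "nice work"]
labels = [1, 1, 0, 0]  # 1 = bullying, 0 = not
score_stupid = chi_square("stupid", docs, labels)
score_class_neutral = chi_square("a", docs, labels)
```

Terms that co-occur strongly with one class get high scores and survive selection; class-neutral terms score low and are dropped, which is how feature selection shrinks the vocabulary before training C4.5 or the other classifiers.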

  15. Optimized Feature Extraction for Temperature-Modulated Gas Sensors

    Directory of Open Access Journals (Sweden)

    Alexander Vergara

    2009-01-01

    One of the most serious limitations to the practical utilization of solid-state gas sensors is the drift of their signal. Even though drift is rooted in the chemical and physical processes occurring in the sensor, improved signal processing is generally considered a methodology to increase sensor stability. Several studies have evidenced the augmented stability of time-variable signals elicited by modulation of either the gas concentration or the operating temperature. Furthermore, when time-variable signals are used, feature extraction can be accomplished in a shorter time than that necessary to calculate the usual features defined under steady-state conditions. In this paper, we discuss the stability properties of distinct dynamic features using an array of metal-oxide-semiconductor gas sensors whose working temperature is modulated with optimized multisinusoidal signals. Experiments were aimed at measuring the dispersion of sensor features in repeated sequences of a limited number of experimental conditions. Results showed that the features extracted during the temperature modulation reduce the multidimensional data dispersion among repeated measurements. In particular, the Energy Signal Vector provided an almost constant classification rate over time with respect to the temperature modulation.

  16. Preparing Silica Aerogel Monoliths via a Rapid Supercritical Extraction Method

    Science.gov (United States)

    Gorka, Caroline A.

    2014-01-01

    A procedure for the fabrication of monolithic silica aerogels in eight hours or less via a rapid supercritical extraction process is described. The procedure requires 15-20 min of preparation time, during which a liquid precursor mixture is prepared and poured into wells of a metal mold that is placed between the platens of a hydraulic hot press, followed by several hours of processing within the hot press. The precursor solution consists of a 1.0:12.0:3.6:3.5 × 10^-3 molar ratio of tetramethylorthosilicate (TMOS):methanol:water:ammonia. In each well of the mold, a porous silica sol-gel matrix forms. As the temperature of the mold and its contents is increased, the pressure within the mold rises. After the temperature/pressure conditions surpass the supercritical point for the solvent within the pores of the matrix (in this case, a methanol/water mixture), the supercritical fluid is released, and monolithic aerogel remains within the wells of the mold. With the mold used in this procedure, cylindrical monoliths of 2.2 cm diameter and 1.9 cm height are produced. Aerogels formed by this rapid method have comparable properties (low bulk and skeletal density, high surface area, mesoporous morphology) to those prepared by other methods that involve either additional reaction steps or solvent extractions (lengthier processes that generate more chemical waste). The rapid supercritical extraction method can also be applied to the fabrication of aerogels based on other precursor recipes. PMID:24637334

  17. Novel feature extraction method for hyperspectral remote sensing image

    Science.gov (United States)

    Liu, Chunhong; Zhao, Huijie

    2007-11-01

    In order to reduce the high dimensionality of hyperspectral remote sensing images and concentrate optimal information into the reduced bands, this paper proposes a new feature extraction method with two steps. The first step reduces the dimensionality by selecting highly informative, weakly correlated bands according to indexes calculated by a smart band selection (SBS) method. The criteria the SBS method follows are: (1) the selected bands carry the most information; (2) the selected bands have the smallest correlation with the other bands. The second step decomposes the selected bands with a novel second-generation wavelet, predicting and updating subimages on rectangular and quincunx grids with Neville filters, and finally uses variance weighting as the fusion weight. A 126-band HYMAP hyperspectral dataset was used to test the effect of the new method. The results showed that classification accuracy is increased by using the novel feature extraction method.
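The two SBS criteria (most information, smallest correlation with other bands) suggest a greedy selection loop such as the following sketch. The variance proxy for information, the correlation threshold, and the toy band data are assumptions for illustration, not the paper's actual index calculations:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def correlation(xs, ys):
    """Pearson correlation coefficient of two bands."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_bands(bands, k=2, max_corr=0.95):
    """Greedily keep high-variance bands weakly correlated to those kept."""
    order = sorted(range(len(bands)), key=lambda i: variance(bands[i]),
                   reverse=True)
    selected = []
    for i in order:
        if all(abs(correlation(bands[i], bands[j])) < max_corr for j in selected):
            selected.append(i)
        if len(selected) == k:
            break
    return selected

bands = [
    [1, 2, 3, 4],      # band 0: low variance, perfectly correlated with band 1
    [10, 20, 30, 40],  # band 1: high variance
    [5, 1, 9, 2],      # band 2: moderate variance, nearly uncorrelated
]
chosen = select_bands(bands)
```

Band 1 is kept for its variance; band 0, despite coming next to hand, is rejected because it duplicates band 1, and the uncorrelated band 2 fills the second slot.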

  18. Rapid extraction and assay of uranium from environmental surface samples

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Christopher A.; Chouyyok, Wilaiwan; Speakman, Robert J.; Olsen, Khris B.; Addleman, Raymond Shane

    2017-10-01

    Extraction methods enabling faster removal and concentration of uranium compounds for improved trace and low-level assay are demonstrated for standard surface sampling material in support of nuclear safeguards efforts, health monitoring, and other nuclear analysis applications. A key problem with the existing surface sampling swipes is the requirement for complete digestion of sample and sampling matrix. This is a time-consuming and labour-intensive process that limits laboratory throughput, elevates costs, and increases background levels. Various extraction methods are explored for their potential to quickly and efficiently remove different chemical forms of uranium from standard surface sampling material. A combination of carbonate and peroxide solutions is shown to give the most rapid and complete form of uranyl compound extraction and dissolution. This rapid extraction process is demonstrated to be compatible with standard inductive coupled plasma mass spectrometry methods for uranium isotopic assay as well as screening techniques such as x-ray fluorescence. The general approach described has application beyond uranium to other analytes of nuclear forensic interest (e.g., rare earth elements and plutonium) as well as heavy metals for environmental and industrial hygiene monitoring.

  19. Extracting BI-RADS Features from Portuguese Clinical Texts

    Science.gov (United States)

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2013-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method. PMID:23797461

  20. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    Science.gov (United States)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
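The curvature-based dune-toe pick described above can be sketched on a single cross-shore elevation transect, using the second difference as a discrete curvature proxy (the profile values are illustrative; the actual workflow operates on a 0.5 m gridded DEM):

```python
def dune_toe_index(z):
    """Pick the transect index of maximum discrete curvature.

    z is a list of elevations ordered seaward-to-landward along one
    cross-shore transect; the toe is taken at the sharpest concave-up
    slope break, approximated by the second difference z[i-1]-2z[i]+z[i+1].
    """
    curvature = [z[i - 1] - 2 * z[i] + z[i + 1] for i in range(1, len(z) - 1)]
    # Offset by 1 because curvature is only defined at interior points.
    return 1 + max(range(len(curvature)), key=lambda i: curvature[i])

# Flat beach (indices 0-4), then a sharp break onto the dune face.
profile = [1.0, 1.0, 1.0, 1.0, 1.0, 3.0, 5.0, 7.0, 9.0]
toe = dune_toe_index(profile)
```

With the toe located, the transect splits naturally: beach and berm statistics are computed seaward of `toe`, foredune slope and roughness landward of it, matching the partition described in the abstract.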

  1. Tauberian-Prony feature extraction technique for esophageal motility patterns.

    Science.gov (United States)

    Abou-Chadi, F E; Ezzat, F A; Gad-Elhak, N; Sif el-Din, A A

    1993-01-01

    For the esophageal contractile activity recorded during swallowing, a feature extraction scheme has been developed. It recognizes the time, duration, and amplitudes of local peaks for each peristaltic wave. The method is based on the Tauberian approximation for modeling waveforms as a sum of identically shaped pulses with different time delays and amplitudes. Initial conditions on the pulse properties are set and an optimal solution is sought. The method is completely automated and can be utilized for characterization and classification purposes.

  2. Feature Extraction and Pattern Identification for Anemometer Condition Diagnosis

    Directory of Open Access Journals (Sweden)

    Longji Sun

    2012-01-01

    Cup anemometers are commonly used for wind speed measurement in the wind industry. Anemometer malfunctions lead to excessive errors in measurement and directly influence the wind energy development for a proposed wind farm site. This paper is focused on feature extraction and pattern identification to solve the anemometer condition diagnosis problem of the PHM 2011 Data Challenge Competition. Since the accuracy of anemometers can be severely affected by environmental factors such as icing and the tubular tower itself, in order to distinguish failures of the anemometers themselves from these factors, our methodologies start with eliminating irregular data (outliers under the influence of environmental factors. For paired data, the relation between the relative wind speed difference and the wind direction is extracted as an important feature to reflect normal or abnormal behaviors of paired anemometers. Decisions regarding the condition of paired anemometers are made by comparing the features extracted from training and test data. For shear data, a power law model is fitted using the preprocessed and normalized data, and the sum of the squared residuals (SSR is used to measure the health of an array of anemometers. Decisions are made by comparing the SSRs of training and test data. The performance of our proposed methods was evaluated through the competition website. As a final result, our team ranked second place overall in both the student and professional categories of this competition.
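For the shear data, the power-law fit and SSR health measure can be sketched as follows, fitting the wind power law v(z) = v_ref·(z/z_ref)^alpha by log-log least squares. The heights and speeds below are toy numbers, not the competition data:

```python
import math

def fit_power_law(heights, speeds, z_ref):
    """Least-squares fit of ln(v) against ln(z/z_ref); the slope is alpha."""
    xs = [math.log(z / z_ref) for z in heights]
    ys = [math.log(v) for v in speeds]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    v_ref = math.exp(my - alpha * mx)  # intercept back-transformed
    return v_ref, alpha

def ssr(heights, speeds, z_ref, v_ref, alpha):
    """Sum of squared residuals of the fitted profile: the health measure."""
    return sum((v - v_ref * (z / z_ref) ** alpha) ** 2
               for z, v in zip(heights, speeds))

heights = [10.0, 40.0, 80.0]
speeds = [5.0, 6.25, 7.0]  # roughly follows v = 5 * (z/10)**0.16
v_ref, alpha = fit_power_law(heights, speeds, 10.0)
residual = ssr(heights, speeds, 10.0, v_ref, alpha)
```

A healthy anemometer array yields a small SSR; a stuck or iced sensor at one height breaks the power-law shape and inflates the SSR relative to the training baseline, which is the comparison the paper's decision rule makes.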

  3. Dominant color and texture feature extraction for banknote discrimination

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-07-01

    Banknote discrimination with image recognition technology is significant in many applications. Traditional methods based on image recognition only recognize the banknote denomination without discriminating counterfeit banknotes. To solve this problem, we propose a systematic banknote discrimination approach using dominant color and texture features. After capturing the visible and infrared images of the test banknote, we first implement tilt correction based on the principal component analysis (PCA) algorithm. Second, we extract the dominant color feature of the visible banknote image to recognize the denomination. Third, we propose an adaptively weighted local binary pattern with "delta" tolerance algorithm to extract the texture features of the infrared banknote image. Finally, we discriminate genuine from counterfeit banknotes by comparing the texture features of the test banknote with those of a benchmark banknote. The proposed approach is tested using 14,000 banknotes of six different denominations of Chinese yuan (CNY). The experimental results show 100% accuracy for denomination recognition and 99.92% accuracy for counterfeit banknote discrimination.
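The "delta"-tolerance idea in the texture step can be sketched with a plain local binary pattern in which a neighbour counts as 1 only when it exceeds the centre pixel by more than delta. The adaptive weighting is not reproduced, and the pixel values and delta are illustrative:

```python
def lbp_code(img, y, x, delta=0):
    """8-neighbour LBP code at (y, x) with a tolerance margin delta."""
    centre = img[y][x]
    # Neighbours clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dy, dx in offsets:
        code = (code << 1) | (1 if img[y + dy][x + dx] > centre + delta else 0)
    return code

img = [[52, 50, 49],
       [53, 50, 48],
       [54, 55, 51]]
plain = lbp_code(img, 1, 1, delta=0)     # strict comparison
tolerant = lbp_code(img, 1, 1, delta=3)  # small differences ignored
```

With delta = 0, sensor noise of a grey level or two flips bits of the code; the tolerance suppresses those flips so that only clear intensity transitions contribute to the texture histogram compared against the benchmark note.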

  4. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within closely related racial groups. As a sample of such a group, we choose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This is indispensable fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.

  5. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    Science.gov (United States)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project uses remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imagery. A free mapping software named Google Earth Pro (GEP) is used to load the satellite imagery as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attributing building features in GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging in GEP depends on the satellite imagery or on the half-meter-resolution orthophotographs obtained during LiDAR acquisition, not on the three-meter accuracy of handheld GPS. The attributed building features are overlaid on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imagery may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imagery.

  6. An image-processing methodology for extracting bloodstain pattern features.

    Science.gov (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Parasitic slow extraction of extremely weak beam from a high-intensity proton rapid cycling synchrotron

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ye [University of Science and Technology of China, Hefei, Anhui 230029 (China); Institute of High Energy Physics, CAS, Yuquan Road 19B, Beijing 100049 (China); Tang, Jingyu, E-mail: tangjy@ihep.ac.cn [University of Science and Technology of China, Hefei, Anhui 230029 (China); Institute of High Energy Physics, CAS, Yuquan Road 19B, Beijing 100049 (China); Yang, Zheng; Jing, Hantao [Institute of High Energy Physics, CAS, Yuquan Road 19B, Beijing 100049 (China)

    2014-02-11

    This paper proposes a novel method to extract an extremely weak beam from a high-intensity proton rapid cycling synchrotron (RCS) in parasitic mode, while maintaining the normal fast extraction. The usual slow extraction method from a synchrotron, employing third-order resonance, cannot be applied in a high-intensity RCS due to the very short flat-top at the extraction energy and the strict control of beam loss. The proposed parasitic slow extraction method moves the beam to scrape a scattering foil prior to the fast beam extraction, by employing a local orbit bump, a momentum deviation, or their combination, so that the halo part of the beam is scattered. Part of the scattered particles are extracted from the RCS and guided to the experimental area. The slow extraction process can last a few milliseconds before the beam is extracted by the fast extraction system. The method has been applied to the RCS of the China Spallation Neutron Source. With an extraction energy of 1.6 GeV, an average current of 62.5 μA and a repetition rate of 25 Hz for the RCS, the proton intensity obtained by the slow extraction method can be up to 2×10^4 protons per cycle, or 5×10^5 protons per second. The extracted beam also has a good time structure, approximately uniform within a spill, which is required for many applications such as detector tests. Detailed studies are presented, including the scattering effect in the foil, the local orbit bump produced by the bump magnets and the dispersive orbit bump produced by modifying the RF pattern, multi-particle simulations with the ORBIT and TURTLE codes, and some technical features of the extraction magnets.
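    The quoted intensities are mutually consistent, which a few lines of arithmetic confirm (the elementary charge is the only constant needed):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

avg_current = 62.5e-6        # A   (from the abstract)
rep_rate = 25.0              # Hz  (from the abstract)
slow_per_cycle = 2e4         # slow-extracted protons per cycle (from the abstract)

# Circulating protons per cycle implied by the average current:
protons_per_cycle = avg_current / (E_CHARGE * rep_rate)   # about 1.56e13

# Slow-extracted rate, and the tiny fraction of the beam it represents:
slow_per_second = slow_per_cycle * rep_rate               # 5e5 protons/s
fraction = slow_per_cycle / protons_per_cycle             # about 1.3e-9
```

    The slow-extracted beam is thus roughly a billionth of the circulating intensity, which is why it can be taken parasitically without disturbing normal operation.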

  8. Extraction of texture features with a multiresolution neural network

    Science.gov (United States)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears different depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
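    As a rough sketch of the orientation-feature stage (not the authors' network), the following computes Sobel gradients and bins local orientation into the four angles that are multiples of π/4; the function name and binning rule are assumptions:

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def orientation_histogram(img):
    """Histogram of local gradient orientation over 4 bins (multiples of pi/4)."""
    h, w = len(img), len(img[0])
    hist = [0, 0, 0, 0]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                     for dy in range(3) for dx in range(3))
            gy = sum(SOBEL_Y[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                     for dy in range(3) for dx in range(3))
            if gx == 0 and gy == 0:
                continue  # flat region: no defined orientation
            angle = math.atan2(gy, gx) % math.pi          # orientation mod pi
            hist[int(round(angle / (math.pi / 4))) % 4] += 1
    return hist
```

    On a vertical step edge the gradient is horizontal everywhere, so all votes land in the 0-radian bin; the paper instead feeds the per-pixel dominant-orientation image to the classifier.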

  9. Angiographic features of rapidly involuting congenital hemangioma (RICH)

    Energy Technology Data Exchange (ETDEWEB)

    Konez, Orhan; Burrows, Patricia E. [Department of Radiology, Children' s Hospital Boston, Harvard Medical School, 300 Longwood Avenue, Boston, MA 02115 (United States); Mulliken, John B. [Division of Plastic Surgery, Children' s Hospital Boston, Harvard Medical School, 300 Longwood Avenue, Boston, MA 02115 (United States); Fishman, Steven J. [Department of Pediatric Surgery, Children' s Hospital Boston, Harvard Medical School, 300 Longwood Avenue, Boston, MA 02115 (United States); Kozakewich, Harry P.W. [Department of Pathology, Children' s Hospital Boston, Harvard Medical School, 300 Longwood Avenue, Boston, MA 02115 (United States)

    2003-01-01

    Rapidly involuting congenital hemangioma (RICH) is a recently recognized entity in which the vascular tumor is fully developed at birth and undergoes rapid involution. Angiographic findings in two infants with congenital hemangioma are reported and compared with a more common postnatal infantile hemangioma and a congenital infantile fibrosarcoma. Congenital hemangiomas differed from infantile hemangiomas angiographically by inhomogeneous parenchymal staining, large and irregular feeding arteries in disorganized patterns, arterial aneurysms, direct arteriovenous shunts, and intravascular thrombi. Both infants had clinical evidence of high-output cardiac failure and intralesional bleeding. This congenital high-flow vascular tumor is difficult to distinguish angiographically from arteriovenous malformation and congenital infantile fibrosarcoma. (orig.)

  10. A multi-approach feature extractions for iris recognition

    Science.gov (United States)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. As iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, unlike fingerprints, which can be altered by several factors including accidental damage, dry or oily skin and dust. Although iris recognition has been studied for more than a decade, limited commercial products are available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment and computational complexity. At the present time, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and matching. In this paper, we adopt a directional high-low pass filter for feature extraction. A box-counting fractal dimension and iris code have been proposed as feature representations. Our approach has been tested on the CASIA iris image database and the results are considered successful.
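    The box-counting fractal dimension used as one of the feature representations can be sketched generically (this is not tied to the authors' iris pipeline): cover the point set with boxes of decreasing size s and fit the slope of log N(s) against log(1/s).

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a set of (x, y) pixels by box counting.

    points: iterable of integer (x, y) coordinates of foreground pixels.
    Returns the least-squares slope of log N(s) versus log(1/s).
    """
    logs = []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}   # occupied boxes of side s
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))
```

    A filled square yields dimension 2 and a straight line yields 1; iris textures fall in between, which is what makes the value discriminative.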

  11. Entropy Analysis as an Electroencephalogram Feature Extraction Method

    Directory of Open Access Journals (Sweden)

    P. I. Sotnikov

    2014-01-01

    Full Text Available The aim of this study was to evaluate the possibility of using entropy analysis as an electroencephalogram (EEG) feature extraction method in brain-computer interfaces (BCI). The first section of the article describes the proposed algorithm, based on calculating characteristic features with the Shannon entropy. The second section discusses issues of classifier development for the EEG records. We use a support vector machine (SVM) as the classifier. The third section describes the test data. Further, we estimate the efficiency of the considered feature extraction method and compare it with a number of other methods. These methods include: evaluation of signal variance; estimation of power spectral density (PSD); estimation of autoregression model parameters; signal analysis using the continuous wavelet transform; and construction of a common spatial pattern (CSP) filter. As a measure of efficiency we use the probability of correctly recognized types of imagined movements. At the last stage we evaluate the impact of EEG signal preprocessing methods on the final classification accuracy. Finally, we conclude that entropy analysis has good prospects in BCI applications.
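    A minimal sketch of a Shannon-entropy feature (one value per EEG epoch), assuming a simple amplitude histogram; the bin count and binning scheme are choices the abstract does not specify:

```python
import math
from collections import Counter

def shannon_entropy(signal, n_bins=8):
    """Shannon entropy (bits) of an amplitude histogram of one EEG epoch."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0            # guard against a flat signal
    counts = Counter(min(int((v - lo) / width), n_bins - 1) for v in signal)
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

    A constant signal gives 0 bits, while samples spread evenly over all 8 bins give the maximum log2(8) = 3 bits; these per-channel values would then feed the SVM classifier.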

  12. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method.
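    The abstract does not detail the improved DE variant; below is a plain DE/rand/1/bin sketch applied to feature selection, where each individual is a real vector in [0,1]^n and components above 0.5 mark selected features. The thresholding rule, parameters and names are assumptions, and the cost function is supplied by the caller (in the paper it would be classifier accuracy).

```python
import random

def de_select(n_feats, cost, pop_size=20, gens=60, F=0.5, CR=0.9, seed=1):
    """Toy differential evolution (DE/rand/1/bin) for feature selection."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_feats)] for _ in range(pop_size)]
    def mask(v):                       # decode a real vector into a feature mask
        return [c > 0.5 for c in v]
    scores = [cost(mask(v)) for v in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(n_feats)
            trial = [
                min(1.0, max(0.0, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                if (rng.random() < CR or j == j_rand) else pop[i][j]
                for j in range(n_feats)
            ]
            s = cost(mask(trial))
            if s <= scores[i]:         # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return mask(pop[best]), scores[best]
```

    With a toy cost counting mismatches against a known target mask, the sketch converges quickly on such a small search space.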

  13. Rapid extraction and preservation of genomic DNA from human samples.

    Science.gov (United States)

    Kalyanasundaram, D; Kim, J-H; Yeo, W-H; Oh, K; Lee, K-H; Kim, M-H; Ryew, S-M; Ahn, S-G; Gao, D; Cangelosi, G A; Chung, J-H

    2013-02-01

    Simple and rapid extraction of human genomic DNA remains a bottleneck for genome analysis and disease diagnosis. Current methods using microfilters require cumbersome, multiple handling steps in part because salt conditions must be controlled for attraction and elution of DNA in porous silica. We report a novel extraction method of human genomic DNA from buccal swab and saliva samples. DNA is attracted onto a gold-coated microchip by an electric field and capillary action while the captured DNA is eluted by thermal heating at 70 °C. A prototype device was designed to handle four microchips, and a compatible protocol was developed. The extracted DNA using microchips was characterized by qPCR for different sample volumes, using different lengths of PCR amplicon, and nuclear and mitochondrial genes. In comparison with a commercial kit, an equivalent yield of DNA extraction was achieved with fewer steps. Room-temperature preservation for 1 month was demonstrated for captured DNA, facilitating straightforward collection, delivery, and handling of genomic DNA in an environment-friendly protocol.

  14. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and technology has had a major impact, giving rise to a new kind of business called e-commerce. Many e-commerce sites make transactions convenient, and consumers can also provide reviews or opinions on products they have purchased. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyse the strengths and weaknesses of their own and of competitors' products. With so many opinions, a method is needed that lets the reader grasp the point of the opinions as a whole. The idea emerged from review summarization, which summarizes overall opinion based on the sentiments and features it contains. In this study, the main domain of focus is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of a product; 3) identifying whether an opinion is positive or negative; 4) summarizing the result. The methods discussed include Naïve Bayes for sentiment classification and a feature extraction algorithm based on dependency analysis, one of the tools in Natural Language Processing (NLP), together with a knowledge-based dictionary useful for handling implicit features. The end result of the research is a summary that aggregates consumer reviews by feature and sentiment. With the proposed method, sentiment classification accuracy is 81.2% for positive test data and 80.2% for negative test data, and feature extraction accuracy reaches 90.3%.
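    The sentiment-classification step can be sketched as multinomial Naïve Bayes with Laplace smoothing; the toy reviews, tokenisation, and function names below are illustrative, not from the paper.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Multinomial NB with Laplace smoothing."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def classify_nb(model, tokens):
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, n_docs in class_counts.items():
        lp = math.log(n_docs / total_docs)            # log prior
        n_words = sum(word_counts[label].values())
        for t in tokens:                              # smoothed log likelihoods
            lp += math.log((word_counts[label][t] + 1) / (n_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

    In the paper this classifier is applied per extracted feature mention rather than per whole review, so each (feature, sentiment) pair can be summarized separately.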

  15. Javanese Character Feature Extraction Based on Shape Energy

    Directory of Open Access Journals (Sweden)

    Galih Hendra Wibowo

    2017-07-01

    Full Text Available The Javanese script is part of Indonesia's noble culture, especially in Java. However, the number of Javanese people able to read the script has decreased, so conservation efforts are needed in the form of a system able to recognize its characters. One solution to this problem lies in Optical Character Recognition (OCR) studies, where one of the hardest parts is feature extraction, which must distinguish each character. Shape Energy is one feature extraction method, with the basic idea that a character can be distinguished simply through its skeleton. Building on this idea, the feature extraction is developed, based on its components, to produce an angular histogram with various multiples of the angle. The performance of this method and of its basic method is evaluated on a Javanese character dataset of 240 samples with 19 labels, obtained from various images, using K-Nearest Neighbors as the classification method. Accuracy obtained through cross-validation is 80.83% for the angular histogram with an angle of 20 degrees, 23% better than Shape Energy. In addition, other test results show that this method is able to recognize rotated characters, with the lowest performance of 86% at 180-degree rotation and the highest of 96.97% at 90-degree rotation. It can be concluded that this method improves on the performance of Shape Energy for recognition of Javanese characters and is also robust to rotation.

  16. Waveform feature extraction algorithms for IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Wallraff, Marius; Boersma, David; Wiebusch, Christopher [III. Physikalisches Institut, RWTH Aachen (Germany)

    2010-07-01

    The IceCube Neutrino Observatory at the South Pole consists of digital optical modules (DOMs) deep in the ice, equipped with photomultipliers to capture Cherenkov light induced by muons and other particles. These DOMs digitize the analogue pulse shapes of the photomultiplier signals. The large amount of information has to be condensed for later particle track and energy reconstructions. This talk presents a new framework (the NewFeatureExtractor) to extract the arrival times and the number of photons. Three algorithms have been implemented in this framework to analyze different types of waveforms. Their performance is tested by comparison between experimental and simulated data and by comparison with earlier algorithms.
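    The abstract does not describe the three algorithms; a minimal leading-edge discriminator, one common way to pull arrival times out of a digitised waveform, looks like this (illustrative, not IceCube code):

```python
def pulse_arrival_times(waveform, dt, threshold):
    """Interpolated threshold-crossing times of rising edges in a sampled waveform.

    waveform: list of samples; dt: sample spacing; threshold: discriminator level.
    """
    times = []
    for i in range(1, len(waveform)):
        a, b = waveform[i - 1], waveform[i]
        if a < threshold <= b:                       # rising edge crosses threshold
            frac = (threshold - a) / (b - a)         # linear interpolation
            times.append((i - 1 + frac) * dt)
    return times
```

    Real feature extractors also estimate photon counts (e.g. by integrating charge above baseline) and must unfold overlapping pulses, which is where the three specialised algorithms come in.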

  17. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization.

    Directory of Open Access Journals (Sweden)

    Daniel Kress

    Full Text Available Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.

  18. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization

    Science.gov (United States)

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones. PMID:26107413

  19. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization.

    Science.gov (United States)

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones.

  20. Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals

    Science.gov (United States)

    Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin

    How to analyze and identify the characteristics of radiation sources, and to estimate the threat level by detecting, intercepting and locating them, has been the central issue of electronic support in electronic warfare, and communication signal recognition is one of the keys to solving it. Aiming at accurately extracting the individual characteristics of a radiation source in the increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for the individual characteristics of communication radiation sources, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the environmental noise, fractal dimension characteristics of different complexity are used to describe the subtle characteristics of the signal and establish a characteristic database, and different broadcasting stations are then identified with grey relational theory. The simulation results demonstrate that the algorithm can achieve a recognition rate of 94% even in an environment with an SNR of -10 dB, which provides an important theoretical basis for the accurate identification of subtle signal features at low SNR in the field of information confrontation.
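    The grey relational identification step can be sketched as Deng's grey relational grade computed between a reference feature vector (the received signal's fractal-dimension features) and each database entry; this is a generic formulation, and the paper's normalisation and distinguishing coefficient are not given (rho = 0.5 is the conventional default).

```python
def grey_relational_grades(ref, candidates, rho=0.5):
    """Deng's grey relational grade of each candidate vector w.r.t. a reference.

    ref: reference feature vector; candidates: list of feature vectors
    (assumed pre-normalised to comparable scales). Returns one grade per
    candidate; the highest grade identifies the best-matching station.
    """
    all_deltas = [[abs(r - c) for r, c in zip(ref, cand)] for cand in candidates]
    flat = [d for row in all_deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in all_deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) if dmax else 1.0
                  for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

    Identification then reduces to taking the argmax over the database grades.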

  1. Featured Image: Making a Rapidly Rotating Black Hole

    Science.gov (United States)

    Kohler, Susanna

    2017-10-01

    These stills from a simulation show the evolution (from left to right and top to bottom) of a high-mass X-ray binary over 1.1 days, starting after the star on the right fails to explode as a supernova and then collapses into a black hole. Many high-mass X-ray binaries, like the well-known Cygnus X-1 (the first source widely accepted to be a black hole), host rapidly spinning black holes. Despite our observations of these systems, however, we're still not sure how these objects end up with such high rotation speeds. Using simulations like that shown above, a team of scientists led by Aldo Batta (UC Santa Cruz) has demonstrated how a failed supernova explosion can result in such a rapidly spinning black hole. The authors' work shows that in a binary where one star attempts to explode as a supernova and fails (it doesn't succeed in unbinding the star), the large amount of fallback material can interact with the companion star and then accrete onto the black hole, spinning it up in the process. You can read more about the authors' simulations and conclusions in the paper below. Citation: Aldo Batta et al 2017 ApJL 846 L15. doi:10.3847/2041-8213/aa8506

  2. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout this paper.
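    The abstract does not enumerate the features collected by the instrumented viewer; the sketch below counts the kind of static indicators such systems commonly tally in raw PDF bytes (the keyword list is illustrative, and stream counting is deliberately crude):

```python
import re

# Name objects often associated with risky behaviour (illustrative selection).
SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/Launch", b"/AA",
              b"/EmbeddedFile"]

def pdf_keyword_features(raw):
    """Count risk-associated PDF name objects in a file's raw bytes.

    raw: bytes of the PDF file. Returns a dict of feature name -> count,
    suitable as input to a machine-learning classifier.
    """
    feats = {kw.decode(): raw.count(kw) for kw in SUSPICIOUS}
    # Indirect object headers look like "12 0 obj".
    feats["n_objects"] = len(re.findall(rb"\d+\s+\d+\s+obj", raw))
    # Crude: this also counts "endstream" occurrences.
    feats["n_streams"] = raw.count(b"stream")
    return feats
```

    Byte-level counting like this is exactly what obfuscation defeats, which is why the paper parses the file with an instrumented viewer instead of scanning the raw bytes alone.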

  3. Clinical feasibility of rapid confocal melanoma feature detection

    Science.gov (United States)

    Hennessy, Ricky; Jacques, Steve; Pellacani, Giovanni; Gareau, Daniel

    2010-02-01

    In vivo reflectance confocal microscopy shows promise for the early detection of malignant melanoma (MM). One diagnostic trait of malignancy is the presence of pagetoid melanocytes in the epidermis. For automated detection of MM, this feature must be identified quantitatively through software. Beginning with in vivo, noninvasive confocal images of 10 lesions (unequivocal MMs and benign nevi), we developed a pattern recognition algorithm that automatically identified pagetoid melanocytes in all four MMs and identified none in five benign nevi. One data set was discarded due to artifacts caused by patient movement. With future work to bring the performance of this pattern recognition technique to the level of clinicians on difficult lesions, melanoma diagnosis could be brought to primary care facilities and save many lives through improved early diagnosis.

  4. Rapid extraction of lexical tone phonology in Chinese characters: a visual mismatch negativity study.

    Directory of Open Access Journals (Sweden)

    Xiao-Dong Wang

    Full Text Available BACKGROUND: In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese, the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. The issue of whether the phonological information is rapidly and automatically extracted in Chinese characters by the brain has not yet been thoroughly addressed. METHODOLOGY/PRINCIPAL FINDINGS: We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophones of Chinese characters: The phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. CONCLUSIONS/SIGNIFICANCE: We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN, indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN involved neural activation of the visual cortex, suggesting that the visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage.

  5. Rapid extraction of lexical tone phonology in Chinese characters: a visual mismatch negativity study.

    Science.gov (United States)

    Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng

    2013-01-01

    In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese, the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. The issue of whether the phonological information is rapidly and automatically extracted in Chinese characters by the brain has not yet been thoroughly addressed. We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophones of Chinese characters: The phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN involved neural activation of the visual cortex, suggesting that the visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage.

  6. PCA Fault Feature Extraction in Complex Electric Power Systems

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2010-08-01

    Full Text Available The electric power system is one of the most complex artificial systems in the world. The complexity is determined by its characteristics regarding constitution, configuration, operation, organization, etc. Faults in an electric power system cannot be completely avoided. When an electric power system goes from the normal state to failure or an abnormal state, its electric quantities (currents, voltages, angles, etc.) may change significantly. Our research indicates that the variable with the biggest coefficient in a principal component usually corresponds to the fault. Therefore, utilizing real-time measurements from phasor measurement units, and based on principal component analysis, we have successfully extracted the distinct features of the fault component. Of course, because of the complexity of the different types of faults in electric power systems, there still exist many problems that need close and intensive study.
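    The core idea (flag the measurement with the largest coefficient in the first principal component) can be sketched with plain power iteration on the sample covariance matrix; the toy data and names below are illustrative, not the paper's PMU measurements.

```python
def first_pc(data, iters=200):
    """First principal component (unit loading vector) of row-samples `data`,
    found by power iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]   # center
    cov = [[sum(X[k][i] * X[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                     # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

    The index of the largest-magnitude loading then points at the fault-related variable, as the abstract suggests.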

  7. Analyzing edge detection techniques for feature extraction in dental radiographs

    Directory of Open Access Journals (Sweden)

    Kanika Lakhani

    2016-09-01

    Full Text Available Several dental problems can be detected using radiographs, but the main issue with radiographs is that the features of interest are not very prominent. In this paper, two well-known edge detection techniques have been implemented for a set of 20 radiographs, and the number of pixels in each image has been calculated. Further, a Gaussian filter has been applied to smooth the images and highlight the defect in the tooth. If image data are available in pixel form for both healthy and decayed teeth, the images can easily be compared using edge detection techniques, and diagnosis becomes much easier. Further, the Laplacian edge detection technique is applied to sharpen the edges of a given image. The aim is to detect discontinuities in dental radiographs relative to the original healthy tooth. Future work includes feature extraction on the images for the classification of dental problems.
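    The Gaussian-smoothing-plus-Laplacian pipeline described above can be sketched with scipy.ndimage. The toy "radiograph" below is a synthetic square standing in for a tooth; the threshold and sigma values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from scipy import ndimage

def edge_pixel_count(image, sigma=1.0, threshold=0.1):
    """Smooth with a Gaussian, apply the Laplacian, and count the
    pixels whose response magnitude exceeds a threshold."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    response = ndimage.laplace(smoothed)       # second-derivative edges
    edges = np.abs(response) > threshold
    return int(edges.sum())

# Toy "radiograph": a bright square (tooth) on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
healthy = edge_pixel_count(img)

# A "decayed" tooth with a cavity produces extra edge pixels,
# which is the kind of difference the comparison relies on.
img_decay = img.copy()
img_decay[28:36, 28:36] = 0.0
assert edge_pixel_count(img_decay) > healthy
```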

  8. Matrix exponential based discriminant locality preserving projections for feature extraction.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian; Wang, Zhongqun

    2018-01-01

    Discriminant locality preserving projections (DLPP), which has shown good performance in pattern recognition, is a feature extraction algorithm based on manifold learning. However, DLPP suffers from the well-known small sample size (SSS) problem, in which the number of samples is smaller than the dimension of the samples. In this paper, we propose a novel matrix exponential based discriminant locality preserving projections (MEDLPP) method. MEDLPP addresses the SSS problem elegantly, since the matrix exponential of a symmetric matrix is always positive definite. Nevertheless, the computational complexity of MEDLPP is high, since it requires solving a large matrix exponential eigenproblem. We therefore also present an efficient algorithm for solving MEDLPP, and the main idea behind it generalizes to other matrix exponential based methods. Experimental results on several data sets demonstrate that the proposed algorithm outperforms many state-of-the-art discriminant analysis methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
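    The key property the record relies on — the matrix exponential of a symmetric matrix is always positive definite, so a singular scatter matrix becomes invertible — is easy to verify numerically. This is a demonstration of the mathematical fact only, not of the MEDLPP algorithm itself.

```python
import numpy as np
from scipy.linalg import expm

# A rank-deficient symmetric "scatter" matrix, as in the SSS setting:
S = np.array([[1.0, 1.0],
              [1.0, 1.0]])          # eigenvalues 2 and 0 → singular
print(np.linalg.matrix_rank(S))     # → 1

# Its matrix exponential has eigenvalues e^2 and e^0 = 1, so it is
# symmetric positive definite and therefore invertible.
E = expm(S)
eigvals = np.linalg.eigvalsh(E)
assert np.all(eigvals > 0)
```

Since exp maps each eigenvalue λ of S to e^λ > 0, any eigenproblem that would be degenerate for S is well posed for exp(S), which is exactly how MEDLPP sidesteps the SSS problem.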

  9. A bio-inspired feature extraction for robust speech recognition.

    Science.gov (United States)

    Zouhir, Youssef; Ouni, Kaïs

    2014-01-01

    In this paper, a feature extraction method for robust speech recognition in noisy environments is proposed. The method is motivated by a biologically inspired auditory model which simulates outer/middle ear filtering with a low-pass filter and the spectral behaviour of the cochlea with the Gammachirp auditory filterbank (GcFB). The speech recognition performance of our method is tested on speech signals corrupted by real-world noises. The evaluation results show that the proposed method gives better recognition rates than classic techniques such as Perceptual Linear Prediction (PLP), Linear Predictive Coding (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Mel Frequency Cepstral Coefficients (MFCC). The recognition system used is based on Hidden Markov Models with continuous Gaussian Mixture densities (HMM-GM).

  10. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model widely used to extract human facial features for recognition. However, the intensity values used in the original AAM cannot provide enough image texture information, which leads to larger errors or fitting failures. To overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. First, a translation-invariant wavelet transform is performed on the face images; the image structure is then represented by a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structures.

  11. Real-time hypothesis driven feature extraction on parallel processing architectures

    DEFF Research Database (Denmark)

    Granmo, O.-C.; Jensen, Finn Verner

    2002-01-01

    Feature extraction in content-based indexing of media streams is often computationally intensive. Typically, a parallel processing architecture is necessary for real-time performance when extracting features brute force. On the other hand, Bayesian network based systems for hypothesis driven feature......, rather than one-by-one. Thereby, the advantages of parallel feature extraction can be combined with the advantages of hypothesis driven feature extraction. The technique is based on a sequential backward feature set search and a correlation based feature set evaluation function. In order to reduce

  12. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    Science.gov (United States)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution (10 meter) remotely sensed imagery is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment, correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories: forest, new residential, old residential, and industrial, for each variation in texture parameters.
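    The spatial gray tone dependence matrix and the Haralick-style measures named above can be written directly in numpy. The sketch below uses one offset (distance 1, horizontal angle) and tiny toy patches; it illustrates the definitions rather than the study's actual parameter sweep.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Spatial gray tone dependence (co-occurrence) matrix for one
    offset, normalised to a probability matrix of gray tone pairs."""
    P = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[image[i, j], image[i + dy, j + dx]] += 1
    return P / P.sum()

def haralick(P):
    """Three of the classic Haralick texture measures."""
    eps = 1e-12
    asm = np.sum(P ** 2)                       # angular second moment
    entropy = -np.sum(P * np.log(P + eps))
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)        # inertia/contrast
    return asm, entropy, contrast

# A uniform patch has maximal ASM and zero contrast...
flat = np.zeros((8, 8), dtype=int)
asm_f, ent_f, con_f = haralick(glcm(flat))
# ...while a checkerboard of 0/1 tones is maximally high-contrast.
checker = np.indices((8, 8)).sum(axis=0) % 2
asm_c, ent_c, con_c = haralick(glcm(checker))
assert con_f == 0 and con_c > con_f and asm_f > asm_c
```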

  13. Unsupervised segmentation of heel-strike IMU data using rapid cluster estimation of wavelet features.

    Science.gov (United States)

    Yuwono, Mitchell; Su, Steven W; Moulton, Bruce D; Nguyen, Hung T

    2013-01-01

    When undertaking gait analysis, one of the most important events to consider is the heel strike (HS). Signals from a waist-worn Inertial Measurement Unit (IMU) provide sufficient accelerometric and gyroscopic information for estimating gait parameters and identifying HS events. In this paper we propose a novel adaptive, unsupervised, and parameter-free identification method for the detection of HS events during gait episodes. Our proposed method allows the device to learn and adapt to the profile of the user without supervision. The algorithm is completely parameter-free and requires no prior fine-tuning. Autocorrelation features (ACF) of both the antero-posterior acceleration (aAP) and the medio-lateral acceleration (aML) are used to determine cadence episodes. The Discrete Wavelet Transform (DWT) features of signal peaks during cadence are extracted and clustered using Swarm Rapid Centroid Estimation (Swarm RCE). Left HS (LHS), Right HS (RHS), and movement artifacts are clustered based on intra-cluster correlation. Initial pilot testing of the system on 8 subjects shows promising results: up to 84.3%±9.2% and 86.7%±6.9% average accuracy, with 86.8%±9.2% and 88.9%±7.1% average precision, for the segmentation of LHS and RHS respectively.
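    The autocorrelation step that detects cadence can be sketched as follows: the normalised autocorrelation of the antero-posterior acceleration peaks at the step period. The synthetic trace and the lag window are illustrative assumptions, not the paper's data or settings.

```python
import numpy as np

def acf(signal, max_lag):
    """Normalised autocorrelation features of an acceleration trace;
    a strong peak at a nonzero lag indicates a cadence period."""
    x = signal - signal.mean()
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full[:max_lag + 1] / full[0]        # r(0) normalised to 1

# Synthetic antero-posterior acceleration with a ~25-sample step period.
rng = np.random.default_rng(1)
t = np.arange(500)
a_ap = np.sin(2 * np.pi * t / 25) + 0.2 * rng.normal(size=500)
r = acf(a_ap, max_lag=40)
period = int(np.argmax(r[10:]) + 10)    # skip the lag-0 neighbourhood
print(period)                            # ≈ 25
```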

  14. Pomegranate peel and peel extracts: chemistry and food features.

    Science.gov (United States)

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes for synthetic food additives and as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigation of the toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Evaluation of wavelet techniques in rapid extraction of ABR variations from underlying EEG.

    Science.gov (United States)

    De Silva, A C; Schier, M A

    2011-11-01

    The aim of this study is to identify an effective wavelet method for denoising and tracking temporal variations of the auditory brainstem response (ABR). Rapid and accurate extraction of ABRs in clinical practice has numerous benefits, including reduced clinical test times and potential long-term patient monitoring applications. One route to rapid extraction is wavelet filtering, which earlier research has shown to be promising for denoising signals with low signal-to-noise ratios. The research documented in this paper evaluates three such wavelet approaches on a common set of ABR data collected from eight participants. We introduced the use of the latency-intensity curve of ABR wave V to evaluate performance in tracking temporal variations. Applying these methods to the ABR required establishing threshold functions and time windows as an integral part of the research. Results revealed that cyclic-shift-tree-denoising was superior to the other tested approaches: it required an ensemble of only 32 epochs to extract a fully featured ABR, compared with the 1024 epochs needed by conventional ABR extraction based on linear moving time averaging.
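    The basic wavelet-shrinkage operation underlying such methods — transform, threshold the detail coefficients, invert — can be sketched with a one-level Haar transform in plain numpy. This is a generic illustration of wavelet denoising, not the cyclic-shift-tree-denoising algorithm of the study; the "wave V"-like bump and the threshold are invented for the example.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet shrinkage: transform, soft-threshold the
    detail coefficients, and invert (an even-length signal is assumed)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)       # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)       # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# A slow evoked-response-like bump buried in noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 256)
clean = np.exp(-((t - 0.5) ** 2) / 0.005)
noisy = clean + 0.3 * rng.normal(size=256)
denoised = haar_denoise(noisy, threshold=0.3)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
assert err_after < err_before          # shrinkage reduces the noise energy
```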

  16. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.

  17. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    Science.gov (United States)

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in extracting useful features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
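    A genetic search over frequency-domain features can be sketched as follows: individuals are boolean masks over FFT bins, and fitness is the interclass-to-intraclass distance ratio the record mentions. Everything here (the toy two-class signals, population size, mutation rate) is an invented minimal example, not the GAFDS implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def band_energy(signals, mask):
    """Feature = summed FFT power over the bins selected by the mask."""
    spec = np.abs(np.fft.rfft(signals, axis=1)) ** 2
    return spec[:, mask].sum(axis=1)

def fitness(mask, signals, labels):
    """Ratio of interclass distance to intraclass spread of the feature."""
    if not mask.any():
        return 0.0
    f = band_energy(signals, mask)
    m0, m1 = f[labels == 0].mean(), f[labels == 1].mean()
    s0, s1 = f[labels == 0].std(), f[labels == 1].std()
    return abs(m0 - m1) / (s0 + s1 + 1e-12)

# Two toy "EEG" classes differing only in 10 Hz power (1 s at 128 Hz).
n, fs = 60, 128
t = np.arange(fs) / fs
labels = np.repeat([0, 1], n // 2)
signals = rng.normal(size=(n, fs))
signals[labels == 1] += 2 * np.sin(2 * np.pi * 10 * t)

# Tiny GA over frequency-bin masks: elitist selection, crossover, mutation.
bins = fs // 2 + 1
pop = rng.random((20, bins)) < 0.5
for _ in range(30):
    scores = np.array([fitness(m, signals, labels) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]        # sort best-first
    for i in range(10, 20):                    # refill the worst half
        a, b = pop[rng.integers(10)], pop[rng.integers(10)]
        cut = rng.integers(bins)
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(bins) < 0.02       # mutation
        pop[i] = child
best = pop[0]
# The surviving masks concentrate on the discriminative 10 Hz bin.
```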

  18. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    Science.gov (United States)

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    Abstract In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in extracting useful features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  19. Extraction of Wavelet Based Features for Classification of T2-Weighted MRI Brain Images

    OpenAIRE

    Ms. Yogita K.Dubey; Mushrif, Milind M.

    2012-01-01

    Extraction of discriminative features is a very important task in classification algorithms. This paper presents a technique for extracting cosine-modulated features for classification of T2-weighted MRI images of the human brain. The better discrimination and low design and implementation complexity of cosine-modulated wavelets are effectively utilized to give better features and more accurate classification results. The proposed technique consists of two stages, namely, feature extraction, ...

  20. Feature Extraction in IR Images Via Synchronous Video Detection

    Science.gov (United States)

    Shepard, Steven M.; Sass, David T.

    1989-03-01

    IR video images acquired by scanning imaging radiometers are subject to several problems which make measurement of small temperature differences difficult. Among these problems are 1) aliasing, which occurs when events at frequencies higher than the video frame rate are observed, 2) limited temperature resolution imposed by the 3-bit digitization available in existing commercial systems, and 3) susceptibility to noise and background clutter. Bandwidth narrowing devices (e.g. lock-in amplifiers or boxcar averagers) are routinely used to achieve a high degree of signal-to-noise improvement for time-varying 1-dimensional signals. We will describe techniques which allow similar S/N improvement for 2-dimensional imagery acquired with an off-the-shelf scanning imaging radiometer system. These techniques are implemented in near-real-time, utilizing a microcomputer and specially developed hardware and software. We will also discuss the application of the system to feature extraction in cluttered images, and to the acquisition of events which vary faster than the frame rate.

  1. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, and yet the details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  2. Sliding Window Based Feature Extraction and Traffic Clustering for Green Mobile Cyberphysical Systems

    Directory of Open Access Journals (Sweden)

    Jiao Zhang

    2017-01-01

    Full Text Available Both the densification of small base stations and the diversity of user activities bring huge challenges for today's heterogeneous networks, imposing either heavy burdens on base stations or serious energy waste. In order to ensure network coverage while reducing total energy consumption, we adopt a green mobile cyberphysical system (MCPS) to handle this problem. In this paper, we propose a feature extraction method that uses a sliding window to extract the distribution feature of mobile user equipment (UE), and a case study demonstrates that the method is efficacious in preserving the clustering distribution feature. Furthermore, we present a traffic clustering analysis to categorize collected traffic distribution samples into a limited set of traffic patterns, where the patterns and their corresponding optimized control strategies are applied to similar traffic distributions for rapid control of base station states. Experimental results show that the sliding window is superior to the grid method in enabling higher UE coverage. Moreover, the optimized control strategy obtained from the traffic pattern achieves high coverage, serving over 98% of all mobile UE for similar traffic distributions.
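    The sliding-window extraction of a UE distribution feature can be sketched as a density map: a window slides over the service area and counts the UE it contains, so overlapping windows preserve cluster structure better than a disjoint grid. The hotspot coordinates and window sizes below are illustrative assumptions.

```python
import numpy as np

def sliding_window_density(positions, size, window, step):
    """Count UE per window as it slides over a square service area;
    overlapping windows preserve the clustering distribution."""
    xs = np.arange(0, size - window + 1, step)
    density = np.zeros((len(xs), len(xs)))
    for i, x0 in enumerate(xs):
        for j, y0 in enumerate(xs):
            inside = ((positions[:, 0] >= x0) & (positions[:, 0] < x0 + window) &
                      (positions[:, 1] >= y0) & (positions[:, 1] < y0 + window))
            density[i, j] = inside.sum()
    return density

# 200 UE clustered around a hotspot at (30, 30) in a 100 m square area.
rng = np.random.default_rng(4)
ue = rng.normal(30, 5, size=(200, 2))
d = sliding_window_density(ue, size=100, window=20, step=5)
peak = np.unravel_index(np.argmax(d), d.shape)
# The densest window sits over the hotspot, recovering the cluster.
```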

  3. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in practically unlimited quantities, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting the appropriate unlabeled samples used in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As a hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection for unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  4. PyEEG: an open source Python module for EEG/MEG feature extraction.

    Science.gov (United States)

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous work, we implemented many EEG feature extraction functions in the Python programming language. As Python gains more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
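    One of the classic EEG features of the kind PyEEG provides is the Petrosian fractal dimension, which can be computed in a few lines from the sign changes of the first difference. The implementation below is a self-contained numpy sketch of the standard formula, not code taken from the PyEEG module.

```python
import numpy as np

def petrosian_fd(x):
    """Petrosian fractal dimension of a 1-D signal, computed from the
    number of sign changes in the first difference."""
    diff = np.diff(x)
    n_delta = np.sum(diff[1:] * diff[:-1] < 0)   # sign changes
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

# White noise changes direction far more often than a smooth sine,
# so its fractal dimension is higher.
rng = np.random.default_rng(5)
smooth = np.sin(np.linspace(0, 4 * np.pi, 1000))
noisy = rng.normal(size=1000)
assert petrosian_fd(noisy) > petrosian_fd(smooth)
```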

  5. Bag-of-visual-words based feature extraction for SAR target classification

    Science.gov (United States)

    Amrani, Moussa; Chaib, Souleyman; Omara, Ibrahim; Jiang, Feng

    2017-07-01

    Feature extraction plays a key role in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR); choosing appropriate features to train a classifier is a crucial prerequisite. Inspired by the great success of Bag-of-Visual-Words (BoVW), we address the problem of feature extraction by proposing a novel feature extraction method for SAR target classification. First, Gabor-based features are extracted from the training SAR images. Second, a discriminative codebook is generated using the K-means clustering algorithm. Third, after feature encoding by computing the closest Euclidean distance, the targets are represented by a new robust bag of features. Finally, for target classification, a support vector machine (SVM) is used as a baseline classifier. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public release dataset are conducted, and the classification accuracy and time complexity results demonstrate that the proposed method outperforms state-of-the-art methods.
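    The BoVW pipeline in the record — cluster local descriptors into a codebook, then encode each image as a histogram of nearest visual words — can be sketched with a plain Lloyd's k-means in numpy. The random 8-D vectors stand in for the Gabor-based descriptors; dimensions and codebook size are illustrative.

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Plain k-means codebook over local descriptors (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        assign = dist.argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = descriptors[assign == c].mean(axis=0)
    return centers

def bovw_histogram(descriptors, centers):
    """Encode an image as a normalised histogram of nearest visual words."""
    dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    words = dist.argmin(axis=1)                # closest Euclidean distance
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(6)
train_desc = rng.normal(size=(300, 8))       # stand-ins for Gabor features
codebook = build_codebook(train_desc, k=16)
h = bovw_histogram(rng.normal(size=(40, 8)), codebook)
# h is the fixed-length bag-of-features vector fed to the SVM.
```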

  6. Extracting features from protein sequences to improve deep extreme learning machine for protein fold recognition.

    Science.gov (United States)

    Ibrahim, Wisam; Abadeh, Mohammad Saniee

    2017-05-21

    Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from amino-acid sequences to obtain better classifiers. In this paper, we propose six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) has been implemented to reduce the number of extracted features. The extracted feature vectors have been used with the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features have been extracted from the second stage and used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in the SCOP datasets. The experimental results show that the feature vectors extracted in the first stage improve the ability of the DELM to extract new useful features in the second stage. Copyright © 2017 Elsevier Ltd. All rights reserved.
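    The simplest kind of sequence descriptor in this family is amino-acid composition: the 20 residue frequencies, which turn a variable-length sequence into a fixed-length vector. The sketch below illustrates that generic idea; it is not one of the paper's six descriptors, and the example sequence is arbitrary.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def aa_composition(sequence):
    """Amino-acid composition: the 20 residue frequencies, a classic
    fixed-length descriptor of a variable-length protein sequence."""
    counts = np.array([sequence.count(aa) for aa in AMINO_ACIDS], float)
    return counts / max(len(sequence), 1)

vec = aa_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
assert vec.shape == (20,) and np.isclose(vec.sum(), 1.0)
```

Descriptors like this can be concatenated into the feature vectors that a downstream dimensionality-reduction and classification pipeline consumes.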

  7. Designing basin-customized combined drought indices via feature extraction

    Science.gov (United States)

    Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

    The socio-economic costs of drought are progressively increasing worldwide due to the ongoing alteration of hydro-meteorological regimes induced by climate change. Although drought management is widely studied in the literature, most traditional drought indexes fail to detect critical events in highly regulated systems, which generally rely on ad-hoc formulations that cannot be generalized to different contexts. In this study, we contribute a novel framework for the design of a basin-customized drought index. This index represents a surrogate of the state of the basin and is computed by combining the available information about the water available in the system to reproduce a target variable representative of the drought condition of the basin (e.g., water deficit). To select the relevant variables and how to combine them, we use an advanced feature extraction algorithm called the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS). The W-QEISS algorithm relies on a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables (cardinality) and optimizing the relevance and redundancy of the subset. The accuracy objective is evaluated through the calibration of a pre-defined model (i.e., an extreme learning machine) of the water deficit for each candidate subset of variables, with the index selected from the resulting solutions identifying a suitable compromise between accuracy, cardinality, relevance, and redundancy. The proposed methodology is tested in the case study of Lake Como in northern Italy, a regulated lake mainly operated for irrigation supply to four downstream agricultural districts. In the absence of an institutional drought monitoring system, we constructed the combined index using all the hydrological variables from the existing monitoring system as well as the most common drought indicators at multiple time aggregations.

  8. Rapid DNA extraction of bacterial genome using laundry detergents ...

    African Journals Online (AJOL)

    Genomic DNA extraction from bacterial cells involves processes normally performed in most biological laboratories, and various methods, both manual and kit-based, have been offered; however, these methods may be time-consuming and costly. In this paper, genomic DNA extraction from Pseudomonas aeruginosa was investigated ...

  9. FEVER : Extracting Feature-oriented Changes from Commits

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2016-01-01

    The study of the evolution of highly configurable systems requires a thorough understanding of the three core ingredients of such systems: (1) the underlying variability model; (2) the assets that together implement the configurable features; and (3) the mapping from variable features to actual assets.

  10. Manifold learning based feature extraction for classification of hyperspectral data

    CSIR Research Space (South Africa)

    Lunga, D

    2014-01-01

    Full Text Available Interest in manifold learning for representing the topology of large, high dimensional nonlinear data sets in lower, but still meaningful dimensions for visualization and classification has grown rapidly over the past decade, and particularly...

  11. Feature curve extraction from point clouds via developable strip intersection

    Directory of Open Access Journals (Sweden)

    Kai Wah Lee

    2016-04-01

    Full Text Available In this paper, we study the problem of computing smooth feature curves from CAD-type point cloud models. The proposed method reconstructs feature curves from the intersections of pairs of developable strips which approximate the regions along both sides of the features. The generation of developable surfaces is based on a linear approximation of the given point cloud through a variational shape approximation approach. A line segment sequencing algorithm is proposed for collecting feature line segments into different feature sequences, as well as sequential groups of data points. A developable surface approximation procedure is employed to refine the incident approximation planes of data points into developable strips. Experimental results are included to demonstrate the performance of the proposed method.

  12. A Fault Feature Extraction Method for Motor Bearing and Transmission Analysis

    Directory of Open Access Journals (Sweden)

    Wu Deng

    2017-04-01

    Full Text Available Roller bearings are the most widely used and most easily damaged mechanical parts in rotating machinery; their running state directly affects rotating machinery performance. Empirical mode decomposition (EMD) easily suffers from illusive components and the mode mixing problem. From the viewpoint of feature extraction, this paper proposes a new feature extraction method that integrates ensemble empirical mode decomposition (EEMD), the correlation coefficient method, and the Hilbert transform to extract fault features and identify fault states for motor bearings. In the proposed method, EEMD is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with different frequency components. The correlation coefficient method is then used to select the IMF components with the largest correlation coefficients, which are processed with the Hilbert transform. The resulting envelope spectra are analyzed to extract the fault feature frequency and identify the fault state by comparison with the theoretical value. Finally, the fault signal transmission performance of the vibration signals of the bearing inner ring and outer ring at the drive end and fan end is studied in depth. The experimental results show that the proposed feature extraction method can effectively eliminate the influence of mode mixing and extract the fault feature frequency, and that the energy of the vibration signal of the bearing outer ring at the fan end is lost during transmission of the vibration signal. It is an effective method for extracting the fault feature of a bearing from noise with interference.
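    The envelope-spectrum step — Hilbert transform, amplitude envelope, FFT — can be sketched with scipy.signal. The toy signal below, a 3 kHz resonance amplitude-modulated at a 100 Hz fault frequency, is an invented stand-in for a real bearing vibration; the EEMD/IMF-selection stages are omitted.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Envelope spectrum: Hilbert transform → amplitude envelope → FFT.
    A bearing fault shows up as a peak at its characteristic frequency."""
    envelope = np.abs(hilbert(x))          # amplitude envelope
    envelope -= envelope.mean()            # drop the DC component
    spec = np.abs(np.fft.rfft(envelope)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs, spec

# Toy fault signature: a 3 kHz resonance modulated at 100 Hz (1 s at 20 kHz).
fs = 20000
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)
peak_hz = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(peak_hz)                              # ≈ 100 Hz, the fault frequency
```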

  13. Rapid metal extractability tests from polluted mining soils by ultrasound probe sonication and microwave-assisted extraction systems.

    Science.gov (United States)

    García-Salgado, Sara; Quijano, M Ángeles

    2016-12-01

    Ultrasonic probe sonication (UPS) and microwave-assisted extraction (MAE) were used for rapid single extraction of Cd, Cr, Cu, Ni, Pb, and Zn from soils polluted by former mining activities (Mónica Mine, Bustarviejo, NW Madrid, Spain), using 0.01 mol L-1 calcium chloride (CaCl2), 0.43 mol L-1 acetic acid (CH3COOH), and 0.05 mol L-1 ethylenediaminetetraacetic acid (EDTA) at pH 7 as extracting agents. The optimum extraction conditions by UPS were an extraction time of 2 min for both the CaCl2 and EDTA extractions and 15 min for the CH3COOH extraction, at 30% ultrasound (US) amplitude, whereas for MAE they were 5 min at 50 °C for both the CaCl2 and EDTA extractions and 15 min at 120 °C for the CH3COOH extraction. Extractable concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES). The proposed methods were compared with a reduced version of the corresponding single extraction procedures proposed by the Standards, Measurements and Testing Programme (SM&T). The results showed great variability in extraction percentages, depending on the metal, the total concentration level and the soil sample, reaching high values in some areas. However, correlation analysis showed that total concentration is the most relevant factor for element extractability in these soil samples. From the results obtained, accelerated extraction procedures such as MAE and UPS can be considered a useful approach for rapidly evaluating the extractability of the metals studied.

  14. Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images

    Science.gov (United States)

    Eken, S.; Aydın, E.; Sayar, A.

    2017-11-01

    In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.

  15. DIFET: DISTRIBUTED FEATURE EXTRACTION TOOL FOR HIGH SPATIAL RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    S. Eken

    2017-11-01

    Full Text Available In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.

  16. Rule set transferability for object-based feature extraction

    NARCIS (Netherlands)

    Anders, N.S.; Seijmonsbergen, Arie C.; Bouten, Willem

    2015-01-01

    Cirques are complex landforms resulting from glacial erosion and can be used to estimate Equilibrium Line Altitudes and infer climate history. Automated extraction of cirques may help research on glacial geomorphology and climate change. Our objective was to test the transferability of an

  17. An Effective Fault Feature Extraction Method for Gas Turbine Generator System Diagnosis

    Directory of Open Access Journals (Sweden)

    Jian-Hua Zhong

    2016-01-01

    Full Text Available Fault diagnosis is very important to maintain the operation of a gas turbine generator system (GTGS) in power plants, where any abnormal situation will interrupt the electricity supply. Fault diagnosis of the GTGS faces the main challenge that the acquired data, vibration or sound signals, contain a great deal of redundant information, which extends the fault identification time and degrades the diagnostic accuracy. To improve diagnostic performance in the GTGS, an effective fault feature extraction framework is proposed to solve the problem of signal disorder and redundant information in the acquired signal. The proposed framework combines feature extraction with a general machine learning method, the support vector machine (SVM), to implement intelligent fault diagnosis. The feature extraction method adopts wavelet packet transform and time-domain statistical features to extract fault features from the vibration signal. To further reduce the redundant information in the extracted features, kernel principal component analysis is applied in this study. Experimental results indicate that the proposed feature extraction technique is an effective method for extracting useful fault features, resulting in improved fault diagnosis performance for the GTGS.

  18. Feature Extraction and Fusion Using Deep Convolutional Neural Networks for Face Detection

    Directory of Open Access Journals (Sweden)

    Xiaojun Lu

    2017-01-01

    Full Text Available This paper proposes a method that uses feature fusion to represent images better for face detection after feature extraction by a deep convolutional neural network (DCNN). First, we learn features from the data with Clarifai net and VGG Net-D (16 layers), respectively; then we fuse the features extracted by the two nets. To obtain a more compact feature representation and mitigate computational complexity, we reduce the dimension of the fused features by PCA. Finally, we conduct face classification with an SVM classifier for binary classification. In particular, we exploit offset max-pooling to extract features densely with a sliding window, which leads to better matches between faces and detection windows; thus the detection result is more accurate. Experimental results show that our method can detect faces with severe occlusion and large variations in pose and scale. In particular, our method achieves an 89.24% recall rate on FDDB and 97.19% average precision on AFW.

  19. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    Science.gov (United States)

    Qin, B.; SUN, G. D.; ZHANG, L. Y.; WANG, J. G.; HU, J.

    2017-05-01

    For fault classification models based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter: the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different fault feature modes by variational mode decomposition. The fault features of each mode are then formed into a high-dimensional feature vector set using permutation entropy. Second, the ELM output function is expressed by the inner product of a Gaussian kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability.
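Permutation entropy, used above to turn each decomposed mode into a feature, is compact enough to sketch directly. This is a standard Bandt-Pompe implementation in pure Python, not the authors' code:

```python
import math
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    # Normalised Bandt-Pompe permutation entropy of a 1-D sequence.
    counts = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = [x[i + j * delay] for j in range(order)]
        # The ordinal pattern is the argsort of the window.
        counts[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))  # scaled to [0, 1]
```

A perfectly monotone signal yields entropy 0 (one ordinal pattern), while broadband noise approaches 1, which is what makes the measure useful for separating fault modes.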

  20. Improving features used for hyper-temporal land cover change detection by reducing the uncertainty in the feature extraction method

    CSIR Research Space (South Africa)

    Salmon, BP

    2017-07-01

    Full Text Available This work investigates the effect that the length of a temporal sliding window has on the success of detecting land cover change. It is shown that using a short-time Fourier transform as a feature extraction method provides meaningful, robust input to a machine learning method. In theory...

  1. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, variation in facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expressions are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established on cloud generators. With the forward cloud generator, facial expression images can be re-generated in any number for visually representing the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper is concluded with closing remarks.
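The forward normal cloud generator used above is well documented in the cloud-model literature and can be sketched as follows. Parameter names follow the usual expectation/entropy/hyper-entropy convention (Ex, En, He); the seed and drop counts are illustrative, not taken from the paper:

```python
import math
import random

def forward_cloud(Ex, En, He, n, seed=42):
    # Forward normal cloud generator: n drops of (value, membership degree).
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_i = rng.gauss(En, He)        # entropy perturbed by hyper-entropy He
        if En_i == 0:
            En_i = 1e-12                # avoid division by zero
        x = rng.gauss(Ex, abs(En_i))    # position of the cloud drop
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))
        drops.append((x, mu))
    return drops
```

The hyper-entropy He controls how "fuzzy" the cloud is: with He = 0 the drops fall on a single Gaussian membership curve, while larger He thickens the cloud, which is how the model represents uncertainty in the expression features.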

  2. Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction.

    Science.gov (United States)

    Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn

    2017-12-01

    The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing volumes of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. These features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times compared to the previous CPU system.

  3. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias [Knoxville, TN]; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  4. Rapid new methods for paint collection and lead extraction.

    Science.gov (United States)

    Gutknecht, William F; Harper, Sharon L; Winstead, Wayne; Sorrell, Kristen; Binstock, David A; Salmons, Cynthia A; Haas, Curtis; McCombs, Michelle; Studabaker, William; Wall, Constance V; Moore, Curtis

    2009-01-01

    Chronic exposure of children to lead can result in permanent physiological impairment. In adults, it can cause irritability, poor muscle coordination, and damage to the sense organs and to the nerves controlling the body. Surfaces coated with lead-containing paints are potential sources of exposure to lead. In April 2008, the U.S. Environmental Protection Agency (EPA) finalized new requirements to reduce exposure to lead hazards created by renovation, repair, and painting activities that disturb lead-based paint. On-site, inexpensive identification of lead-based paint is required. Two steps have been taken to meet this challenge. First, this paper presents a new, highly efficient paint collection method based on the use of a modified wood drill bit. Second, this paper presents a novel, one-step approach for quantitatively grinding and extracting lead from paint samples for subsequent lead determination. This latter method uses a high-revolutions-per-minute rotor with stator to break up the paint into approximately 50-micron particles. Nitric acid (25%, v/v) is used to extract the lead, with recoveries above 95% for real-world paints, National Institute of Standards and Technology standard reference materials, and audit samples from the American Industrial Hygiene Association's Environmental Lead Proficiency Analytical Testing Program. This quantitative extraction procedure, when paired with quantitative paint sample collection and lead determination, may enable the development of a lead paint test kit that meets the specifications of the final EPA rule.

  5. Malicious JavaScript Detection by Features Extraction

    Directory of Open Access Journals (Sweden)

    Gerardo Canfora

    2015-06-01

    Full Text Available In recent years, JavaScript-based attacks have become one of the most common and successful types of attack. Existing techniques for detecting malicious JavaScript can fail for different reasons. Some techniques are tailored to specific kinds of attacks and are ineffective against others. Other techniques require costly computational resources. Still others can be circumvented with evasion methods. This paper proposes a method for detecting malicious JavaScript code based on five features that capture different characteristics of a script, including execution time, externally referenced domains, and calls to JavaScript functions. Mixing different types of features can result in a more effective detection technique and overcome the limitations of existing tools created for identifying malicious JavaScript. The experimentation carried out suggests that a combination of these features successfully detects malicious JavaScript code (in the best cases we obtained a precision of 0.979 and a recall of 0.978).
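As a hedged illustration of the kind of static counting such features involve (not the paper's exact five-feature set, whose names here are invented), a script's externally referenced domains and function calls might be tallied like this:

```python
import re

def script_features(source):
    # Crude static counts over a JavaScript snippet; the feature names
    # are illustrative, not the paper's feature definitions.
    return {
        "external_domains": len(set(re.findall(r"https?://([\w.-]+)", source))),
        "function_calls": len(re.findall(r"\b\w+\s*\(", source)),
        "eval_uses": len(re.findall(r"\beval\s*\(", source)),
    }
```

A real detector would combine such static counts with dynamic measurements (e.g. execution time in an instrumented interpreter) before feeding them to a classifier.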

  6. Semantic control of feature extraction from natural scenes.

    Science.gov (United States)

    Neri, Peter

    2014-02-05

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.

  7. Feature extraction using multi-temporal fully polarimetric SAR data

    Science.gov (United States)

    Ramya, M. N. S.; Kumar, Shashi

    2016-05-01

    The main objective of this study was to explore the potential of multi-temporal PolSAR data for LULC mapping and to evaluate the classification accuracy using single-date and multi-temporal data. Multi-temporal data acquired on three different dates were used. The advanced classification techniques of the Support Vector Machine (SVM) and a Rule-Based Hierarchical approach were applied to multi-temporal ALOS PALSAR data to classify features at different temporal combinations. In this study, SVM classification was applied to the output of the Yamaguchi decomposition model, using a second-order polynomial kernel. In the Rule-Based Hierarchical approach, backscattering coefficients and Yamaguchi and H/A/Alpha decomposition statistics were computed and analyzed to estimate the decision boundaries separating features at different hierarchical levels. SVM classified single-date PolSAR data efficiently; the highest overall accuracy and kappa statistic achieved from an individual image were 67.65% and 0.61. For the rule-based classified map of a single date, the highest overall accuracy and kappa statistic achieved were 68% and 0.67. Based on the accuracy assessment, SVM and Rule-Based classification achieved approximately the same accuracy, but Rule-Based classification was more consistent temporally. Rule-Based classification was therefore used for multi-temporal classification and achieved a high overall accuracy and kappa statistic of 80% and 0.76. This shows that multi-temporal PolSAR data help to increase classification accuracy in LULC mapping.

  8. Automatic extraction of disease-specific features from Doppler images

    Science.gov (United States)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, a continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient, which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows close agreement, as well as identification of new cases missed by echocardiographers.

  9. A Novel Feature Extraction Technique Using Binarization of Bit Planes for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available A number of techniques have been proposed earlier for feature extraction using image binarization. The efficiency of these techniques depends on proper threshold selection for the binarization method. In this paper, a new feature extraction technique using image binarization is proposed. The technique binarizes the significant bit planes of an image by selecting local thresholds. The proposed algorithm has been tested on a public dataset and compared with existing, widely used binarization-based feature extraction techniques. The results indicate that the proposed method outperforms all the existing techniques and shows consistent classification performance.
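Plain bit-plane slicing, the core operation behind the technique above, can be sketched in a few lines. Note that the paper additionally binarizes planes with local thresholds; this minimal pure-Python version (with an illustrative per-plane density feature) omits that step:

```python
def bit_planes(image, planes=(7, 6, 5)):
    # Slice the requested bit planes out of an 8-bit grayscale image
    # (a 2-D list of ints in 0..255); returns {plane: binary image}.
    return {p: [[(px >> p) & 1 for px in row] for row in image]
            for p in planes}

def plane_density(plane):
    # Fraction of 1-bits in a binary plane: a simple per-plane feature.
    flat = [b for row in plane for b in row]
    return sum(flat) / len(flat)
```

The most significant planes (7, 6, 5) carry most of the image structure, which is why they are the "significant" planes worth binarizing for classification features.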

  10. The research of edge extraction and target recognition based on inherent feature of objects

    Science.gov (United States)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on inherent features of objects, in which a cuboid was taken as the model. Based on analysis of the cuboid's natural contour and gray-level distribution characteristics, an overall fuzzy evaluation technique was used to recognize and segment the target. The Hough transform was then used to extract and match the model's main edges, and finally the target edges were reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field were summed up. With these, the aimless computations and searches in Hough transform processing can be greatly reduced and the efficiency improved. Secondly, since prior knowledge about the geometry of the cuboid's contour is available, we take the intersections of the extracted component edges and assess the geometry of candidate edge matches based on these intersections, rather than on the extracted edges themselves. The outlines are therefore enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, and robotics.

  11. Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Komeil Rokni

    2014-05-01

    Full Text Available Lake Urmia is the 20th largest lake and the second largest hypersaline lake in the world (before September 2010). It is also the largest inland body of salt water in the Middle East. Nevertheless, the lake has been in a critical situation in recent years due to decreasing surface water and increasing salinity. This study modeled the spatiotemporal changes of Lake Urmia in the period 2000–2013 using multi-temporal Landsat 5 TM, 7 ETM+ and 8 OLI images. In doing so, the applicability of different satellite-derived indexes, including the Normalized Difference Water Index (NDWI), Modified NDWI (MNDWI), Normalized Difference Moisture Index (NDMI), Water Ratio Index (WRI), Normalized Difference Vegetation Index (NDVI), and Automated Water Extraction Index (AWEI), was investigated for the extraction of surface water from Landsat data. Overall, the NDWI was found superior to the other indexes and hence was used to model the spatiotemporal changes of the lake. In addition, a new approach based on Principal Components of multi-temporal NDWI (NDWI-PCs) was proposed and evaluated for surface water change detection. The results indicate an intense decreasing trend in Lake Urmia's surface area in the period 2000–2013, especially between 2010 and 2013, when the lake lost about one third of its surface area compared to the year 2000. The results illustrate the effectiveness of the NDWI-PCs approach for surface water change detection, especially in detecting changes between two and three different times simultaneously.
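The NDWI that performed best above is simply a normalized band ratio, NDWI = (Green − NIR) / (Green + NIR). A minimal per-pixel sketch, with an illustrative water threshold of 0 (real studies tune this threshold per scene):

```python
def ndwi(green, nir):
    # NDWI = (Green - NIR) / (Green + NIR), computed per pixel over
    # 2-D lists of reflectance values; 0.0 where both bands are zero.
    return [[(g - n) / (g + n) if (g + n) else 0.0
             for g, n in zip(grow, nrow)]
            for grow, nrow in zip(green, nir)]

def water_mask(index, threshold=0.0):
    # Pixels whose NDWI exceeds the threshold are flagged as water.
    return [[v > threshold for v in row] for row in index]
```

Water reflects strongly in the green band and absorbs in the near-infrared, so water pixels push the index toward +1 while vegetation and soil push it negative.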

  12. Optimization-Based Approaches To Feature Extraction from Aerial Images

    Science.gov (United States)

    Fua, Pascal; Gruen, Armin; Li, Haihong

    Extracting cartographic objects from images is a difficult task because aerial images are inherently noisy, complex, and ambiguous. Using models of the objects of interest to guide the search has proved to be an effective approach that yields good results. In such an approach, the problem becomes one of fitting the models to the image data, which we phrase as an optimization problem. The appropriate optimization technique to use depends on the exact nature of the model. In this paper, we review and contrast some of the approaches we have developed for extracting cartographic objects and present the key aspects of their implementation. Using these techniques, rough initial sketches of 2-D and 3-D objects can automatically be refined, resulting in accurate models that can be guaranteed to be consistent with one another. We believe that such capabilities will prove indispensable to automating the generation of complex object databases from imagery, such as the ones required for high-resolution mapping, realistic simulations or intelligence analysis. (LNES 95, p. 190 ff.)

  13. Novel Method for Color Textures Features Extraction Based on GLCM

    Directory of Open Access Journals (Sweden)

    R. Hudec

    2007-12-01

    Full Text Available Texture is one of the most popular features for image classification and retrieval. Because grayscale textures provide enough information to solve many tasks, color information was long left unutilized. In recent years, however, many researchers have begun to take color information into consideration. In the texture analysis field, many algorithms have been enhanced to process color textures and new ones have been developed. In this paper, a new method for color GLCM textures is presented and compared with other well-known methods.
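A grey-level co-occurrence matrix (GLCM) and one Haralick statistic (contrast) can be sketched as follows for a single pixel offset; a color extension of the kind described above would run such computations per channel or per channel pair. This is an illustrative pure-Python version, not the paper's implementation:

```python
def glcm(image, levels, dx=1, dy=0, symmetric=True):
    # Normalised grey-level co-occurrence matrix for offset (dy, dx)
    # over a 2-D list of integer grey levels in 0..levels-1.
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    count = 0
    for i in range(rows):
        for j in range(cols):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < rows and 0 <= j2 < cols:
                a, b = image[i][j], image[i2][j2]
                m[a][b] += 1
                if symmetric:
                    m[b][a] += 1
                count += 2 if symmetric else 1
    for r in range(levels):
        for c in range(levels):
            m[r][c] /= count
    return m

def contrast(m):
    # Haralick contrast: sum over (i - j)^2 * p(i, j).
    return sum((i - j) ** 2 * p
               for i, row in enumerate(m)
               for j, p in enumerate(row))
```

Other Haralick statistics (energy, homogeneity, correlation) are computed from the same matrix, so one GLCM pass yields a whole texture feature vector.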

  14. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, grid-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data, as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of the convolution operation, typical layers, and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the paper describes the proposed block structure of a CNN for classifying crucial features from 3D sensor data.

  15. Static gesture recognition using features extracted from skeletal data

    CSIR Research Space (South Africa)

    Mangera, R

    2013-12-01

    Full Text Available is used to cluster the data and the performance of the feature vectors is evaluated on a collected static dataset. The rest of this paper is ordered as follows: Section II describes current gesture recognition systems. Section III details...: left shoulder (ls), right shoulder (rs), left elbow (le), right elbow (re), left hand (lh), right hand (rh) and head (he) joints. 1) Relative Joint Distances: For each pose, the 3-D distance between each of the 6 arm joints and the head joint...

  16. Residual signal feature extraction for gearbox planetary stage fault detection

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Ursin, Thomas; Sweeney, Christian Walsted

    2017-01-01

    Faults in planetary gears and related bearings, e.g. planet bearings and planet carrier bearings, pose inherent difficulties for their accurate and consistent detection, associated mainly with the low energy in slow rotating stages and the operating complexity of planetary gearboxes. In this work, statistical features measuring the signal energy and Gaussianity are calculated from the residual signals between each pair from the first to the fifth tooth mesh frequency of the meshing process in a multi-stage wind turbine gearbox. The suggested algorithm includes resampling from the time to the angular domain...

  17. Despeckle and geographical feature extraction in SAR images by wavelet transform

    Science.gov (United States)

    Gupta, Karunesh K.; Gupta, Rajiv

    This paper presents a method to despeckle Synthetic Aperture Radar (SAR) images and then extract geographical features from them. In this work, speckle is reduced by multiscale analysis in the wavelet domain. In terms of geographical feature preservation, the results show that the method is better than spatial-domain filters such as the Lee, Kuan, Frost, Enhanced Frost, Median, and Gamma filters. Geographical features such as roads, airport runways, rivers and other ribbon-like structures are detected by the new wavelet-based method proposed by Yuan Yan Tang. Experimental results show that the proposed method extracts geographical features of different widths as well as different gray levels.
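Wavelet-domain denoising of the kind described rests on three steps: transform, threshold the detail coefficients, reconstruct. A one-level 1-D Haar sketch with soft thresholding is shown below as an illustration only; real SAR despeckling is 2-D, multiscale, and usually works on log-transformed intensities so that multiplicative speckle becomes additive:

```python
def haar_denoise(x, threshold):
    # One-level orthonormal Haar DWT, soft-threshold the detail band,
    # then reconstruct. x must have even length.
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[::2], x[1::2])]

    def soft(d):
        # Soft thresholding: shrink toward zero by `threshold`.
        return (abs(d) - threshold) * (1 if d > 0 else -1) if abs(d) > threshold else 0.0

    detail = [soft(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out
```

With the threshold at zero the transform round-trips exactly; with a large threshold the detail band vanishes and the output collapses to pairwise averages, which is the smoothing that suppresses speckle-like noise.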

  18. Electromembrane extraction as a rapid and selective miniaturized sample preparation technique for biological fluids

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Pedersen-Bjergaard, Stig; Seip, Knut Fredrik

    2015-01-01

    This special report discusses the sample preparation method electromembrane extraction, which was introduced in 2006 as a rapid and selective miniaturized extraction method. The extraction principle is based on isolation of charged analytes extracted from an aqueous sample, across a thin film of organic solvent, and into an aqueous receiver solution. The extraction is promoted by application of an electrical field, causing electrokinetic migration of the charged analytes. The method has been shown to perform excellent clean-up and selectivity from complicated aqueous matrices like biological fluids...

  19. Furuncular myiasis: a simple and rapid method for extraction of intact Dermatobia hominis larvae.

    Science.gov (United States)

    Boggild, Andrea K; Keystone, Jay S; Kain, Kevin C

    2002-08-01

    We report a case of furuncular myiasis complicated by Staphylococcus aureus infection and beta-hemolytic streptococcal cellulitis. The Dermatobia hominis larva that caused this lesion could not be extracted using standard methods, including suffocation and application of lateral pressure, and surgery was contraindicated because of cellulitis. The botfly maggot was completely and rapidly extracted with an inexpensive, disposable, commercial venom extractor.

  20. Supervised non-negative tensor factorization for automatic hyperspectral feature extraction and target discrimination

    Science.gov (United States)

    Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael

    2017-05-01

    Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.

  1. PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction

    Directory of Open Access Journals (Sweden)

    Forrest Sheng Bao

    2011-01-01

    Full Text Available Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous work, we implemented many EEG feature extraction functions in the Python programming language. As Python gains more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
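As a flavour of the kind of time-domain EEG features such a module exposes, the classic Hjorth parameters (activity, mobility, complexity) can be computed in a few lines. This sketch is a standard textbook formulation, not PyEEG's code:

```python
import math
import statistics

def hjorth(x):
    # Hjorth parameters of a 1-D signal: activity is the signal variance,
    # mobility and complexity are ratios built from the variances of the
    # first and second differences.
    dx = [b - a for a, b in zip(x, x[1:])]
    ddx = [b - a for a, b in zip(dx, dx[1:])]
    var = statistics.pvariance
    activity = var(x)
    mobility = math.sqrt(var(dx) / activity)
    complexity = math.sqrt(var(ddx) / var(dx)) / mobility
    return activity, mobility, complexity
```

Because all three values come from simple variances, they are cheap enough to compute per channel and per epoch, which is why they remain popular as baseline EEG features.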

  2. A method for real-time implementation of HOG feature extraction

    Science.gov (United States)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    The histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation because it includes complicated operations. In this paper, an optimized design method and theoretical framework for real-time HOG feature extraction based on an FPGA is proposed. The main principles are as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be implemented in one pixel period by these computing units.
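The software baseline that such an FPGA design parallelises — per-cell gradient and orientation-histogram computation — can be sketched as follows. This uses central differences and unsigned orientations over nine bins, the common HOG convention; it is illustrative, not the paper's implementation:

```python
import math

def hog_cell_histogram(cell, bins=9):
    # Magnitude-weighted orientation histogram of one cell
    # (a 2-D list of intensities); border pixels are skipped because
    # central differences need both neighbours.
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = cell[i][j + 1] - cell[i][j - 1]
            gy = cell[i + 1][j] - cell[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(ang / 180.0 * bins) % bins] += mag
    return hist
```

The `atan2` and `hypot` calls here are exactly the operations the paper simplifies for hardware, since full-precision arctangent and square root are expensive in an FPGA pipeline.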

  3. Wavelet and K-L Separability Based Feature Extraction Method for Functional Data Classification

    OpenAIRE

    Jun Wan; Zehua Chen; Yingwu Chen; Zhidong Bai

    2010-01-01

This paper proposes a novel feature extraction method, based on the Discrete Wavelet Transform (DWT) and K-L Separability (KLS), for the classification of Functional Data (FD). The method combines the decorrelation and reduction properties of the DWT with the additive independence property of KLS, which helps to extract classification features from FD. It is an advanced variant of the popular wavelet-based shrinkage method for functional data reduction and classification. A ...
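
The DWT half of the method can be illustrated with the simplest wavelet, a one-level Haar transform (a sketch only; the paper's K-L separability criterion for selecting coefficients is not reproduced here):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform.

    Returns approximation (low-pass) and detail (high-pass) coefficients.
    The transform preserves signal energy, so coefficient energies can be
    used directly as classification features.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail
```

Because the transform is orthonormal, the total energy of the coefficients equals that of the input, which is what makes wavelet-domain feature selection lossless in principle.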

  4. Application in Feature Extraction of AE Signal for Rolling Bearing in EEMD and Cloud Similarity Measurement

    OpenAIRE

    Han, Long; Li, Chengwei; Shen, Liqun

    2015-01-01

Due to the powerful de-noising ability of the EEMD algorithm, it is often applied to feature extraction from fault signals of rolling bearings. However, the correct selection of the sensitive IMFs after decomposition directly influences the correctness of the extracted fault features. To solve this problem, the paper first proposes a new method for selecting sensitive IMFs based on Cloud Similarity Measurement. By comparing this method in simulation experiments with the traditional mutual...
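
The traditional correlation-based selection that the paper compares against can be sketched as follows (the function name and top-k rule are illustrative assumptions; the paper's cloud-similarity criterion is not reproduced):

```python
import numpy as np

def select_sensitive_imfs(signal, imfs, k=2):
    """Rank candidate IMFs by absolute correlation with the original signal
    and keep the indices of the top k, the traditional selection rule."""
    corrs = [abs(np.corrcoef(signal, imf)[0, 1]) for imf in imfs]
    order = np.argsort(corrs)[::-1]        # most correlated first
    return sorted(int(i) for i in order[:k])
```

Given a signal dominated by one component plus a weak tone and noise, the rule keeps the dominant and secondary components and discards the noise.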

  5. Sequential injection analysis with chemiluminescence detection for rapid monitoring of commercial Calendula officinalis extractions.

    Science.gov (United States)

    Hughes, Rachel R; Scown, David; Lenehan, Claire E

    2015-01-01

    Plant extracts containing high levels of antioxidants are desirable due to their reported health benefits. Most techniques capable of determining the antioxidant activity of plant extracts are unsuitable for rapid at-line analysis as they require extensive sample preparation and/or long analysis times. Therefore, analytical techniques capable of real-time or pseudo real-time at-line monitoring of plant extractions, and determination of extraction endpoints, would be useful to manufacturers of antioxidant-rich plant extracts. To develop a reliable method for the rapid at-line extraction monitoring of antioxidants in plant extracts. Calendula officinalis extracts were prepared from dried flowers and analysed for antioxidant activity using sequential injection analysis (SIA) with chemiluminescence (CL) detection. The intensity of CL emission from the reaction of acidic potassium permanganate with antioxidants within the extract was used as the analytical signal. The SIA-CL method was applied to monitor the extraction of C. officinalis over the course of a batch extraction to determine the extraction endpoint. Results were compared with those from ultra high performance liquid chromatography (UHPLC). Pseudo real-time, at-line monitoring showed the level of antioxidants in a batch extract of Calendula officinalis plateaued after 100 min of extraction. These results correlated well with those of an offline UHPLC study. SIA-CL was found to be a suitable method for pseudo real-time monitoring of plant extractions and determination of extraction endpoints with respect to antioxidant concentrations. The method was applied at-line in the manufacturing industry. Copyright © 2015 John Wiley & Sons, Ltd.
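
The paper does not spell out its endpoint criterion, but a simple plateau rule, declaring the endpoint when the relative increase of the monitored signal drops below a tolerance, can be sketched as follows (entirely illustrative; names and the tolerance value are assumptions):

```python
def extraction_endpoint(times, signal, rel_tol=0.01):
    """Return the first sampling time at which the monitored signal rises by
    less than rel_tol (relative) over one step, a naive plateau criterion."""
    for i in range(1, len(signal)):
        prev = signal[i - 1]
        if prev > 0 and (signal[i] - prev) / prev < rel_tol:
            return times[i]
    return None
```

On a saturating first-order curve this flags the endpoint once the curve has effectively levelled off.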

  6. Aggregation of Electric Current Consumption Features to Extract Maintenance KPIs

    Science.gov (United States)

    Simon, Victor; Johansson, Carl-Anders; Galar, Diego

    2017-09-01

    All electric powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency and type of analysis, different indicators from the micro to macro level can be calculated for such aspects as maintenance, production, energy consumption etc. On the micro-level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and are used as indicators for overall performance, cost or consumption. The indicators can be calculated directly from the current signal but can also be based on a combination of information from the current signal and operational data like rpm, position etc. One or several of those indicators can be used for prediction and prognostics of a machine's future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electric powered machines and fleets of machines, especially machine tools.
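
Two macro-level indicators of this kind, RMS current and consumed energy, can be computed from a sampled current signal roughly as follows (an illustrative simplification assuming the supply voltage is also sampled; not the authors' implementation):

```python
import numpy as np

def current_kpis(current, voltage, fs):
    """Macro-level KPIs from sampled current/voltage at sampling rate fs:
    RMS current and energy consumed (integral of instantaneous power)."""
    i_rms = np.sqrt(np.mean(np.square(current)))
    energy_j = np.sum(current * voltage) / fs   # joules
    return {"i_rms": i_rms, "energy_J": energy_j}
```

For example, a constant 2 A draw at 10 V sampled once per second for an hour yields 72 kJ of consumed energy.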

  7. AGGREGATION OF ELECTRIC CURRENT CONSUMPTION FEATURES TO EXTRACT MAINTENANCE KPIs

    Directory of Open Access Journals (Sweden)

    Victor SIMON

    2017-07-01

Full Text Available All electric powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency and type of analysis, different indicators from the micro to macro level can be calculated for such aspects as maintenance, production, energy consumption etc. On the micro-level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and are used as indicators for overall performance, cost or consumption. The indicators can be calculated directly from the current signal but can also be based on a combination of information from the current signal and operational data like rpm, position etc. One or several of those indicators can be used for prediction and prognostics of a machine’s future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electric powered machines and fleets of machines, especially machine tools.

  8. A Method of SAR Target Recognition Based on Gabor Filter and Local Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Wang Lu

    2015-12-01

Full Text Available This paper presents a novel texture feature extraction method based on a Gabor filter and Three-Patch Local Binary Patterns (TPLBP) for Synthetic Aperture Radar (SAR) target recognition. First, SAR images are processed by a Gabor filter in different directions to enhance the significant features of the targets and their shadows. Then, effective local texture features are extracted from the Gabor-filtered images by TPLBP. This not only overcomes the shortcoming of Local Binary Patterns (LBP), which cannot describe texture features over large-scale neighborhoods, but also maintains rotation invariance, which alleviates the impact of direction variations of SAR targets on recognition performance. Finally, we classify the extracted texture features with an Extreme Learning Machine (ELM) classifier. Experimental results on the MSTAR database demonstrate the effectiveness of the proposed method.
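
The directional enhancement step rests on Gabor filtering; a real (even-symmetric) Gabor kernel can be generated as follows (a generic sketch, with parameter choices that are illustrative rather than the paper's):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel at orientation theta: a Gaussian envelope
    modulating a cosine carrier along the rotated x axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```

Convolving an image with a bank of such kernels at several orientations highlights structure aligned with each theta, which is the enhancement the paper applies before TPLBP coding.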

  9. ESVC-based extraction and segmentation of texture features

    Science.gov (United States)

    Yang, Jingan; Zhuang, Yanbin; Wu, Feng

    2012-12-01

Inspired by Krige's variogram and the multi-channel filtering theory of human visual information processing, this paper proposes a novel algorithm for segmenting textures based on the experimental semi-variogram function (ESVF), which can simultaneously describe the structural and statistical properties of textures. The single variogram function value (SVFV) and the variance distance obtained from the ESVF are used as texture feature descriptions for segmenting textures. The feasibility and effectiveness of the proposed method are demonstrated by testing on several texture images. The computational complexity of the proposed approach depends neither on the number of textures nor on the number of gray levels, but only on the size of the image blocks. We have proved theoretically that the algorithm has the advantages of direction invariance and a higher sensitivity to different textures, and can detect almost all kinds of boundaries of shaped textures. Experimental results on the Brodatz texture databases show that the performance of this algorithm is superior to traditional techniques such as the texture spectrum, SIFT, the k-means method, and Gabor filters. The proposed approach is found to be robust, efficient, and satisfactory.
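
An experimental semi-variogram for a 1-D sequence can be computed as follows (a minimal sketch of the general estimator, not the paper's 2-D image implementation):

```python
import numpy as np

def empirical_semivariogram(x, max_lag):
    """Experimental semi-variogram of a 1-D sequence:
    gamma(h) = mean((x[i+h] - x[i])^2) / 2 for each lag h = 1..max_lag."""
    x = np.asarray(x, dtype=float)
    return np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])
```

For a strictly alternating 0/1 sequence, for instance, gamma(1) = 0.5 while gamma(2) = 0, showing how the semi-variogram captures spatial periodicity in a texture.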

  10. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

Full Text Available Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches rely on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases and global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long-dependency feature extraction model based on the bidirectional long short-term memory (BiLSTM) neural network is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish training, validation and test datasets, containing 72,532, 18,133, and 2679 mechanical patent documents, respectively, and then evaluate the performance of the HFEM. Finally, we compare the results of the proposed HFEM with those of three single neural network models, namely CNN, long short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.

  11. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    Science.gov (United States)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
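
One of the waveform features mentioned, full width at half maximum (FWHM), can be computed from a fitted curve roughly as follows (a sketch using linear interpolation of the half-maximum crossings, assuming a single-peak waveform):

```python
import numpy as np

def fwhm(t, w):
    """Full width at half maximum of a single-peak waveform, located by
    linear interpolation of the two half-maximum crossings."""
    w = np.asarray(w, dtype=float)
    half = w.max() / 2.0
    idx = np.where(w >= half)[0]
    i0, i1 = idx[0], idx[-1]
    # rising edge: w[i0-1] < half <= w[i0]
    t_left = np.interp(half, [w[i0 - 1], w[i0]], [t[i0 - 1], t[i0]])
    # falling edge: w[i1] >= half > w[i1+1]
    t_right = np.interp(half, [w[i1 + 1], w[i1]], [t[i1 + 1], t[i1]])
    return t_right - t_left
```

For a unit-sigma Gaussian pulse this recovers the analytic value 2*sqrt(2*ln 2) ≈ 2.355.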

  12. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    Science.gov (United States)

    Rouhani, M; Abdoli, R

    2012-01-01

This article presents a novel method for diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. Application of pattern classification and feature selection and reduction methods to analysing normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time domain, wavelet and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm. Copyright © 2012 Informa UK, Ltd.

  13. A flexible mechanism of rule selection enables rapid feature-based reinforcement learning

    Directory of Open Access Journals (Sweden)

    Matthew eBalcarras

    2016-03-01

Full Text Available Learning in a new environment is influenced by prior learning and experience. Correctly applying a rule that maps a context to stimuli, actions, and outcomes enables faster learning and better outcomes compared to relying on learning strategies that are ignorant of task structure. However, it is often difficult to know when and how to apply learned rules in new contexts. In our study we explored how subjects employ different strategies for learning the relationship between stimulus features and positive outcomes in a probabilistic task context. We test the hypothesis that task-naive subjects will show enhanced learning of feature-specific reward associations by switching to the use of an abstract rule that associates stimuli by feature type and restricts selections to that dimension. To test this hypothesis we designed a decision-making task where subjects receive probabilistic feedback following choices between pairs of stimuli. In the task, trials are grouped in two contexts by blocks: in one type of block there is no unique relationship between a specific feature dimension (stimulus shape or colour) and positive outcomes, and following an un-cued transition, alternating blocks have outcomes that are linked to either stimulus shape or colour. Two-thirds of subjects (n=22/32) exhibited behaviour that was best fit by a hierarchical feature-rule model. Supporting the prediction of the model mechanism, these subjects showed significantly enhanced performance in feature-reward blocks, and rapidly switched their choice strategy to using abstract feature rules when reward contingencies changed. Choice behaviour of the other subjects (n=10/32) was fit by a range of alternative reinforcement learning models representing strategies that do not benefit from applying previously learned rules. In summary, these results show that untrained subjects are capable of flexibly shifting between behavioural rules by leveraging simple model-free reinforcement

  14. Prediction of occult invasive disease in ductal carcinoma in situ using computer-extracted mammographic features

    Science.gov (United States)

    Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.

    2017-03-01

    Predicting the risk of occult invasive disease in ductal carcinoma in situ (DCIS) is an important task to help address the overdiagnosis and overtreatment problems associated with breast cancer. In this work, we investigated the feasibility of using computer-extracted mammographic features to predict occult invasive disease in patients with biopsy proven DCIS. We proposed a computer-vision algorithm based approach to extract mammographic features from magnification views of full field digital mammography (FFDM) for patients with DCIS. After an expert breast radiologist provided a region of interest (ROI) mask for the DCIS lesion, the proposed approach is able to segment individual microcalcifications (MCs), detect the boundary of the MC cluster (MCC), and extract 113 mammographic features from MCs and MCC within the ROI. In this study, we extracted mammographic features from 99 patients with DCIS (74 pure DCIS; 25 DCIS plus invasive disease). The predictive power of the mammographic features was demonstrated through binary classifications between pure DCIS and DCIS with invasive disease using linear discriminant analysis (LDA). Before classification, the minimum redundancy Maximum Relevance (mRMR) feature selection method was first applied to choose subsets of useful features. The generalization performance was assessed using Leave-One-Out Cross-Validation and Receiver Operating Characteristic (ROC) curve analysis. Using the computer-extracted mammographic features, the proposed model was able to distinguish DCIS with invasive disease from pure DCIS, with an average classification performance of AUC = 0.61 +/- 0.05. Overall, the proposed computer-extracted mammographic features are promising for predicting occult invasive disease in DCIS.
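
The ROC analysis used for evaluation rests on the AUC statistic, which can be computed via the rank-sum (Mann-Whitney) identity (a generic sketch, assuming no tied scores; not the authors' code):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity: the fraction of
    positive/negative pairs ranked correctly by the classifier scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.61, as reported, means 61% of (invasive, pure-DCIS) pairs are ordered correctly by the model score.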

  15. Rapid Solid-Liquid Dynamic Extraction (RSLDE): a New Rapid and Greener Method for Extracting Two Steviol Glycosides (Stevioside and Rebaudioside A) from Stevia Leaves.

    Science.gov (United States)

    Gallo, Monica; Vitulano, Manuela; Andolfi, Anna; DellaGreca, Marina; Conte, Esterina; Ciaravolo, Martina; Naviglio, Daniele

    2017-06-01

    Stevioside and rebaudioside A are the main diterpene glycosides present in the leaves of the Stevia rebaudiana plant, which is used in the production of foods and low-calorie beverages. The difficulties associated with their extraction and purification are currently a problem for the food processing industries. The objective of this study was to develop an effective and economically viable method to obtain a high-quality product while trying to overcome the disadvantages derived from the conventional transformation processes. For this reason, extractions were carried out using a conventional maceration (CM) and a cyclically pressurized extraction known as rapid solid-liquid dynamic extraction (RSLDE) by the Naviglio extractor (NE). After only 20 min of extraction using the NE, a quantity of rebaudioside A and stevioside equal to 1197.8 and 413.6 mg/L was obtained, respectively, while for the CM, the optimum time was 90 min. From the results, it can be stated that the extraction process by NE and its subsequent purification developed in this study is a simple, economical, environmentally friendly method for producing steviol glycosides. Therefore, this method constitutes a valid alternative to conventional extraction by reducing the extraction time and the consumption of toxic solvents and favouring the use of the extracted metabolites as food additives and/or nutraceuticals. As an added value and of local interest, the experiment was carried out on stevia leaves from the Benevento area (Italy), where a high content of rebaudioside A was observed, which exhibits a sweet taste compared to stevioside, which has a significant bitter aftertaste.

  16. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  17. A neuro-fuzzy system for extracting environment features based on ultrasonic sensors.

    Science.gov (United States)

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case.

  18. Extraction Of Audio Features For Emotion Recognition System Based On Music

    Directory of Open Access Journals (Sweden)

    Kee Moe Han

    2015-08-01

Full Text Available Music is the combination of melody, linguistic information and the vocalist's emotion. Since music is a work of art, analyzing the emotion in music by computer is a difficult task. Many approaches have been developed to detect the emotions in music, but the results are not satisfactory because emotion is very complex. In this paper, evaluations of audio features extracted from music files are presented. The extracted features are used to classify the different emotion classes of the vocalists. Musical feature extraction is done using the Music Information Retrieval (MIR) toolbox in this paper. A database of 100 music clips is used to classify the emotions perceived in the clips. Music may contain many emotions according to the vocalist's mood, such as happy, sad, nervous, bored, peaceful, etc. In this paper, the audio features related to the emotions of the vocalists are extracted for use in an emotion recognition system based on music.

  19. The Use of Features Extracted from Noisy Samples for Image Restoration Purposes

    Directory of Open Access Journals (Sweden)

    2007-01-01

Full Text Available An important feature of neural networks is their ability to learn from their environment and, through learning, to improve performance in some sense. In the following, we restrict the development to feature-extracting unsupervised neural networks derived from the biologically motivated Hebbian self-organizing principle, which is conjectured to govern natural neural assemblies, and to the classical principal component analysis (PCA) method used by statisticians for almost a century for multivariate data analysis and feature extraction. The research reported in this paper proposes a new image reconstruction method based on features extracted from the noise, given by the principal components of the noise covariance matrix.
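
The classical PCA step the paper builds on can be sketched as follows (a generic eigendecomposition-based implementation, not the authors' code):

```python
import numpy as np

def pca_features(X, k):
    """Project data onto the top-k principal components of its covariance:
    center, eigendecompose, keep the k directions of largest variance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)               # ascending eigenvalues
    comps = vecs[:, np.argsort(vals)[::-1][:k]]    # top-k eigenvectors
    return Xc @ comps
```

Applied to data stretched along one axis, the single retained component captures nearly all of the variance, which is the decorrelation property the Hebbian networks above approximate.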

  20. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads using spectral, texture and linear features have certain limitations. Also, many methods need human intervention to obtain road seeds (semi-automatic extraction), which makes them heavily human-dependent and inefficient. A road-extraction method that uses image segmentation based on the principle of local gray consistency and integrates shape features is proposed in this paper. Firstly, the image is segmented, and then linear and curved roads are obtained by using several object shape features, rectifying methods that extract only linear roads. Secondly, road extraction is carried out based on region growing: the road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In experiments, images with relatively uniform road gray levels and poorly illuminated road surfaces were chosen, and the results show that the method of this study is promising.

  1. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction from hyperspectral data, and the extracted features provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoise autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.

  2. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    Science.gov (United States)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher-risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a usual machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. Then, these features are fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, presenting only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN, therefore overfitting might have influenced the results even though we cross-validated the network. Thus, although we presented a promising method for extracting features and classifying breast density, a greater database is

  3. EXTRACTING SPATIOTEMPORAL OBJECTS FROM RASTER DATA TO REPRESENT PHYSICAL FEATURES AND ANALYZE RELATED PROCESSES

    Directory of Open Access Journals (Sweden)

    J. A. Zollweg

    2017-10-01

Full Text Available Numerous ground-based, airborne, and orbiting platforms provide remotely-sensed data of remarkable spatial resolution at short time intervals. However, this spatiotemporal data is most valuable if it can be processed into information, thereby creating meaning. We live in a world of objects: cars, buildings, farms, etc. On a stormy day, we don’t see millions of cubes of atmosphere; we see a thunderstorm ‘object’. Temporally, we don’t see the properties of those individual cubes changing, we see the thunderstorm as a whole evolving and moving. There is a need to represent the bulky, raw spatiotemporal data from remote sensors as a small number of relevant spatiotemporal objects, thereby matching the human brain’s perception of the world. This presentation reveals an efficient algorithm and system to extract the objects/features from raster-formatted remotely-sensed data. The system makes use of the Python object-oriented programming language, SciPy/NumPy for matrix manipulation and scientific computation, and export/import to the GeoJSON standard geographic object data format. The example presented will show how thunderstorms can be identified and characterized in a spatiotemporal continuum using a Python program to process raster data from NOAA’s High-Resolution Rapid Refresh v2 (HRRRv2) data stream.
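
The record's core idea, grouping raster cells into labelled objects, can be sketched with SciPy's connected-component labelling (an illustrative simplification; the thresholding rule and the object attributes shown are assumptions, not the presenter's system):

```python
import numpy as np
from scipy import ndimage

def extract_objects(raster, threshold):
    """Group above-threshold raster cells into labelled objects and
    summarise each one, turning raw cells into discrete 'features'."""
    mask = raster >= threshold
    labels, n = ndimage.label(mask)        # 4-connected components
    objects = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        objects.append({"id": i, "cells": len(xs),
                        "centroid": (float(xs.mean()), float(ys.mean()))})
    return objects
```

Each summary dictionary could then be serialised as a GeoJSON Feature, matching the export step the abstract describes.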

  4. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering

    Directory of Open Access Journals (Sweden)

    Junhai Luo

    2017-06-01

Full Text Available With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
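
The online matching step can be illustrated with a weighted k-nearest-neighbour fingerprint lookup (a common baseline; the paper's full pipeline with KPCA and APC clustering is not reproduced, and all names here are illustrative):

```python
import numpy as np

def knn_locate(fingerprints, positions, rss, k=3):
    """Weighted k-nearest-neighbour position estimate: find the k offline
    fingerprints closest to the observed RSS vector and average their
    known positions, weighted by inverse distance."""
    d = np.linalg.norm(fingerprints - rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)              # avoid division by zero
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()
```

A query vector that exactly matches a stored fingerprint collapses, by the inverse-distance weighting, onto that fingerprint's surveyed position.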

  5. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    Science.gov (United States)

    Luo, Junhai; Fu, Liang

    2017-06-09

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS), which is collected from Access Points (APs). The proposed localization algorithm contains an offline information acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals to remove useless APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and retain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Eventually, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy while reducing computational complexity.
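The KPCA step of the offline phase above can be sketched in NumPy. This is a minimal illustration on synthetic RSS fingerprints, not the authors' implementation: the matrix shapes, the RBF kernel, and the kernel width `gamma` are all assumptions chosen for the toy example.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """RBF-kernel PCA: project RSS fingerprints into a low-dimensional space."""
    # Pairwise squared Euclidean distances between fingerprints
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)                      # RBF kernel matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # keep the leading components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                           # projected training samples

# Toy offline fingerprints: rows = reference points, cols = RSS from 5 APs (dBm)
rng = np.random.default_rng(0)
fingerprints = rng.normal(-60.0, 5.0, size=(20, 5))
Z = kernel_pca(fingerprints, n_components=2)
print(Z.shape)  # (20, 2)
```

In the full pipeline, `Z` would then be clustered (the APC step) before online matching.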

  6. Extracting Spatiotemporal Objects from Raster Data to Represent Physical Features and Analyze Related Processes

    Science.gov (United States)

    Zollweg, J. A.

    2017-10-01

    Numerous ground-based, airborne, and orbiting platforms provide remotely-sensed data of remarkable spatial resolution at short time intervals. However, this spatiotemporal data is most valuable if it can be processed into information, thereby creating meaning. We live in a world of objects: cars, buildings, farms, etc. On a stormy day, we don't see millions of cubes of atmosphere; we see a thunderstorm `object'. Temporally, we don't see the properties of those individual cubes changing, we see the thunderstorm as a whole evolving and moving. There is a need to represent the bulky, raw spatiotemporal data from remote sensors as a small number of relevant spatiotemporal objects, thereby matching the human brain's perception of the world. This presentation reveals an efficient algorithm and system to extract the objects/features from raster-formatted remotely-sensed data. The system makes use of the Python object-oriented programming language, SciPy/NumPy for matrix manipulation and scientific computation, and export/import to the GeoJSON standard geographic object data format. The example presented will show how thunderstorms can be identified and characterized in a spatiotemporal continuum using a Python program to process raster data from NOAA's High-Resolution Rapid Refresh v2 (HRRRv2) data stream.
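The core of the object-extraction idea above, grouping contiguous above-threshold raster cells into one "thunderstorm" object, can be illustrated with a small flood-fill sketch. The presented system uses SciPy/NumPy; this pure-Python version only shows the grouping step, and the grid values and threshold are invented for illustration.

```python
# Identify contiguous "storm objects" in a 2D reflectivity-like grid:
# cells at or above a threshold are grouped by 4-connectivity flood fill.
def extract_objects(grid, threshold):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and not seen[r][c]:
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                objects.append(cells)   # one object = one list of cells
    return objects

grid = [[0, 0, 5, 6],
        [0, 0, 7, 0],
        [9, 0, 0, 0]]
objs = extract_objects(grid, threshold=5)
print(len(objs))  # 2 objects: one of 3 cells, one of 1 cell
```

Each cell list could then be summarized (centroid, area, extent) and serialized as a GeoJSON feature, matching the export path described in the abstract.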

  7. A rapid direct solvent extraction method for the extraction of 2-dodecylcyclobutanone from irradiated ground beef patties using acetonitrile.

    Science.gov (United States)

    Hijaz, Faraj; Kumar, Amit; Smith, J Scott

    2010-08-01

    The amount of irradiated beef in the U.S. market is growing, and a reliable, rapid method is needed to detect irradiated beef and quantify the irradiation dose. The official analytical method (BS EN 1785 2003) that has been adopted by the European Union is time consuming. The objective of this study was to develop a rapid method for the analysis of 2-dodecylcyclobutanone (2-DCB) in irradiated beef. A 5 g sample of commercially irradiated ground beef patty (90/10) was extracted with n-hexane using a Soxhlet apparatus or with acetonitrile via direct solvent extraction. The Soxhlet hexane extract was evaporated to dryness, and the sample was dissolved in a mixture of ethyl acetate and acetonitrile (1:1). The defatted extract was purified with a 1 g silica cartridge. Another 5 g aliquot of the same patty was mixed with 50 mL acetonitrile and either blended for 1 min with a hand blender or crushed for 10 min with a glass rod. The extraction procedure was repeated 3 times, and the acetonitrile was collected and evaporated to dryness. Eluants from both methods were concentrated under nitrogen and injected into a gas chromatograph-mass spectrometer (GC-MS). The 2-DCB concentration in the commercial samples was 0.031 +/- 0.0026 ppm (n = 5) for the Soxhlet method and 0.031 +/- 0.0025 ppm (n = 10) for direct solvent extraction. Recovery of 2-DCB from spiked beef samples in the direct solvent extraction method was 93.2 +/- 9.0% (n = 7). This study showed that the direct solvent extraction method is simple and as efficient and reproducible as the Soxhlet method.

  8. A Rapid and Reliable Method for Total Protein Extraction from Succulent Plants for Proteomic Analysis.

    Science.gov (United States)

    Lledías, Fernando; Hernández, Felipe; Rivas, Viridiana; García-Mendoza, Abisaí; Cassab, Gladys I; Nieto-Sotelo, Jorge

    2017-08-01

    Crassulacean acid metabolism plants have some morphological features, such as succulent and reduced leaves, thick cuticles, and sunken stomata that help them prevent excessive water loss and irradiation. As molecular constituents of these morphological adaptations to xeric environments, succulent plants produce a set of specific compounds such as complex polysaccharides, pigments, waxes, and terpenoids, to name a few, in addition to uncharacterized proteases. Since all these compounds interfere with the analysis of proteins by electrophoretic techniques, preparation of high quality samples from these sources represents a real challenge. The absence of adequate protocols for protein extraction has restrained the study of this class of plants at the molecular level. Here, we present a rapid and reliable protocol that could be accomplished in 1 h and applied to a broad range of plants with reproducible results. We were able to obtain well-resolved SDS/PAGE protein patterns in extracts from different members of the subfamilies Agavoideae (Agave, Yucca, Manfreda, and Furcraea), Nolinoideae (Dasylirion and Beucarnea), and the Cactaceae family. This method is based on the differential solubility of contaminants and proteins in the presence of acetone and pH-altered solutions. We speculate about the role of saponins and high molecular weight carbohydrates to produce electrophoretic-compatible samples. A modification of the basic protocol allowed the analysis of samples by bidimensional electrophoresis (2DE) for proteomic analysis. Furostanol glycoside 26-O-β-glucosidase (an enzyme involved in steroid saponin synthesis) was successfully identified by mass spectrometry analysis and de novo sequencing of a 2DE spot from an Agave attenuata sample.

  9. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning play a critical role for visual perception tasks. We focus on improving the robustness of the kernel descriptors (KDES) by embedding context cues and further learning a compact and discriminative feature codebook for feature reduction using Rényi entropy based mutual information. In particular, for feature extraction, we develop a new set of kernel descriptors, Context Kernel Descriptors (CKD), which enhance the original KDES by embedding the spatial context into the descriptors. Context cues contained in the context kernel enforce some degree of spatial consistency, thus improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi quadratic entropy based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook.

  10. FAST DISCRETE CURVELET TRANSFORM BASED ANISOTROPIC FEATURE EXTRACTION FOR IRIS RECOGNITION

    Directory of Open Access Journals (Sweden)

    Amol D. Rahulkar

    2010-11-01

    Full Text Available Feature extraction plays a very important role in iris recognition. Recent research on multiscale analysis provides a good opportunity to extract more accurate information for iris recognition. In this work, new directional iris texture features based on the 2-D Fast Discrete Curvelet Transform (FDCT) are proposed. The proposed approach divides the normalized iris image into six sub-images, and the curvelet transform is applied independently to each sub-image. The anisotropic feature vector for each sub-image is derived using the directional energies of the curvelet coefficients. These six feature vectors are combined to create the resultant feature vector. During recognition, a nearest-neighbor classifier based on Euclidean distance is used for authentication. The effectiveness of the proposed approach has been tested on two different databases, namely UBIRIS and MMU1. Experimental results show the superiority of the proposed approach.

  11. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Full Text Available Facial expression recognition has numerous applications, including psychological research, improved human-computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic and consists of the following modules: face detection, facial detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log-Gabor filters. Then, the most discriminative features are selected based on a mutual information criterion. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness, and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade and JAFFE databases. The experiments highlight the efficiency of the proposed HFR method in enhancing the classification rate.
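The mutual-information selection step above ranks candidate features by how much information they carry about the expression label. A minimal sketch for discrete features, with invented toy data (one informative feature, one noise feature):

```python
import numpy as np

def mutual_info_discrete(x, y):
    """Mutual information (in nats) between two discrete vectors."""
    xs, ys = np.unique(x), np.unique(y)
    mi = 0.0
    for a in xs:
        for b in ys:
            pxy = np.mean((x == a) & (y == b))   # joint probability
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)                 # toy expression labels
f_good = labels ^ (rng.random(200) < 0.05)       # mostly follows the label
f_noise = rng.integers(0, 2, 200)                # carries no label information
scores = [mutual_info_discrete(f, labels) for f in (f_good, f_noise)]
print(scores[0] > scores[1])  # True: the informative feature ranks first
```

In the actual system the features are continuous log-Gabor responses, so the estimate would be computed after binning or with a continuous MI estimator.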

  12. Applying a Locally Linear Embedding Algorithm for Feature Extraction and Visualization of MI-EEG

    Directory of Open Access Journals (Sweden)

    Mingai Li

    2016-01-01

    Full Text Available A robotic-assisted rehabilitation system based on a Brain-Computer Interface (BCI) is an applicable solution for stroke survivors with a poorly functioning hemiparetic arm. The key technique for such a rehabilitation system is feature extraction from Motor Imagery Electroencephalography (MI-EEG), a nonlinear, time-varying, and nonstationary signal with remarkable time-frequency characteristics. Though a few studies have explored its nonlinear nature from the perspective of manifold learning, they hardly take into full account both the time-frequency features and the nonlinear nature. In this paper, a novel feature extraction method is proposed based on the Locally Linear Embedding (LLE) algorithm and the Discrete Wavelet Transform (DWT). Multiscale, multiresolution analysis of the MI-EEG is implemented by DWT. LLE is applied to the approximation components to extract nonlinear features, and statistics of the detail components are calculated to obtain time-frequency features. The two feature sets are then combined serially. A backpropagation neural network, optimized by a genetic algorithm, is employed as a classifier to evaluate the effectiveness of the proposed method. Results of 10-fold cross-validation on a public BCI Competition dataset show that the nonlinear features visually display an obvious clustering distribution and that the fused features improve classification accuracy and stability. This paper successfully achieves an application of manifold learning in BCI.
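The DWT stage above splits the signal into approximation and detail components; a one-level Haar decomposition, written directly in NumPy, is enough to show the split and the detail-band statistics used as time-frequency features. The signal and the chosen statistics are toy assumptions; the paper's wavelet family and decomposition depth are not specified here.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation (low-pass) and detail (high-pass)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

# Toy "EEG" channel; statistics of the detail band serve as time-frequency
# features, while `a` would feed the LLE stage for nonlinear features.
signal = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * np.cos(np.linspace(0, 64, 64))
a, d = haar_dwt(signal)
features = [d.mean(), d.std(), np.abs(d).max()]
print(len(a), len(d))  # 32 32
```

Repeating `haar_dwt` on `a` gives the coarser scales of a multilevel decomposition.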

  13. Special features of SCF solid extraction of natural products: deoiling of wheat gluten and extraction of rose hip oil

    Directory of Open Access Journals (Sweden)

    Eggers R.

    2000-01-01

    Full Text Available Supercritical CO2 extraction has shown great potential in separating vegetable oils as well as removing undesirable oil residuals from natural products. The influence of process parameters, such as pressure, temperature, mass flow and particle size, on the mass transfer kinetics of different natural products has been studied by many authors. However, few publications have focused on specific features of the raw material (moisture, mechanical pretreatment, bed compressibility, etc.), which could play an important role, particularly in the scale-up of extraction processes. A review of the influence of both process parameters and specific features of the material on oilseed extraction is given in Eggers (1996). Mechanical pretreatment has been commonly used in order to facilitate mass transfer from the material into the supercritical fluid. However, small particle sizes, especially when combined with high moisture contents, may lead to inefficient extraction results. This paper focuses on the problems that appear during scale-up in processes on a lab to pilot or industrial plant scale related to the pretreatment of material, the control of initial water content and vessel shape. Two applications were studied: deoiling of wheat gluten with supercritical carbon dioxide to produce a totally oil-free (< 0.1 % oil) powder (wheat gluten) and the extraction of oil from rose hip seeds. Different ways of pretreating the feed material were successfully tested in order to develop an industrial-scale gluten deoiling process. The influence of shape and size of the fixed bed on the extraction results was also studied. In the case of rose hip seeds, the present work discusses the influence of pretreatment of the seeds prior to the extraction process on extraction kinetics.

  14. Evaluation of antimicrobial activity of selected plant extracts by rapid XTT colorimetry and bacterial enumeration.

    Science.gov (United States)

    Al-Bakri, Amal G; Afifi, Fatma U

    2007-01-01

    The aim of this study was to screen and evaluate the antimicrobial activity of indigenous Jordanian plant extracts, dissolved in dimethylsulfoxide, using the rapid XTT assay and viable count methods. XTT rapid assay was used for the initial screening of antimicrobial activity for the plant extracts. Antimicrobial activity of potentially active plant extracts was further assessed using the "viable plate count" method. Four degrees of antimicrobial activity (high, moderate, weak and inactive) against Bacillus subtilis, Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa, respectively, were recorded. The plant extracts of Hypericum triquetrifolium, Ballota undulata, Ruta chalepensis, Ononis natrix, Paronychia argentea and Marrubium vulgare had shown promising antimicrobial activity. This study showed that while both XTT and viable count methods are comparable when estimating the overall antimicrobial activity of experimental substances, there is no strong linear correlation between the two methods.

  15. A Novel Feature Selection Strategy for Enhanced Biomedical Event Extraction Using the Turku System

    Directory of Open Access Journals (Sweden)

    Jingbo Xia

    2014-01-01

    Full Text Available Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, and it relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying the greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identifying important features and modifying the feature set accordingly. With the updated feature set, a new system is obtained with enhanced performance, achieving an increased F-score of 53.27% (up from 51.21%) for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion.
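The greedy search over feature classes can be sketched as forward selection: at each step, add the class that most improves a scoring function. In the paper the score would be the retrained extractor's F-score; the stand-in additive score and the class names below are invented for illustration.

```python
# Greedy forward selection over feature *classes*: repeatedly add the class
# whose inclusion most improves the score, stopping when nothing helps.
def greedy_select(classes, score):
    selected, best = [], score([])
    improved = True
    while improved:
        improved = False
        for c in [c for c in classes if c not in selected]:
            s = score(selected + [c])
            if s > best:
                best, pick, improved = s, c, True
        if improved:
            selected.append(pick)
    return selected, best

# Toy score: a redundant class adds nothing, one class actively hurts.
utilities = {"token": 3.0, "dependency": 2.0, "dependency_copy": 0.0, "noisy": -1.0}
score = lambda subset: sum(utilities[c] for c in subset)
sel, best = greedy_select(list(utilities), score)
print(sorted(sel), best)  # ['dependency', 'token'] 5.0
```

Note the harmful and redundant classes are never picked, mirroring how AEE prunes feature classes that do not contribute to the F-score.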

  16. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate use of each input pixel for the feature-construction process avoid dependence on memory-intensive conventional strategies such as integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors used in the speeded-up robust features (SURF) scheme with an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at a 1.8 V supply voltage is achieved during VGA video processing at 120 MHz, corresponding to more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in a practical vehicle-recognition application, achieving high accuracy comparable to previous work.
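For reference, a Haar-like feature is conventionally defined as a difference of rectangle sums, usually computed via an integral image; the coprocessor above is notable precisely because it avoids this memory-intensive formulation. The software sketch below shows the conventional version on a toy image (the feature layout is an assumption, not the chip's 1680-dimensional scheme):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r+1, :c+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(img, r, c, h, w):
    """Two-rectangle (left minus right) Haar-like feature at (r, c)."""
    ii = integral_image(img)
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

img = np.zeros((8, 8)); img[:, :4] = 1.0   # bright left half
print(haar_two_rect(img, 0, 0, 8, 8))      # 32.0
```

The pixel-pipelined hardware instead accumulates each rectangle sum as pixels stream in, which is why it needs neither the integral image nor a frame buffer.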

  17. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.

    Science.gov (United States)

    Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui

    2017-03-29

    In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. Furthermore, the proposed algorithm achieves better results than some deep architectures. To extract more effective features, this paper first defines the salient areas on the faces. Salient areas at the same location on different faces are normalized to the same size; therefore, more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient-area definition method that compares peak expression frames with neutral faces, and it proposes and applies the idea of normalizing the salient areas to align the specific areas which express the different expressions. As a result, the salient areas found from different subjects are the same size. In addition, gamma correction is applied to the LBP features in our algorithm framework, which improves the recognition rates significantly. By applying this algorithm framework, our research has gained state-of-the-art performance on the CK+ and JAFFE databases.
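The LBP half of the fusion features above can be sketched directly: each pixel gets an 8-bit code from comparing its 3x3 neighbours to the centre, and a salient area is summarized by the 256-bin histogram of those codes. The neighbour ordering and the toy "region" are assumptions; the paper's gamma correction and HOG/PCA stages are omitted.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code of the centre pixel of a 3x3 patch."""
    center = patch[1, 1]
    # Clockwise from top-left; each neighbour >= centre sets one bit.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= center)

def lbp_histogram(img):
    """256-bin LBP histogram of an image region (e.g. a salient area)."""
    hist = np.zeros(256, dtype=int)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            hist[lbp_code(img[r - 1:r + 2, c - 1:c + 2])] += 1
    return hist

img = np.arange(25, dtype=float).reshape(5, 5)  # toy "mouth region"
h = lbp_histogram(img)
print(h.sum())  # 9 interior pixels, one code each
```

Concatenating such histograms from each salient area (plus HOG), then applying PCA, gives the fusion feature vector described in the abstract.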

  18. NIRS feature extraction based on deep auto-encoder neural network

    Science.gov (United States)

    Liu, Ting; Li, Zhongren; Yu, Chunxia; Qin, Yuhua

    2017-12-01

    As a secondary analysis method, Near Infrared Spectroscopy (NIRS) needs an effective feature extraction method to improve model performance. A deep auto-encoder (DAE) can build an adaptive multilayer encoder network to transform high-dimensional data into a low-dimensional code with both linear and nonlinear feature combinations. To evaluate its capability, we experimented on spectra obtained from different categories of cigarette using the DAE method and compared it with principal component analysis (PCA). The results showed that the DAE can extract more nonlinear features to characterize cigarette quality. In addition, the DAE also captured the linear distribution of cigarette quality through its nonlinear transformation of features. Finally, we employed k-Nearest Neighbor (kNN) to classify different categories of cigarette using features extracted by the linear transformation methods PCA and wavelet transform-principal component analysis (WT-PCA), and by the nonlinear transformation methods DAE and isometric mapping (ISOMAP). The results showed that the pattern recognition model built on features extracted by DAE was the most valid.

  19. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

    Science.gov (United States)

    Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2017-01-01

    Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and they have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features for task recognition in BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information and improve mVEP BCI performance. The combined approach generates multi-modality features which effectively improve BCI performance, with an accuracy increase of approximately 3.5% over all 11 subjects, and it is especially effective for subjects with relatively poor performance under the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach yields higher classification accuracy. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Low-Level Color and Texture Feature Extraction of Coral Reef Components

    Directory of Open Access Journals (Sweden)

    Ma. Sheila Angeli Marcos

    2003-06-01

    Full Text Available The purpose of this study is to develop a computer-based classifier that automates coral reef assessment from digitized underwater video. We extract low-level color and texture features from coral images to serve as input to a high-level classifier. Low-level features for color were labeled blue, green, yellow/brown/orange, and gray/white, which are described by the normalized chromaticity histograms of these major colors. The color matching capability of these features was determined through a technique called “Histogram Backprojection”. The low-level texture feature marks a region as coarse or fine depending on the gray-level variance of the region.
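The two color ideas above, normalized chromaticity and histogram backprojection, fit in a few lines of NumPy: chromaticity divides out intensity, and backprojection replaces each pixel with the model histogram's value at that pixel's chromaticity. The bin count and the toy "blue coral" model histogram are assumptions for illustration.

```python
import numpy as np

def chromaticity(img):
    """Normalized r, g chromaticity; overall intensity is divided out."""
    s = img.sum(axis=2, keepdims=True) + 1e-9
    return img[..., :2] / s          # each channel in [0, 1]

def backproject(img, hist, bins=16):
    """Replace each pixel by the model-histogram value of its chromaticity."""
    rg = chromaticity(img.astype(float))
    idx = np.minimum((rg * bins).astype(int), bins - 1)
    return hist[idx[..., 0], idx[..., 1]]

# Toy model histogram peaked at "blue-ish" chromaticity (low r, low g)
hist = np.zeros((16, 16)); hist[0:4, 0:4] = 1.0
img = np.zeros((2, 2, 3)); img[..., 2] = 200.0   # pure blue pixels
print(backproject(img, hist).mean())  # 1.0
```

High backprojection values mark pixels whose color matches the model, which is how the study localizes each labeled color class in a frame.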

  1. StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.

    Science.gov (United States)

    Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A

    2017-10-15

    Genomic features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms to perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene correlates continuous data directly, avoiding data binarization and the subsequent data loss. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. Contact: favorov@sensi.org. Supplementary data are available at Bioinformatics online.
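The "local correlation track" idea can be illustrated by computing, at every position, a Gaussian-kernel-weighted correlation of two continuous tracks. This is a simplified stand-in, not StereoGene's actual estimator: the kernel width, the O(n²) loop, and the synthetic tracks are all assumptions for the sketch.

```python
import numpy as np

def local_correlation(x, y, width=50):
    """Gaussian-weighted local correlation: one value per genomic position."""
    n = len(x)
    pos = np.arange(n)
    track = np.empty(n)
    for i in range(n):
        w = np.exp(-0.5 * ((pos - i) / width) ** 2)   # kernel around position i
        w /= w.sum()
        mx, my = np.sum(w * x), np.sum(w * y)          # weighted means
        cov = np.sum(w * (x - mx) * (y - my))
        sx = np.sqrt(np.sum(w * (x - mx) ** 2))
        sy = np.sqrt(np.sum(w * (y - my) ** 2))
        track[i] = cov / (sx * sy + 1e-12)
    return track

rng = np.random.default_rng(1)
x = rng.normal(size=500)                 # e.g. one ChIP-Seq coverage track
y = x + 0.5 * rng.normal(size=500)       # a correlated second track
track = local_correlation(x, y)
print(track.shape)
```

Because the data stay continuous throughout, no binarization step (and no attendant information loss) is needed, matching the abstract's point.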

  2. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The dimensionality of the problem can be reduced enormously by selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in the deep learning approach to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database are shown in detail. Results show that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.
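An autoencoder learns features by reconstructing its input through a narrow hidden layer. The sketch below is a plain (not sparse) single-hidden-layer autoencoder trained by gradient descent on synthetic data with redundant inputs; the sparsity penalty of the paper's network, the layer sizes, and the learning rate are all omitted or assumed for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, 4:] = X[:, :4]              # redundant inputs: intrinsic dimension is 4

n_in, n_hid, lr = 8, 4, 0.05
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)     # hidden code = learned feature representation
    return H, H @ W2 + b2        # linear reconstruction of the input

_, R = forward(X)
err0 = np.mean((R - X) ** 2)     # reconstruction error before training
for _ in range(500):             # plain gradient descent on the MSE loss
    H, R = forward(X)
    G = 2 * (R - X) / len(X)                 # dLoss/dR
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)           # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
err1 = np.mean((forward(X)[1] - X) ** 2)
print(err1 < err0)  # True: the 4-unit code captures the 8 redundant inputs
```

After training, `forward(X)[0]` is the low-dimensional feature matrix that would replace hand-crafted attributes in the downstream classifier.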

  3. A method for automatic feature points extraction of human vertebrae three-dimensional model

    Science.gov (United States)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of feature points from a three-dimensional model of human vertebrae is presented. Firstly, a statistical model of vertebra feature points is established based on the results of manual feature point extraction. Then an anatomical axial analysis of the vertebra model is performed according to the physiological and morphological characteristics of vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebra model to be processed is established. According to this projection relationship, the statistical model is matched with the vertebra model to get an estimated position for each feature point. Finally, by analyzing the curvature in a spherical neighborhood around the estimated position, the final position of each feature point is obtained. According to benchmark results on multiple test models, the mean relative errors of the feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3%, and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
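The final refinement step, searching the spherical neighborhood of an estimated position for the point of highest curvature, can be sketched with a PCA-based curvature proxy (surface variation). The point cloud, search radius, neighbourhood size, and curvature measure below are all assumptions; the paper's exact curvature definition is not reproduced here.

```python
import numpy as np

def refine_feature_point(cloud, estimate, radius=0.5, k=8):
    """Move an estimated position to the nearby point of highest curvature.

    Curvature proxy: lambda_min / sum(lambda) of the local covariance
    (zero on a flat patch, positive where the surface bends)."""
    d = np.linalg.norm(cloud - estimate, axis=1)
    candidates = np.where(d < radius)[0]       # spherical neighborhood
    best, best_curv = None, -1.0
    for i in candidates:
        # k nearest neighbours of candidate i define its local surface patch
        nn = np.argsort(np.linalg.norm(cloud - cloud[i], axis=1))[:k]
        w = np.sort(np.linalg.eigvalsh(np.cov(cloud[nn].T)))
        curv = w[0] / max(w.sum(), 1e-12)
        if curv > best_curv:
            best, best_curv = i, curv
    return cloud[best]

# Toy cloud: a flat plane plus one off-plane spike near the estimate.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, (40, 2)), np.zeros(40)])
spike = np.array([[0.1, 0.1, 0.4]])
cloud = np.vstack([plane, spike])
p = refine_feature_point(cloud, estimate=np.array([0.0, 0.0, 0.2]))
print(p.shape)
```

The returned point is drawn to the non-flat region, illustrating how curvature analysis sharpens the statistically projected estimate.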

  4. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    Directory of Open Access Journals (Sweden)

    Abdul Wahab Muzaffar

    2015-01-01

    Full Text Available Information extraction from unstructured text is a complex task. Although manual information extraction often produces the best results, the exponential increase in data size makes manual extraction of biomedical data hard to manage. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area under biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has changed to hybrid approaches, which show better results. This research presents a hybrid feature set for classification of relations between biomedical entities. The main contribution of this research is in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. Conclusively, our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus.
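The Naïve Bayes half of the classification stage can be sketched over a hybrid binary feature set. Everything below is a toy: the feature names (a lexical verb token plus a hypothetical UMLS-derived rank bucket), the four training samples, and the two relation labels are invented to show the mechanics, not the paper's corpus.

```python
import math
from collections import defaultdict

# Toy training data: each sample is a set of binary features from a hybrid
# feature set (lexical verb + hypothetical UMLS-derived verb-rank bucket).
train = [
    ({"verb:inhibits", "rank:high"}, "interacts"),
    ({"verb:binds", "rank:high"}, "interacts"),
    ({"verb:mentions", "rank:low"}, "no-relation"),
    ({"verb:discusses", "rank:low"}, "no-relation"),
]

def fit_nb(samples, alpha=1.0):
    """Bernoulli Naive Bayes with Laplace smoothing over set-valued features."""
    vocab = set().union(*(f for f, _ in samples))
    counts, totals = defaultdict(lambda: defaultdict(int)), defaultdict(int)
    for feats, label in samples:
        totals[label] += 1
        for f in feats:
            counts[label][f] += 1
    def predict(feats):
        best, best_lp = None, -math.inf
        n = len(samples)
        for label, t in totals.items():
            lp = math.log(t / n)                       # log prior
            for f in vocab:  # Bernoulli NB: absent features also count
                p = (counts[label][f] + alpha) / (t + 2 * alpha)
                lp += math.log(p if f in feats else 1 - p)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
    return predict

predict = fit_nb(train)
print(predict({"verb:suppresses", "rank:high"}))  # interacts
```

Even with an unseen verb, the semantic rank feature carries the decision, which is the intuition behind ranking verb phrases with UMLS in the paper.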

  5. 3D FEATURE POINT EXTRACTION FROM LIDAR DATA USING A NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Y. Feng

    2016-06-01

    Full Text Available Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are a proper alternative to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the online LiDAR data to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back-propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the surface normal value in the z axis, which are calculated directly from the LiDAR point cloud. Subsequently, the feature points extracted on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.
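The Shi-Tomasi candidate-detection step scores each pixel by the minimum eigenvalue of the local structure tensor; candidates are where that score peaks. A NumPy sketch on a toy "range image" (the 3x3 summation window and the test image are assumptions; a real pipeline would also apply non-maximum suppression and Gaussian weighting):

```python
import numpy as np

def shi_tomasi_response(img):
    """Minimum eigenvalue of the 3x3-summed structure tensor per pixel."""
    Iy, Ix = np.gradient(img.astype(float))          # row and column gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    def box(a):                                      # 3x3 box sum via padding
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    tr, det = Sxx + Syy, Sxx * Syy - Sxy ** 2
    # lambda_min of a 2x2 symmetric matrix: tr/2 - sqrt((tr/2)^2 - det)
    return tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0))

# Toy "range image" with a bright square: corners respond most strongly,
# edges barely at all (one eigenvalue stays near zero along an edge).
img = np.zeros((12, 12)); img[4:8, 4:8] = 1.0
R = shi_tomasi_response(img)
r, c = np.unravel_index(R.argmax(), R.shape)
print(4 <= r <= 8 and 4 <= c <= 8)  # True: the peak lies at the square's corner
```

In the paper, each such candidate is then passed, together with its 3D curvature and normal attributes, to the trained network for the final keep/reject decision.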

  6. Seed oil polyphenols: rapid and sensitive extraction method and high resolution-mass spectrometry identification.

    Science.gov (United States)

    Koubaa, Mohamed; Mhemdi, Houcine; Vorobiev, Eugène

    2015-05-01

Phenolic content is a primary parameter for vegetable oil quality evaluation and is directly involved in the prevention of oxidation and in oil preservation. Several methods have been reported in the literature for polyphenol extraction from seed oil, but the approaches commonly used remain manually handled. In this work, we propose a rapid and sensitive method for seed oil polyphenol extraction and identification. For this purpose, polyphenols were extracted from Opuntia stricta Haw seed oil using high-frequency agitation, separated, and then identified by liquid chromatography-high resolution mass spectrometry. Our results showed good sensitivity and reproducibility of the developed method.

  7. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    Science.gov (United States)

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's $T^2$ statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
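The LP-SVD transform described in this abstract can be sketched as follows, under our own simplifying assumptions (LP coefficients estimated by least squares rather than a dedicated LP estimator, and an arbitrary model order and feature count):

```python
import numpy as np

def lp_coefficients(x, order=6):
    """Linear-prediction coefficients a_k with x[n] ~ sum_k a_k x[n-k-1],
    estimated by least squares (a simplified stand-in for the LP step)."""
    N = len(x)
    A = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def lp_svd_features(x, order=6, n_feat=4):
    """Project the signal onto the leading left singular vectors of the
    LP synthesis filter's impulse-response matrix."""
    x = np.asarray(x, float)
    a = lp_coefficients(x, order)
    # Impulse response of the all-pole filter 1 / (1 - sum_k a_k z^-(k+1)).
    h = np.zeros(len(x))
    h[0] = 1.0
    for n in range(1, len(x)):
        h[n] = sum(a[k] * h[n - k - 1] for k in range(min(order, n)))
    # Lower-triangular Toeplitz impulse-response matrix and its SVD.
    H = np.array([[h[i - j] if i >= j else 0.0 for j in range(len(x))]
                  for i in range(len(x))])
    U, _, _ = np.linalg.svd(H)
    return U[:, :n_feat].T @ x
```

The transform is signal-dependent because H (and hence U) is rebuilt from each signal's own LP model.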

  8. Automation of lidar-based hydrologic feature extraction workflows using GIS

    Science.gov (United States)

    Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.

    2016-10-01

With the advent of LiDAR technology, higher-resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows, which can take researchers a lot of time when executed and supervised manually. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction, specifically Streams and Drainages, Irrigation Network, and Inland Wetlands, using LiDAR datasets. These scripts are written in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently used as an aid for researchers in hydrologic feature extraction by simplifying the workflows, eliminating human error when providing the inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of the different workflows developed by Phil-LiDAR 2 Project 4 for Streams, Irrigation Network and Inland Wetlands extraction.

  9. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

In image recognition, the common approach for extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, from a patch around each interest point, the rotation is calculated with corresponding orientation...... and compensation; and finally, a descriptor is computed for the derived patch (i.e. the feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles...

  10. Segmentation and feature extraction of fluid-filled uterine fibroid–A knowledge-based approach

    Directory of Open Access Journals (Sweden)

    Ratha Jeyalakshmi

    2010-10-01

Full Text Available Uterine fibroids are the most common pelvic tumours in females. Ultrasound images of fibroids require segmentation and feature extraction for analysis. This paper proposes a new method for segmenting the fluid-filled fibroid found in the uterus. It presents a fully automatic approach that requires no human intervention. The method employs a number of knowledge-based rules to locate the object and also utilises concepts from mathematical morphology. It also extracts the necessary features of the fibroid, which can be used to prepare the radiological report. The performance of the method is evaluated using area-based metrics.

  11. Research on Algorithm for Feature Extraction and Classification of Motor Imagery EEG Signals

    Directory of Open Access Journals (Sweden)

    Tian Juan

    2017-01-01

Full Text Available This paper presents research on the feature extraction and pattern recognition of left- and right-hand motor imagery EEG signals. Using data from BCI Competition III, denoising preprocessing is first carried out on the EEG signals; then, the relative wavelet energy is extracted as a feature vector from channels C3 and C4 using the relative wavelet energy algorithm, and pattern recognition is carried out with a radial basis function neural network (RBFNN). Simulation results show that the proposed method achieves good classification results.
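The relative wavelet energy feature reduces to a few lines if one substitutes a Haar decomposition for the paper's (unspecified here) mother wavelet; because the Haar transform is orthogonal, Parseval's relation guarantees that the relative energies sum to one:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT (stand-in for the wavelet used in the paper)."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def relative_wavelet_energy(x, levels=4):
    """Energy of each detail band (plus the final approximation) divided
    by the total energy: the feature vector computed per EEG channel."""
    energies = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    e = np.array(energies)
    return e / e.sum()
```

Each channel (C3, C4) would contribute one such vector, and the concatenation feeds the RBFNN classifier.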

  12. Rapid and reliable extraction of genomic DNA from various wild-type and transgenic plants

    Directory of Open Access Journals (Sweden)

    Yang Moon-Sik

    2004-09-01

Full Text Available Abstract Background DNA extraction methods for PCR-quality DNA from calluses and plants are not time efficient, since they require that the tissues be ground in liquid nitrogen, followed by precipitation of the DNA pellet in ethanol, washing and drying the pellet, etc. The need for a rapid and simple procedure is urgent, especially when hundreds of samples need to be analyzed. Here, we describe a simple and efficient method of isolating high-quality genomic DNA for PCR amplification and enzyme digestion from calluses and various wild-type and transgenic plants. Results We developed a new rapid and reliable genomic DNA extraction method with which plant genomic DNA extraction can be performed within 30 min. The method is as follows: plant tissue is homogenized in salt DNA extraction buffer using a hand-operated homogenizer and extracted with phenol:chloroform:isoamyl alcohol (25:24:1). After centrifugation, the supernatant is used directly as the DNA template for PCR, resulting in successful RAPD amplification from various plant sources and amplification of specific foreign genes from transgenic plants. After precipitation from the supernatant, the DNA is completely digested by restriction enzymes. Conclusion This DNA extraction procedure promises simplicity, speed, and efficiency, both in terms of time and the amount of plant sample required. In addition, this method does not require expensive facilities for plant genomic DNA extraction.

  13. An unsupervised feature extraction method for high dimensional image data compaction

    Science.gov (United States)

    Ghassemian, Hassan; Landgrebe, David

    1987-01-01

A new on-line unsupervised feature extraction method for high-dimensional remotely sensed image data compaction is presented. This method can be utilized to solve the problem of data redundancy in scene representation by satellite-borne high-resolution multispectral sensors. The algorithm first partitions the observation space into an exhaustive set of disjoint objects. Pixels that belong to an object are then characterized by an object feature. Finally, the set of object features is used for data transmission and classification. The example results show that the compacted features provide a slight improvement in classification accuracy rather than any degradation. Moreover, the information extraction method does not need to be preceded by data decompaction.
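As a toy illustration of the compaction idea, replacing each pixel of an "object" by a shared object feature, one can group pixels by intensity quantisation and represent each group by its mean. The actual method partitions the observation space into spatially disjoint objects; this numpy stand-in only mimics the representation step:

```python
import numpy as np

def compact_by_objects(img, n_levels=4):
    """Partition pixels into 'objects' by intensity quantisation and
    represent each object by its mean value (toy version of compaction)."""
    img = np.asarray(img, float)
    lo, hi = img.min(), img.max()
    # Quantise intensities into n_levels bins to form object labels.
    labels = np.clip(((img - lo) / (hi - lo + 1e-12) * n_levels).astype(int),
                     0, n_levels - 1)
    # One feature per object: the mean of its member pixels.
    features = np.array([img[labels == k].mean() if np.any(labels == k) else 0.0
                         for k in range(n_levels)])
    recon = features[labels]   # compacted representation of the scene
    return labels, features, recon
```

Only `features` (plus the label map) need be transmitted; classification then operates on the object features instead of raw pixels.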

  14. Application of multi-scale feature extraction to surface defect classification of hot-rolled steels

    Science.gov (United States)

    Xu, Ke; Ai, Yong-hao; Wu, Xiu-yong

    2013-01-01

Feature extraction is essential to the classification of surface defect images. The defects of hot-rolled steels are distributed along different directions. Therefore, multi-scale geometric analysis (MGA) methods were employed to decompose each image into several directional subbands at several scales. The statistical features of each subband were then calculated to produce a high-dimensional feature vector, which was reduced to a lower-dimensional vector by graph embedding algorithms. Finally, a support vector machine (SVM) was used for defect classification. The multi-scale feature extraction method was implemented via the curvelet transform and kernel locality preserving projections (KLPP). Experimental results show that the proposed method is effective for classifying the surface defects of hot-rolled steels, with a total classification rate of up to 97.33%.
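A full curvelet transform is beyond a short sketch, but the shape of the feature vector, statistics of directional subbands at several scales, can be illustrated with four simple directional derivative kernels standing in for the MGA subbands (the kernels, scales and statistics here are our choices, not the paper's):

```python
import numpy as np

# Four directional derivative kernels (0, 90, 45, 135 degrees) as a crude
# stand-in for directional subbands of a multi-scale geometric analysis.
KERNELS = [
    np.array([[-1.0, 0.0, 1.0]]),                                   # 0 deg
    np.array([[-1.0], [0.0], [1.0]]),                               # 90 deg
    np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]), # 45 deg
    np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, -1.0]]), # 135 deg
]

def conv2_valid(img, k):
    """Plain 'valid' 2D correlation (no padding)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def directional_features(img, scales=(1, 2)):
    """Mean absolute response and energy per direction and per scale
    (scale implemented as simple subsampling)."""
    feats = []
    for s in scales:
        im = img[::s, ::s].astype(float)
        for k in KERNELS:
            r = conv2_valid(im, k)
            feats += [np.mean(np.abs(r)), np.mean(r ** 2)]
    return np.array(feats)
```

The resulting high-dimensional vector would then be reduced (KLPP in the paper) before the SVM stage.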

  15. An Adequate Approach to Image Retrieval Based on Local Level Feature Extraction

    Directory of Open Access Journals (Sweden)

    Sumaira Muhammad Hayat Khan

    2010-10-01

Full Text Available Image retrieval based on text annotation has become obsolete and is no longer interesting for scientists because of its high time complexity and low precision. Meanwhile, the increase in the amount of digital images has generated an excessive need for an accurate and efficient retrieval system. This paper proposes a content-based image retrieval technique at the local level incorporating all the rudimentary features. The image initially undergoes segmentation, and each segment is then directed to the feature extraction process. The proposed technique is based on the image's content, which primarily includes texture, shape and color. Besides these three basic features, Fourier descriptors (FD) and edge histogram descriptors are also calculated to enhance the feature extraction process by capturing information at the boundary. The performance of the proposed method is found to be quite adequate when compared with the results from one of the best local-level CBIR (Content Based Image Retrieval) techniques.
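Of the boundary features mentioned, Fourier descriptors are the most self-contained to sketch: the FFT magnitudes of a closed boundary, with the DC term dropped and the spectrum normalised, are invariant to translation, scale and rotation. A minimal numpy version (the descriptor count is arbitrary):

```python
import numpy as np

def fourier_descriptors(boundary, n_desc=8):
    """Invariant Fourier descriptors of a closed boundary given as
    complex points x + iy."""
    z = np.asarray(boundary, complex)
    F = np.fft.fft(z)
    F[0] = 0.0                                # drop DC -> translation invariance
    mags = np.abs(F)                          # magnitudes -> rotation invariance
    scale = mags[1] if mags[1] > 0 else 1.0
    return (mags / scale)[1:n_desc + 1]       # normalise -> scale invariance
```

For a circle the descriptor vector is (1, 0, 0, ...), and any translated, scaled or rotated copy yields the same vector.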

  16. Improving Protein Fold Recognition by Extracting Fold-specific Features from Predicted Residue-residue Contacts.

    Science.gov (United States)

    Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo

    2017-08-16

Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. Existing approaches to fold recognition mainly exploit features derived from alignments of the query protein against templates. These approaches have been shown to be successful for fold recognition at the family level, but usually fail at the superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition still remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at the superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using the deep convolutional neural network (DCNN) technique. Based on these fold-specific features, we calculated the similarity between the query protein and templates, and then assigned the query protein the fold type of the most similar template. DCNN has shown excellent performance in image feature extraction and image recognition; the rationale underlying the application of DCNN to fold recognition is that contact likelihood maps are essentially analogous to images, as they both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that, even using the extracted fold-specific features alone, our approach achieved a success rate comparable to the state-of-the-art approaches. When these features were further combined with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5%, and 78.8% at the family, superfamily, and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at the fold level, 6% higher at the superfamily level, and 1% higher at the family level. An independent assessment on the SCOP_TEST dataset showed

  17. Adaptive spectral window sizes for extraction of diagnostic features from optical spectra

    Science.gov (United States)

    Kan, Chih-Wen; Lee, Andy Y.; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2010-07-01

We present an approach to adaptively adjust spectral window sizes for optical spectra feature extraction. Previous studies extracted features from spectral windows of a fixed width. In our algorithm, piecewise linear regression is used to adaptively adjust the window sizes, finding the maximum window size over which a straight line still fits the spectrum reasonably well. This adaptive windowing technique ensures signal linearity within the defined windows; hence, it retains more diagnostic information while using fewer windows. The method was tested on a data set of diffuse reflectance spectra of oral mucosa lesions, with eight features extracted from each window. We performed classification using linear discriminant analysis with cross-validation. Using windowing techniques results in better classification performance than not using windowing. The area under the receiver-operating-characteristic curve for the windowing techniques was greater than for a non-windowing technique, both for normal versus mild dysplasia (MD) plus severe high-grade dysplasia or carcinoma (SD), denoted MD+SD, and for benign versus MD+SD. Although adaptive and fixed-size windowing perform similarly, adaptive windowing uses significantly fewer windows than fixed-size windowing (8 versus 16 windows per spectrum). Because adaptive windowing retains most diagnostic information while reducing the number of windows needed for feature extraction, our results suggest that it isolates unique diagnostic features in optical spectra.
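A greedy version of the adaptive windowing idea can be sketched as follows: grow each window while a straight-line fit to the spectrum keeps its coefficient of determination above a threshold, then start the next window. The growth policy and the R^2 threshold are our assumptions, not necessarily the authors' exact scheme:

```python
import numpy as np

def adaptive_windows(wavelengths, spectrum, r2_min=0.99, min_pts=4):
    """Greedy adaptive windowing: each window grows while a linear fit
    keeps R^2 >= r2_min. Returns half-open index windows [start, end)."""
    windows, start, n = [], 0, len(spectrum)
    while start < n:
        end = min(start + min_pts, n)
        while end < n:
            x, y = wavelengths[start:end + 1], spectrum[start:end + 1]
            slope, icpt = np.polyfit(x, y, 1)
            resid = y - (slope * x + icpt)
            ss_tot = np.sum((y - y.mean()) ** 2)
            r2 = 1.0 - np.sum(resid ** 2) / ss_tot if ss_tot > 0 else 1.0
            if r2 < r2_min:
                break       # adding this point would break linearity
            end += 1
        windows.append((start, end))
        start = end
    return windows
```

On a piecewise-linear spectrum the windows naturally align with the linear pieces, so features (e.g. slope, mean intensity) can be computed per window.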

  18. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas

    Science.gov (United States)

    Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui

    2017-01-01

In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of big datasets and GPUs. Aiming to gain good results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces; the proposed algorithm has achieved better results than some deep architectures. To extract more effective features, this paper first defines the salient areas on the faces and normalizes salient areas at the same location in different faces to the same size, so that more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient-area definition method that uses peak expression frames compared with neutral faces, and also proposes and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions; as a result, the salient areas found in different subjects are the same size. In addition, gamma correction is applied to the LBP features in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has achieved state-of-the-art performance on the CK+ database and the JAFFE database. PMID:28353671
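The LBP half of the fusion features is compact enough to sketch: each pixel is coded by comparing its eight neighbours against it, and the 256-bin code histogram is the feature. This is the basic single-radius LBP variant, which may differ from the exact variant used in the paper:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour LBP codes over the image interior and their
    normalised 256-bin histogram."""
    g = np.asarray(gray, float)
    c = g[1:-1, 1:-1]                       # interior (center) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)  # set bit if neighbour >= center
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```

One such histogram per salient area, concatenated with HOG features and reduced by PCA, would form the fusion feature vector.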

  19. Merging vector features extracted from imagery with GIS data using vector linking

    Science.gov (United States)

    Lewis, Robine

    1996-12-01

Development of vector feature extraction techniques, combined with large amounts of digital map data, can create a rich database for intelligence activities. Our ability to use these data depends on understanding the relationships between different features and between different representations of the same feature. Linking is the process of determining that two features in different layers represent the same object. Links between features in two or more layers can be used for co-registering vector data from multiple sources, automatic revision of linear data, feature attribute inheritance, or network traversal problems. One sample application uses linking to generate a database with features extracted from stereo imagery and attributes from existing DMA data sources. Time and personnel are limited resources; therefore, the linking process needs to be automated. Two data sets are used: pre-existing road data at 1:250 K scale (Set A) and stereo imagery used for extracting 1:50 K roads (Set B). Set A is richly attributed. We want the spatial accuracy of the roads from Set B and the attributes of the roads from Set A. Linking can match the two sets of roads. A new combined data set is created in several stages. Roads where a one-to-one linkage exists between A and B use the spatial data from Set B and the attributes from Set A. Roads that are unique to either set are added, with attributes retained for the roads from Set A. This paper discusses a new technique for automatic feature linking developed at GDE Systems Inc. and demonstrated in a prototype. The prototype uses the characteristics of a linear vector feature to identify the same feature from different sources and at different scales.

  20. A NOVEL SHAPE BASED FEATURE EXTRACTION TECHNIQUE FOR DIAGNOSIS OF LUNG DISEASES USING EVOLUTIONARY APPROACH

    OpenAIRE

    Bhuvaneswari, C.; Aruna, P.; Loganathan, D.

    2014-01-01

Lung diseases are among the most common diseases affecting the human community worldwide. When these diseases are not diagnosed, they may lead to serious problems and may even prove fatal. To assist the medical community, this study helps in detecting some lung diseases, specifically bronchitis and pneumonia, against normal lung images. In this paper, to detect the lung diseases, feature extraction is done by the proposed shape-based methods, and feature selection through the gen...

  1. Multistatic micro-doppler radar feature extraction for classification of unloaded/loaded micro-drones

    OpenAIRE

    Ritchie, Matthew; Fioranelli, Francesco; Borrion, Hervé; Griffiths, Hugh

    2017-01-01

    This paper presents the use of micro-Doppler signatures collected by a multistatic radar to detect and discriminate between micro-drones hovering and flying while carrying different payloads, which may be an indication of unusual or potentially hostile activities. Different features have been extracted and tested, namely features related to the Radar Cross Section of the micro-drones, as well as the Singular Value Decomposition (SVD) and centroid of the micro-Doppler signatures. In particular...

  2. Representation and Metrics Extraction from Feature Basis: An Object Oriented Approach

    Directory of Open Access Journals (Sweden)

    Fausto Neri da Silva Vanin

    2010-10-01

Full Text Available This tutorial presents an object-oriented approach to data reading and metrics extraction from feature bases. Structural issues about the bases are discussed first, and then Object Oriented Programming (OOP) is applied to model the main elements in this context. The implementation of the model is then discussed using C++ as the programming language. To validate the proposed model, we apply it to some feature bases from the University of California, Irvine (UCI) Machine Learning Repository.

  3. Managing the social impacts of the rapidly-expanding extractive industries in Greenland

    NARCIS (Netherlands)

    Hansen, Anne Merrild; Vanclay, Frank; Croal, Peter; Skjervedal, Anna Sofie Hurup

    2016-01-01

The recent rapid expansion of extractive industries in Greenland is causing both high hopes for the future and anxieties among the local population. In the Arctic context, even small projects carry risks of major social impacts at local and national scales, and have the potential to severely affect

  4. Managing the social impacts of the rapidly expanding extractive industries in Greenland

    DEFF Research Database (Denmark)

    Hansen, Anne Merrild; Vanclay, Frank; Croal, Peter

    2016-01-01

The recent rapid expansion of extractive industries in Greenland is causing both high hopes for the future and anxieties among the local population. In the Arctic context, even small projects carry risks of major social impacts at local and national scales, and have the potential to severely affect...

  5. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, yielding texture patterns (or repetitive patterns), and extracts texture features by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used to reduce the dimensionality of the high-dimensional feature vector that includes the extracted texture features, because a high-dimensional feature vector can degrade classification performance; this paper thus configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.

  6. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
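The peak-split feature described above is simple enough to state exactly in software form, which also shows why it is cheap to compute in hardware: one peak search and two running sums (the function name is ours):

```python
import numpy as np

def peak_area_features(spike):
    """Split a spike waveform at its peak sample and use the area
    (sum of absolute amplitudes) of each portion as a 2D feature."""
    spike = np.asarray(spike, float)
    p = int(np.argmax(np.abs(spike)))      # peak position
    left = np.sum(np.abs(spike[:p + 1]))   # area up to and including the peak
    right = np.sum(np.abs(spike[p + 1:]))  # area after the peak
    return np.array([left, right])
```

Spikes from different units tend to differ in rise/decay asymmetry, so the (left, right) pair already separates clusters while needing only adders and a comparator in silicon.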

  7. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  8. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

Full Text Available With the advent of technology and multimedia information, digital images are increasing very quickly, and various techniques are being developed to retrieve and search the digital information or data contained in images. Traditional text-based image retrieval is no longer adequate: it is time consuming, since it requires manual image annotation, and annotations differ from person to person. An alternative is the Content Based Image Retrieval (CBIR) system, which retrieves and searches images using their content rather than text, keywords, etc. A great deal of exploration has been carried out in the area of CBIR with various feature extraction techniques. Shape is a significant image feature, as it reflects human perception; moreover, shape is quite simple for the user to employ when defining an object in an image, compared with other features such as color and texture. Over and above that, no descriptor applied alone will give fruitful results; by combining a descriptor with an improved classifier, one can exploit the positive features of both. Therefore, an attempt is made to establish an algorithm for accurate shape feature extraction in CBIR. The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state-of-the-art techniques.

  9. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    Science.gov (United States)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

Automatic diagnosis of human diseases is mostly achieved through decision support systems, whose performance depends mainly on the selection of the most relevant features. This becomes harder when the dataset contains missing values for some features. Probabilistic Principal Component Analysis (PPCA) is well regarded for dealing with missing attribute values. This research presents a methodology that uses the results of medical tests as input, extracts a reduced-dimensional feature subset, and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection using PPCA, which yields projection vectors that capture the highest covariance; these projection vectors are used to reduce the feature dimension. The number of projection vectors to retain is selected through Parallel Analysis (PA). The reduced-dimensional feature subset is provided to radial basis function (RBF) kernel-based Support Vector Machines (SVM), which classify subjects into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity on three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved with the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
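The Parallel Analysis step, deciding how many projection vectors to keep, can be sketched in numpy: retain the components whose eigenvalues exceed a high percentile of eigenvalues obtained from random data of the same shape. The shuffle count, percentile and standardisation are conventional PA choices, not necessarily the paper's:

```python
import numpy as np

def parallel_analysis(X, n_shuffles=50, seed=0):
    """Number of principal components whose eigenvalues exceed the 95th
    percentile of eigenvalues from same-shape random data. Features are
    standardised first, as classical parallel analysis assumes."""
    rng = np.random.default_rng(seed)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    eig = np.linalg.eigvalsh(np.cov(Xs, rowvar=False))[::-1]  # descending
    rand = np.empty((n_shuffles, len(eig)))
    for i in range(n_shuffles):
        R = rng.standard_normal(X.shape)
        rand[i] = np.linalg.eigvalsh(np.cov(R, rowvar=False))[::-1]
    thresh = np.percentile(rand, 95, axis=0)
    return int(np.sum(eig > thresh))
```

Components below the random-data threshold are treated as noise; the retained projection vectors form the reduced feature subset fed to the classifier.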

  10. PERSONAL AUTHENTICATION USING PALMPRINT WITH SOBEL CODE, CANNY EDGE AND PHASE CONGRUENCY FEATURE EXTRACTION METHOD

    Directory of Open Access Journals (Sweden)

    Jyoti Malik

    2012-02-01

Full Text Available Palmprint recognition refers to recognizing a person on the basis of palmprint features. In this paper, we propose a palmprint-based biometric authentication method with improved accuracy, so as to make it a real-time palmprint authentication system. Several edge detection methods (directional operators, the wavelet transform, the Fourier transform, etc.) are available to extract line features from the palmprint. In this paper, Sobel code operators, Canny edge detection and phase congruency methods are applied to the palmprint image to extract palmprint features, which are stored in a palmprint feature vector. The corresponding feature vectors are matched using a sliding window with the Hamming distance similarity measure. We also propose a Min Max Threshold Range (MMTR) method that helps increase overall system accuracy by reducing the False Acceptance Rate (FAR): a person authenticated by the reference threshold is verified again by a second level of authentication using the MMTR method. Experimental results indicate that the MMTR method reduces the False Acceptance Rate drastically, and the accuracy improvement leads to the proposed real-time authentication system.
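The matching step, sliding-window comparison of binary feature codes under a Hamming-distance measure, can be sketched as follows; the shift range and the similarity convention (fraction of matching bits) are our illustrative choices:

```python
import numpy as np

def hamming_similarity(a, b):
    """Fraction of matching bits between two equal-shape binary codes."""
    return float(np.mean(np.asarray(a) == np.asarray(b)))

def sliding_match(query, template, max_shift=2):
    """Best Hamming similarity over small horizontal shifts of the query
    code, tolerating slight misalignment between palmprint images."""
    best = 0.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(query, s, axis=1)
        best = max(best, hamming_similarity(shifted, template))
    return best
```

A decision threshold on this similarity gives the first authentication level; a second range-based check (MMTR in the paper) would then re-verify borderline acceptances.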

  11. Microwave assisted extraction-solid phase extraction for high-efficient and rapid analysis of monosaccharides in plants.

    Science.gov (United States)

    Zhang, Ying; Li, Hai-Fang; Ma, Yuan; Jin, Yan; Kong, Guanghui; Lin, Jin-Ming

    2014-11-01

    Monosaccharides are the fundamental building blocks of saccharides, which are a common source of metabolic energy. An effective and simple method combining microwave-assisted extraction (MAE), solid-phase extraction (SPE) and high-performance liquid chromatography with a refractive index detector (HPLC-RID) was developed for rapid detection of monosaccharides in plants. MAE was applied to break down the plant cell structure and release the monosaccharides, while SPE was adopted to purify the extract before analysis. Finally, HPLC-RID with an amino column was employed to separate and analyze the monosaccharides. As a result, the extraction time was reduced to 17 min, nearly 85 times faster than Soxhlet extraction. The recoveries of arabinose, xylose, fructose and glucose were 85.01%, 87.79%, 103.17% and 101.24%, with excellent relative standard deviations (RSDs) of 1.94%, 1.13%, 0.60% and 1.67%, respectively. The proposed method proved efficient and time-saving, and was successfully applied to analyze monosaccharides in tobacco and tea. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Feature extraction for EEG-based brain-computer interfaces by wavelet packet best basis decomposition.

    Science.gov (United States)

    Yang, Bang-hua; Yan, Guo-zheng; Yan, Rong-guo; Wu, Ting

    2006-12-01

    A method based on wavelet packet best basis decomposition (WPBBD) is investigated for extracting features from electroencephalogram signals produced during motor imagery tasks in brain-computer interfaces. The method comprises three steps: (1) the original signals are decomposed by the wavelet packet transform (WPT), forming a wavelet packet library; (2) the best basis for classification is selected from the library; (3) the subband energies of the best basis are used as features. Three different motor imagery tasks are discriminated using these features. The WPBBD achieves 70.3% classification accuracy, 4.2 percentage points higher than the existing wavelet packet method.
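
    A minimal stand-in for the subband-energy features is sketched below using a full-depth Haar wavelet packet decomposition (the paper's best-basis selection is omitted; every terminal subband is kept). The wavelet, depth and normalization are assumptions.

```python
import numpy as np

def haar_wp_energies(x, level=3):
    """Full Haar wavelet-packet decomposition to `level`; returns the energy
    of each terminal subband, normalized to sum to 1 (a simple feature
    vector standing in for the best-basis energies described above)."""
    bands = [np.asarray(x, float)]
    for _ in range(level):
        nxt = []
        for b in bands:
            approx = (b[0::2] + b[1::2]) / np.sqrt(2)   # low-pass half
            detail = (b[0::2] - b[1::2]) / np.sqrt(2)   # high-pass half
            nxt.extend([approx, detail])
        bands = nxt
    e = np.array([np.sum(b ** 2) for b in bands])
    return e / e.sum()

t = np.arange(256) / 256.0
sig = np.sin(2 * np.pi * 8 * t)           # low-frequency tone
feats = haar_wp_energies(sig, level=3)    # 8 subband energies
```

    For this low-frequency test tone, the energy concentrates in the lowest subband, as expected.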

  14. Rapid and effective DNA extraction method with bead grinding for a large amount of fungal DNA.

    Science.gov (United States)

    Watanabe, M; Lee, K; Goto, K; Kumagai, S; Sugita-Konishi, Y; Hara-Kudo, Y

    2010-06-01

    To identify a rapid method for extracting a large amount of DNA from fungi associated with food hygiene, extraction methods were compared using fungal pellets formed rapidly in liquid media. Combinations of physical and chemical methods, as well as commercial kits, were evaluated with 3 species of yeast, 10 species of ascomycetous molds, and 4 species of zygomycetous molds. Bead grinding was the physical method; the chemical methods involved sodium dodecyl sulfate (SDS), cetyl trimethyl ammonium bromide (CTAB), and benzyl chloride, and two commercial kits were also tested. DNA quantity was calculated from UV absorbance at 260 nm, quality was determined from the ratio of UV absorbance at 260 and 280 nm, and gene amplifications and electrophoresis profiles of whole genomes were analyzed. Bead grinding with the SDS method was the most effective for DNA extraction from yeasts and ascomycetous molds, while bead grinding with the CTAB method was most effective for zygomycetous molds. Considering both groups of molds together, bead grinding with CTAB was the best approach, and because this combination is also relatively effective for yeasts, it can be used to extract a large amount of DNA from a wide range of fungi. These DNA extraction methods are useful for developing gene indexes to identify fungi with molecular techniques such as DNA fingerprinting.

  15. RAPID AND EFFICIENT METHOD FOR ENVIRONMENTAL DNA EXTRACTION AND PURIFICATION FROM SOIL

    Directory of Open Access Journals (Sweden)

    J. Hamedi

    2016-06-01

    Full Text Available A large proportion of the world's microbial population is unculturable, so extraction of total DNA from soil is usually a crucial step in studying these microorganisms. Humic acid is considered the main inhibitory agent in environmental DNA studies. Here, we introduce a rapid and efficient method for DNA extraction and purification from soil. The yield of DNA extraction by the presented method was 130 ng/µl, compared with 110, 90 and 50 ng/µl for the three conventional control methods of liquid nitrogen incursion, bead beating and sonication, respectively. A rapid and efficient one-step DNA purification method is also introduced in place of hazardous conventional phenol-chloroform methods. The humic acid removal rate of the introduced method was 95.8%, comparable with the 97% achieved by conventional gel extraction, while the DNA yields after purification were 84% and 73%, respectively. As a fast and reliable method, this approach could be useful in molecular ecology and metagenomics studies.

  16. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    Directory of Open Access Journals (Sweden)

    Hongqiang Li

    2016-10-01

    Full Text Available Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
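
    The paper's improved wavelet threshold method is not specified in this abstract, but a generic wavelet-threshold denoising step can be sketched with a Haar decomposition and soft thresholding of the detail coefficients (universal threshold with a MAD noise estimate); the wavelet, depth and threshold rule are all assumptions.

```python
import numpy as np

def haar_denoise(x, level=3):
    """Haar wavelet denoising: decompose, soft-threshold the detail
    coefficients (universal threshold, MAD noise estimate), reconstruct."""
    x = np.asarray(x, float)
    approx, details = x, []
    for _ in range(level):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        thr = np.sqrt(2 * np.log(len(x))) * np.median(np.abs(d)) / 0.6745
        details.append(np.sign(d) * np.maximum(np.abs(d) - thr, 0))
        approx = a
    for d in reversed(details):             # inverse transform
        rec = np.empty(2 * len(approx))
        rec[0::2] = (approx + d) / np.sqrt(2)
        rec[1::2] = (approx - d) / np.sqrt(2)
        approx = rec
    return approx

rng = np.random.default_rng(0)
t = np.arange(512) / 512
clean = np.sin(2 * np.pi * 4 * t)           # smooth "ECG-like" component
noisy = clean + 0.3 * rng.standard_normal(512)
den = haar_denoise(noisy)
```

    The denoised trace sits measurably closer to the clean signal than the noisy input does.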

  17. Aircraft micro-doppler feature extraction from high range resolution profiles

    CSIR Research Space (South Africa)

    Berndt, RJ

    2015-10-01

    Full Text Available The use of high range resolution measurements and the micro-Doppler effect produced by rotating or vibrating parts of a target has been well documented. This paper presents a technique for extracting features related to helicopter rotors...

  18. Computer-extracted Features Can Distinguish Noncancerous Confounding Disease from Prostatic Adenocarcinoma at Multiparametric MR Imaging

    OpenAIRE

    Litjens, Geert J. S.; Elliott, Robin; Shih, Natalie NC; Feldman, Michael D.; Kobus, Thiele; Hulsbergen-van de Kaa, Christina; Barentsz, Jelle O.; Henkjan J. Huisman; Madabhushi, Anant

    2015-01-01

    For each class of benign disease, we identified a unique set of computer-extracted MR imaging–derived features, such as a high b value for benign prostatic hyperplasia and focal appearance on dynamic contrast-enhanced images for atrophy, that could help improve the differential diagnosis of prostate cancer.

  19. A Survey of Neural Network Techniques for Feature Extraction from Text

    OpenAIRE

    John, Vineet

    2017-01-01

    This paper aims to catalyze the discussions about text feature extraction techniques using neural network architectures. The research questions discussed in the paper focus on the state-of-the-art neural network techniques that have proven to be useful tools for language processing, language generation, text classification and other computational linguistics tasks.

  20. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an

  1. Thermal feature extraction of servers in a datacenter using thermal image registration

    Science.gov (United States)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.

  2. Using active contour models for feature extraction in camera-based seam tracking of arc welding

    DEFF Research Database (Denmark)

    Liu, Jinchao; Fan, Zhun; Olsen, Søren

    2009-01-01

    . It is highly desirable to extract groove features closer to the arc and thus facilitate for a nearly-closed-loop control situation. On the other hand, for performing seam tracking and nearly-closed-loop control it is not necessary to obtain very detailed information about the molten pool area as long as some...

  3. Spectral and bispectral feature-extraction neural networks for texture classification

    Science.gov (United States)

    Kameyama, Keisuke; Kosugi, Yukio

    1997-10-01

    A neural network model (Kernel Modifying Neural Network: KM Net) specialized for image texture classification, which unifies the filtering kernels for feature extraction and the layered network classifier, will be introduced. The KM Net consists of a layer of convolution kernels that are constrained to be 2D Gabor filters to guarantee efficient spectral feature localization. The KM Net enables an automated feature extraction in multi-channel texture classification through simultaneous modification of the Gabor kernel parameters (central frequency and bandwidth) and the connection weights of the subsequent classifier layers by a backpropagation-based training rule. The capability of the model and its training rule was verified via segmentation of common texture mosaic images. In comparison with the conventional multi-channel filtering method which uses numerous filters to cover the spatial frequency domain, the proposed strategy can greatly reduce the computational cost both in feature extraction and classification. Since the adaptive Gabor filtering scheme is also applicable to band selection in moment spectra of higher orders, the network model was extended for adaptive bispectral filtering for extraction of the phase relation among the frequency components. The ability of this Bispectral KM Net was demonstrated in the discrimination of visually discriminable synthetic textures with identical local power spectral distributions.
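
    The constrained convolution kernels can be illustrated by generating a 2D Gabor filter and measuring a texture's mean response magnitude. The kernel parameterization and the toy stripe texture below are assumptions; the KM Net itself additionally learns the kernel parameters by backpropagation, which is not shown.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """2D Gabor filter: a Gaussian envelope modulating a complex sinusoid
    at spatial frequency `freq` (cycles/pixel) and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(2j * np.pi * freq * rot)
    return envelope * carrier

def gabor_feature(img, kernel):
    """Mean filter-response magnitude over the image (valid convolution)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = abs(np.sum(img[i:i + kh, j:j + kw] * np.conj(kernel)))
    return out.mean()

# Vertical stripes (period 4 along x) respond to a matching-orientation Gabor.
img = np.tile([1.0, 1.0, 0.0, 0.0], (16, 4))
k0 = gabor_kernel(7, freq=0.25, theta=0.0, sigma=2.0)
k90 = gabor_kernel(7, freq=0.25, theta=np.pi / 2, sigma=2.0)
f0, f90 = gabor_feature(img, k0), gabor_feature(img, k90)
```

    The correctly oriented filter responds far more strongly than the orthogonal one, which is exactly the spectral localization the KM Net exploits when tuning frequency and bandwidth.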

  4. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line

  5. Investigation of ICA algorithms for feature extraction of EEG signals in discrimination of Alzheimer disease

    OpenAIRE

    Solé-Casals, Jordi; Vialatte, François B.; Chen, Zhe; Cichocki, Andrej

    2008-01-01

    In this paper we present a quantitative comparison of different independent component analysis (ICA) algorithms in order to investigate their potential use in preprocessing (such as noise reduction and feature extraction) of electroencephalogram (EEG) data for early detection of Alzheimer's disease (AD) or discrimination between AD (or mild cognitive impairment, MCI) and age-matched control subjects.

  6. A New Feature Extraction Algorithm Based on Entropy Cloud Characteristics of Communication Signals

    Directory of Open Access Journals (Sweden)

    Jingchao Li

    2015-01-01

    Full Text Available Identifying communication signals in low-SNR environments has become more difficult as communication environments grow increasingly complex. Most of the relevant literature addresses signal recognition under stable SNR and is not applicable in time-varying SNR environments. To solve this problem, we propose a new feature extraction method based on entropy cloud characteristics of communication modulation signals. The proposed algorithm first extracts the Shannon entropy and index entropy characteristics of the signals, and then combines entropy theory and cloud model theory. Compared with traditional feature extraction methods, the proposed algorithm can further extract the instability distribution of the signals' entropy characteristics from the cloud model's digital characteristics in low-SNR environments, which significantly improves recognition. Numerical simulations show that the entropy cloud feature extraction algorithm achieves better signal recognition: even at an SNR of −11 dB, the recognition rate can still reach 100%.
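
    A sketch of the entropy-plus-cloud-model idea: compute the Shannon entropy of signal segments, then summarize the fluctuation of those entropy values with the cloud model's digital characteristics Ex, En and He via a simplified backward cloud generator. The estimator formulas are the standard one-order backward cloud expressions and are an assumption about what the paper uses; the index entropy feature is omitted.

```python
import numpy as np

def shannon_entropy(x, bins=16):
    """Shannon entropy of a signal's amplitude histogram (in bits)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log2(p))

def cloud_characteristics(values):
    """Simplified backward cloud generator: estimate the cloud model's
    digital characteristics Ex (expectation), En (entropy) and He
    (hyper-entropy) from a sample of entropy measurements."""
    values = np.asarray(values, float)
    ex = values.mean()
    en = np.sqrt(np.pi / 2) * np.mean(np.abs(values - ex))
    he = np.sqrt(max(values.var(ddof=1) - en ** 2, 0.0))
    return ex, en, he

rng = np.random.default_rng(0)
# Entropy of noisy signal segments fluctuates; the cloud model captures that spread.
entropies = [shannon_entropy(np.sin(np.arange(128) * 0.3)
                             + 0.3 * rng.standard_normal(128))
             for _ in range(20)]
ex, en, he = cloud_characteristics(entropies)
```

    (Ex, En, He) then serve as the instability-aware feature triple fed to the recognizer.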

  7. Feature extraction using adaptive multiwavelets and synthetic detection index for rotor fault diagnosis of rotating machinery

    Science.gov (United States)

    Lu, Na; Xiao, Zhihuai; Malik, O. P.

    2015-02-01

    State identification to diagnose the condition of rotating machinery is often converted to a classification problem of values of non-dimensional symptom parameters (NSPs). To improve the sensitivity of the NSPs to the changes in machine condition, a novel feature extraction method based on adaptive multiwavelets and the synthetic detection index (SDI) is proposed in this paper. Based on the SDI maximization principle, optimal multiwavelets are searched by genetic algorithms (GAs) from an adaptive multiwavelets library and used for extracting fault features from vibration signals. By the optimal multiwavelets, more sensitive NSPs can be extracted. To examine the effectiveness of the optimal multiwavelets, conventional methods are used for comparison study. The obtained NSPs are fed into K-means classifier to diagnose rotor faults. The results show that the proposed method can effectively improve the sensitivity of the NSPs and achieve a higher discrimination rate for rotor fault diagnosis than the conventional methods.
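
    The SDI itself is not defined in this abstract; a plausible sketch treats each NSP as yielding a per-feature detection index (mean separation over pooled spread between two machine states) and combines them into one score, here by RMS. Both the index form and the combination rule are assumptions.

```python
import numpy as np

def detection_index(a, b):
    """Detection index between two states of one symptom parameter:
    larger means the NSP separates the states better (assumed form)."""
    return abs(a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) + b.var(ddof=1))

def synthetic_detection_index(feats_a, feats_b):
    """Combine per-feature detection indices into one score (here: RMS),
    the quantity a GA would maximize when tuning the multiwavelet."""
    dis = [detection_index(fa, fb) for fa, fb in zip(feats_a, feats_b)]
    return float(np.sqrt(np.mean(np.square(dis))))

rng = np.random.default_rng(0)
# Two NSPs measured in the normal and faulty state (synthetic samples).
normal = [rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)]
faulty = [rng.normal(3.0, 1.0, 100), rng.normal(1.2, 1.0, 100)]
sdi = synthetic_detection_index(normal, faulty)
```

    The first synthetic NSP separates the states far better than the second, and the SDI aggregates that sensitivity into the single objective the GA search would maximize.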

  8. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    Science.gov (United States)

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied to extract regions of interest (ROIs). Then, 3D feature extraction was performed, including 3D connected component labeling, straightness and thickness calculation, determination of the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used; these were trained and tested via k-fold cross validation and their results compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves are given for all methods, and 100% detection sensitivity was reached by all methods except naive Bayes.
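
    The first 3D step, connected component labeling, can be sketched with a simple flood fill over a boolean voxel mask; 6-connectivity is an assumption.

```python
import numpy as np

def label_3d(mask):
    """3D connected-component labeling (6-connectivity) by flood fill,
    the first step of the 3D feature extraction described above."""
    labels = np.zeros_like(mask, dtype=int)
    current = 0
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for idx in zip(*np.nonzero(mask)):
        if labels[idx]:
            continue                      # voxel already assigned
        current += 1
        stack = [idx]
        labels[idx] = current
        while stack:
            z, y, x = stack.pop()
            for dz, dy, dx in neighbors:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    stack.append(n)
    return labels, current

mask = np.zeros((4, 4, 4), dtype=bool)
mask[0, 0, 0:2] = True          # component 1: two voxels
mask[2:4, 2, 2] = True          # component 2: two voxels
labels, n = label_3d(mask)
```

    Per-component measurements such as straightness and thickness would then be computed over each labeled voxel set.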

  9. The Rolling Bearing Fault Feature Extraction Based on the LMD and Envelope Demodulation

    Directory of Open Access Journals (Sweden)

    Jun Ma

    2015-01-01

    Full Text Available Since the operation of rolling bearings is a complex and nonstationary dynamic process, the common time- and frequency-domain characteristics of their vibration signals are submerged in noise; extracting fault features from the vibration signal is therefore the key to fault diagnosis. A fault feature extraction method for rolling bearings based on local mean decomposition (LMD) and envelope demodulation is proposed. First, the original vibration signal is decomposed by LMD into a series of product functions (PFs). Envelope demodulation analysis is then applied to the PF components. Finally, a Fourier transform is performed on the demodulated signals and the failure condition is judged from the dominant frequency of the spectrum. The results show that the proposed method correctly extracts the fault characteristics needed to diagnose faults.
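
    The envelope demodulation step can be sketched directly: for brevity the LMD stage is replaced by computing the analytic signal of the raw vibration via FFT (a Hilbert-transform stand-in), then taking the spectrum of the envelope and reading off its dominant frequency. The simulated fault signal is an assumption.

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope demodulation via the FFT-based analytic signal (a stand-in
    for demodulating the LMD product functions described above), followed
    by the spectrum of the mean-removed envelope."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic) - np.mean(np.abs(analytic))
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return freqs, spec

# Simulated fault: a 1 kHz resonance amplitude-modulated at 30 Hz.
fs = 4096
t = np.arange(4096) / fs
x = (1 + 0.8 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 1000 * t)
freqs, spec = envelope_spectrum(x, fs)
dominant = freqs[np.argmax(spec)]
```

    The dominant envelope frequency recovers the 30 Hz modulation, i.e. the simulated fault characteristic frequency, even though the raw spectrum is centred at 1 kHz.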

  10. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    Full Text Available After summarizing the advantages and disadvantages of current integration methods, a novel vibration signal integration method based on feature information extraction is proposed. The method takes full advantage of the self-adaptive filtering and waveform-correction properties of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals, and merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction. The values of these four indexes are combined into a feature vector; the characteristic components implicit in the vibration signal are then accurately extracted by Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid components such as trend items and noise, which plagues traditional methods, is effectively removed: the large cumulative error of traditional time-domain integration is overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integration methods, this method excels at removing noise while retaining useful feature information, and shows higher accuracy.
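
    The full feature-guided reconstruction is beyond a short sketch, but the core problem it addresses, drift from time-domain integration, can be shown with cumulative trapezoidal integration followed by linear detrending to suppress the trend item. This is a minimal stand-in, not the proposed method.

```python
import numpy as np

def integrate_time(acc, dt):
    """Cumulative trapezoidal integration with linear detrending to curb
    the drift (trend item) that plain time-domain integration accumulates."""
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2) * dt))
    t = np.arange(len(vel)) * dt
    a, b = np.polyfit(t, vel, 1)          # best-fit line = drift estimate
    return vel - (a * t + b)

# Acceleration is the exact derivative of cos(2*pi*5*t); its integral
# should recover a unit-amplitude 5 Hz velocity waveform.
fs = 1000
t = np.arange(2000) / fs
acc = -(2 * np.pi * 5) * np.sin(2 * np.pi * 5 * t)
vel = integrate_time(acc, 1 / fs)
```

    After detrending, the integrated velocity swings between roughly +1 and -1 as expected, instead of drifting away.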

  11. Features extraction of EMG signal using time domain analysis for arm rehabilitation device

    Science.gov (United States)

    Jali, Mohd Hafiz; Ibrahim, Iffah Masturah; Sulaima, Mohamad Fani; Bukhari, W. M.; Izzuddin, Tarmizi Ahmad; Nasir, Mohamad Na'im

    2015-05-01

    A rehabilitation device is used as an exoskeleton for people who have lost the use of a limb. An arm rehabilitation device may support the rehabilitation program of those suffering from arm disability. A device used to facilitate the tasks of such a program should improve the electrical activity in the motor unit and minimize the mental effort of the user. Electromyography (EMG) is a technique for analyzing the presence of electrical activity in musculoskeletal systems. In a disabled person, the electrical activity in the muscles fails to contract them for movement, and to prevent paralyzed muscles from developing spasticity, the movements should demand minimal mental effort. The rehabilitation device should therefore be based on analysis of the surface EMG signals of able-bodied people. The signals were collected according to the procedure of surface electromyography for non-invasive assessment of muscles (SENIAM) and used to set the movement patterns of the arm rehabilitation device. The filtered EMG signal was reduced to the time-domain features Standard Deviation (STD), Mean Absolute Value (MAV) and Root Mean Square (RMS). Feature extraction is important for obtaining a reduced feature vector that represents the signal with little error. To determine the best features for each movement, several extraction trials were run and the features with the smallest errors were selected. The resulting features can be used in future work on real-time rehabilitation control.
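
    The three time-domain features named above (STD, MAV, RMS) are straightforward to compute per analysis window; the window length below is an assumption.

```python
import numpy as np

def emg_features(x):
    """Time-domain EMG features: mean absolute value (MAV),
    root mean square (RMS) and standard deviation (STD)."""
    x = np.asarray(x, float)
    return {
        "MAV": np.mean(np.abs(x)),
        "RMS": np.sqrt(np.mean(x ** 2)),
        "STD": np.std(x, ddof=1),
    }

rng = np.random.default_rng(0)
window = rng.standard_normal(256) * 0.4   # one 256-sample analysis window
feats = emg_features(window)
```

    Note that RMS is always at least as large as MAV (by the Cauchy-Schwarz inequality), so the three features are correlated but not redundant.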

  12. Feature extraction and analysis of online reviews for the recommendation of books using opinion mining technique

    Directory of Open Access Journals (Sweden)

    Shahab Saquib Sohail

    2016-09-01

    Full Text Available Customer reviews play an important role in online purchasing decisions, as customers prefer to consult the opinions of other customers through online product reviews, blogs, social networking sites, and so on. These reviews reflect customer sentiment and carry substantial significance for products sold online, including electronic gadgets, movies, household appliances and books. Extracting the exact features of a product by analyzing review text, however, requires considerable effort and human intelligence. In this paper we analyze the online reviews available for books and extract book features from them using human intelligence, proposing a technique to categorize book features from customer reviews. The extracted features may help in deciding which books to recommend to readers, the ultimate goal being to meet users' requirements by providing them with their desired books. We evaluated our categorization method through the users themselves, surveying qualified persons about the books concerned. The survey results show high precision for the categorized features, which indicates that the proposed method is useful and appealing. The technique may help in recommending the best books to interested readers and may also be generalized to recommend other products.

  13. Fuzzy clustering-based feature extraction method for mental task classification.

    Science.gov (United States)

    Gupta, Akshansh; Kumar, Dhirendra

    2017-06-01

    A brain-computer interface (BCI) is a communication system by which a person can send messages or requests for basic necessities without using peripheral nerves and muscles. Response to mental tasks is one of the privileged areas of BCI investigation, with electroencephalography (EEG) signals used to represent brain activity. For any mental task classification model, performance depends on the features extracted from the EEG signal. In the literature, the wavelet transform and empirical mode decomposition are two popular feature extraction methods for analyzing signals with non-linear and non-stationary properties. Combining the virtues of both techniques, an adaptive filter-based decomposition method known as the empirical wavelet transform (EWT) was recently proposed. However, EWT does not work well for signals that overlap in the frequency and time domains and fails to provide good features for further classification. In this work, the fuzzy c-means algorithm is used alongside EWT to handle this problem, and experimental results show that EWT with fuzzy clustering outperforms EWT alone on the EEG-based mental task problem. Further, in mental task classification the ratio of samples to features is very small; to handle this, we also utilize three well-known multivariate feature selection methods, viz. Bhattacharyya distance (BD), ratio of scatter matrices (SR), and linear regression (LR). Experiments demonstrate that these methods considerably improve mental task classification performance. A ranking method and Friedman's statistical test are also performed to rank and compare the different combinations of feature extraction and feature selection methods, endorsing the efficacy of the proposed approach.
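
    The fuzzy c-means step can be sketched as the standard alternating update of soft memberships and cluster centers; the fuzzifier m, iteration count and toy data below are assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: returns soft memberships U (n x c) and centers V.
    Alternates the standard center and membership updates."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
    return U, V

# Two well-separated 2-D blobs; memberships should become near-crisp.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
U, V = fuzzy_cmeans(X, c=2)
```

    On overlapping EEG features the memberships stay genuinely soft, which is precisely what makes the combination with EWT useful.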

  14. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources.

    Science.gov (United States)

    Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2015-09-01

    Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. The method was applied to develop algorithms identifying patients with rheumatoid arthritis (RA), and coronary artery disease (CAD) cases among those with RA, from a large multi-institutional EHR. The areas under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared with AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure thus achieved comparable or slightly higher accuracy than those trained with expert-curated features, and the majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
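
    The classification stage can be sketched as a penalized logistic regression. The sketch below uses an L2 penalty fit by plain gradient descent on synthetic data; the paper's actual penalty and solver are not specified in this abstract and may differ.

```python
import numpy as np

def train_penalized_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    """L2-penalized logistic regression by gradient descent, standing in
    for the penalized model trained on NLP + codified features."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))              # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w        # penalized gradient
        w -= lr * grad
    return w

# Synthetic "features": only the first two carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
true_w = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.random(300)).astype(float)
w = train_penalized_logistic(X, y)
acc = np.mean(((X @ w) > 0) == (y == 1))
```

    The fitted weights recover the signs of the informative features, mirroring the paper's observation that most selected features remain interpretable.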

  15. Vibration Feature Extraction and Analysis for Fault Diagnosis of Rotating Machinery-A Literature Survey

    Directory of Open Access Journals (Sweden)

    Saleem Riaz

    2017-02-01

    Full Text Available Safety, reliability, efficiency and performance are the main concerns for rotating machinery, which is widely used across industrial applications. Condition monitoring and fault diagnosis of rotating machinery are very important, and often complex and labor-intensive; feature extraction techniques therefore play a vital role in reliable, effective and efficient diagnosis. Developing effective bearing fault diagnostic methods that use different fault features at different steps has thus become attractive, as bearings are widely used in medical applications, food processing, semiconductor and paper-making industries, and aircraft components. This paper reviews the variety of vibration feature extraction techniques applied to rotating machinery. The literature is generally classified into two main groups: frequency-domain and time-frequency analysis. However, the signal processing methods used for fault detection and diagnosis of rotating machines have their own limitations: in practice, the healthy and faulty components of the vibration signal are buried in background noise and other mechanical vibrations. This paper also reviews how advanced signal processing methods, such as empirical mode decomposition and interference cancellation algorithms, have been investigated and developed. Condition-based maintenance of rotating machines, which prevents failures, increases availability and reduces maintenance cost, is becoming necessary. A key problem in developing signal-processing-based fault detection and diagnosis algorithms is fault feature extraction or quantification; currently, vibration-signal-based techniques are the most widely used. Furthermore, researchers are widely interested in making automatic

  16. Graph theory for feature extraction and classification: a migraine pathology case study.

    Science.gov (United States)

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

    Graph theory is widely used to represent and characterize brain connectivity networks, as is machine learning for classifying groups based on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features or algorithms. This paper proposes an automatic tool that performs a standard process on Magnetic Resonance Imaging (MRI) images: pre-processing, building a graph per subject with different correlations and atlases, extracting relevant features according to the literature, and finally providing a set of machine learning algorithms that produce analyzable results for physicians or specialists. To verify the process, a set of images from prescription drug abusers and patients with migraine was used. The tool's proper functioning was thereby demonstrated, with success rates of 87% and 92% depending on the classifier used.
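
    The graph-construction step can be sketched by thresholding the absolute correlation matrix of regional time series into an adjacency matrix and reading off simple graph measures. The threshold and the toy time series are assumptions.

```python
import numpy as np

def connectivity_features(ts, thresh=0.5):
    """Brain-connectivity-style features: threshold the absolute
    correlation matrix of regional time series into an adjacency matrix,
    then read off simple graph measures (degree, edge density)."""
    corr = np.corrcoef(ts)
    adj = (np.abs(corr) > thresh).astype(int)
    np.fill_diagonal(adj, 0)              # no self-loops
    degree = adj.sum(axis=1)
    n = len(adj)
    density = adj.sum() / (n * (n - 1))
    return adj, degree, density

rng = np.random.default_rng(0)
base = rng.standard_normal(200)
# Regions 0-2 share a common driver; regions 3-5 are independent noise.
ts = np.vstack([base + 0.3 * rng.standard_normal(200) for _ in range(3)]
               + [rng.standard_normal(200) for _ in range(3)])
adj, degree, density = connectivity_features(ts)
```

    The coupled regions form a fully connected triangle while the independent ones stay isolated; degree and density vectors like these are the kind of graph features fed to the classifiers.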

  17. Rapid, room-temperature synthesis of amorphous selenium/protein composites using Capsicum annuum L extract

    Science.gov (United States)

    Li, Shikuo; Shen, Yuhua; Xie, Anjian; Yu, Xuerong; Zhang, Xiuzhen; Yang, Liangbao; Li, Chuanhao

    2007-10-01

    We describe the formation of amorphous selenium (α-Se)/protein composites using Capsicum annuum L extract to reduce selenium ions (SeO32-) at room temperature. The reaction occurs rapidly and the process is simple and easy to handle. A protein with a molecular weight of 30 kDa extracted from Capsicum annuum L not only reduces the SeO32- ions to Se0, but also controls the nucleation and growth of Se0, and even participates in the formation of the α-Se/protein composites. The size and shell thickness of the α-Se/protein composites increase with higher Capsicum annuum L extract concentration, and decrease with lower reaction solution pH. The results suggest that this eco-friendly, biogenic synthesis strategy could be widely used for preparing inorganic/organic biocomposites. In addition, we also discuss the possible mechanism of the reduction of SeO32- ions by Capsicum annuum L extract.

  18. Automatic Epileptic Seizure Detection in EEG Signals Using Multi-Domain Feature Extraction and Nonlinear Analysis

    Directory of Open Access Journals (Sweden)

    Lina Wang

    2017-05-01

    Full Text Available Epileptic seizure detection is commonly performed by expert clinicians through visual observation of electroencephalography (EEG) signals, which tends to be time consuming and sensitive to bias. Epileptic detection in most previous research suffers from low power and unsuitability for processing large datasets. Therefore, a computerized epileptic seizure detection method is highly desirable to eliminate these problems, expedite epilepsy research and aid medical professionals. In this work, we propose an automatic epilepsy diagnosis framework based on the combination of multi-domain feature extraction and nonlinear analysis of EEG signals. Firstly, EEG signals are pre-processed using the wavelet threshold method to remove artifacts. We then extract representative features in the time domain, frequency domain and time-frequency domain, together with nonlinear analysis features based on information theory. These features are further extracted in five frequency sub-bands of clinical interest, and the dimension of the original feature space is then reduced using both principal component analysis and analysis of variance. Furthermore, the optimal combination of the extracted features is identified and evaluated via different classifiers for epileptic seizure detection in EEG signals. Finally, the performance of the proposed method is investigated using a public EEG database from the University Hospital Bonn, Germany. Experimental results demonstrate that the proposed epileptic seizure detection method achieves a high average accuracy of 99.25%, indicating a powerful method for the detection and classification of epileptic seizures. The proposed seizure detection scheme is thus hoped to relieve expert clinicians of the burden of processing large amounts of data by visual observation and to speed up epilepsy diagnosis.
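The sub-band feature step above can be illustrated with a minimal frequency-domain sketch. The band edges are the conventional clinical ones and the simple periodogram estimator is an assumption here, not the paper's wavelet-based pipeline:

```python
import numpy as np

# Conventional clinical EEG sub-bands (Hz); exact edges vary in the literature.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x, fs):
    """Relative spectral power of a 1-D EEG epoch in each sub-band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2          # unnormalised periodogram
    total = psd.sum()
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() / total)
            for name, (lo, hi) in BANDS.items()}

fs = 256
t = np.arange(fs * 4) / fs                     # 4-second epoch
x = np.sin(2 * np.pi * 10 * t)                 # pure 10 Hz (alpha) tone
bp = band_powers(x, fs)
```

For the synthetic 10 Hz tone, essentially all of the relative power lands in the alpha band, as expected.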

  19. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    Science.gov (United States)

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.
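The sparse-plus-low-rank decomposition at the heart of the method can be sketched with the two standard proximal operators: soft-thresholding for the sparse part and singular-value shrinkage for the low-rank part. The simple alternating scheme below is a generic stand-in, not the authors' fast first-order algorithm, and the thresholds are illustrative:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: element-wise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Proximal operator of the nuclear norm: shrink the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def sparse_lowrank_split(W, tau_s=0.2, tau_l=0.2, n_iter=50):
    """Alternately fit W as L + S with L low-rank and S sparse."""
    L = np.zeros_like(W)
    S = np.zeros_like(W)
    for _ in range(n_iter):
        L = svd_shrink(W - S, tau_l)    # update low-rank component
        S = soft_threshold(W - L, tau_s)  # update sparse component
    return L, S

rng = np.random.default_rng(0)
W = np.outer(rng.standard_normal(20), rng.standard_normal(20))  # rank-1 part
W[3, 7] += 5.0                                                  # a sparse spike
L, S = sparse_lowrank_split(W)
```

In the paper's setting, W would be the matrix of per-task feature weights rather than a synthetic matrix.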

  20. Rapid DNA extraction protocol from stool, suitable for molecular genetic diagnosis of colon cancer.

    Science.gov (United States)

    Abbaszadegan, Mohammad Reza; Velayati, Arash; Tavasoli, Alireza; Dadkhah, Ezzat

    2007-07-01

    Colorectal cancer (CRC) is one of the most common forms of cancer in the world and is curable if diagnosed at an early stage. Analysis of DNA extracted from stool specimens is a recent advance in cancer diagnostics. Many protocols have been recommended for DNA extraction from stool, and almost all of them are difficult and time consuming, dealing with large amounts of toxic materials such as phenol. Their results vary with the sample collection method and further purification treatment. In this study, an easy and rapid method was optimized for isolating human DNA with reduced levels of the PCR inhibitors present in stool. Fecal samples were collected from 10 colonoscopy-negative adult volunteers and 10 patients with CRC. Stool (1 g) was extracted using a phenol/chloroform based protocol. The amplification of P53 exon 9 was examined to evaluate the extraction efficiency for human genomic targets, and the protocol's efficiency was compared with the Machiels et al. and Ito et al. protocols. The amplification of exon 9 of P53 from isolated fecal DNA was possible in most cases in 35 rounds of PCR using no additional purification procedure for elimination of the remaining inhibitors. A useful, rapid and easy protocol for routine extraction of DNA from stool was introduced and compared with two previous protocols.

  1. Annual Report on Radar Image Enhancement, Feature Extraction and Motion Compensation Using Joint Time-Frequency Techniques

    National Research Council Canada - National Science Library

    Hao, Ling

    2000-01-01

    This report summarizes the scientific progress on the research grant "Radar Image Enhancement, Feature Extraction, and Motion Compensation Using Joint Time-Frequency Techniques" during the period 15...

  2. The rapid determination of sideroxylonals in Eucalyptus foliage by extraction with sonication followed by HPLC.

    Science.gov (United States)

    Wallis, Ian R; Foley, William J

    2005-01-01

    A rapid method is described for the quantification of sideroxylonals, a group of formylated phloroglucinol compounds found in some eucalypts. Samples of dry, ground foliage were extracted by sonication with 20% methanol in acetonitrile, 7% water in acetonitrile or 40% water in acetonitrile and the extracts analysed by reversed phase HPLC. The extracts from the two water-acetonitrile extractions were stable for at least 48 h. All three sonication methods recovered more sideroxylonals than did the Soxhlet extraction with petroleum spirit and acetone. Adding 0.1% trifluoroacetic acid to the water-acetonitrile extraction solvents led to even higher recoveries of sideroxylonals. Soaking the sample in extracting solvent for 5 min recovered 70% of the sideroxylonals, whilst sonicating the suspension for 1 min recovered the remainder. The developed method involving sonication of the sample for 5 min in 7% water in acetonitrile with 0.1% trifluoroacetic acid is fast and requires minimal equipment and solvents compared with the traditional methods. With an autosampler it is possible to prepare and run 100 samples a day. More importantly, the technique is ideal for the analysis of small samples, e.g. individual leaves, which is essential when studying the evolutionary ecology of eucalypts.

  3. Anti-aliasing lifting scheme for mechanical vibration fault feature extraction

    Science.gov (United States)

    Bao, Wen; Zhou, Rui; Yang, Jianguo; Yu, Daren; Li, Ning

    2009-07-01

    A troublesome problem in the application of the wavelet transform to mechanical vibration fault feature extraction is frequency aliasing. In this paper, an anti-aliasing lifting scheme is proposed to solve this problem. With this method, the input signal is first transformed by a redundant lifting scheme to avoid the aliasing caused by split and merge operations. Then the resultant coefficients and their single-subband reconstructed signals are further processed to remove the aliasing caused by the non-ideal frequency response of the lifting filters, based on the fast Fourier transform (FFT) technique. Because the aliasing in each subband signal is eliminated, the signal-to-noise ratio (SNR) is improved. The anti-aliasing lifting scheme is applied to analyze a practical vibration signal measured from a faulty ball bearing, and the test results confirm that the proposed method is effective for extracting weak fault features from a complex background. The proposed method is also applied to the fault diagnosis of valve trains in different working conditions on a gasoline engine. The experimental results show that using the features extracted from the anti-aliasing lifting scheme for classification can obtain a higher accuracy than using those extracted from the lifting scheme and the redundant lifting scheme.
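The FFT-based removal of out-of-band content that the scheme relies on can be sketched as an ideal band-pass in the frequency domain: transform, zero the bins outside the subband of interest, and transform back. The signal and band edges below are illustrative:

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Ideal band-pass via the FFT: zero all bins outside [f_lo, f_hi].
    This mimics removing out-of-band (aliased) content left behind by
    non-ideal wavelet/lifting filters."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 300 * t)  # in-band + out-of-band
y = fft_bandpass(x, fs, 20, 100)                              # keep only the 50 Hz tone
```

Since both tones fall on exact FFT bins here, the filtered output matches the pure 50 Hz component to numerical precision.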

  4. Motor Imagery signal Classification for BCI System Using Empirical Mode Decomposition and Bandpower Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dalila Trad

    2016-06-01

    Full Text Available The idea that brain activity could be used as a communication channel has developed rapidly. Electroencephalography (EEG) is the most common technique for measuring brain activity on the scalp and in real time. In this study we examine the use of EEG signals in a Brain Computer Interface (BCI). The approach combines Empirical Mode Decomposition (EMD) and band power (BP) features extracted from EEG signals in order to classify motor imagery (MI). This new feature extraction approach is intended for the non-stationary and non-linear characteristics of MI EEG. The EMD method decomposes the EEG signal into a set of stationary time series called Intrinsic Mode Functions (IMFs). These IMFs are analyzed with band power to detect the characteristics of the sensorimotor rhythms (mu and beta) when a subject imagines a left or right hand movement. Finally, the data are reconstructed from the specific IMFs and band power is applied to the new database. Once the new feature vector is built, the classification of MI is performed using two types of classifiers: generative and discriminant. The results show that EMD allows the most reliable features to be extracted from EEG and that the classification rate obtained is higher than using the direct BP approach alone. Such a system is a promising communication channel for people suffering from severe paralysis, for instance people with myopathic diseases or muscular dystrophy (MD), helping them move a joystick in a desired direction corresponding to a specific motor imagery.

  5. Intelligibility Evaluation of Pathological Speech through Multigranularity Feature Extraction and Optimization.

    Science.gov (United States)

    Fang, Chunying; Li, Haifeng; Ma, Lin; Zhang, Mancai

    2017-01-01

    Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, while automatic evaluation of speech intelligibility is difficult because such speech is usually nonstationary and mutational. In this paper, we carry out an independent innovation in feature extraction and reduction, describing a multigranularity combined feature scheme which is optimized by a hierarchical visual method. A novel method of generating the feature set based on the S-transform and chaotic analysis is proposed. The set comprises basic acoustic features (BAFS, 430), local spectral characteristics in the form of Mel S-transform cepstrum coefficients (MSCC, 84), and chaotic features (12). Finally, radar charts and the F-score are used to optimize the features by hierarchical visual fusion. The feature set could be reduced from 526 to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features with a support vector machine (SVM) achieve the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method is thus shown to be effective and reliable for pathological speech intelligibility evaluation.
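The F-score ranking mentioned above is a standard two-class feature-selection criterion: for each feature it compares the between-class spread of the class means with the within-class variances. A minimal sketch on synthetic data (the data and dimensions are illustrative, not the paper's corpora):

```python
import numpy as np

def f_scores(X_pos, X_neg):
    """Fisher-style F-score per feature for a two-class problem.
    Larger scores indicate better separation between the classes."""
    mean_all = np.vstack([X_pos, X_neg]).mean(axis=0)
    mp, mn = X_pos.mean(axis=0), X_neg.mean(axis=0)
    num = (mp - mean_all) ** 2 + (mn - mean_all) ** 2   # between-class spread
    den = X_pos.var(axis=0, ddof=1) + X_neg.var(axis=0, ddof=1)  # within-class
    return num / den

rng = np.random.default_rng(1)
X_pos = rng.normal(0.0, 1.0, (50, 3))
X_neg = rng.normal(0.0, 1.0, (50, 3))
X_pos[:, 0] += 3.0            # make feature 0 strongly discriminative
scores = f_scores(X_pos, X_neg)
```

Keeping the top-scoring features is then a simple `argsort` over `scores`, which is how a 526-dimensional set can be cut down to a compact subset.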

  6. Manifold Learning with Self-Organizing Mapping for Feature Extraction of Nonlinear Faults in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Lin Liang

    2015-01-01

    Full Text Available A new method for automatically extracting low-dimensional features with self-organizing mapping on a manifold is proposed for the detection of rotating-machinery nonlinear faults (such as rubbing or pedestal looseness). In the phase space reconstructed from a single vibration signal, a self-organizing map (SOM) with an expectation-maximization iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment algorithm is adopted to compress the high-dimensional phase space into a low-dimensional feature space. The proposed method takes advantage of manifold learning for low-dimensional feature extraction and of the adaptive neighborhood construction of the SOM, and can extract the intrinsic fault features of interest in a two-dimensional projection space. To evaluate the performance of the proposed method, the Lorenz system was simulated and data from rotating machinery with nonlinear faults were obtained for test purposes. Compared with holospectrum approaches, the results reveal that the proposed method is superior in identifying faults and effective for rotating machinery condition monitoring.

  7. Application in Feature Extraction of AE Signal for Rolling Bearing in EEMD and Cloud Similarity Measurement

    Directory of Open Access Journals (Sweden)

    Long Han

    2015-01-01

    Full Text Available Due to the powerful de-noising ability of the EEMD algorithm, it is often applied to feature extraction from fault signals of rolling bearings. However, the correct selection of the sensitive IMFs after decomposition directly influences the correctness of fault feature extraction. To solve this problem, this paper first proposes a new method for selecting sensitive IMFs based on Cloud Similarity Measurement. Comparing this method with the traditional mutual information method in a simulation experiment shows that the proposed method overcomes the misjudgment of the traditional method and has higher accuracy. In the experiments, AE signals of the inner ring of a rolling bearing in normal, damage and fracture fault states were collected as samples and decomposed by the EEMD algorithm, and Cloud Similarity Measurement was used to select the sensitive IMFs that reflect the fault features. Finally, the Multivariate Multiscale Entropy (MME) of the sensitive IMFs is taken as the eigenvalue of the original signal, which is then classified by an SVM to determine the fault type exactly. The experimental results show that selecting sensitive IMFs based on Cloud Similarity Measurement is effective and helps to improve the accuracy of fault diagnosis and feature extraction.

  8. Topologically Ordered Feature Extraction Based on Sparse Group Restricted Boltzmann Machines

    Directory of Open Access Journals (Sweden)

    Zhong Chen

    2015-01-01

    Full Text Available How to extract topologically ordered features efficiently from high-dimensional data is an important problem in unsupervised feature learning for deep learning. To address this problem, we propose a new type of regularization for Restricted Boltzmann Machines (RBMs). By adding two extra terms to the log-likelihood function to penalize the group weights and topologically ordered factors, this type of regularization extracts topologically ordered features based on sparse group Restricted Boltzmann Machines (SGRBMs). It encourages an RBM to learn a much smoother probability distribution because its formulation turns out to be a combination of group weight-decay and topologically ordered factor regularizations. We apply the proposed regularization scheme to datasets of natural images and of Flying Apsara images in the Dunhuang Grotto Murals from four different historical periods. The experimental results demonstrate that the combination of these two extra terms in the log-likelihood function helps to extract more discriminative features with much sparser and more aggregative hidden activation probabilities.

  9. A new feature extraction framework based on wavelets for breast cancer diagnosis.

    Science.gov (United States)

    Ergin, Semih; Kilinc, Onur

    2014-08-01

    This paper investigates a pattern recognition framework for determining and classifying breast cancer cases. Initially, a two-class separation study classifying normal and abnormal (cancerous) breast tissues is carried out. The Histogram of Oriented Gradients (HOG), Dense Scale Invariant Feature Transform (DSIFT), and Local Configuration Pattern (LCP) methods are used to extract rotation- and scale-invariant features for all tissue patches. Classification is performed with Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Decision Tree, and Fisher Linear Discriminant Analysis (FLDA) classifiers via 10-fold cross validation. Then, a three-class study (normal, benign, and malignant cancerous cases) is carried out using procedures similar to the two-class case; however, the attained classification accuracies are not sufficiently satisfactory. Therefore, a new feature extraction framework is proposed. The feature vectors are again extracted with this new framework, and more satisfactory results are obtained: the new framework achieves a remarkable increase in recognition performance for the three-class study. Copyright © 2014 Elsevier Ltd. All rights reserved.
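The gradient-orientation-histogram idea behind HOG can be sketched in a few lines. This deliberately simplified version (no block normalization, non-overlapping cells) is for illustration only and is not the exact descriptor used in the paper:

```python
import numpy as np

def hog_like(img, n_bins=9, cell=8):
    """A simplified HOG-style descriptor: gradient-orientation
    histograms over non-overlapping cells, globally L2-normalised."""
    gy, gx = np.gradient(img.astype(float))        # vertical, horizontal gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats).astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

patch = np.tile(np.arange(16.0), (16, 1))   # horizontal ramp: all-horizontal gradient
desc = hog_like(patch)
```

For the 16x16 patch and 8-pixel cells this yields 4 cells x 9 bins = 36 features, with all the weight in the 0-degree orientation bin, as expected for a horizontal ramp.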

  10. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). The idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize emotional state. In order to evaluate the effectiveness of the proposed emotion recognition approach in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification power for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual expressions beyond what pitch and formant tracks convey. In addition, de-noising in 2-D images is more easily accomplished than de-noising in 1-D speech.
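Laws' masks are classical separable texture filters built from short 1-D kernels (Level, Edge, Spot, and others) combined by outer products. A minimal sketch of computing texture-energy features from an image, using three of the standard masks and mean absolute response as the energy measure (the mask subset and energy statistic are illustrative choices):

```python
import numpy as np

# Laws' 1-D kernels: Level, Edge, Spot
L5 = np.array([1, 4, 6, 4, 1], float)
E5 = np.array([-1, -2, 0, 2, 1], float)
S5 = np.array([-1, 0, 2, 0, -1], float)

def conv_sep(img, k_vert, k_horz):
    """Separable 2-D convolution: k_horz along each row, k_vert along each column."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k_horz, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k_vert, mode="same"), 0, tmp)

def laws_energy(img):
    """Mean absolute response of a few Laws' masks, used as texture features."""
    img = img - img.mean()                     # remove illumination offset
    masks = {"E5L5": (E5, L5), "S5L5": (S5, L5), "E5S5": (E5, S5)}
    return {name: float(np.abs(conv_sep(img, kv, kh)).mean())
            for name, (kv, kh) in masks.items()}

rng = np.random.default_rng(2)
feats = laws_energy(rng.standard_normal((32, 32)))
```

In the paper's setting the input image would be the contrast-enhanced spectrogram rather than random noise.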

  11. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    Science.gov (United States)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features within proximity to the roads as input for evaluating and prioritizing new or improved road projects. The information needed for a robust evaluation of road projects includes the road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road networks together with estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  12. ROAD AND ROADSIDE FEATURE EXTRACTION USING IMAGERY AND LIDAR DATA FOR TRANSPORTATION OPERATION

    Directory of Open Access Journals (Sweden)

    S. Ural

    2015-03-01

    Full Text Available Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features within proximity to the roads as input for evaluating and prioritizing new or improved road projects. The information needed for a robust evaluation of road projects includes the road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road networks together with estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  13. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    Full Text Available Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. Elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the “Urban Classification and 3D Building Reconstruction” project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.

  14. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    Science.gov (United States)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra and multiscale entropy, is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion recognition results.

  15. Extraction of DNA by magnetic ionic liquids: tunable solvents for rapid and selective DNA analysis.

    Science.gov (United States)

    Clark, Kevin D; Nacham, Omprakash; Yu, Honglian; Li, Tianhao; Yamsek, Melissa M; Ronning, Donald R; Anderson, Jared L

    2015-02-03

    DNA extraction represents a significant bottleneck in nucleic acid analysis. In this study, hydrophobic magnetic ionic liquids (MILs) were synthesized and employed as solvents for the rapid and efficient extraction of DNA from aqueous solution. The DNA-enriched microdroplets were manipulated by application of a magnetic field. The three MILs examined in this study exhibited unique DNA extraction capabilities when applied toward a variety of DNA samples and matrices. High extraction efficiencies were obtained for smaller single-stranded and double-stranded DNA using the benzyltrioctylammonium bromotrichloroferrate(III) ([(C8)3BnN(+)][FeCl3Br(-)]) MIL, while the dicationic 1,12-di(3-hexadecylbenzimidazolium)dodecane bis[(trifluoromethyl)sulfonyl]imide bromotrichloroferrate(III) ([(C16BnIM)2C12(2+)][NTf2(-), FeCl3Br(-)]) MIL produced higher extraction efficiencies for larger DNA molecules. The MIL-based method was also employed for the extraction of DNA from a complex matrix containing albumin, revealing a competitive extraction behavior for the trihexyl(tetradecyl)phosphonium tetrachloroferrate(III) ([P6,6,6,14(+)][FeCl4(-)]) MIL in contrast to the [(C8)3BnN(+)][FeCl3Br(-)] MIL, which resulted in significantly less coextraction of albumin. The MIL-DNA method was employed for the extraction of plasmid DNA from bacterial cell lysate. DNA of sufficient quality and quantity for polymerase chain reaction (PCR) amplification was recovered from the MIL extraction phase, demonstrating the feasibility of MIL-based DNA sample preparation prior to downstream analysis.

  16. Land Cover Classification of Landsat Data with Phenological Features Extracted from Time Series MODIS NDVI Data

    Directory of Open Access Journals (Sweden)

    Kun Jia

    2014-11-01

    Full Text Available Temporal features are important for improving land cover classification accuracy using remote sensing data. This study investigated the efficacy of phenological features extracted from time series MODIS Normalized Difference Vegetation Index (NDVI) data in improving the land cover classification accuracy of Landsat data. The MODIS NDVI data were first fused with Landsat data via the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to obtain NDVI data at the Landsat spatial resolution. Next, phenological features, including the beginning and ending dates of the growing season, the length of the growing season, seasonal amplitude, and the maximum fitted NDVI value, were extracted from the fused time series NDVI data using the TIMESAT tool. The extracted data were integrated with the spectral data of the Landsat imagery to improve classification accuracy using a maximum likelihood classifier (MLC) and a support vector machine (SVM) classifier. The results indicated that phenological features had a statistically significant effect on improving the land cover classification accuracy of single-date Landsat data (an approximately 3% increase in overall classification accuracy), especially for vegetation type discrimination. However, the phenological features showed no improvement over simple statistical measures (the maximum, minimum, mean, and standard deviation of the time series NDVI dataset), especially for human-managed vegetation types. Regarding the classifiers, SVM achieved better classification accuracy than the traditional MLC classifier, but the improvement obtained by the advanced classifier was smaller than that achieved by involving the temporally derived features in land cover classification.
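The phenological metrics listed above (season start and end, length, amplitude, maximum) can be sketched for a single pixel with a simple amplitude-threshold definition of the growing season. The threshold fraction is illustrative; TIMESAT supports several alternative season definitions and fits a smooth curve first:

```python
import numpy as np

def phenology_metrics(ndvi, frac=0.2):
    """Season metrics from one pixel's annual NDVI time series.
    Start/end of season are where NDVI first/last exceeds the base
    value plus `frac` of the seasonal amplitude (a TIMESAT-style
    threshold definition; `frac` is an illustrative choice)."""
    base, peak = ndvi.min(), ndvi.max()
    amplitude = peak - base
    thresh = base + frac * amplitude
    above = np.where(ndvi > thresh)[0]         # days above the season threshold
    start, end = above[0], above[-1]
    return {"start": int(start), "end": int(end),
            "length": int(end - start), "amplitude": float(amplitude),
            "max_ndvi": float(peak)}

t = np.arange(365)
ndvi = 0.2 + 0.5 * np.exp(-((t - 180) / 40.0) ** 2)   # synthetic growing season
m = phenology_metrics(ndvi)
```

Each metric then becomes one extra band stacked onto the Landsat spectral bands before classification.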

  17. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    Science.gov (United States)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case on a gearbox, and the results confirm the improved accuracy of the running state identification.
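Part of the mixed-domain feature set can be sketched directly: common statistical features plus least-squares AR coefficients for one signal frame. The entropy and WPD energy terms are omitted for brevity, and the particular statistics chosen here are illustrative, not the paper's exact list:

```python
import numpy as np

def ar_coeffs(x, order=4):
    """Least-squares AR coefficients: x[t] is modelled as sum_k a[k]*x[t-k-1]."""
    N = len(x)
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def mixed_domain_features(x, order=4):
    """Simple statistical features plus AR coefficients for one vibration frame."""
    std = x.std()
    stats = np.array([
        x.mean(), std,
        np.sqrt(np.mean(x ** 2)),                               # RMS
        np.abs(x).max(),                                        # peak
        ((x - x.mean()) ** 3).mean() / (std ** 3 + 1e-12),      # skewness
    ])
    return np.concatenate([stats, ar_coeffs(x, order)])

# Synthetic AR(1) vibration-like frame: x[t] = 0.9 x[t-1] + noise
rng = np.random.default_rng(3)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
f = mixed_domain_features(x, order=1)
```

In the paper's pipeline such vectors, computed per WPD sub-band, would be concatenated and passed to DSS-LTSA for fusion and dimension reduction.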

  18. Nonparametric Single-Trial EEG Feature Extraction and Classification of Driver's Cognitive Responses

    Science.gov (United States)

    Lin, Chin-Teng; Lin, Ken-Li; Ko, Li-Wei; Liang, Sheng-Fu; Kuo, Bor-Chen; Chung, I.-Fang

    2008-12-01

    We proposed an electroencephalographic (EEG) signal analysis approach to investigate the driver's cognitive response to traffic-light experiments in a virtual-reality-(VR-) based simulated driving environment. EEG signals are digitally sampled and then transformed by three different feature extraction methods, including nonparametric weighted feature extraction (NWFE), principal component analysis (PCA), and linear discriminant analysis (LDA), which were also used to reduce the feature dimension and project the measured EEG signals to a feature space spanned by their eigenvectors. After that, the mapped data could be classified with fewer features, and their classification results were compared by utilizing two different classifiers: k nearest neighbor classification (KNNC) and the naive Bayes classifier (NBC). Experimental data were collected from 6 subjects, and the results show that NWFE+NBC gives the best classification accuracy, ranging from 71%∼77%, which is over 10%∼24% higher than LDA+KNN1. It also demonstrates the feasibility of detecting and analyzing single-trial EEG signals that represent operators' cognitive states and responses to task events.

  19. Nonparametric Single-Trial EEG Feature Extraction and Classification of Driver's Cognitive Responses

    Directory of Open Access Journals (Sweden)

    I-Fang Chung

    2008-05-01

    Full Text Available We proposed an electroencephalographic (EEG) signal analysis approach to investigate the driver's cognitive response to traffic-light experiments in a virtual-reality-(VR-) based simulated driving environment. EEG signals are digitally sampled and then transformed by three different feature extraction methods, including nonparametric weighted feature extraction (NWFE), principal component analysis (PCA), and linear discriminant analysis (LDA), which were also used to reduce the feature dimension and project the measured EEG signals to a feature space spanned by their eigenvectors. After that, the mapped data could be classified with fewer features, and their classification results were compared by utilizing two different classifiers: k nearest neighbor classification (KNNC) and the naive Bayes classifier (NBC). Experimental data were collected from 6 subjects, and the results show that NWFE+NBC gives the best classification accuracy, ranging from 71%∼77%, which is over 10%∼24% higher than LDA+KNN1. It also demonstrates the feasibility of detecting and analyzing single-trial EEG signals that represent operators' cognitive states and responses to task events.
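
    The dimensionality-reduction and classification steps can be sketched as follows. This is a from-scratch PCA projection plus a 1-nearest-neighbour classifier (KNN1), not the authors' NWFE implementation:

    ```python
    import numpy as np

    def pca_project(X, n_components=2):
        """Project rows of X onto the top principal components (the
        eigenvectors of the data covariance), via SVD of centred data."""
        Xc = X - X.mean(axis=0)
        # Right singular vectors of the centred data are the principal axes
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T

    def knn1_predict(train_X, train_y, test_X):
        """1-nearest-neighbour classification (KNNC with k = 1)."""
        d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
        return train_y[np.argmin(d, axis=1)]
    ```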

  20. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  1. Fatty acids are rapidly delivered to and extracted from membranes by methyl-beta-cyclodextrin.

    Science.gov (United States)

    Brunaldi, Kellen; Huang, Nasi; Hamilton, James A

    2010-01-01

    We performed detailed biophysical studies of transfer of long-chain fatty acids (FAs) from methyl-beta-CD (MBCD) to model membranes (egg-PC vesicles) and cells and the extraction of FA from membranes by MBCD. We used i) fluorescein phosphatidylethanolamine to detect transfer of FA anions arriving in the outer membrane leaflet; ii) entrapped pH dyes to measure pH changes after FA diffusion (flip-flop) across the lipid bilayer; and iii) soluble fluorescent-labeled FA binding protein to measure the concentration of unbound FA in water. FA dissociated from MBCD, bound to the membrane, and underwent flip-flop within milliseconds. In the presence of vesicles, MBCD maintained the aqueous concentration of unbound FA at low levels comparable to those measured with albumin. In studies with cells, addition of oleic acid (OA) complexed with MBCD yielded rapid (seconds) dose-dependent OA transport into 3T3-L1 preadipocytes and HepG2 cells. MBCD extracted OA from cells and model membranes rapidly at concentrations exceeding those required for OA delivery but much lower than concentrations commonly used for extracting cholesterol. Compared with albumin, MBCD can transfer its entire FA load and is less likely to extract cell nutrients and to introduce impurities.

  2. A Rapid and Economical Method for Efficient DNA Extraction from Diverse Soils Suitable for Metagenomic Applications.

    Directory of Open Access Journals (Sweden)

    Selvaraju Gayathri Devi

    Full Text Available A rapid, cost-effective method of metagenomic DNA extraction from soil is a useful tool for environmental microbiology. The present work describes an improved method of DNA extraction, namely the "powdered glass method", from diverse soils. The method involves the use of sterile glass powder for cell lysis, followed by the addition of 1% powdered activated charcoal (PAC) as a purifying agent to remove humic substances. The method yielded substantial DNA (5.87 ± 0.04 μg/g of soil) with high purity (A260/280: 1.76 ± 0.05) and reduced humic substances (A340: 0.047 ± 0.03). The quality of the extracted DNA was compared against five different methods based on 16S rDNA PCR amplification and BamHI digestion, and validated using quantitative PCR. The digested DNA was used for a metagenomic library construction with a transformation efficiency of 4 × 10⁶ CFU mL⁻¹. Besides providing rapid, efficient and economical extraction of metagenomic DNA from diverse soils, this method's applicability is also demonstrated for cultivated organisms (Gram-positive B. subtilis NRRL-B-201, Gram-negative E. coli MTCC40, and a microalga, C. sorokiniana UTEX#1666).

  3. A Rapid and Economical Method for Efficient DNA Extraction from Diverse Soils Suitable for Metagenomic Applications.

    Science.gov (United States)

    Devi, Selvaraju Gayathri; Fathima, Anwar Aliya; Radha, Sudhakar; Arunraj, Rex; Curtis, Wayne R; Ramya, Mohandass

    2015-01-01

    A rapid, cost-effective method of metagenomic DNA extraction from soil is a useful tool for environmental microbiology. The present work describes an improved method of DNA extraction, namely the "powdered glass method", from diverse soils. The method involves the use of sterile glass powder for cell lysis, followed by the addition of 1% powdered activated charcoal (PAC) as a purifying agent to remove humic substances. The method yielded substantial DNA (5.87 ± 0.04 μg/g of soil) with high purity (A260/280: 1.76 ± 0.05) and reduced humic substances (A340: 0.047 ± 0.03). The quality of the extracted DNA was compared against five different methods based on 16S rDNA PCR amplification and BamHI digestion, and validated using quantitative PCR. The digested DNA was used for a metagenomic library construction with a transformation efficiency of 4 × 10⁶ CFU mL⁻¹. Besides providing rapid, efficient and economical extraction of metagenomic DNA from diverse soils, this method's applicability is also demonstrated for cultivated organisms (Gram-positive B. subtilis NRRL-B-201, Gram-negative E. coli MTCC40, and a microalga, C. sorokiniana UTEX#1666).

  4. A Rapid and Economical Method for Efficient DNA Extraction from Diverse Soils Suitable for Metagenomic Applications

    Science.gov (United States)

    Devi, Selvaraju Gayathri; Fathima, Anwar Aliya; Radha, Sudhakar; Arunraj, Rex; Curtis, Wayne R.; Ramya, Mohandass

    2015-01-01

    A rapid, cost-effective method of metagenomic DNA extraction from soil is a useful tool for environmental microbiology. The present work describes an improved method of DNA extraction, namely the “powdered glass method”, from diverse soils. The method involves the use of sterile glass powder for cell lysis, followed by the addition of 1% powdered activated charcoal (PAC) as a purifying agent to remove humic substances. The method yielded substantial DNA (5.87 ± 0.04 μg/g of soil) with high purity (A260/280: 1.76 ± 0.05) and reduced humic substances (A340: 0.047 ± 0.03). The quality of the extracted DNA was compared against five different methods based on 16S rDNA PCR amplification and BamHI digestion, and validated using quantitative PCR. The digested DNA was used for a metagenomic library construction with a transformation efficiency of 4 × 10⁶ CFU mL⁻¹. Besides providing rapid, efficient and economical extraction of metagenomic DNA from diverse soils, this method’s applicability is also demonstrated for cultivated organisms (Gram-positive B. subtilis NRRL-B-201, Gram-negative E. coli MTCC40, and a microalga, C. sorokiniana UTEX#1666). PMID:26167854

  5. New rapid DNA extraction method with Chelex from Venturia inaequalis spores.

    Science.gov (United States)

    Turan, Ceren; Nanni, Irene Maja; Brunelli, Agostino; Collina, Marina

    2015-08-01

    The objective of this study was to develop a rapid method to isolate DNA from Venturia inaequalis spores for use in diagnostic DNA mutation analysis. Chelex-100 resin was evaluated and compared with a well-established DNA extraction method utilizing CTAB, in order to have a robust comparison. In this research we demonstrated that Chelex-100 efficiently extracts DNA from V. inaequalis spores for direct use in molecular analyses. The quantity and quality of the extracted DNA were also shown to be adequate for PCR analysis. Comparatively, the quality of DNA samples isolated using the Chelex method was better than that of samples extracted using CTAB. In conclusion, the Chelex method is recommended for PCR experiments considering its simplicity and cost-effectiveness. Copyright © 2015. Published by Elsevier B.V.

  6. Complex Biological Event Extraction from Full Text using Signatures of Linguistic and Semantic Features

    Energy Technology Data Exchange (ETDEWEB)

    McGrath, Liam R.; Domico, Kelly O.; Corley, Courtney D.; Webb-Robertson, Bobbie-Jo M.

    2011-06-24

    Building on technical advances from the BioNLP 2009 Shared Task Challenge, the 2011 challenge sets forth to generalize techniques to other complex biological event extraction tasks. In this paper, we present the implementation and evaluation of a signature-based machine-learning technique to predict events from full texts of infectious disease documents. Specifically, our approach uses novel signatures composed of traditional linguistic features and semantic knowledge to predict event triggers and their candidate arguments. Using a leave-one-out analysis, we report the contribution of linguistic and shallow semantic features to trigger prediction and candidate argument extraction. Lastly, we examine the evaluations and posit causes of errors for the infectious disease track subtasks.

  7. Special object extraction from medieval books using superpixels and bag-of-features

    Science.gov (United States)

    Yang, Ying; Rushmeier, Holly

    2017-01-01

    We propose a method to extract special objects in images of medieval books, which generally represent, for example, figures and capital letters. Instead of working on the single-pixel level, we consider superpixels as the basic classification units for improved time efficiency. More specifically, we classify superpixels into different categories/objects by using a bag-of-features approach, where a superpixel category classifier is trained with the local features of the superpixels of the training images. With the trained classifier, we are able to assign the category labels to the superpixels of a historical document image under test. Finally, special objects can easily be identified and extracted after analyzing the categorization results. Experimental results demonstrate that, as compared to the state-of-the-art algorithms, our method provides comparable performance for some historical books but greatly outperforms them in terms of generality and computational time.

  8. Technology-aware algorithm design for neural spike detection, feature extraction, and dimensionality reduction.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Marković, Dejan

    2010-10-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to 1) obtain single-unit activity and 2) perform data reduction for wireless data transmission. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection, feature-extraction, and dimensionality-reduction algorithms for spike sorting are described and evaluated in terms of accuracy versus complexity. The nonlinear energy operator is chosen as the optimal spike-detection algorithm, being most robust to noise and relatively simple. Discrete derivatives are chosen as the optimal feature-extraction method, maintaining high accuracy across signal-to-noise ratios with a complexity orders of magnitude less than that of traditional methods such as principal-component analysis. We introduce the maximum-difference algorithm, which is shown to be the best dimensionality-reduction method for hardware spike sorting.
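
    The nonlinear energy operator and discrete-derivative features described above are simple enough to sketch directly (the threshold factor k and the set of derivative lags are assumed values, not those of the paper):

    ```python
    import numpy as np

    def neo(x):
        """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
        x = np.asarray(x, dtype=float)
        psi = np.zeros_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        return psi

    def detect_spikes(x, k=8.0):
        """Flag samples where the NEO output exceeds k times its mean,
        a common automatic threshold choice."""
        psi = neo(x)
        return np.where(psi > k * psi.mean())[0]

    def discrete_derivatives(spike, deltas=(1, 3, 7)):
        """Discrete-derivative features: sample differences at several lags."""
        s = np.asarray(spike, dtype=float)
        return np.concatenate([s[d:] - s[:-d] for d in deltas])
    ```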

  9. THE MORPHOLOGICAL PYRAMID AND ITS APPLICATIONS TO REMOTE SENSING: MULTIRESOLUTION DATA ANALYSIS AND FEATURES EXTRACTION

    Directory of Open Access Journals (Sweden)

    Laporterie Florence

    2011-05-01

    Full Text Available In remote sensing, sensors are more and more numerous, and their spatial resolution is higher and higher. Thus, a quick and accurate characterisation of the increasing amount of data is now a quite important issue. This paper deals with an approach combining a pyramidal algorithm and mathematical morphology to study the physiographic characteristics of terrestrial ecosystems. Our pyramidal strategy involves first morphological filters, then extraction at each level of resolution of well-known landscape features. The approach is applied to a digitised aerial photograph representing a heterogeneous landscape of orchards and forests along the Garonne river (France). This example, simulating very high spatial resolution imagery, highlights the influence of the parameters of the pyramid according to the spatial properties of the studied patterns. It is shown that the morphological pyramid is a promising approach for multi-level feature extraction by modelling geometrically relevant parameters.
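
    A minimal reading of the morphological pyramid, assuming a flat square structuring element, morphological opening as the filter, and 2x subsampling between levels (all of these are illustrative assumptions; the paper's exact operators are not stated here):

    ```python
    import numpy as np

    def erode(img, size=3):
        """Grey erosion with a flat square structuring element (pure NumPy)."""
        r = size // 2
        pad = np.pad(img, r, mode="edge")
        out = np.full_like(img, np.inf, dtype=float)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out = np.minimum(out, pad[r + dy : r + dy + img.shape[0],
                                          r + dx : r + dx + img.shape[1]])
        return out

    def dilate(img, size=3):
        """Grey dilation, by duality with erosion."""
        return -erode(-np.asarray(img, dtype=float), size)

    def morphological_pyramid(img, levels=3, size=3):
        """Each level: opening (erosion then dilation), then 2x subsampling."""
        pyramid = [np.asarray(img, dtype=float)]
        for _ in range(levels - 1):
            opened = dilate(erode(pyramid[-1], size), size)
            pyramid.append(opened[::2, ::2])
        return pyramid
    ```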

  10. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    Science.gov (United States)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of Things (IOT) is a kind of intelligent network that can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, research is conducted on visual feature extraction and the establishment of visual tags for human faces based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction, then adopt the support vector machine (SVM) for classification and face recognition, and finally establish a visual tag for each classified face. We conducted an experiment on a group of face images; the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
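
    The PCA-plus-SVM pipeline described above can be sketched with scikit-learn (the component count and linear kernel are assumptions; the paper's actual parameters are not stated):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def build_face_tagger(n_components=20):
        """PCA for feature extraction followed by an SVM classifier,
        mirroring the pipeline described above."""
        return make_pipeline(PCA(n_components=n_components),
                             SVC(kernel="linear"))
    ```

    On the ORL database the rows of the training matrix would be flattened face images; here any vectorised image data with class labels fits the same interface.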

  11. Transverse beam splitting made operational: Key features of the multiturn extraction at the CERN Proton Synchrotron

    Directory of Open Access Journals (Sweden)

    A. Huschauer

    2017-06-01

    Full Text Available Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.

  12. Transverse beam splitting made operational: Key features of the multiturn extraction at the CERN Proton Synchrotron

    Science.gov (United States)

    Huschauer, A.; Blas, A.; Borburgh, J.; Damjanovic, S.; Gilardoni, S.; Giovannozzi, M.; Hourican, M.; Kahle, K.; Le Godec, G.; Michels, O.; Sterbini, G.; Hernalsteens, C.

    2017-06-01

    Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.

  13. A RAPID DNA EXTRACTION METHOD IS SUCCESSFULLY APPLIED TO ITS-RFLP ANALYSIS OF MYCORRHIZAL ROOT TIPS

    Science.gov (United States)

    A rapid method for extracting DNA from intact, single root tips using a Xanthine solution was developed to handle very large numbers of analyses of ectomycorrhizas. By using an extraction without grinding we have attempted to bias the extraction towards the fungal DNA in the man...

  14. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    Science.gov (United States)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Many data mining applications adopt Artificial Neural Networks (ANNs) to solve problems, but training an ANN involves many issues, such as the number of labeled samples, training time and performance, the number of hidden layers, and the transfer function. If the compared results are not as expected, it cannot be known clearly which dimension causes the deviation; the main reason is that an ANN fits the compared results by modifying weights, rather than by improving the original image feature-extraction algorithm, and tends to obtain the correct value by weighting the result. To address these problems, this paper puts forward a method to assist the image data analysis of an ANN. Normally, a parameter is set as the value used to extract feature vectors when processing an image, which we treat as a weight. The experiment uses the values extracted from Speeded Up Robust Features (SURF) feature points as the basis for training; SURF itself can extract different feature points according to the extracted values. We first perform semi-supervised clustering on these values and use Modified Fuzzy K-Nearest Neighbors (MFKNN) for training and classification. The matching of unknown images is not a complete one-to-one comparison but compares only the group centroids, mainly to save time and improve efficiency; the retrieved results are then observed and analyzed. The method mainly clusters and classifies using the nature of image feature points, assigns values to groups with high error rates to produce new feature points, and feeds them into the input layer of the ANN for training; finally, a comparative analysis is made with the Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network.

  15. Clinical and pathological features of Nerium oleander extract toxicosis in wistar rats

    OpenAIRE

    Akhtar, Tasleem; Sheikh, Nadeem; Abbasi, Muddasir Hassan

    2014-01-01

    Background: Nerium oleander has been widely studied for medicinal purposes for a variety of maladies. N. oleander has also been reported to have noxious effects because a number of its components may produce signs of toxicity by inhibiting plasmalemma Na+, K+-ATPase. The present study was performed to scrutinize the toxic effect of N. oleander leaf extract and its clinical and pathological features in Wistar rats. Results: Hematological analysis showed significant variations in RBC count (P...

  16. Extracting features for power system vulnerability assessment from wide-area measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kamwa, I. [Hydro-Quebec, Varennes, PQ (Canada). IREQ; Pradhan, A.; Joos, G. [McGill Univ., Montreal, PQ (Canada)

    2006-07-01

    Many power systems now operate close to their stability limits as a result of deregulation. Some utilities have chosen to install phasor measurement units (PMUs) to monitor power system dynamics. The synchronized phasors of different areas of power systems available through a wide-area measurement system (WAMS) are expected to provide an effective security assessment tool as well as a stabilizing control action for inter-area oscillations and a system protection scheme (SPS) to evade possible blackouts. This paper presented a tool for extracting features for vulnerability assessment from WAMS data. A Fourier-transform-based technique was proposed for monitoring inter-area oscillations. FFT, wavelet transform and curve fitting approaches were investigated to analyze oscillatory signals. A dynamic voltage stability prediction algorithm was proposed for control action. An integrated framework was then proposed to assess a power system through features extracted from WAMS data on first swing stability, voltage stability and inter-area oscillations. The centre of inertia (COI) concept was applied to the angle of the voltage phasor. Prony analysis was applied to filtered signals to extract the damping coefficients. The minimum post-fault voltage of an area was considered for voltage stability, and an algorithm was used to monitor voltage stability issues. A data clustering technique was applied to classify the features into groups for improved system visualization. The overall performance of the technique was examined using a 67-bus system with 38 PMUs. The method used to extract features from both frequency- and time-domain analysis was provided. The test power system was described. The results of 4 case studies indicated that adoption of the method will be beneficial for system operators. 13 refs., 2 tabs., 13 figs.

  17. Vaccine adverse event text mining system for extracting features from vaccine safety reports.

    Science.gov (United States)

    Botsis, Taxiarchis; Buttolph, Thomas; Nguyen, Michael D; Winiecki, Scott; Woo, Emily Jane; Ball, Robert

    2012-01-01

    To develop and evaluate a text mining system for extracting key clinical features from vaccine adverse event reporting system (VAERS) narratives to aid in the automated review of adverse event reports. Based upon clinical significance to VAERS reviewing physicians, we defined the primary (diagnosis and cause of death) and secondary (eg, symptoms) features for extraction. We built a novel vaccine adverse event text mining (VaeTM) system based on a semantic text mining strategy. The performance of VaeTM was evaluated using a total of 300 VAERS reports in three sequential evaluations of 100 reports each. Moreover, we evaluated the contribution of VaeTM to case classification; an information retrieval-based approach was used for the identification of anaphylaxis cases in a set of reports and was compared with two other methods: a dedicated text classifier and an online tool. The performance metrics of VaeTM were text mining metrics: recall, precision and F-measure. We also conducted a qualitative difference analysis and calculated sensitivity and specificity for the classification of anaphylaxis cases based on the above three approaches. VaeTM performed best in extracting the diagnosis, second-level diagnosis, drug, vaccine, and lot number features (lenient F-measure in the third evaluation: 0.897, 0.817, 0.858, 0.874, and 0.914, respectively). In terms of case classification, high sensitivity was achieved (83.1%); this was equal to that of the text classifier (83.1%) and better than that of the online tool (40.7%). Our VaeTM implementation of a semantic text mining strategy shows promise in providing accurate and efficient extraction of key features from VAERS narratives.
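
    The text mining metrics used for evaluation (recall, precision, F-measure) follow the standard definitions, sketched here from true/false positive and false negative counts:

    ```python
    def precision_recall_f(tp, fp, fn):
        """Standard extraction metrics: precision = tp/(tp+fp),
        recall = tp/(tp+fn), F-measure = harmonic mean of the two."""
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f = (2 * precision * recall / (precision + recall)
             if precision + recall else 0.0)
        return precision, recall, f
    ```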

  18. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell exhibits more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor has been applied with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared across various sample sizes by Support Vector Machines using the k-fold cross-validation method. The results show that separation accuracy on mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
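
    Haralick features are derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch for a single pixel offset, with two of the classic statistics (contrast and energy), might look like this (the offset and level count are illustrative; a real implementation would average over several offsets and use an optimised GLCM routine):

    ```python
    import numpy as np

    def glcm(img, levels=8, dx=1, dy=0):
        """Normalised grey-level co-occurrence matrix for one offset.
        `img` must contain integer grey levels in [0, levels)."""
        img = np.asarray(img)
        M = np.zeros((levels, levels), dtype=float)
        h, w = img.shape
        for y in range(h - dy):
            for x in range(w - dx):
                M[img[y, x], img[y + dy, x + dx]] += 1
        return M / M.sum()

    def haralick_contrast_energy(P):
        """Two classic Haralick statistics from a normalised GLCM."""
        i, j = np.indices(P.shape)
        contrast = np.sum((i - j) ** 2 * P)
        energy = np.sum(P ** 2)
        return contrast, energy
    ```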

  19. Feature extraction of the first difference of EMG time series for EMG pattern recognition.

    Science.gov (United States)

    Phinyomark, Angkoon; Quaine, Franck; Charbonnier, Sylvie; Serviere, Christine; Tarpin-Bernard, Franck; Laurillau, Yann

    2014-11-01

    This paper demonstrates the utility of a differencing technique to transform surface EMG signals measured during both static and dynamic contractions such that they become more stationary. The technique was evaluated by three stationarity tests consisting of the variation of two statistical properties, i.e., mean and standard deviation, and the reverse arrangements test. As a result of the proposed technique, the first difference of EMG time series became more stationary compared to the original measured signal. Based on this finding, the performance of time-domain features extracted from raw and transformed EMG was investigated via an EMG classification problem (i.e., eight dynamic motions and four EMG channels) on data from 18 subjects. The results show that the classification accuracies of all features extracted from the transformed signals were higher than features extracted from the original signals for six different classifiers including quadratic discriminant analysis. On average, the proposed differencing technique improved classification accuracies by 2-8%. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
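
    The differencing transform and typical time-domain EMG features are straightforward to sketch (the specific feature set here - mean absolute value, waveform length, zero crossings - is a common choice, not necessarily the paper's exact list):

    ```python
    import numpy as np

    def first_difference(x):
        """The differencing transform: y[n] = x[n] - x[n-1]."""
        return np.diff(np.asarray(x, dtype=float))

    def td_features(x):
        """Common time-domain EMG features: mean absolute value (MAV),
        waveform length (WL), and zero-crossing count (ZC)."""
        x = np.asarray(x, dtype=float)
        mav = np.mean(np.abs(x))
        wl = np.sum(np.abs(np.diff(x)))
        zc = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
        return mav, wl, zc
    ```

    The paper's pipeline would compute such features once on the raw signal and once on `first_difference(x)`, then compare classifier accuracy on the two feature sets.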

  20. Extracting product features and opinion words using pattern knowledge in customer reviews.

    Science.gov (United States)

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications for users. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel idea to find opinion words or phrases for each feature from customer reviews in an efficient way. Our focus in this paper is to obtain the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product.
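
    One of the simplest adjective-noun patterns of this kind can be sketched as follows, assuming Penn-Treebank-style POS tags as input (the single pattern shown is illustrative, not the authors' full rule set):

    ```python
    def extract_feature_opinion_pairs(tagged):
        """From a POS-tagged sentence (list of (word, tag) pairs), pair each
        noun (candidate product feature) with an immediately preceding
        adjective (candidate opinion word), e.g. "great battery"."""
        pairs = []
        for i, (word, tag) in enumerate(tagged):
            if tag.startswith("NN"):
                if i > 0 and tagged[i - 1][1] == "JJ":
                    pairs.append((word, tagged[i - 1][0]))
        return pairs
    ```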

  1. Fine-Grain Feature Extraction from Malware's Scan Behavior Based on Spectrum Analysis

    Science.gov (United States)

    Eto, Masashi; Sonoda, Kotaro; Inoue, Daisuke; Yoshioka, Katsunari; Nakao, Koji

    Network monitoring systems that detect and analyze malicious activities, as well as respond to them, are becoming increasingly important. As malware, such as worms, viruses, and bots, can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malware are in great demand. In large-scale darknet monitoring operations, we can see that malware exhibits various kinds of scan patterns in choosing destination IP addresses. Since many of these oscillations seemed to have a natural periodicity, as if they were signal waveforms, we decided to apply a spectrum analysis methodology to extract malware features. With a focus on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has the significant advantage of recognizing the similarities and dissimilarities between the same and different types of malware.
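
    The core spectrum-analysis idea - treating a scan-rate time series as a waveform and extracting its dominant periodicity via FFT - can be sketched as follows (the function name and details are assumptions, not SPADE's actual implementation):

    ```python
    import numpy as np

    def dominant_period(counts):
        """Dominant oscillation period (in samples) of a scan-rate time
        series, found as the strongest non-DC bin of the FFT magnitude."""
        x = np.asarray(counts, dtype=float)
        x = x - x.mean()                    # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))
        k = 1 + np.argmax(spectrum[1:])     # skip the zero-frequency bin
        return len(x) / k
    ```

    A SPADE-like feature vector would collect such spectral summaries (dominant frequency, its magnitude, spectral spread) per malware sample and compare them across samples.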

  2. Exploration of Genetic Programming Optimal Parameters for Feature Extraction from Remote Sensed Imagery

    Science.gov (United States)

    Gao, P.; Shetty, S.; Momm, H. G.

    2014-11-01

    Evolutionary computation is used for improved information extraction from high-resolution satellite imagery. The utilization of evolutionary computation is based on stochastic selection of input parameters, often defined in a trial-and-error approach. However, exploration of optimal input parameters can yield improved candidate solutions while requiring reduced computational resources. In this study, a system that investigates the optimal input parameters was designed and implemented for the problem of feature extraction from remotely sensed imagery. The two primary assessment criteria were the highest fitness value and the overall computational time. The parameters explored include the population size and the percentage and order of mutation and crossover. The proposed system has two major subsystems: (i) data preparation, the generation of random candidate solutions; and (ii) data processing, an evolutionary process based on genetic programming, which is used to spectrally distinguish the features of interest from the remaining image background of remotely sensed imagery. The results demonstrate that the optimal generation number is around 1500 and that the optimal percentages of mutation and crossover range from 35% to 40% and from 0% to 5%, respectively. Based on our findings, the sequence that yielded better results was mutation before crossover. These findings are conducive to improving the efficacy of utilizing genetic programming for feature extraction from remotely sensed imagery.

  3. Adaptive Morphological Feature Extraction and Support Vector Regressive Classification for Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jun Shuai

    2017-01-01

    Full Text Available Numerous studies on fault diagnosis have been conducted in recent years, because the timely and correct detection of machine faults effectively minimizes the damage resulting from the unexpected breakdown of machinery. Mathematical morphological analysis has been used to denoise the raw signal. However, an improper choice of the length of the structuring element (SE) will substantially influence the effectiveness of fault feature extraction. Moreover, the classification of fault type is a significant step in intelligent fault diagnosis, and many techniques have already been developed, such as the support vector machine (SVM). This study proposes an intelligent fault diagnosis strategy that combines morphological feature extraction with a support vector regression (SVR) classifier. The vibration signal is first processed using various scales of morphological analysis, where the length of the SE is determined adaptively. Thereafter, nine statistical features are extracted from the processed signal. Lastly, an SVR classifier is used to identify the health condition of the machinery. The effectiveness of the proposed scheme is validated using a data set from a bearing test rig. Results show the high accuracy of the proposed method despite the influence of noise.
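
    The multiscale morphological filtering step can be sketched as follows; the fixed list of SE lengths and the three statistics per scale are illustrative placeholders for the paper's adaptive SE selection and its nine features:

```python
import numpy as np

def opening(x, se_len):
    """Grayscale morphological opening of a 1-D signal with a flat
    structuring element of odd length se_len (erosion, then dilation)."""
    pad = se_len // 2
    xp = np.pad(x, pad, mode='edge')
    eroded = np.lib.stride_tricks.sliding_window_view(xp, se_len).min(axis=1)
    ep = np.pad(eroded, pad, mode='edge')
    return np.lib.stride_tricks.sliding_window_view(ep, se_len).max(axis=1)

def morph_features(signal, se_lens=(3, 5, 7, 9)):
    """Filter at several SE scales and collect simple statistics."""
    feats = []
    for length in se_lens:
        y = opening(signal, length)
        feats += [np.sqrt(np.mean(y ** 2)),   # RMS
                  y.max() - y.min(),          # peak-to-peak
                  np.std(y)]                  # spread
    return np.array(feats)

rng = np.random.default_rng(0)
vibration = np.sin(np.linspace(0, 20, 400)) + 0.2 * rng.standard_normal(400)
features = morph_features(vibration)
```

    The resulting feature vector (here 4 scales x 3 statistics) would then be passed to the SVR classifier; opening suppresses positive impulsive noise narrower than the SE, which is why the SE length matters.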

  4. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    Science.gov (United States)

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose a bilinear model for EMG signals that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user's data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subjects' forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard non-multiuser interfaces (two-sample t-test at a significance level of 1%).
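
    The decomposition can be illustrated with a toy rank-1 bilinear model; the dimensions, the scalar per-observation feature, and the SVD-based fitting are assumptions for the sketch, not the authors' estimation procedure:

```python
import numpy as np

# Toy bilinear model: the feature observed when user i performs motion j
# is modeled as X[i, j] = u[i] * m[j].  The motion factors m are treated
# as user-independent features; u captures user-specific scaling.
rng = np.random.default_rng(0)
u_true = rng.uniform(0.5, 2.0, size=6)       # 6 known users
m_true = rng.uniform(-1.0, 1.0, size=5)      # 5 motions
X = np.outer(u_true, m_true)

# Recover the two factors from data with a rank-1 SVD (up to scale/sign).
U, s, Vt = np.linalg.svd(X)
m_est = Vt[0] * s[0] ** 0.5
u_est = U[:, 0] * s[0] ** 0.5

# Adapt to a novel user from only a few observed motions: a least-squares
# estimate of the new user-dependent factor.
u_new_true = 1.7
x_new = u_new_true * m_true[:3]              # the new user performs 3 motions
u_new = x_new @ m_est[:3] / (m_est[:3] @ m_est[:3])

# Predictions for the user's unseen motions follow from the shared factors.
pred = u_new * m_est[3:]
```

    Because the user factor only rescales the shared motion factors, a few observations suffice to place a novel user in the model, mirroring the paper's few-interaction adaptation.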

  5. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    Science.gov (United States)

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to EEG signals, and the relative wavelet energy is calculated in terms of the detailed coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for classification. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task (Raven's Advanced Progressive Matrices test) and (2) EEG signals recorded in a resting condition with eyes open. The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity, and precision. Accuracy above 98% was achieved by the support vector machine, multi-layer perceptron, and k-nearest neighbor classifiers with the approximation (A4) and detailed coefficients (D4), which represent the frequency ranges of 0.53-3.06 Hz and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate.
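
    The energy-ratio features can be sketched with a Haar wavelet (the paper's mother wavelet may differ); relative wavelet energy is each sub-band's energy divided by the total:

```python
import numpy as np

def haar_dwt(x, levels=4):
    """Multi-level Haar DWT: returns [A_n, D_n, ..., D_1] coefficient arrays.
    A minimal stand-in for the discrete wavelet transform in the paper."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]                 # ensure even length
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(detail)
        a = approx
    return [a] + details[::-1]

def relative_wavelet_energy(coeffs):
    """Energy of each sub-band divided by the total energy; these ratios
    are the features handed to the classifiers."""
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

eeg_like = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 256))
rwe = relative_wavelet_energy(haar_dwt(eeg_like))
```

    Because the Haar transform is orthonormal, the ratios sum to one and form a compact, scale-by-scale energy profile of the signal.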

  6. Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews

    Directory of Open Access Journals (Sweden)

    Su Su Htay

    2013-01-01

    Full Text Available Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, and it has motivated the development of automatic opinion mining applications. Efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a way to find opinion words or phrases for each product feature in customer reviews efficiently. Our focus is to extract patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that provides a significant informative resource to help users as well as merchants to track the most suitable choice of product.

  7. A Novel Method for PD Feature Extraction of Power Cable with Renyi Entropy

    Directory of Open Access Journals (Sweden)

    Jikai Chen

    2015-11-01

    Full Text Available Partial discharge (PD) detection can effectively support condition-based maintenance of XLPE (cross-linked polyethylene) cable, so it is an important direction in the development of equipment maintenance in power systems. At present, a main method of PD detection is broadband electromagnetic coupling with a high-frequency current transformer (HFCT). Due to the strong electromagnetic interference (EMI) generated among the large number of cables in a tunnel and the impedance mismatch between the HFCT and the data acquisition equipment, the features of the pulse current generated by PD are often submerged in the background noise. Conventional methods for stationary signal analysis cannot analyze the PD signal, which is transient and non-stationary. Although the algorithm of Shannon wavelet singular entropy (SWSE) can be used to analyze the PD signal at some level, its precision and anti-interference capability for PD feature extraction are still insufficient. To address this problem, a novel method named Renyi wavelet packet singular entropy (RWPSE) is proposed and applied to PD feature extraction on power cables. Taking a three-level system as an example, we analyze the statistical properties of Renyi entropy and its intrinsic correlation with Shannon entropy under different values of α. At the same time, the discrete wavelet packet transform (DWPT) is used instead of the discrete wavelet transform (DWT), and Renyi entropy is incorporated to construct the RWPSE algorithm. Taking as the research object the grounding current signal from the shielding layer of XLPE cable, which includes the current pulse feature of PD, the effectiveness of the novel method is tested. The theoretical analysis and experimental results show that, compared to SWSE, RWPSE can not only improve the feature extraction accuracy for PD but also suppress EMI effectively.
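
    The core computation, Renyi entropy of the normalized singular-value spectrum of a wavelet packet coefficient matrix, can be sketched as below; the Haar basis, the decomposition depth, and α = 2 are illustrative assumptions:

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step: (approximation, detail) at half length."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def wavelet_packet_leaves(x, level=3):
    """Full wavelet packet decomposition (Haar basis as a stand-in):
    every node is split at each level, giving 2**level sub-bands."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(level):
        nodes = [band for node in nodes for band in haar_split(node)]
    return np.vstack(nodes)                  # shape (2**level, len(x)/2**level)

def renyi_singular_entropy(M, alpha=2.0):
    """Renyi entropy of the normalized singular values:
    H_a = log(sum p_i**a) / (1 - a)."""
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
noise = rng.standard_normal(256)                       # broadband interference
tone = np.sin(2 * np.pi * np.arange(256) / 8)          # narrowband component
h_noise = renyi_singular_entropy(wavelet_packet_leaves(noise))
h_tone = renyi_singular_entropy(wavelet_packet_leaves(tone))
```

    A narrowband signal concentrates its packet energy in few components (low entropy), while broadband noise spreads it (high entropy), which is the contrast the PD feature exploits.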

  8. Myoelectric feature extraction using temporal-spatial descriptors for multifunction prosthetic hand control.

    Science.gov (United States)

    Khushaba, Rami N; Al-Timemy, Ali; Al-Ani, Ahmed; Al-Jumaily, Adel

    2016-08-01

    We tackle the challenging problem of myoelectric prosthesis control with an improved feature extraction algorithm. The proposed algorithm correlates a set of spectral moments and their nonlinearly mapped version across the temporal and spatial domains to form accurate descriptors of muscular activity. The main processing step involves the extraction of the electromyogram (EMG) signal power spectrum characteristics directly from the time domain for each analysis window, a step that avoids the computational cost of constructing spectral features. The subsequent analyses involve computing 1) the correlation between the time-domain descriptors extracted from each analysis window and a nonlinearly mapped version of it across the same EMG channel, representing the temporal evolution of the EMG signals, and 2) the correlation between the descriptors extracted from the differences of all possible combinations of channels and a nonlinearly mapped version of them, focusing on how the EMG signals from different channels correlate with each other. The proposed Temporal-Spatial Descriptors (TSDs) are validated on EMG data collected from six transradial amputees performing 11 classes of finger movements. Classification results showed significant reductions (at least 8%) in classification error rates compared to other methods.
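
    A heavily hedged sketch of the time-domain idea: by Parseval's theorem, successive signal differences weight the spectrum by frequency, so power-spectrum moments can be computed without an explicit FFT, and a window's descriptor can then be correlated with the descriptor of a nonlinearly mapped copy. The moment set, the log scaling, the squaring nonlinearity, and the cosine-similarity correlation below are assumptions, not the published TSD definition:

```python
import numpy as np

def td_moments(x):
    """Spectral moments computed directly in the time domain: m0 reflects
    total power, and each difference operator adds a frequency weighting."""
    m0 = np.sqrt(np.sum(x ** 2))
    m2 = np.sqrt(np.sum(np.diff(x) ** 2))
    m4 = np.sqrt(np.sum(np.diff(x, n=2) ** 2))
    return np.log(np.array([m0, m2, m4]) + 1e-12)

def orientation(a, b):
    """Cosine similarity between two descriptor vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

window = np.sin(2 * np.pi * np.arange(200) / 20)        # one analysis window
descriptor = orientation(td_moments(window), td_moments(window ** 2))
```

    In the paper, such correlations are computed per channel (temporal) and over all channel differences (spatial) to build the full TSD vector.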

  9. A new method to extract stable feature points based on self-generated simulation images

    Science.gov (United States)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received much attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the threshold manually. The main idea of this paper is to obtain stable extrema by a machine learning algorithm. First, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the simulated image set of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which relies on the unstable RANSAC method to obtain the affine transformation, this approach is more stable and accurate. Second, we calculate the stability value of the feature points from the image set and its affine transformations. We then obtain the feature properties of each feature point, such as DoG features, scales, and edge point density. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, based on the feature properties of each point and the weight vector learned in training, we obtain a score for each feature point that reflects its stability value, and sort the feature points accordingly. For comparison, we applied our algorithm and the original SIFT detector to test images; under different viewpoint changes, blurs, and illuminations, experimental results show that our algorithm is more effective.

  10. Rapid detection of Ganoderma-infected oil palms by microwave ergosterol extraction with HPLC and TLC.

    Science.gov (United States)

    Muniroh, M S; Sariah, M; Zainal Abidin, M A; Lima, N; Paterson, R R M

    2014-05-01

    Detection of basal stem rot (BSR) by Ganoderma of oil palms was based on foliar symptoms and production of basidiomata. Enzyme-Linked Immunosorbent Assays-Polyclonal Antibody (ELISA-PAB) and PCR have been proposed as early detection methods for the disease. These techniques are complex, time consuming and have accuracy limitations. An ergosterol method was developed which correlated well with the degree of infection in oil palms, including samples growing in plantations. However, the method was capable of being optimised. This current study was designed to develop a simpler, more rapid and efficient ergosterol method with utility in the field that involved the use of microwave extraction. The optimised procedure involved extracting a small amount of Ganoderma, or Ganoderma-infected oil palm suspended in low volumes of solvent followed by irradiation in a conventional microwave oven at 70°C and medium high power for 30s, resulting in simultaneous extraction and saponification. Ergosterol was detected by thin layer chromatography (TLC) and quantified using high performance liquid chromatography with diode array detection. The TLC method was novel and provided a simple, inexpensive method with utility in the field. The new method was particularly effective at extracting high yields of ergosterol from infected oil palm and enables rapid analysis of field samples on site, allowing infected oil palms to be treated or culled very rapidly. Some limitations of the method are discussed herein. The procedures lend themselves to controlling the disease more effectively and allowing more effective use of land currently employed to grow oil palms, thereby reducing pressure to develop new plantations. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Neurocognitive features distinguishing primary central nervous system lymphoma from other possible causes of rapidly progressive dementia.

    Science.gov (United States)

    Deutsch, Mariel B; Mendez, Mario F

    2015-03-01

    Define the neurocognitive features of primary central nervous system lymphoma (PCNSL) presenting with dementia, and compare with other causes of rapidly progressive dementia (RPD). PCNSL can present as an RPD. Differentiating PCNSL from other RPDs is critical because lymphomatous dementia may be reversible, and untreated PCNSL is fatal. We performed a meta-analysis of case reports of dementia from PCNSL (between 1950 and 2013); 20 patients (14 with lymphomatosis cerebri) met our criteria. We compared these patients to a case series of patients with RPD from Creutzfeldt-Jakob disease and other non-PCNSL etiologies (Sala et al, 2012. Alzheimer Dis Assoc Disord. 26:267-271). Median age was 66 years (range 41 to 81); 70% were men. Time from symptom onset to evaluation was <6 months in 65%. No patients had seizures; 5% had headaches; 45% had non-aphasic speech difficulty. There was significantly more memory impairment in patients with PCNSL than other RPDs and significantly less myoclonus and parkinsonism. Behavioral changes and cerebellar signs were not significantly different. Significantly more patients with PCNSL than other RPDs had white matter changes; significantly fewer had atrophy. Elevated CSF protein and pleocytosis were more frequent in PCNSL; patients with other RPDs tended to have normal CSF±14-3-3 protein. Unlike patients with RPD from other causes, those with PCNSL commonly present with impaired memory, apathy, and abnormal speech and gait, without headache, seizure, or myoclonus. White matter changes and CSF abnormalities predominate. Improved clinical awareness of PCNSL can prompt earlier diagnosis and treatment.

  12. A DFT-Based Method of Feature Extraction for Palmprint Recognition

    Science.gov (United States)

    Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru

    Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
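
    The spatial-to-Fourier correspondence can be sketched as follows; selecting the strongest central (low-frequency) magnitudes is a stand-in for the paper's GA-optimized point set, and the image size and point count are assumptions:

```python
import numpy as np

def dft_feature_vector(img, n_points=32):
    """Feature vector from the 2-D Fourier spectrum of a palmprint image.
    The paper selects coefficient locations with a genetic algorithm; here
    the n_points strongest low-frequency magnitudes stand in for that set."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spec.shape
    low = spec[h // 2 - 8: h // 2 + 8, w // 2 - 8: w // 2 + 8]  # central 16x16
    vec = np.sort(low.ravel())[::-1][:n_points]
    return vec / (np.linalg.norm(vec) + 1e-12)   # scale-invariant magnitude

rng = np.random.default_rng(2)
palm = rng.random((64, 64))        # stand-in for a grayscale palmprint image
v = dft_feature_vector(palm)
```

    Normalizing the vector makes the feature insensitive to global illumination scaling; in the paper the vectors would additionally be compressed with the KLT before matching.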

  13. A Novel Approach Based on Data Redundancy for Feature Extraction of EEG Signals.

    Science.gov (United States)

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Kamel, Nidal; Hussain, Muhammad

    2016-03-01

    Feature extraction and classification of electroencephalogram (EEG) signals in medical applications is a challenging task. EEG signals produce a huge amount of redundant or repeating information. This redundancy causes potential hurdles in EEG analysis. Hence, we propose to use this redundant information of EEG as a feature to discriminate and classify different EEG datasets. In this study, we have proposed a JPEG2000-based approach for computing data redundancy from multi-channel EEG signals and have used the redundancy as a feature for classification of EEG signals by applying support vector machine, multi-layer perceptron, and k-nearest neighbors classifiers. The approach is validated on three EEG datasets and achieved high accuracy rates (95-99%) in classification. Dataset-1 includes EEG signals recorded during a fluid intelligence test, dataset-2 consists of EEG signals recorded during a memory recall test, and dataset-3 has epileptic seizure and non-seizure EEG. The findings demonstrate that the approach has the ability to extract robust features and classify EEG signals in various applications, including clinical as well as normal EEG patterns.
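
    The redundancy-as-feature idea can be sketched with a general-purpose compressor standing in for JPEG2000 (an assumption; the paper uses the JPEG2000 codec itself): the harder a quantized signal is to compress, the less redundant it is:

```python
import zlib
import numpy as np

def redundancy_feature(eeg, levels=256):
    """Data-redundancy feature for a multi-channel EEG window: one minus
    the achieved compression ratio.  zlib stands in for the JPEG2000 codec
    used in the paper; highly redundant signals compress further."""
    x = np.asarray(eeg, dtype=float)
    lo, hi = x.min(), x.max()
    q = ((x - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)
    raw = q.tobytes()
    return 1.0 - len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(3)
noise = rng.standard_normal((4, 1024))                  # little structure
tone = np.tile(np.sin(np.arange(1024) / 10), (4, 1))    # highly redundant
r_noise = redundancy_feature(noise)
r_tone = redundancy_feature(tone)
```

    Repetitive, structured signals score high while unstructured noise scores low, giving a scalar feature usable by SVM, MLP, or k-NN classifiers.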

  14. IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK FOR FACE RECOGNITION USING GABOR FEATURE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Muthukannan K

    2013-11-01

    Full Text Available Face detection and recognition is the first step for many applications in various fields, such as identification, and is used as a key to enter various electronic devices, video surveillance, human-computer interfaces, and image database management. This paper focuses on feature extraction from an image using a Gabor filter; the extracted image feature vector is then given as input to a neural network, which is trained with the input data. The Gabor wavelet concentrates on the important components of the face, including the eyes, mouth, nose, and cheeks. The main requirement of this technique is the threshold, which governs its sensitivity. The threshold values are derived from the feature vectors taken from the faces. These feature vectors are fed into a feed-forward neural network to train the network. Using the feed-forward neural network as a classifier, recognized and unrecognized faces are classified. This classifier attains a higher face detection rate. By training with more input vectors, the system proves to be effective. The effectiveness of the proposed method is demonstrated by the experimental results.
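
    The filtering stage can be sketched as follows; the kernel parameters, the four orientations, and the mean-absolute-response pooling are illustrative choices, not those of the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, psi=0.0, gamma=0.5):
    """Real Gabor kernel: a Gaussian envelope times an oriented cosine
    carrier.  Parameter values here are illustrative assumptions."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lambd + psi)

def gabor_features(img, orientations=4):
    """Mean absolute filter response per orientation: one compact feature
    vector per face image that could feed the neural-network classifier."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(theta=k * np.pi / orientations)
        windows = sliding_window_view(img, kern.shape)  # valid-mode correlation
        resp = (windows * kern).sum(axis=(-2, -1))
        feats.append(np.abs(resp).mean())
    return np.array(feats)

rng = np.random.default_rng(6)
face = rng.random((48, 48))        # stand-in for a grayscale face image
fv = gabor_features(face)
```

    In the paper's pipeline, such per-orientation responses around the eyes, nose, mouth, and cheeks would be thresholded into feature vectors and fed to the feed-forward network.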

  15. Feature extraction from 3D lidar point clouds using image processing methods

    Science.gov (United States)

    Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming

    2011-10-01

    Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available for many users, and these users are therefore unable to experiment with the LiDAR point cloud data directly for extracting desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster. Raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized digital surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated in both a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
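
    Stages one and two can be sketched on toy rasters; the layer names follow the text, while the grid sizes and values below are assumptions:

```python
import numpy as np

def slope_raster(dem, cell=1.0):
    """Slope in degrees from a DEM raster via finite differences."""
    gy, gx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

def multichannel_image(dsm, dtm, intensity, diff_returns):
    """Conflate raster layers into one multi-channel image for supervised
    classification.  The nDSM is first-return surface minus bare earth,
    i.e. above-ground height; array inputs stand in for the rasters
    interpolated from the LiDAR point cloud."""
    ndsm = np.clip(dsm - dtm, 0, None)   # negative heights are interpolation noise
    return np.stack([ndsm, intensity, diff_returns, slope_raster(dtm)], axis=-1)

# Flat 1 m terrain with a 7 m "building" on top.
dtm = np.ones((40, 40))
dsm = dtm.copy()
dsm[10:20, 10:20] = 8.0
img = multichannel_image(dsm, dtm, np.zeros((40, 40)), dsm - dtm)
```

    The stacked array plays the role of a multi-band image, so any standard supervised classifier from the image processing toolbox can be applied to its feature space.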

  16. Diesel Engine Valve Clearance Fault Diagnosis Based on Features Extraction Techniques and FastICA-SVM

    Science.gov (United States)

    Jing, Ya-Bing; Liu, Chang-Wen; Bi, Feng-Rong; Bi, Xiao-Yang; Wang, Xia; Shao, Kang

    2017-07-01

    Vibration-based techniques are rarely used for diesel engine fault diagnosis in a direct way, because the surface vibration signals of diesel engines have complex non-stationary and nonlinear time-varying features. To investigate the fault diagnosis of diesel engines, fractal correlation dimension, wavelet energy, and wavelet entropy, as features reflecting the fractal and energy characteristics of diesel engine faults, are extracted from the decomposed signals by analyzing vibration acceleration signals derived from the cylinder head in seven different valve train states. An intelligent fault detector, FastICA-SVM, is applied for diesel engine fault diagnosis and classification. The results demonstrate that FastICA-SVM achieves higher classification accuracy and better generalization performance in small-sample recognition. Moreover, the fractal correlation dimension and the wavelet energy and entropy, as special features of the diesel engine vibration signal, serve as input vectors of the FastICA-SVM classifier and produce excellent classification results. The proposed methodology improves the accuracy of feature extraction and the fault diagnosis of diesel engines.

  17. Homomorphic encryption-based secure SIFT for privacy-preserving feature extraction

    Science.gov (United States)

    Hsu, Chao-Yung; Lu, Chun-Shien; Pei, Soo-Chang

    2011-02-01

    Privacy has received much attention but is still largely ignored in the multimedia community. In a cloud computing scenario, where the server is resource-abundant and capable of finishing the designated tasks, it is envisioned that secure media retrieval and search with privacy preservation will be treated seriously. In view of the fact that the scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to address the problem of secure SIFT feature extraction and representation in the encrypted domain. Since all the operations in SIFT must be moved to the encrypted domain, we propose a homomorphic encryption-based secure SIFT method for privacy-preserving feature extraction and representation based on the Paillier cryptosystem. In particular, homomorphic comparison is a must for SIFT feature detection but is still a challenging issue for homomorphic encryption methods. To overcome this problem, we investigate a quantization-like secure comparison strategy in this paper. Experimental results demonstrate that the proposed homomorphic encryption-based SIFT performs comparably to the original SIFT on image benchmarks while additionally preserving privacy. We believe that this work is an important step toward privacy-preserving multimedia retrieval in environments where privacy is a major concern.
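
    The additive homomorphism that makes encrypted-domain SIFT arithmetic possible can be shown with a toy Paillier instance; the tiny primes and fixed nonce make it insecure and deterministic, purely for illustration:

```python
from math import gcd

# Toy Paillier parameters (insecure: real keys use large random primes).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                           # works since L(g^lam mod n^2) = lam

def encrypt(m, r=7):
    # A fixed nonce r keeps the sketch deterministic; real Paillier draws r randomly.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n             # L(u) = (u - 1) / n

a, b = 42, 99
hom_sum = decrypt(encrypt(a) * encrypt(b) % n2)   # multiply ciphertexts
```

    Multiplying ciphertexts yields the encryption of the plaintext sum (hom_sum equals 141 here), which lets a server accumulate the linear sums inside SIFT's convolutions without seeing the image; comparisons, as the abstract notes, need the separate quantization-like protocol.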

  18. Feature Extraction in the North Sinai Desert Using Spaceborne Synthetic Aperture Radar: Potential Archaeological Applications

    Directory of Open Access Journals (Sweden)

    Christopher Stewart

    2016-10-01

    Full Text Available Techniques were implemented to extract anthropogenic features in the desert region of North Sinai using data from the first- and second-generation Phased Array type L-band Synthetic Aperture Radar (PALSAR-1 and 2). To obtain a synoptic view over the study area, a mosaic of average, multitemporal (De Grandi filtered) PALSAR-1 σ° backscatter of North Sinai was produced. Two subset regions were selected for further analysis. The first included an area of abundant linear features of high relative backscatter in a strategic but sparsely developed area between the Wadi Tumilat and Gebel Maghara. The second included an area of low-backscatter anomaly features in a coastal sabkha around the archaeological sites of Tell el-Farama, Tell el-Mahzan, and Tell el-Kanais. Over the subset region between the Wadi Tumilat and Gebel Maghara, algorithms were developed to extract linear features and convert them to vector format to facilitate interpretation. The algorithms were based on mathematical morphology, but several techniques were applied to distinguish apparent man-made features from sand dune ridges. The first technique took as input the average σ° backscatter and used a Digital Elevation Model (DEM) derived local incidence angle (LIA) mask to exclude sand dune ridges. The second technique, which proved more effective, used the average interferometric coherence as input. Extracted features were compared with other available information layers and in some cases revealed partially buried roads. Over the coastal subset region, a time series of PALSAR-2 spotlight data was processed. The coefficient of variation (CoV) of De Grandi filtered imagery clearly revealed anomaly features of low CoV. These were compared with the results of an archaeological field-walking survey carried out previously. The features generally correspond with isolated areas identified in the field survey as having a higher density of archaeological finds, and interpreted as possible
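
    The temporal CoV computation over a SAR image stack can be sketched as follows; the stack dimensions, the gamma-distributed speckle, and the 0.05 threshold are assumptions for illustration:

```python
import numpy as np

def temporal_cov(stack):
    """Per-pixel coefficient of variation over a time series of (speckle-
    filtered) SAR backscatter images, shape (time, rows, cols)."""
    mean = stack.mean(axis=0)
    return stack.std(axis=0) / (mean + 1e-12)

rng = np.random.default_rng(4)
stack = rng.gamma(4.0, 0.25, size=(12, 50, 50))   # speckle-like variability
stack[:, 20:30, 20:30] = 1.0                      # temporally stable patch
cov = temporal_cov(stack)
anomaly = cov < 0.05                              # low-CoV anomaly mask
```

    Pixels that stay unusually stable over time stand out as low-CoV anomalies, the same signature that flagged the buried features in the coastal sabkha.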

  19. Extraction of features from sleep EEG for Bayesian assessment of brain development.

    Directory of Open Access Journals (Sweden)

    Vitaly Schetinin

    Full Text Available Brain development can be evaluated by experts analysing age-related patterns in sleep electroencephalograms (EEG. Natural variations in the patterns, noise, and artefacts affect the evaluation accuracy as well as experts' agreement. The knowledge of predictive posterior distribution allows experts to estimate confidence intervals within which decisions are distributed. Bayesian approach to probabilistic inference has provided accurate estimates of intervals of interest. In this paper we propose a new feature extraction technique for Bayesian assessment and estimation of predictive distribution in a case of newborn brain development assessment. The new EEG features are verified within the Bayesian framework on a large EEG data set including 1,100 recordings made from newborns in 10 age groups. The proposed features are highly correlated with brain maturation and their use increases the assessment accuracy.

  20. iPcc: a novel feature extraction method for accurate disease class discovery and prediction.

    Science.gov (United States)

    Ren, Xianwen; Wang, Yong; Zhang, Xiang-Sun; Jin, Qi

    2013-08-01

    Gene expression profiling has gradually become a routine procedure for disease diagnosis and classification. In the past decade, many computational methods have been proposed, resulting in great improvements on various levels, including feature selection and algorithms for classification and clustering. In this study, we present iPcc, a novel method from the feature extraction perspective to further propel gene expression profiling technologies from bench to bedside. We define 'correlation feature space' for samples based on the gene expression profiles by iterative employment of Pearson's correlation coefficient. Numerical experiments on both simulated and real gene expression data sets demonstrate that iPcc can greatly highlight the latent patterns underlying noisy gene expression data and thus greatly improve the robustness and accuracy of the algorithms currently available for disease diagnosis and classification based on gene expression profiles.
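
    The iterative correlation mapping at the heart of iPcc can be sketched as follows; the fixed iteration count and the toy two-class data are assumptions (the paper defines its own stopping behavior and feature-space details):

```python
import numpy as np

def correlation_feature_space(X, iterations=3):
    """Map each sample (column of X) to its vector of Pearson correlations
    with every sample, then iterate the mapping.  A minimal sketch of the
    iPcc idea; the fixed iteration count is a stand-in."""
    F = np.asarray(X, dtype=float)
    for _ in range(iterations):
        F = np.corrcoef(F, rowvar=False)   # samples-by-samples correlation
    return F

# Toy expression matrix: 100 genes x 10 samples from two latent classes.
rng = np.random.default_rng(5)
genes = 100
class_a = rng.standard_normal((genes, 1)) + 0.3 * rng.standard_normal((genes, 5))
class_b = rng.standard_normal((genes, 1)) + 0.3 * rng.standard_normal((genes, 5))
X = np.hstack([class_a, class_b])
F = correlation_feature_space(X)
```

    Each pass re-expresses every sample by its correlation profile with all samples, so the block structure sharpens: within-class entries approach +1 while between-class entries separate, which is what improves downstream clustering and classification on noisy data.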

  1. Handwritten Chinese character recognition based on supervised competitive learning neural network and block-based relative fuzzy feature extraction

    Science.gov (United States)

    Sun, Limin; Wu, Shuanhu

    2005-02-01

    Offline handwritten Chinese character recognition is still a difficult problem because of large stroke variations, writing anomalies, and the difficulty of obtaining stroke-order information. Generally, offline handwritten Chinese character recognition can be divided into two procedures: feature extraction, for capturing handwritten Chinese character information, and feature classification, for character recognition. In this paper, we propose a new Chinese character recognition algorithm. In the feature extraction part, we adopt an elastic mesh dividing method to extract block features and their relative fuzzy features, which utilize the relationships between different strokes and the distribution probability of a stroke in its neighboring sub-blocks. In the recognition part, we construct a classifier based on a supervised competitive learning algorithm to train a competitive learning neural network with the extracted feature set. Experimental results show that the performance of our algorithm is encouraging and comparable to that of other algorithms.
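
    The block-feature idea can be sketched with a uniform mesh; the paper's elastic mesh instead places grid lines to equalize stroke mass, and the relative fuzzy features over neighboring sub-blocks are omitted here:

```python
import numpy as np

def mesh_density_features(char_img, mesh=8):
    """Divide a binary character image into a mesh x mesh grid and use the
    stroke-pixel density of each cell as a feature.  A uniform grid stands
    in for the paper's elastic mesh."""
    img = np.asarray(char_img, dtype=float)
    h, w = img.shape
    ys = np.linspace(0, h, mesh + 1).astype(int)
    xs = np.linspace(0, w, mesh + 1).astype(int)
    feats = np.empty((mesh, mesh))
    for i in range(mesh):
        for j in range(mesh):
            cell = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            feats[i, j] = cell.mean() if cell.size else 0.0
    return feats.ravel()

img = np.zeros((64, 64))
img[:, 30:34] = 1.0                # a single vertical stroke
f = mesh_density_features(img)
```

    The resulting 64-dimensional density vector (for an 8 x 8 mesh) is the kind of block feature that would be fed, together with the fuzzy neighborhood features, to the competitive learning network.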

  2. Rapid green synthesis of silver nanoparticles and nanorods using Piper nigrum extract

    Energy Technology Data Exchange (ETDEWEB)

    Mohapatra, Bandita [Multifunctional Nanomaterials Laboratory, School of Basic and Applied Sciences, Guru Gobind Singh Indraprastha University, Dwarka, New Delhi 110078 (India); Kuriakose, Sini [Multifunctional Nanomaterials Laboratory, School of Basic and Applied Sciences, Guru Gobind Singh Indraprastha University, Dwarka, New Delhi 110078 (India); School of Basic and Applied Sciences, Guru Gobind Singh Indraprastha University, Dwarka, New Delhi 110078 (India); Mohapatra, Satyabrata, E-mail: smiuac@gmail.com [Multifunctional Nanomaterials Laboratory, School of Basic and Applied Sciences, Guru Gobind Singh Indraprastha University, Dwarka, New Delhi 110078 (India); School of Basic and Applied Sciences, Guru Gobind Singh Indraprastha University, Dwarka, New Delhi 110078 (India)

    2015-07-15

    Highlights: • Silver nanorods were synthesized by photoreduction using Piper nigrum extract. • The morphological and structural properties were studied by XRD and AFM. • Silver nanoparticles were formed at lower AgNO{sub 3} concentrations. • Increasing the AgNO{sub 3} concentration resulted in the formation of silver nanorods. - Abstract: We report the sunlight-driven rapid green synthesis of stable aqueous dispersions of silver nanoparticles and nanorods at room temperature via photoreduction of silver ions with Piper nigrum extract. Silver nanoparticles were formed within 3 min of sunlight irradiation following the addition of Piper nigrum extract to the AgNO{sub 3} solution. The effects of AgNO{sub 3} concentration and irradiation time on the formation and plasmonic properties of the biosynthesized silver nanoparticles were studied using UV–visible absorption spectroscopy. The morphology and structure of the silver nanoparticles were characterized by atomic force microscopy (AFM) and X-ray diffraction (XRD). The size of the Ag nanoparticles increased with increasing irradiation time, leading to the formation of anisotropic nanostructures. Increasing the AgNO{sub 3} concentration resulted in the formation of Ag nanorods. UV–visible absorption studies revealed surface plasmon resonance (SPR) peaks that red-shift and broaden with increasing AgNO{sub 3} concentration. We have demonstrated a facile, energy-efficient and rapid green synthetic route to stable aqueous dispersions of silver nanoparticles and nanorods.

  3. Phenological Metrics Extraction for Agricultural Land-use Types Using RapidEye and MODIS

    Science.gov (United States)

    Xu, Xingmei; Doktor, Daniel; Conrad, Christopher

    2016-04-01

    Crop phenology encompasses various agricultural events, such as planting, emergence, flowering, fruit development and harvest. These phenological stages of a crop contain essential information for practical agricultural management, crop productivity estimation and investigations of crop-weather relationships, and they also play an important role in improving agricultural land-use classification. In this study, we used MODIS and RapidEye images to extract phenological metrics in central Germany between 2010 and 2014. The Best Index Slope Extraction algorithm was used to remove undesirable noise from the Normalized Difference Vegetation Index (NDVI) time series of both satellite datasets before fast Fourier transformation was applied. Metrics optimization for the phenology of the major crops in the study area (winter wheat, winter barley, winter oilseed rape and sugar beet) and validation were performed with intensive ground observations from the German Weather Service (2010-2014) and our own measurements of BBCH code (Biologische Bundesanstalt für Land- und Forstwirtschaft, Bundessortenamt und CHemische Industrie) (in 2014). We found that the dates with maximum NDVI are closely linked to the heading stage of cereals (RMSE = 9.48 days for MODIS and RMSE = 13.55 days for RapidEye), and the dates of local half maximum during the senescence period of winter crops were strongly related to the ripeness stage (BBCH: 87) (RMSE = 8.87 days for MODIS and RMSE = 9.62 days for RapidEye). The root-mean-square errors (RMSE) of the derived green-up dates for both winter and summer crops were larger than 2 weeks, which was caused by the limited number of good-quality images during the winter season. Comparison between RapidEye and homogeneous MODIS pixels indicated that phenological metrics derived from both satellites were similar to the crop calendar in this region. We also investigated the influence of spatial aggregation of RapidEye-scale phenology to the MODIS scale as well as the effect of decreasing the
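The two metrics described in the study can be sketched on an idealized, smoothed NDVI curve: the day of maximum NDVI (linked to cereal heading) and the day NDVI first falls to half its maximum during senescence (linked to ripeness, BBCH 87). The function name and the toy seasonal curve are illustrative assumptions, not the study's code:

```python
import numpy as np

def phenology_metrics(doy, ndvi):
    """Extract the day-of-year of peak NDVI and the day-of-year where
    NDVI first drops to half its peak after the maximum.
    Assumes a smoothed, gap-filled NDVI series."""
    i_max = int(np.argmax(ndvi))
    half = ndvi[i_max] / 2.0
    after_peak = np.where(ndvi[i_max:] <= half)[0]
    half_doy = int(doy[i_max + after_peak[0]]) if after_peak.size else None
    return int(doy[i_max]), half_doy

doy = np.arange(1, 366, 8)                    # 8-day composites
ndvi = np.exp(-((doy - 150) / 40.0) ** 2)     # idealized seasonal curve
peak_day, ripeness_day = phenology_metrics(doy, ndvi)
print(peak_day, ripeness_day)  # 153 185
```

On real NDVI series these metrics are taken after denoising (e.g. Best Index Slope Extraction) and Fourier smoothing, as the abstract describes.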

  4. A Framework of Temporal-Spatial Descriptors-Based Feature Extraction for Improved Myoelectric Pattern Recognition.

    Science.gov (United States)

    Khushaba, Rami N; Al-Timemy, Ali H; Al-Ani, Ahmed; Al-Jumaily, Adel

    2017-10-01

    The extraction of accurate and efficient descriptors of muscular activity plays an important role in tackling the challenging problem of myoelectric control of powered prostheses. In this paper, we present a new feature extraction framework that aims to give an enhanced representation of muscular activities by increasing the amount of information that can be extracted from individual and combined electromyogram (EMG) channels. We propose to use time-domain descriptors (TDDs) in estimating the EMG signal power spectrum characteristics; a step that avoids the computational cost of constructing spectral features. Subsequently, TDD is used in a process that involves: 1) representing the temporal evolution of the EMG signals by progressively tracking the correlation between the TDD extracted from each analysis time window and a nonlinearly mapped version of it across the same EMG channel and 2) representing the spatial coherence between the different EMG channels, achieved by calculating the correlation between the TDD extracted from the differences of all possible combinations of pairs of channels and their nonlinearly mapped versions. The proposed temporal-spatial descriptors (TSDs) are validated on multiple sparse and high-density (HD) EMG data sets collected from a number of intact-limbed and amputee subjects performing a large number of hand and finger movements. Classification results showed significant reductions in the achieved error rates in comparison to other methods, with an improvement of at least 8% on average across all subjects. Additionally, the proposed TSDs performed significantly well on HD-EMG problems, with average classification errors of <5% across all subjects using window lengths of only 50 ms.
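The first (temporal) correlation step can be sketched as follows. The descriptor set and the squaring nonlinearity below are illustrative stand-ins: the paper's actual TDDs estimate power-spectrum moments from the time domain:

```python
import numpy as np

def tdd(x):
    """Simple time-domain descriptors of an EMG analysis window
    (illustrative stand-ins for the paper's TDDs)."""
    dx = np.diff(x)
    return np.array([
        np.sqrt(np.mean(x ** 2)),   # RMS
        np.mean(np.abs(x)),         # mean absolute value
        np.sqrt(np.mean(dx ** 2)),  # RMS of the first derivative
    ])

def temporal_feature(x):
    """Correlate the TDD of a window with the TDD of a nonlinearly
    mapped (here: squared) version of the same window."""
    a, b = tdd(x), tdd(x ** 2)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
window = rng.normal(size=256)
r = temporal_feature(window)
```

The spatial step of the framework applies the same correlation idea to descriptors of channel-difference signals across all channel pairs.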

  5. Predictability of intracranial pressure level in traumatic brain injury: features extraction, statistical analysis and machine learning-based evaluation.

    Science.gov (United States)

    Chen, Wenan; Cockrell, Charles H; Ward, Kevin; Najarian, Kayvan

    2013-01-01

    This paper attempts to predict Intracranial Pressure (ICP) based on features extracted from non-invasively collected patient data. These features include midline shift measurement and textural features extracted from Computed axial Tomography (CT) images. A statistical analysis is performed to examine the relationship between ICP and midline shift. Machine learning is also applied to estimate ICP levels with a two-stage feature selection scheme. To avoid overfitting, all feature selections and parameter selections are performed using a nested 10-fold cross validation within the training data. The classification results demonstrate the effectiveness of the proposed method in ICP prediction.
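The nested cross-validation scheme described above can be sketched with plain index bookkeeping. This is a hypothetical helper, not the authors' code; the feature/parameter-selection steps are left as placeholders:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Nested scheme: all feature and parameter selection happens in the
# inner loop, on training data only, so the outer test fold never
# leaks into model selection.
n_samples = 50
for outer_train, outer_test in kfold_indices(n_samples, 10):
    for inner_train, inner_val in kfold_indices(len(outer_train), 10, seed=1):
        pass  # select features/parameters using outer_train[inner_train] only
    # refit with the chosen setting on outer_train, evaluate on outer_test
```

The key design point, as the abstract stresses, is that the outer test data play no role whatsoever in selection, which is what keeps the reported error an unbiased estimate.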

  6. A New Method for Weak Fault Feature Extraction Based on Improved MED

    Directory of Open Access Journals (Sweden)

    Junlin Li

    2018-01-01

    Full Text Available Because of the characteristics of weak signal and strong noise, fault feature extraction from low-speed vibration signals has been a hot and difficult problem in the field of equipment fault diagnosis. The traditional minimum entropy deconvolution (MED) method has been shown to detect such fault signals. MED designs the filter coefficients by an objective-function method, and an appropriate threshold value must be set during the calculation to achieve the optimal iteration effect. It should be pointed out that an improper threshold setting forces the objective function to be recalculated, and the resulting error ultimately distorts the objective function against a background of strong noise. This paper presents an improved MED-based method for extracting fault features from rolling bearing vibration signals that originate in high-noise environments. The method uses the shuffled frog leaping algorithm (SFLA) to find the set of optimal filter coefficients, thereby avoiding the artificial error introduced by selecting a threshold parameter. A typical low-speed faulty bearing was selected as the research object and verified at two rotating speeds, 60 rpm and 70 rpm; the results show that SFLA-MED extracts more obvious bearing fault features and achieves a higher signal-to-noise ratio than the original MED method.

  7. Deep SOMs for automated feature extraction and classification from big data streaming

    Science.gov (United States)

    Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    In this paper, we propose a deep self-organizing map model (Deep-SOMs) for automated feature extraction and learning from streaming big data, benefiting from the Spark framework for real-time streams and highly parallel data processing. The deep SOM architecture is based on the notion of abstraction (patterns are automatically extracted from the raw data, from the less to the more abstract). The proposed model consists of three hidden self-organizing layers, an input layer and an output layer. Each layer is made up of a multitude of SOMs, each map focusing only on a local sub-region of the input image. Each layer then trains on the local information to generate more global information in the higher layer. The proposed Deep-SOMs model is unique in terms of its layer architecture and its SOM sampling and learning methods. During the learning stage we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large data sets such as the Leukemia and SRBCT datasets. Comparison results show that the Deep-SOMs model performs better than many existing algorithms for image classification.

  8. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles are not acquired under normal conditions. Feature extraction and matching techniques traditionally used in photogrammetry are usually inefficient for these applications, as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performance of the SIFT operator has been compared with that of the feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large-scale aerial images acquired using mini-UAV systems.
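Descriptor matching of the kind SIFT-based tie-point extraction relies on is commonly done with Lowe's ratio test: a candidate match is kept only when its nearest neighbor is clearly closer than the second nearest. A minimal NumPy sketch on raw descriptor arrays (real pipelines use approximate nearest-neighbor search; the 0.8 threshold is the commonly cited default, assumed here):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Brute-force nearest-neighbor matching with Lowe's ratio test.
    Returns (index_in_desc1, index_in_desc2) pairs that pass the test."""
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] < ratio * dist[j2]:  # unambiguous nearest neighbor
            matches.append((i, int(j1)))
    return matches

desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [10.0, 10.0], [5.0, 5.1]])
print(ratio_test_matches(desc1, desc2))  # [(0, 0), (1, 2)]
```

Ambiguous matches in repetitive or poorly textured areas fail the ratio test, which is precisely where the abstract notes that traditional photogrammetric matching struggles.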

  9. Feature extraction techniques using multivariate analysis for identification of lung cancer volatile organic compounds

    Science.gov (United States)

    Thriumani, Reena; Zakaria, Ammar; Hashim, Yumi Zuhanis Has-Yun; Helmy, Khaled Mohamed; Omar, Mohammad Iqbal; Jeffree, Amanina; Adom, Abdul Hamid; Shakaff, Ali Yeon Md; Kamarudin, Latifah Munirah

    2017-03-01

    In this experiment, three different cell cultures (A549, WI38VA13 and MCF7) and a blank medium (without cells) as a control were used. An electronic nose (E-Nose) was used to sniff the headspace of the cultured cells and the data were recorded. After data pre-processing, two different features were extracted, taking into consideration both the steady-state and the transient information. The extracted data were then processed by a multivariate analysis, Linear Discriminant Analysis (LDA), to provide visualization of the clustering vector information in multi-sensor space. A Probabilistic Neural Network (PNN) classifier was used to test the performance of the E-Nose in determining the volatile organic compounds (VOCs) of the lung cancer cell line. The LDA data projection was able to differentiate effectively between the lung cancer cell samples and the other samples (breast cancer, normal cells and blank medium). The features extracted from the steady-state response reached a 100% classification rate, while the transient response, with the aid of LDA dimension-reduction methods, also produced 100% classification performance using the PNN classifier with a spread value of 0.1. The results also show that the E-Nose is a promising technique to be applied to real patients in further work; with the aid of multivariate analysis, it can be an alternative to current lung cancer diagnostic methods.

  10. A Structure Feature for Automatic Extraction of Plantation from High-resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    YAN Li

    2016-09-01

    Full Text Available Satellite remote sensing is an invaluable tool to manage land resources. However, data analysis procedures should satisfy the good adaptability, wide application prospects and high accuracy levels demanded by users. This study presents a novel multi-scale and multi-direction structure index (MMI) to describe the structure feature of plantations caused by cultivation. Plantations are extracted by thresholding the MMI feature map, combined with morphological operators to refine the extraction results. We designed three groups of experiments to test our method; each group used panchromatic and multispectral imagery with various cultivation modes, different vegetated backgrounds and structure complexities. The results show that our method is much more adaptive for plantation extraction than traditional methods. It is efficient for various complex plantations, e.g. multi-direction and multi-scale patterns, highly vegetated backgrounds, and low regularity of planting mode with deformation of textons and planting lines, with accuracies exceeding 90%. Panchromatic images achieve accuracies as high as multispectral images, which indicates that our method has low dependence on spectrum and is thus more flexible in data selection and application.

  11. Application of rapid cloud point extraction method for trace cobalt analysis coupled with spectrophotometric determination

    Science.gov (United States)

    Wen, Xiaodong; He, Lei; Shi, Chunsheng; Deng, Qingwen; Wang, Jiwei; Zhao, Xia

    2013-11-01

    In this work, the analytical performance of a conventional spectrophotometer was improved by coupling an effective preconcentration method with spectrophotometric determination. Rapidly synergistic cloud point extraction (RS-CPE) was used to pre-concentrate ultra-trace cobalt and was coupled with spectrophotometric determination for the first time. The developed coupling was simple, rapid and efficient. The factors influencing RS-CPE and the spectrophotometer were optimized. Under the optimal conditions, the limit of detection (LOD) was 0.6 μg L-1, with a sensitivity enhancement factor of 23. The relative standard deviation (RSD) for seven replicate measurements of 50 μg L-1 of cobalt was 4.3%. The recoveries for the spiked samples were in the acceptable range of 93.8-105%.

  12. A motion correction method for indoor robot based on lidar feature extraction and matching

    Science.gov (United States)

    Gou, Jiansong; Guo, Yu; Wei, Yang; Li, Zheng; Zhao, Yeming; Wang, Lirong; Chen, Xiaohe

    2018-01-01

    For robots that use a Light Detection and Ranging system (Lidar) for indoor environment detection, positioning and navigation, the accuracy of map building, positioning and navigation is largely restricted by the motion accuracy. Due to manufacturing and transmission errors of the mechanical structure, sensors easily affected by the environment, and other factors, a robot's cumulative motion error is inevitable. This paper presents a series of methods to overcome these problems: point set partition and feature extraction methods for processing Lidar scan points, and a feature matching method to correct the motion process, with less computation, more reasonable and rigorous thresholds, a wider scope of application, and higher efficiency and accuracy. While extracting environment features and building indoor maps, these methods analyze and correct the robot's motion error, improving the accuracy of movement and of the map without any additional hardware. Experiments prove that the rotation error and translation error of the robot platform used in the experiments can be reduced by 50% and 70%, respectively. The methods evidently improve the motion accuracy with strong effectiveness and practicality.

  13. Integrating angle-frequency domain synchronous averaging technique with feature extraction for gear fault diagnosis

    Science.gov (United States)

    Zhang, Shengli; Tang, J.

    2018-01-01

    Gear fault diagnosis relies heavily on the scrutiny of vibration responses measured. In reality, gear vibration signals are noisy and dominated by meshing frequencies as well as their harmonics, which oftentimes overlay the fault related components. Moreover, many gear transmission systems, e.g., those in wind turbines, constantly operate under non-stationary conditions. To reduce the influences of non-synchronous components and noise, a fault signature enhancement method that is built upon angle-frequency domain synchronous averaging is developed in this paper. Instead of being averaged in the time domain, the signals are processed in the angle-frequency domain to solve the issue of phase shifts between signal segments due to uncertainties caused by clearances, input disturbances, and sampling errors, etc. The enhanced results are then analyzed through feature extraction algorithms to identify the most distinct features for fault classification and identification. Specifically, Kernel Principal Component Analysis (KPCA) targeting at nonlinearity, Multilinear Principal Component Analysis (MPCA) targeting at high dimensionality, and Locally Linear Embedding (LLE) targeting at local similarity among the enhanced data are employed and compared to yield insights. Numerical and experimental investigations are performed, and the results reveal the effectiveness of angle-frequency domain synchronous averaging in enabling feature extraction and classification.

  14. An energy ratio feature extraction method for optical fiber vibration signal

    Science.gov (United States)

    Sheng, Zhiyong; Zhang, Xinyan; Wang, Yanping; Hou, Weiming; Yang, Dan

    2017-12-01

    The intrusion events in an optical fiber pre-warning system (OFPS) are divided into two types: harmful intrusion events and harmless interference events. At present, the signal feature extraction methods for these two types of events are usually designed from the time-domain point of view. However, the differences in time-domain characteristics between different harmful intrusion events are not obvious and cannot reflect their diversity in detail. We find that the spectral distributions of different intrusion signals differ clearly, and for this reason the intrusion signal is transformed into the frequency domain. In this paper, an energy ratio feature extraction method for harmful intrusion events is proposed. First, the intrusion signals are pre-processed and the power spectral density (PSD) is calculated. Then, the energy ratio of different frequency bands is calculated, and the corresponding feature vector of each type of intrusion event is formed. A linear discriminant analysis (LDA) classifier is used to identify the harmful intrusion events. Experimental results show that the algorithm improves the recognition rate of the intrusion signal, further verifying its feasibility and validity.
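The PSD-then-band-energy-ratio pipeline can be sketched with a plain periodogram. The band edges and test signal below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def band_energy_ratios(x, fs, bands):
    """Compute the fraction of total signal energy in each frequency
    band, from a simple periodogram PSD estimate."""
    X = np.fft.rfft(x)
    psd = (np.abs(X) ** 2) / len(x)          # one-sided periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    total = psd.sum()
    return [psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in bands]

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
ratios = band_energy_ratios(x, fs, [(0, 100), (100, 300)])
```

The resulting ratio vector (here roughly 0.8 vs 0.2, since power scales with amplitude squared) is the kind of feature vector the abstract feeds to the LDA classifier.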

  15. A NOVEL SHAPE BASED FEATURE EXTRACTION TECHNIQUE FOR DIAGNOSIS OF LUNG DISEASES USING EVOLUTIONARY APPROACH

    Directory of Open Access Journals (Sweden)

    C. Bhuvaneswari

    2014-07-01

    Full Text Available Lung diseases are among the most common diseases affecting the human community worldwide. When these diseases are not diagnosed they may lead to serious problems and may even be fatal. To assist the medical community, this study helps in detecting some of the lung diseases, specifically bronchitis and pneumonia, alongside normal lung images. In this paper, to detect the lung diseases, feature extraction is done by the proposed shape-based methods, feature selection through a genetic algorithm, and the images are classified by classifiers such as MLP-NN, KNN and Bayes Net, whose performances are listed and compared. The shape features are extracted and selected from the input CT images using image processing techniques and fed to the classifiers for categorization. A total of 300 lung CT images were used, of which 240 were used for training and 60 for testing. Experimental results show that MLP-NN has a classification accuracy of 86.75%, the KNN classifier 85.2% and Bayes Net 83.4%. The sensitivity, specificity, F-measure and PPV values for the various classifiers are also computed. This leads to the conclusion that MLP-NN outperforms all the other classifiers.

  16. Improving ELM-Based Service Quality Prediction by Concise Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yuhai Zhao

    2015-01-01

    Full Text Available Web services often run in highly dynamic and changing environments, which generate huge volumes of data. Thus, it is impractical to monitor the change of every QoS parameter to trigger timely precautions, due to the high computational costs associated with the process. To address this problem, this paper proposes an active service quality prediction method based on the extreme learning machine (ELM). First, we extract web service trace logs and QoS information from the service log and convert them into feature vectors. Second, the proposed EC rules enable us to trigger QoS precautions as early as possible with high confidence. An efficient prefix-tree-based mining algorithm, together with some effective pruning rules, is developed to mine such rules. Finally, we study how to extract a set of diversified features representative of all mined results. This problem is proved to be NP-hard, and a greedy algorithm is presented to approximate the optimal solution. Experimental results show that an ELM trained on the selected feature subsets can efficiently improve the reliability and earliness of service quality prediction.

  17. An approach to EEG-based emotion recognition using combined feature extraction method.

    Science.gov (United States)

    Zhang, Yong; Ji, Xiaomin; Zhang, Suhua

    2016-10-28

    EEG signals have been widely used in emotion recognition. However, too many channels and extracted features are used in current EEG-based emotion recognition methods, which makes these methods complex. This paper studies feature extraction for EEG-based emotion recognition to overcome those disadvantages, and proposes an emotion recognition method based on empirical mode decomposition (EMD) and sample entropy. The proposed method first employs EMD to decompose EEG signals containing only two channels into a series of intrinsic mode functions (IMFs). The first 4 IMFs are selected to calculate the corresponding sample entropies, which then form feature vectors. These vectors are fed into a support vector machine classifier for training and testing. The average accuracy of the proposed method is 94.98% for the binary-class tasks, and the best accuracy reaches 93.20% for the multi-class task on the DEAP database. The results indicate that the proposed method is better suited for emotion recognition than several comparison methods. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
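Sample entropy itself admits a compact (if O(N²)) NumPy sketch, which behaves as expected on regular versus irregular signals. The parameters m = 2 and r = 0.2·std are the conventional defaults, assumed here rather than taken from the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences similar for m points remain similar at m+1 points
    (Chebyshev distance, tolerance r = r_factor * std, self-matches
    excluded). O(N^2) memory/time; fine for short windows."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def match_count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)[:len(x) - m]
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.count_nonzero(d <= r) - len(templ)) / 2  # drop self-matches
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

t = np.linspace(0, 8 * np.pi, 200)
regular = np.sin(t)                                   # predictable signal
noise = np.random.default_rng(0).normal(size=200)     # irregular signal
```

A predictable oscillation yields a low SampEn and white noise a high one, which is why per-IMF sample entropies can separate emotional states with so few channels.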

  18. Statistical Feature Extraction for Fault Locations in Nonintrusive Fault Detection of Low Voltage Distribution Systems

    Directory of Open Access Journals (Sweden)

    Hsueh-Hsien Chang

    2017-04-01

    Full Text Available This paper proposes statistical feature extraction methods combined with artificial intelligence (AI) approaches for locating faults in non-intrusive single-line-to-ground fault (SLGF) detection in low-voltage distribution systems. The input features of the AI algorithms are extracted using a statistical moment transformation that reduces the dimensions of the power signature inputs measured with non-intrusive fault monitoring (NIFM) techniques. The data required to develop the network are generated by simulating SLGF using the Electromagnetic Transient Program (EMTP) in a test system. To enhance identification accuracy, these features, after normalization, are given to the AI algorithms presented and evaluated in this paper. Different AI techniques are then compared to determine which identification algorithms are suitable for diagnosing the SLGF from various power signatures in a NIFM system. The simulation results show that the proposed method is effective and can identify the fault locations by using non-intrusive monitoring techniques for low-voltage distribution systems.
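A dimensionality reduction of this kind can be sketched as mapping each power signature to a handful of standardized statistical moments. Which moments and how many the paper actually uses is not specified here; this is an illustrative assumption:

```python
import numpy as np

def moment_features(x):
    """Compress a signal into statistical-moment features:
    mean, variance, skewness and kurtosis (standardized 3rd/4th
    central moments)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return np.array([mu, sigma ** 2, np.mean(z ** 3), np.mean(z ** 4)])

f = moment_features(np.array([-2.0, -1.0, 1.0, 2.0]))
```

A long sampled waveform thus collapses to a fixed, small feature vector, which (after normalization) is what the AI classifiers consume.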

  19. Rapid, room-temperature synthesis of amorphous selenium/protein composites using Capsicum annuum L extract

    Energy Technology Data Exchange (ETDEWEB)

    Li Shikuo; Shen Yuhua; Xie Anjian; Yu Xuerong; Zhang Xiuzhen; Yang Liangbao; Li Chuanhao [School of Chemistry and Chemical Engineering, Anhui University, Hefei 230039 (China)

    2007-10-10

    We describe the formation of amorphous selenium ({alpha}-Se)/protein composites using Capsicum annuum L extract to reduce selenium ions (SeO{sub 3}{sup 2-}) at room temperature. The reaction occurs rapidly and the process is simple and easy to handle. A protein with a molecular weight of 30 kDa extracted from Capsicum annuum L not only reduces the SeO{sub 3}{sup 2-} ions to Se{sup 0}, but also controls the nucleation and growth of Se{sup 0}, and even participates in the formation of the {alpha}-Se/protein composites. The size and shell thickness of the {alpha}-Se/protein composites increase with higher Capsicum annuum L extract concentration, and decrease with lower reaction solution pH. The results suggest that this eco-friendly, biogenic synthesis strategy could be widely used for preparing inorganic/organic biocomposites. In addition, we also discuss the possible mechanism of the reduction of SeO{sub 3}{sup 2-} ions by Capsicum annuum L extract.

  20. Consistent Feature Extraction From Vector Fields: Combinatorial Representations and Analysis Under Local Reference Frames

    Energy Technology Data Exchange (ETDEWEB)

    Bhatia, Harsh [Univ. of Utah, Salt Lake City, UT (United States)

    2015-05-01

    This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields, an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residual numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis along with the uncertainty

  1. Applying Improved Multiscale Fuzzy Entropy for Feature Extraction of MI-EEG

    Directory of Open Access Journals (Sweden)

    Ming-ai Li

    2017-01-01

    Full Text Available Electroencephalography (EEG) is considered the output of the brain and is a bioelectrical signal with multiscale and nonlinear properties. Motor Imagery EEG (MI-EEG) not only has a close correlation with human imagination and movement intention but also contains a large amount of physiological or disease information. As a result, it has been studied extensively in the field of rehabilitation. To correctly interpret and accurately extract the features of MI-EEG signals, many nonlinear dynamic methods based on entropy, such as Approximate Entropy (ApEn), Sample Entropy (SampEn), Fuzzy Entropy (FE), and Permutation Entropy (PE), have been proposed and exploited continuously in recent years. However, these entropy-based methods can only measure the complexity of MI-EEG on a single scale and therefore fail to account for the multiscale property inherent in MI-EEG. To solve this problem, Multiscale Sample Entropy (MSE), Multiscale Permutation Entropy (MPE), and Multiscale Fuzzy Entropy (MFE) have been developed by introducing a scale factor. However, MFE has not been widely used in the analysis of MI-EEG, and the same parameter values are employed when the MFE method is used to calculate the fuzzy entropy values on multiple scales. In fact, each coarse-grained MI-EEG series carries the characteristic information of the original signal at a different scale factor, and it is necessary to optimize the MFE parameters to discover more feature information. In this paper, the parameters of MFE are optimized independently for each scale factor, and the improved MFE (IMFE) is applied to the feature extraction of MI-EEG. Based on the event-related desynchronization (ERD)/event-related synchronization (ERS) phenomenon, IMFE features from multiple channels are fused organically to construct the feature vector. Experiments are conducted on a public dataset using a Support Vector Machine (SVM) as the classifier. The experimental results of 10-fold cross-validation show that the proposed method yields
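The coarse-graining step that underlies all the multiscale entropies mentioned above (MSE, MPE, MFE) is simple to state: average consecutive non-overlapping windows of the signal. A generic sketch of that step only, not the IMFE parameter optimization itself:

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-grain a series for multiscale entropy analysis by
    averaging consecutive non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

print(coarse_grain(np.arange(6), 2))  # [0.5 2.5 4.5]
```

The entropy of interest (here, fuzzy entropy) is then computed on each coarse-grained series; the paper's contribution is tuning the entropy parameters separately per scale.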

  2. Robo-Psychophysics: Extracting Behaviorally Relevant Features from the Output of Sensors on a Prosthetic Finger.

    Science.gov (United States)

    Delhaye, Benoit P; Schluter, Erik W; Bensmaia, Sliman J

    2016-01-01

    Efforts are underway to restore sensorimotor function in amputees and tetraplegic patients using anthropomorphic robotic hands. For this approach to be clinically viable, sensory signals from the hand must be relayed back to the patient. To convey tactile feedback necessary for object manipulation, behaviorally relevant information must be extracted in real time from the output of sensors on the prosthesis. In the present study, we recorded the sensor output from a state-of-the-art bionic finger during the presentation of different tactile stimuli, including punctate indentations and scanned textures. Furthermore, the parameters of stimulus delivery (location, speed, direction, indentation depth, and surface texture) were systematically varied. We developed simple decoders to extract behaviorally relevant variables from the sensor output and assessed the degree to which these algorithms could reliably extract these different types of sensory information across different conditions of stimulus delivery. We then compared the performance of the decoders to that of humans in analogous psychophysical experiments. We show that straightforward decoders can extract behaviorally relevant features accurately from the sensor output and most of them outperform humans.

  3. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    Science.gov (United States)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results on the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and degrees of distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature images generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with a morphological feature extraction method based on curvature thresholds demonstrated the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered
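    The local Gi* step can be illustrated with a small sketch. This is an assumption-laden simplification of the method in the paper: it uses a square binary-weight moving window on a curvature grid and the standard Getis-Ord Gi* standardization; the paper's neighbourhood definition and significance testing may differ.

```python
import numpy as np

def local_gi_star(grid, radius=1):
    """Getis-Ord Gi* over a 2D grid of morphometric values (e.g. curvature),
    with a square binary-weight window that includes the cell itself."""
    g = np.asarray(grid, dtype=float)
    n = g.size
    xbar, s = g.mean(), g.std(ddof=0)
    out = np.zeros_like(g)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            i0, i1 = max(0, i - radius), min(g.shape[0], i + radius + 1)
            j0, j1 = max(0, j - radius), min(g.shape[1], j + radius + 1)
            w = (i1 - i0) * (j1 - j0)        # number of neighbours incl. self
            local_sum = g[i0:i1, j0:j1].sum()
            denom = s * np.sqrt((n * w - w * w) / (n - 1))
            out[i, j] = (local_sum - xbar * w) / denom
    return out
```

    Large positive Gi* values mark clusters of high curvature (e.g. convex scarp edges); thresholding them at a significance level yields the cell clusters described in the abstract.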

  4. A new rapid method for Clostridium difficile DNA extraction and detection in stool: toward point-of-care diagnostic testing

    National Research Council Canada - National Science Library

    Freifeld, Alison G; Simonsen, Kari A; Booth, Christine S; Zhao, Xing; Whitney, Scott E; Karre, Teresa; Iwen, Peter C; Viljoen, Hendrik J

    2012-01-01

    We describe a new method for the rapid diagnosis of Clostridium difficile infection, with stool sample preparation and DNA extraction by heat and physical disruption in a single-use lysis microreactor (LMR...

  5. Correlated EEMD and Effective Feature Extraction for Both Periodic and Irregular Faults Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Jiejunyi Liang

    2017-10-01

    Full Text Available Intelligent fault diagnosis of complex machinery is crucial for industries to reduce maintenance cost and to improve fault prediction performance. The acoustic signal is an ideal source for diagnosis because of its inherent characteristics of being non-directional and insensitive to structural resonances. However, acoustic signals also have two main drawbacks: a low signal-to-noise ratio (SNR) caused by their high sensitivity, and low computational efficiency caused by the huge data size. These would decrease the performance of the fault diagnosis system. Therefore, it is important to develop a proper feature extraction method that improves computational efficiency and performance in both periodic and irregular fault diagnosis. To enhance the SNR of the acquired acoustic signal, the correlation coefficient (CC) method is employed to eliminate the redundant intrinsic mode functions (IMFs) produced by the pre-processing decomposition step, ensemble empirical mode decomposition (EEMD): the higher the correlation coefficient of an IMF, the more significant the fault signatures it contains, while redundant IMFs compromise both the SNR and the computational cost. Singular value decomposition (SVD) and Sample Entropy (SampEn) are subsequently used to extract the fault features, exploiting their sensitivities to irregular and periodic fault signals, respectively. In addition, the proposed feature extraction method using a sparse Bayesian based pairwise-coupled extreme learning machine (PC-SBELM) outperforms the existing pairwise-coupled probabilistic neural network (PC-PNN) and pairwise-coupled relevance vector machine (PC-RVM) by 1.8% and 2%, respectively, achieving an accuracy of 93.9%. Experiments conducted on periodic and irregular faults in gears and bearings have demonstrated that the proposed hybrid fault diagnosis system is effective.
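    The correlation-coefficient screening of EEMD components can be sketched as below. The threshold value of 0.3 is a hypothetical choice for illustration only, and the EEMD decomposition itself (e.g. via a library such as PyEMD) is assumed to have been performed already.

```python
import numpy as np

def select_imfs(signal, imfs, cc_threshold=0.3):
    """Keep only the IMFs whose correlation coefficient (CC) with the raw
    signal exceeds a threshold; low-CC IMFs are treated as redundant.
    cc_threshold is an illustrative value, not one from the paper."""
    signal = np.asarray(signal, dtype=float)
    kept = []
    for imf in imfs:
        cc = np.corrcoef(signal, np.asarray(imf, dtype=float))[0, 1]
        if abs(cc) >= cc_threshold:
            kept.append(np.asarray(imf, dtype=float))
    return kept
```

    The retained IMFs would then feed the SVD and SampEn feature extractors described in the abstract.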

  6. ECG Identification Based on Non-Fiducial Feature Extraction Using Window Removal Method

    Directory of Open Access Journals (Sweden)

    Woo-Hyuk Jung

    2017-11-01

    Full Text Available This study proposes electrocardiogram (ECG) identification based on non-fiducial feature extraction using a window removal method, nearest neighbor (NN), support vector machine (SVM), and linear discriminant analysis (LDA) classifiers. In the pre-processing stage, Daubechies 4 is used to remove the baseline wander and noise of the original signal. In the feature extraction and selection stage, windows are set at a time interval of 5 s in the preprocessed signal, while autocorrelation, scaling, and the discrete cosine transform (DCT) are applied to extract and select features. Thereafter, the window removal method is applied to all of the generated windows to remove those that are unrecognizable. Lastly, in the classification stage, the NN, SVM, and LDA classifiers are used to perform individual identification. As a result, when the NN is used on the Normal Sinus Rhythm (NSR), PTB diagnostic, and QT databases, the subject identification rates are 100%, 99.40% and 100%, while the window identification rates are 99.02%, 97.13% and 98.91%. When the SVM is used, all of the subject identification rates are 100%, while the window identification rates are 96.92%, 95.82% and 98.32%. When the LDA is used, all of the subject identification rates are 100%, while the window identification rates are 98.67%, 98.65% and 99.23%. The proposed method demonstrates good results not only for normal signals but also for abnormal signals. In addition, the window removal method improves individual identification accuracy by removing windows that cannot be recognized.
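    The non-fiducial feature pipeline (fixed 5 s windows, autocorrelation, then DCT compression) can be sketched as follows. The number of autocorrelation lags and retained DCT coefficients are illustrative values, not those reported in the paper, and the DCT-II is written out directly to keep the sketch dependency-free.

```python
import numpy as np

def autocorr(x, lags):
    """Normalised autocorrelation of one window, lags 0..lags-1."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full[:lags] / full[0]

def dct2(v):
    """Orthonormal DCT-II, implemented directly (no SciPy dependency)."""
    N = len(v)
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    basis = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    coef = basis @ v
    coef[0] *= np.sqrt(1 / N)
    coef[1:] *= np.sqrt(2 / N)
    return coef

def window_features(ecg, fs, win_s=5, lags=50, n_dct=20):
    """Non-fiducial features: autocorrelation of each win_s-second window,
    compressed to its first n_dct DCT coefficients."""
    step = int(win_s * fs)
    feats = []
    for start in range(0, len(ecg) - step + 1, step):
        ac = autocorr(ecg[start:start + step], lags)
        feats.append(dct2(ac)[:n_dct])
    return np.array(feats)
```

    Each row of the returned matrix is one window's feature vector; the window removal step would then discard rows that the classifier cannot recognize reliably.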

  7. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences.

  8. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    Full Text Available The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic method for detecting Glaucoma in retinal images. The methodology of the study comprised: acquisition of an image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained an accuracy of 93%.

  9. Hydro-geomorphic connectivity and landslide feature extraction to identify potential threats and hazardous areas

    Science.gov (United States)

    Tarolli, Paolo; Fuller, Ian C.; Basso, Federica; Cavalli, Marco; Sofia, Giulia

    2017-04-01

    Hydro-geomorphic connectivity has emerged as a significant new concept for understanding the transfer of surface water and sediment through landscapes. A further scientific challenge is determining how the concept can be used to enable sustainable land and water management. This research proposes an approach that integrates remote sensing techniques, connectivity theory, and geomorphometry based on high-resolution digital terrain models (HR-DTMs) to automatically extract landslide crowns and gully erosion, to determine the different rates of connectivity among the main extracted features and the river network, and thus to derive a possible categorization of hazardous areas. The study takes place in two mountainous catchments in the Wellington Region (New Zealand). The methodology is a three-step approach. Firstly, we performed an automatic detection of the likely landslide crowns using thresholds obtained from a statistical analysis of the variability of landform curvature. Secondly, we considered the Connectivity Index to analyse how a complex and rugged topography induces large variations in erosion and sediment delivery in the two catchments. Lastly, the two methods were integrated into a single procedure able to classify the different rates of connectivity among the main features and the river network, thus identifying potential threats and hazardous areas. The methodology is fast, and it can produce a detailed and updated inventory map that could be a key tool for mitigating erosion and sediment delivery hazards. This fast and simple method can be a useful tool for managing emergencies by giving priority to more failure-prone zones. Furthermore, it could support preliminary interpretations of geomorphological phenomena and, more generally, serve as the basis for developing inventory maps. References: Cavalli M, Trevisani S, Comiti F, Marchi L. 2013. Geomorphometric assessment of spatial sediment connectivity

  10. Regularized generalized eigen-decomposition with applications to sparse supervised feature extraction and sparse discriminant analysis

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2015-01-01

    We propose a general technique for obtaining sparse solutions to generalized eigenvalue problems, and call it Regularized Generalized Eigen-Decomposition (RGED). For decades, Fisher's discriminant criterion has been applied in supervised feature extraction and discriminant analysis...... techniques, for instance, 2D-Linear Discriminant Analysis (2D-LDA). Furthermore, an iterative algorithm based on the alternating direction method of multipliers is developed. The algorithm approximately solves RGED with monotonically decreasing convergence and at an acceptable speed for results of modest...... accuracy. Numerical experiments based on four data sets of different types of images show that RGED has competitive classification performance with existing multidimensional and sparse techniques of discriminant analysis....

  11. Automatic detection of melanoma using broad extraction of features from digital images.

    Science.gov (United States)

    Jafari, M H; Samavi, S; Karimi, N; Soroushmehr, S M R; Ward, K; Najarian, K

    2016-08-01

    Automatic and reliable diagnosis of skin cancer, as a smartphone application, is of great interest. Among the different types of skin cancer, melanoma is the most dangerous and causes the most deaths. Meanwhile, melanoma is curable if diagnosed in its early stages. In this paper we propose an efficient system for prescreening of pigmented skin lesions for malignancy using general-purpose digital cameras. The images can be captured by a smartphone or a digital camera, which could be beneficial in applications such as computer-aided diagnosis and telemedicine, assisting dermatologists or smartphone users in evaluating the risk of suspicious moles. The proposed method enhances lesion borders and extracts a broad set of dermatologically important features. These discriminative features allow classification of lesions into two groups: melanoma and benign. The method is computationally appropriate for a smartphone application. Experimental results show that our proposed method is superior in diagnostic accuracy compared to state-of-the-art methods.

  12. Research on Feature Extraction of Indicator Card Data for Sucker-Rod Pump Working Condition Diagnosis

    Directory of Open Access Journals (Sweden)

    Yunhua Yu

    2013-01-01

    Full Text Available Three feature extraction methods for sucker-rod pump indicator card data have been studied, simulated, and compared in this paper, based on Fourier Descriptors (FD), the Geometric Moment Vector (GMV), and Gray Level Matrix Statistics (GLMX), respectively. Numerical experiments show that the Fourier Descriptors algorithm requires less running time and less memory space, with possible loss of information due to a nonoptimal number of Fourier Descriptors; the Geometric Moment Vector algorithm is more time-consuming and requires more memory space; while the Gray Level Matrix Statistics algorithm provides low-dimension feature vectors at the cost of more time and memory space. Furthermore, the rotational invariance of both the Fourier Descriptors algorithm and the Geometric Moment Vector algorithm may result in improper pattern recognition of indicator card data when used for sucker-rod pump working condition diagnosis.
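    A minimal Fourier Descriptor computation for a closed card contour might look like the following sketch. The normalisation used here (dropping the DC term for translation invariance, dividing by the first harmonic for scale invariance, keeping magnitudes only) is one standard convention; keeping only magnitudes is also exactly what produces the rotational invariance the abstract warns about.

```python
import numpy as np

def fourier_descriptors(contour, k=16):
    """First k Fourier Descriptor magnitudes of a closed 2D contour,
    given as an (N, 2) array of (x, y) points in traversal order."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex representation
    F = np.fft.fft(z)
    mags = np.abs(F[1:k + 1])                # drop DC -> translation invariant
    return mags / mags[0]                    # divide by 1st harmonic -> scale invariant
```

    Because only magnitudes are kept, two cards that differ by a rotation (or start point) produce identical descriptors, which is the failure mode noted in the abstract for working condition diagnosis.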

  13. [Extraction of first derivative spectrum features of soil organic matter via wavelet de-noising].

    Science.gov (United States)

    Liu, Wei; Chang, Qing-Rui; Guo, Man; Xing, Dong-Xing; Yuan, Yong-Sheng

    2011-01-01

    The hyperspectral reflectance of soil was measured by an ASD FieldSpec within 400-1 000 nm. Next, the first derivative spectra were computed and de-noised by a threshold de-noising method based on the wavelet transform. From the de-noised derivative spectra, absorption areas used as indicators of soil organic matter content were acquired by numerical integration. Results show that: (1) because of heavy noise, it is difficult to identify the spectral contour and features in the first derivative of soil spectra resulting from different organic content levels; (2) when the scale of wavelet decomposition is 3, the threshold de-noising method based on the wavelet transform can balance smoothing the curve against preserving spectral features; (3) the absorption area S(538, 586) extracted from the de-noised first derivative of soil spectra has a correlation coefficient of 0.8963 with organic matter content.
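    The absorption-area indicator, obtained by numerically integrating the de-noised first-derivative spectrum over a band such as S(538, 586), can be sketched as follows (trapezoidal rule; the wavelet de-noising step is assumed to have been applied already):

```python
import numpy as np

def absorption_area(wavelengths, derivative, band):
    """Trapezoidal integral of the first-derivative spectrum over a
    wavelength band, e.g. band=(538, 586) for the paper's S(538, 586)."""
    lo, hi = band
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    w, d = wavelengths[mask], derivative[mask]
    return float(np.sum((d[1:] + d[:-1]) / 2.0 * np.diff(w)))
```

    The resulting scalar per sample is what would be correlated against measured organic matter content.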

  14. LOW-LEVEL TIE FEATURE EXTRACTION OF MOBILE MAPPING DATA (MLS/IMAGES) AND AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    P. Jende

    2016-03-01

    Full Text Available Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform’s position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose attempts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform’s defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform’s three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and dataset demands different approaches, two independent workflows are being developed in parallel. Both workflows, still under development, will be presented and preliminary results shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed

  15. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.

  16. Learning object location predictors with boosting and grammar-guided feature extraction

    Energy Technology Data Exchange (ETDEWEB)

    Eads, Damian Ryan [Los Alamos National Laboratory; Rosten, Edward [UNIV OF CAMBRIDGE; Helmbold, David [UC/SANTA CRUZ

    2009-01-01

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, they introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high-quality set of (x,y) locations. Lastly, they carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. They demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.

  17. Feature Extraction of Circuit Breaker Vibration Signals Based on an Intrinsic Mode Function Energy Tracking Method

    Directory of Open Access Journals (Sweden)

    Sun Yi-Hang

    2017-01-01

    Full Text Available In order to detect mechanical structural failures of circuit breakers, the characteristics of the circuit breaker mechanical vibration signal are analyzed in this paper. A vibration signal feature extraction and fault classification method for medium-voltage circuit breakers, combining the energy of empirical mode decomposition (EMD) components with support vector machine (SVM) theory, is proposed. First, the vibration signal of the circuit breaker is decomposed by EMD to obtain the intrinsic mode functions (IMFs). The energy of each IMF component, which carries the major fault feature information, is then computed from the discrete sampling points. Using the IMF component energies as a feature vector, the feature vector of the test sample signal is fed into a trained “BT-SVM” support vector machine classification mechanism for fault classification. The experimental analysis shows that the differences among vibration signals and the fault types can be identified by this method.
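    Computing the IMF energy feature vector described above is straightforward once the EMD has produced the IMFs; normalising to unit total energy is an illustrative choice for comparability across recordings, not necessarily the paper's convention:

```python
import numpy as np

def imf_energy_features(imfs):
    """Energy of each IMF (sum of squared samples over the discrete
    sampling points), normalised so the feature vector sums to one."""
    energies = np.array([np.sum(np.asarray(imf, dtype=float) ** 2)
                         for imf in imfs])
    return energies / energies.sum()
```

    The resulting vector is what would be fed to the trained SVM classifier.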

  18. A Simple and Rapid Data Extraction Method for the Precision Aspheric Optical Surface Height

    Science.gov (United States)

    Xing, Guohua; Peng, Yunfeng; Su, Xing

    2017-10-01

    Nowadays, the application of aspheric optics is becoming more and more popular in the precision optical engineering field, which urges the rapid development of precision machining and measuring technology. Generally, an aspheric optical component is measured by an interferometer. The underlying problem is that the figure output by the interferometer cannot always be recognized by other analysis software or programs, even though the interferometer has its own data processing system. In this paper, a robust, rapid and simple method is presented to interpret the surface height data of a precision-machined aspheric optical surface. The optical surface is measured by the interferometer. The resulting figure is split into two parts: the interferogram picture of the whole aspheric optical surface, and the colour reference column indicating the height value. The ratios of red (R), green (G) and blue (B) are analysed along the middle of the colour reference column, and the corresponding relationship between colour and surface height is established and used as a reference database. Then the interferogram picture of the whole aspheric optical surface is likewise analysed and divided according to its red (R), green (G) and blue (B) colours. By comparing the ratios and values of the RGB colours, the aspheric optical surface height can be extracted approximately. The feasibility of this method was verified by an extraction experiment on a polished aspheric optical surface.
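    The colour-bar lookup described above can be sketched as a nearest-colour search. The linear spacing of heights along the reference column and the Euclidean RGB distance are assumptions made for illustration; the paper works with RGB ratios read from the middle of the colour column.

```python
import numpy as np

def build_height_lut(colorbar_rgb, h_min, h_max):
    """Map each row of the colour reference column to a height, assuming
    the column is linearly spaced from h_max (top) to h_min (bottom)."""
    heights = np.linspace(h_max, h_min, len(colorbar_rgb))
    return np.asarray(colorbar_rgb, dtype=float), heights

def pixel_height(pixel_rgb, lut_rgb, lut_heights):
    """Nearest-colour lookup: assign the height of the closest reference
    colour (plain Euclidean distance in RGB space)."""
    d = np.linalg.norm(lut_rgb - np.asarray(pixel_rgb, dtype=float), axis=1)
    return lut_heights[np.argmin(d)]
```

    Applying pixel_height to every pixel of the interferogram picture reconstructs an approximate height map of the aspheric surface.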

  19. Multiple feature extraction and classification of electroencephalograph signal for Alzheimer's with spectrum and bispectrum

    Science.gov (United States)

    Wang, Ruofan; Wang, Jiang; Li, Shunan; Yu, Haitao; Deng, Bin; Wei, Xile

    2015-01-01

    In this paper, we have combined experimental neurophysiologic recording and statistical analysis to investigate the nonlinear characteristics and cognitive function of the brain. Spectrum and bispectrum analyses are proposed to extract multiple effective features of electroencephalograph (EEG) signals from Alzheimer's disease (AD) patients and are further applied to distinguish AD patients from normal controls. Spectral analysis based on the autoregressive Burg method is first used to quantify the power distribution of EEG series in the frequency domain. Compared to the control group, the relative power spectral density of the AD group is significantly higher in the theta frequency band, while lower in the alpha frequency band. In addition, the median frequency of the spectrum is decreased, and the spectral entropy ratio of these two frequency bands undergoes drastic changes at the P3 electrode in the central-parietal brain region, implying that the electrophysiological behavior in the AD brain is much slower and less irregular. In order to explore nonlinear higher-order information, bispectral analysis, which measures the complexity of phase-coupling, is further applied to the P3 electrode in the whole frequency band. It is demonstrated that fewer bispectral peaks appear and the amplitudes of the peaks fall, suggesting a decrease of non-Gaussianity and nonlinearity of EEG in AD patients. Notably, the application of this method to five brain regions shows a higher concentration of the weighted center of bispectrum and lower phase-coupling complexity as measured by bispectral entropy. Based on spectrum and bispectrum analyses, six efficient features are extracted and then applied to discriminate AD from the normal in the five brain regions. The classification results indicate that all these features could differentiate AD patients from normal controls with a maximum accuracy of 90.2%. Particularly, different brain regions are sensitive to different features. Moreover, the optimal combination of
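    The relative power spectral density feature can be sketched with a plain FFT periodogram (the paper uses the autoregressive Burg method; the periodogram keeps this illustration dependency-free). Band edges for theta and alpha are the conventional 4-8 Hz and 8-13 Hz, an assumption rather than a value from the abstract.

```python
import numpy as np

def relative_band_power(x, fs, band):
    """Relative power of a frequency band from an FFT periodogram:
    band power divided by total power (DC excluded)."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum() / psd[1:].sum()

THETA, ALPHA = (4.0, 8.0), (8.0, 13.0)   # conventional EEG band edges
```

    The abstract's finding corresponds to relative_band_power being higher in THETA and lower in ALPHA for the AD group than for controls.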

  20. A modified MS2 bacteriophage plaque reduction assay for the rapid screening of antiviral plant extracts.

    Science.gov (United States)

    Cock, Ian; Kalt, F R

    2010-07-01

    Traditional methods of screening plant extracts and purified components for antiviral activity require up to a week to perform, prompting the need for more rapid quantitative methods to measure the ability of plant-based preparations to block viral replication. We describe an adaptation of an MS2 plaque reduction assay for use in S. aureus. MS2 bacteriophage was capable of infecting and replicating in B. cereus, S. aureus and F+ E. coli but not F- E. coli. Indeed, both B. cereus and S. aureus were more sensitive to MS2-induced lysis than F+ E. coli. When MS2 bacteriophage was mixed with Camellia sinensis extract (1 mg/ml), Scaevola spinescens extract (1 mg/ml) or Aloe barbadensis juice and the mixtures were inoculated into S. aureus, the formation of plaques was reduced to 8.9 ± 3.8%, 5.4 ± 2.4% and 72.7 ± 20.9% of the untreated MS2 control values, respectively. The ability of the MS2 plaque reduction assay to detect antiviral activity in these known antiviral plant preparations indicates its suitability as an antiviral screening tool. An advantage of this assay compared with traditionally used cytopathic effect reduction assays and replicon-based assays is the more rapid acquisition of results: antiviral activity was detected within 24 h of the start of testing. The MS2 assay is also inexpensive and non-pathogenic to humans, making it ideal for initial screening studies or as a simulant for pathogenic viruses.
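    The plaque-reduction readout (plaques remaining as a percentage of the untreated MS2 control, e.g. the reported 8.9 ± 3.8%) reduces to a simple calculation over replicate plate counts:

```python
import statistics

def plaque_reduction(treated_counts, control_counts):
    """Mean plaque count of treated plates expressed as a percentage of the
    untreated control mean, with the standard deviation across replicates."""
    control_mean = statistics.mean(control_counts)
    percentages = [100.0 * t / control_mean for t in treated_counts]
    return statistics.mean(percentages), statistics.stdev(percentages)
```

    A lower percentage indicates stronger inhibition of phage replication by the extract.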

  1. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    Science.gov (United States)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture the three-dimensional shapes of components. The density of the digital data output from these probes ranges from a few discrete points to millions of points in a point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the inspected component, at the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in this feature information, since it relates directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are attached to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post-processing task. Due to measurement repeatability and cycle time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal artifacts such as high-frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of those segments, automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post-processing algorithms for 3D data of fine features.
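    The split-to-tolerance idea behind linear segment extraction can be sketched with a recursive chord-deviation test (a Douglas-Peucker-style simplification; the paper's actual algorithm also handles arc segments and noise filtering, which this sketch omits):

```python
import numpy as np

def max_deviation(points):
    """Largest perpendicular distance from the points to the chord joining
    the first and last point, and the index where it occurs."""
    p0, p1 = points[0], points[-1]
    d = p1 - p0
    length = np.linalg.norm(d)
    if length == 0:
        return 0.0, 0
    dist = np.abs(d[0] * (points[:, 1] - p0[1])
                  - d[1] * (points[:, 0] - p0[0])) / length
    i = int(np.argmax(dist))
    return float(dist[i]), i

def split_into_linear_segments(points, tol):
    """Recursively split an ordered 2D point sequence at the point of
    maximum deviation until every piece fits a line within tol.
    Returns a list of (start_point, end_point) pairs."""
    dev, i = max_deviation(points)
    if dev <= tol or len(points) <= 2:
        return [(points[0], points[-1])]
    return (split_into_linear_segments(points[:i + 1], tol)
            + split_into_linear_segments(points[i:], tol))
```

    The number of returned segments is exactly the topology information the abstract describes, and each endpoint pair determines one line's geometry equation.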

  2. A Biologically Inspired Approach to Frequency Domain Feature Extraction for EEG Classification

    Directory of Open Access Journals (Sweden)

    Nurhan Gursel Ozmen

    2018-01-01

    Classification of electroencephalogram (EEG) signals is important for mental decoding in brain-computer interfaces (BCIs). We introduce a feature extraction approach based on frequency domain analysis to improve classification performance on different mental tasks using single-channel EEG. This biologically inspired method extracts the most discriminative spectral features from power spectral densities (PSDs) of the EEG signals. We applied our method to a dataset of six subjects who performed five different imagination tasks: (i) resting state, (ii) mental arithmetic, (iii) imagination of left hand movement, (iv) imagination of right hand movement, and (v) imagination of the letter "A". Pairwise and multiclass classifications were performed on a single EEG channel using Linear Discriminant Analysis and Support Vector Machines. Our method produced results (mean classification accuracy of 83.06% for binary classification and 91.85% for multiclass classification) that are on par with state-of-the-art methods using single-channel EEG, at low computational cost. Among all task pairs, mental arithmetic versus letter imagination yielded the best result (mean classification accuracy of 90.29%), indicating that this task pair could be the most suitable for a binary-class BCI. This study contributes to the development of single-channel BCIs, as well as to finding the best task pair for user-defined applications.
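Spectral features of the kind this record describes reduce to band powers of the PSD. A minimal sketch, using a naive direct DFT as a stand-in for a proper Welch PSD estimate (the paper's exact feature selection is not reproduced here, and the function names are invented):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the band [f_lo, f_hi] Hz via a direct DFT
    (a slow but dependency-free stand-in for a Welch PSD estimate)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def spectral_features(signal, fs, bands):
    """Relative band powers, usable as a feature vector for LDA or an SVM."""
    powers = [band_power(signal, fs, lo, hi) for lo, hi in bands]
    total = sum(powers) or 1.0
    return [p / total for p in powers]
```

For a pure 10 Hz tone, the alpha band (8-13 Hz) carries essentially all of the relative power.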

  3. LOCAL LINE BINARY PATTERN FOR FEATURE EXTRACTION ON PALM VEIN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Jayanti Yusmah Sari

    2015-08-01

    In recent years, palm vein recognition has been studied to overcome problems of convenience and performance in conventional biometric systems (fingerprint, face, and iris). However, because of limited clarity in palm vein images, the veins often cannot be segmented properly. To overcome this problem, we propose a palm vein recognition system using the Local Line Binary Pattern (LLBP) method, which can extract robust features from palm vein images with unclear veins. LLBP is an extension of the Local Binary Pattern (LBP), a texture descriptor based on gray-level comparisons within a neighborhood of pixels. There are four major steps in this paper: Region of Interest (ROI) detection, image preprocessing, feature extraction using the LLBP method, and matching using a fuzzy k-NN classifier. The proposed method was applied to the CASIA Multi-Spectral Image Database. Experimental results showed that the proposed method using LLBP performs well, with a recognition accuracy of 97.3%. In future work, experiments will be conducted to identify the parameters that affect the processing time and recognition accuracy of LLBP.
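The base LBP descriptor that LLBP extends thresholds a pixel's neighbours against the centre and packs the results into a binary code. A minimal sketch of the classic 3x3 variant (LLBP itself compares pixels along horizontal and vertical lines rather than this ring; that variant is not shown):

```python
def lbp_code(img, r, c):
    """Classic 3x3 LBP: threshold the 8 neighbours of (r, c) against the
    centre pixel and pack the results clockwise into one byte."""
    center = img[r][c]
    # Neighbours clockwise, starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

A histogram of such codes over an ROI is the usual texture feature vector fed to a classifier such as fuzzy k-NN.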

  4. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    Science.gov (United States)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction and thereby achieve high-precision autonomous navigation. First, combining the stable character of Earth's ultraviolet radiance with an atmospheric radiation transmission model, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast, improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is applied, which both eliminates noise efficiently and extracts Earth ultraviolet limb features accurately. The Earth's centroid location in the simulated images is estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method achieves sub-pixel Earth centroid location estimation and greatly enhances autonomous celestial navigation precision.
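Estimating a centroid from partial limb edges amounts to fitting a circle to edge points by least squares. One common algebraic formulation is the Kåsa fit, sketched below; this is an illustrative stand-in, not necessarily the authors' exact curve model:

```python
def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: solve the linear system
    for (cx, cy, k) in x^2 + y^2 = 2*cx*x + 2*cy*y + k, where
    k = R^2 - cx^2 - cy^2."""
    # Accumulate the 3x3 normal equations A^T A u = A^T b,
    # with rows [2x, 2y, 1] and right-hand side x^2 + y^2.
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = [2 * x, 2 * y, 1.0]
        rhs = x * x + y * y
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting on the 3x3 system.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for c in range(col, 4):
                    m[r][c] -= f * m[col][c]
    cx, cy, k = (m[i][3] / m[i][i] for i in range(3))
    radius = (k + cx * cx + cy * cy) ** 0.5
    return cx, cy, radius
```

With only a partial arc of limb points the fit still recovers the centre, which is what makes limb fitting usable when part of the Earth disc is outside the frame.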

  5. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.

  6. Development of a Rapid and Simple Method to Remove Polyphenols from Plant Extracts

    Directory of Open Access Journals (Sweden)

    Imali Ranatunge

    2017-01-01

    Polyphenols are secondary metabolites of plants which contribute to the prevention of many diseases. Polyvinylpolypyrrolidone (PVPP) has a high affinity towards polyphenols. This method uses a PVPP column to remove polyphenols under centrifugal force. Standards of gallic acid, epigallocatechin gallate, vanillin, and tea extracts (Camellia sinensis) were used in this study. PVPP powder was packed in a syringe in different quantities. The test samples were layered over the PVPP column and subjected to centrifugation. The supernatant was tested for total phenol content. The presence of phenolic compounds and caffeine was screened by HPLC and by measuring the absorbance at 280 nm. The antioxidant capacity of the standards and tea extracts was compared with the polyphenol-removed fractions using the DPPH scavenging assay. No polyphenols were found in the polyphenolic standards or tea extracts after PVPP treatment. The method described in the present study to remove polyphenols is simple, inexpensive, rapid, and efficient, and can be employed to investigate the contribution of polyphenols present in natural products to their biological activity.

  7. Prediction of protein homo-oligomer types by pseudo amino acid composition: Approached with an improved feature extraction and Naive Bayes Feature Fusion.

    Science.gov (United States)

    Zhang, S-W; Pan, Q; Zhang, H-C; Shao, Z-C; Shi, J-Y

    2006-06-01

    The interaction of non-covalently bound monomeric protein subunits forms oligomers. Oligomeric proteins are superior to monomers within the scope of functional evolution of biomacromolecules; such complexes are involved in various biological processes and play an important role. It is highly desirable to predict oligomer types automatically from sequence. Here, based on the concept of pseudo amino acid composition, an improved feature extraction method using a weighted auto-correlation function of amino acid residue indices, together with a Naive Bayes multi-feature fusion algorithm, is proposed and applied to predict protein homo-oligomer types. We used support vector machines (SVMs) as base classifiers in order to obtain better results. For example, the total accuracies of the A, B, C, D and E sets based on this improved feature extraction method are 77.63, 77.16, 76.46, 76.70 and 75.06% respectively in the jackknife test, which are 6.39, 5.92, 5.22, 5.46 and 3.82% higher than that of the G set based on the conventional amino acid composition method with the same SVM. Compared with Chou's feature extraction method incorporating the quasi-sequence-order effect, our method increases the total accuracy by 1.01 to 3.51%. The total accuracy improves from 79.66 to 80.83% by using the Naive Bayes Feature Fusion algorithm. These results show that: 1) the improved feature extraction method is effective and feasible, and feature vectors based on it may contain more protein quaternary structure information, appearing to capture essential information about the composition and hydrophobicity of residues in the surface patches buried in the interfaces of associated subunits; and 2) the Naive Bayes Feature Fusion algorithm combined with SVM is a powerful computational tool for predicting protein homo-oligomer types.
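Pseudo-amino-acid-style features of the kind this record describes combine composition with auto-correlation of a residue property index along the sequence. A minimal sketch, where the hydropathy table is an illustrative ten-residue subset of Kyte-Doolittle values and the lag weight is an invented parameter, not the paper's actual index or weighting:

```python
# Illustrative subset of Kyte-Doolittle hydropathy values (not the
# residue index used in the paper).
HYDRO = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
         'G': -0.4, 'L': 3.8, 'K': -3.9, 'S': -0.8, 'V': 4.2}

def autocorr_features(seq, max_lag, weight=0.1):
    """Amino acid composition followed by weighted auto-correlations of a
    residue property index at lags 1..max_lag (pseudo-AAC style)."""
    vals = [HYDRO[aa] for aa in seq]
    n = len(vals)
    comp = [seq.count(aa) / n for aa in sorted(HYDRO)]
    feats = []
    for lag in range(1, max_lag + 1):
        corr = sum(vals[i] * vals[i + lag] for i in range(n - lag)) / (n - lag)
        feats.append(weight * corr)
    return comp + feats
```

The resulting fixed-length vector is what a base classifier such as an SVM consumes, regardless of the original sequence length.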

  8. Clinical and pathological features of Nerium oleander extract toxicosis in Wistar rats.

    Science.gov (United States)

    Akhtar, Tasleem; Sheikh, Nadeem; Abbasi, Muddasir Hassan

    2014-12-23

    Nerium oleander has been widely studied for medicinal purposes for a variety of maladies. N. oleander has also been reported to have noxious effects, because a number of its components may produce signs of toxicity by inhibiting plasmalemma Na+/K+-ATPase. The present study was performed to scrutinize the toxic effects of N. oleander leaf extract and its clinical and pathological features in Wistar rats. Hematological analysis showed significant variations in RBC count (P = 0.01), Hb (P = 0.001), Hct (P = 0.0003), MCV (P = 0.013), lymphocyte count (P = 0.015), neutrophil count (P = 0.003), monocyte count (P = 0.012) and eosinophil count (P = 0.006). Histopathological studies showed that in the T1 group noticeable infiltration of inflammatory cells was found, with a low level of vascular damage. In the T2 group, an increased proportion of binucleated and inflammatory cells, hepatic necrosis, widening of sinusoidal spaces and a mild level of vascular damage were observed. Taking these findings together, we conclude that N. oleander leaf extract significantly affects experimental animals due to its toxicity. Efforts must be made to purify the different chemical components of the extract that do not cause inflammation, as this plant is used in folk medicine with narrow therapeutic indices.

  9. Retinal status analysis method based on feature extraction and quantitative grading in OCT images.

    Science.gov (United States)

    Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri

    2016-07-22

    Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries was presented. Subsequently, two kinds of quantitative methods, based on geometric features and morphological features, were proposed, and a retinal abnormality grading decision-making method was put forward and used in the analysis and evaluation of multiple OCT images. A detailed analysis process is shown for four retinal OCT images with different degrees of abnormality; the final grading results verified that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status. It obtains parameters and features associated with retinal morphology; quantitative analysis and evaluation of these features, combined with the reference model, enable abnormality judgment of the target image and provide a reference for disease diagnosis.
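The reported sensitivity (0.94) and specificity (0.92) follow directly from the confusion counts over the test images. A minimal sketch of that computation, with invented function and label conventions (1 = abnormal):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    with 1 denoting the abnormal class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```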

  10. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    Energy Technology Data Exchange (ETDEWEB)

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extraction. The direct delineation method developed herein, termed "mDn", is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, uses the LiDAR data points within each sector to determine an average slope, and selects the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matched the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
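The sinuosity metric used here is commonly defined as the along-path length of the delineated stream divided by the straight-line distance between its endpoints (so a perfectly straight stream scores 1.0). A minimal sketch of that definition, assuming the delineation is given as a 2D polyline:

```python
import math

def path_length(points):
    """Sum of segment lengths along a polyline of (x, y) vertices."""
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

def sinuosity(points):
    """Along-path length divided by endpoint-to-endpoint distance (>= 1)."""
    straight = math.dist(points[0], points[-1])
    return path_length(points) / straight
```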

  11. Development of Novel Method for Rapid Extract of Radionuclides from Solution Using Polymer Ligand Film

    Science.gov (United States)

    Rim, Jung H.

    Accurate and fast determination of the activity of radionuclides in a sample is critical for nuclear forensics and emergency response. Radioanalytical techniques are well established for radionuclide measurement; however, they are slow and labor intensive, requiring extensive radiochemical separations and purification prior to analysis. Given these limitations of current methods, there is great interest in a new technique to rapidly process samples. This dissertation describes a new analyte extraction medium called Polymer Ligand Film (PLF), developed to rapidly extract radionuclides. A Polymer Ligand Film is a polymer medium with ligands incorporated in its matrix that selectively and rapidly extract analytes from a solution. The main focus of the new technique is to shorten and simplify the procedure necessary to chemically isolate radionuclides for determination by alpha spectrometry or beta counting. Five different ligands were tested for plutonium extraction: bis(2-ethylhexyl) methanediphosphonic acid (H2DEH[MDP]), di(2-ethylhexyl) phosphoric acid (HDEHP), trialkyl methylammonium chloride (Aliquat-336), 4,4'(5')-di-t-butylcyclohexano 18-crown-6 (DtBuCH18C6), and 2-ethylhexyl 2-ethylhexylphosphonic acid (HEH[EHP]). The ligands that were effective for plutonium extraction were further studied for uranium extraction. Plutonium recovery by PLFs showed a dependency on nitric acid concentration and ligand-to-total-mass ratio. H2DEH[MDP] PLFs performed best at 1:10 and 1:20 ratios: 50.44% and 47.61% of plutonium were extracted on the surface of the PLFs with 1 M nitric acid for the 1:10 and 1:20 PLF, respectively. HDEHP PLF provided the best combination of alpha spectroscopy resolution and plutonium recovery with the 1:5 PLF when used with 0.1 M nitric acid. The overall analyte recovery was lower than for electrodeposited samples, which typically have recoveries above 80%. However, PLF is designed to be a rapid, field-deployable screening technique, for which consistency is more important.

  12. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    Directory of Open Access Journals (Sweden)

    Ahmed Shuhaila

    2011-01-01

    Background: Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods: FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components were input as features to an SVM to classify FHR samples. Results: For the training set, a five-fold cross-validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%; the Kappa value for the training set was 0.923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%, with a Kappa value of 0.684. Conclusions: Based on the overall performance of the system, the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals.
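Once the EMD has produced its intrinsic mode functions (IMFs), the feature vector this record describes is simply the standard deviation of each component. A minimal sketch of that last step, assuming the IMFs have been computed elsewhere (the sifting procedure itself is not reproduced here):

```python
import statistics

def emd_feature_vector(imfs):
    """Per-IMF standard deviations, used as the SVM input features.
    `imfs` is a list of equal-length numeric sequences from an EMD."""
    return [statistics.pstdev(imf) for imf in imfs]
```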

  13. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt, general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing focuses exclusively on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis, including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and of the system errors is carried out, revealing the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
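The MAP figure reported here is the standard retrieval measure: average precision per query, then the mean over queries. A minimal sketch of that computation over binary relevance lists (invented function names, not the MediaEval scoring code):

```python
def average_precision(ranked_relevance):
    """Average precision for one query: mean of precision@k over the
    ranks k at which a relevant item appears (1 = relevant)."""
    hits, score = 0, 0.0
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            score += hits / k
    return score / hits if hits else 0.0

def mean_average_precision(queries):
    """MAP over a list of per-query binary relevance rankings."""
    return sum(average_precision(q) for q in queries) / len(queries)
```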

  15. Using flow feature to extract pulsatile blood flow from 4D flow MRI images

    Science.gov (United States)

    Wang, Zhiqiang; Zhao, Ye; Yu, Whitney; Chen, Xi; Lin, Chen; Kralik, Stephen F.; Hutchins, Gary D.

    2017-02-01

    4D flow MRI images make it possible to measure pulsatile blood flow inside deforming vessels, which is critical for accurate blood flow visualization, simulation, and evaluation. Such data have great potential to overcome problems in existing work, which usually does not reflect the dynamic nature of elastic vessels and blood flows over cardiac cycles. However, 4D flow MRI data are often low-resolution and strongly noisy. Because of these challenges, few efforts have successfully extracted dynamic blood flow fields and deforming arteries over cardiac cycles, especially for small arteries like the carotid. In this paper, a robust flow feature, specifically the mean flow intensity, is used to segment blood flow regions inside vessels from 4D flow MRI images over the whole cardiac cycle. To estimate this flow feature more accurately, adaptive weights are applied to the raw velocity vectors based on the noise strength of the MRI imaging. Based on this feature, target arteries are then tracked at different time steps in a cardiac cycle. The method is applied to clinical 4D flow MRI data of the neck area. Dynamic vessel walls and blood flows are effectively generated over a cardiac cycle in the relatively small carotid arteries. Good image segmentation results on 2D slices are presented, together with visualizations of the 3D arteries and blood flows. The method was evaluated by clinical doctors and by checking flow volume rates in the vertebral and carotid arteries.

  16. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 1: time domain

    Directory of Open Access Journals (Sweden)

    Hanus Robert

    2016-01-01

    The paper presents an application of the gamma-absorption method to study gas-liquid two-phase flow in a horizontal pipeline. In tests on a laboratory installation, two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals were used. The experimental set-up allows the recording of stochastic signals which describe the instantaneous content of the stream in a particular cross-section of the flow mixture. Analysis of these signals by statistical methods allows the mean velocity of the gas phase to be determined, while selected features of the signals provided by the absorption set can be applied to recognition of the flow structure. In this work three structures of air-water flow were considered: plug, bubble, and transitional plug-bubble flow. The recorded raw signals were analyzed in the time domain and several features were extracted. It was found that the following signal features are the most useful for recognizing the flow structure: the mean, standard deviation, root mean square (RMS), variance and 4th moment.
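The time-domain features this record singles out are all one-pass statistics of the recorded signal. A minimal sketch computing them for a sampled signal (the 4th moment is taken as the 4th central moment; the paper's exact normalization may differ):

```python
import math

def time_domain_features(x):
    """Mean, standard deviation, RMS, variance, and 4th central moment
    of a sampled signal, as used for flow-structure recognition."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    rms = math.sqrt(sum(v * v for v in x) / n)
    m4 = sum((v - mean) ** 4 for v in x) / n
    return {'mean': mean, 'std': math.sqrt(var), 'rms': rms,
            'variance': var, 'moment4': m4}
```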

  17. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce their weight while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive; in addition, multiple defects are often misinterpreted as single defects in X-ray images. To address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost-effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA), and their classification performance is compared. This investigation establishes experimentally that PCA can be used effectively as a feature selection method, providing superior results for classifying various defects in the context of ultrasonic inspection in comparison with the X-ray technique.
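Sub-band coding of an ultrasonic signal decomposes it into frequency bands and summarizes each band, for example by its energy. A minimal sketch using the Haar wavelet for simplicity (the paper's wavelet choice is not specified in this abstract, and the function names are invented):

```python
def haar_step(x):
    """One Haar analysis step: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def subband_energies(signal, levels):
    """Energies of the detail sub-bands at each level, plus the final
    approximation energy; such values can feed an ANN classifier."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in approx))
    return energies
```

PCA would then be applied across such energy (or coefficient) vectors from many A-scans to select the most informative components.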

  18. Characterization of lunar soils through spectral features extraction in the NIR

    Science.gov (United States)

    Mall, U.; Wöhler, C.; Grumpe, A.; Bugiolacchi, R.; Bhatt, M.

    2014-11-01

    Recently launched hyper-spectral instrumentation with ever-increasing data return capabilities delivers the remote-sensing data needed to characterize planetary soils with increased precision, generating the need to classify the returned data efficiently for further specialized analysis and detection of features of interest. This paper investigates how lunar near-infrared spectra generated by the SIR-2 instrument on Chandrayaan-1 can be classified into distinctive groups of similar spectra with automated feature extraction algorithms. As common spectral parameters for the SIR-2 spectra, two absorption features near 1300 nm and 2000 nm and their characteristics provide 10 variables, which are used in two different unsupervised clustering methods: the mean-shift clustering algorithm and the recently developed graph-cut-based clustering algorithm of Müller et al. (2012). The spectra used in this paper were taken on the lunar near side, centered on the Imbrium region of the Moon. More than 100,000 spectra were analyzed.
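Mean-shift clustering, one of the two unsupervised methods used here, iteratively moves each point toward the mean of its neighbours within a bandwidth; points that converge to the same mode form a cluster. A minimal one-dimensional sketch with a flat kernel (the paper works in the 10-variable space; this toy version only illustrates the mechanism):

```python
def mean_shift_1d(points, bandwidth, iters=30):
    """Flat-kernel mean shift in 1D: shift each point to the mean of the
    data within `bandwidth` of it, repeated until (practically) fixed."""
    modes = list(points)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            neigh = [p for p in points if abs(p - m) <= bandwidth]
            new_modes.append(sum(neigh) / len(neigh) if neigh else m)
        modes = new_modes
    return modes
```

Each spectrum ends up labelled by the mode it converged to, with no need to fix the number of clusters in advance.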

  19. Gearbox Fault Features Extraction Using Vibration Measurements and Novel Adaptive Filtering Scheme

    Directory of Open Access Journals (Sweden)

    Ghalib R. Ibrahim

    2012-01-01

    Vibration signals measured from a gearbox are complex multicomponent signals generated by tooth meshing, gear shaft rotation, gearbox resonance vibration signatures, and a substantial amount of noise. This paper presents a novel scheme for extracting gearbox fault features using adaptive filtering techniques to enhance condition features, namely meshing frequency sidebands. A modified least mean square (LMS) algorithm is examined and validated using only one accelerometer, instead of the two accelerometers of the traditional arrangement, as the main signal; a desired signal is artificially generated from the measured shaft speed and gear meshing frequencies. The proposed scheme is applied to a signal simulated from gearbox frequencies with numerous values of the step size. Findings confirm that a step size of 10^-5 invariably produces more accurate results, with a substantial improvement in signal clarity (better signal-to-noise ratio), which makes meshing frequency sidebands more discernible. The developed scheme is validated via a number of experiments carried out using a two-stage helical gearbox for a healthy pair of gears and for pairs suffering from tooth breakage of severity fault 1 (25% tooth removal) and fault 2 (50% tooth removal) under loads of 0% and 80% of the total load. The experimental results show remarkable improvements and enhanced gear condition features. This paper illustrates that the new approach offers a more effective way to detect early faults.

  20. A threshold method for coastal line feature extraction from optical satellite imagery

    Science.gov (United States)

    Zoran, L. F. V.; Golovanov, C. Ionescu; Zoran, M. A.

    2007-10-01

    The coastal zone of the world is under increasing stress due to the development of industry, trade and commerce, and tourism, the resultant human population growth and migration, and deteriorating water quality. Satellite imagery is used for mapping coastal zone ecosystems as well as to assess the extent of, and alteration in, land cover/land use in coastal ecosystems. Besides anthropogenic activities, episodic events such as storms and floods induce certain changes or accelerate the process of change, so in order to conserve coastal ecosystems and habitats there is an urgent need to define the coastal line and its spatio-temporal changes. Coastlines have never been stable in terms of their long-term and short-term positions. The coastal line is a simple but important type of feature in remotely sensed images. Many valid approaches have been proposed in remote sensing for automatically identifying this feature, for which accuracy and speed are the most important criteria. The aim of the paper is to develop a threshold-based morphological approach for coastline feature extraction from optical remote sensing satellite images (Landsat TM 5, ETM+ 7 and IKONOS) and to apply it to the Romanian Black Sea coastal zone for a period of 20 years (1985-2005).
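A threshold-based morphological coastline extraction reduces, at its simplest, to classifying pixels as land or water by a radiometric threshold and then keeping land pixels adjacent to water. A minimal sketch on a toy raster, assuming water is darker than land in the chosen band (the paper's actual band choice and morphological operators are not reproduced):

```python
def coastline_pixels(band, threshold):
    """Threshold a raster band into water (< threshold) and land, then
    return the land pixels that are 4-adjacent to water: the coastal line."""
    rows, cols = len(band), len(band[0])
    land = [[band[r][c] >= threshold for c in range(cols)] for r in range(rows)]
    coast = set()
    for r in range(rows):
        for c in range(cols):
            if not land[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not land[rr][cc]:
                    coast.add((r, c))
                    break
    return coast
```

In practice a morphological opening/closing would be applied to the binary mask first to suppress speckle before the boundary is traced.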

  1. How lovebirds maneuver rapidly using super-fast head saccades and image feature stabilization

    NARCIS (Netherlands)

    Kress, Daniel; Bokhorst, Van Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of

  2. Data Exploration using Unsupervised Feature Extraction for Mixed Micro-Seismic Signals

    Science.gov (United States)

    Meyer, Matthias; Weber, Samuel; Beutel, Jan

    2017-04-01

    We present a system for the analysis of data originating in a multi-sensor and multi-year experiment focusing on slope stability and its underlying processes in fractured permafrost rock walls undertaken at 3500 m a.s.l. on the Matterhorn Hörnligrat (Zermatt, Switzerland). This system incorporates facilities for the transmission, management and storage of large volumes of data (7 GB/day), preprocessing and aggregation of multiple sensor types, machine-learning based automatic feature extraction for micro-seismic and acoustic emission data, and interactive web-based visualization of the data. Specifically, a combination of three types of sensors is used to profile the frequency spectrum from 1 Hz to 80 kHz with the goal of identifying the relevant destructive processes (e.g. micro-cracking and fracture propagation) leading to the eventual destabilization of large rock masses. The sensors installed for this profiling experiment (2 geophones, 1 accelerometer and 2 piezo-electric sensors for detecting acoustic emission) are further augmented with sensors originating from a previous activity focusing on long-term monitoring of temperature evolution and rock kinematics with the help of wireless sensor networks (crackmeters, cameras, weather station, rock temperature profiles, differential GPS) [Hasler2012]. In raw format, the data generated by the different types of sensors, specifically the micro-seismic and acoustic emission sensors, is strongly heterogeneous, in part unsynchronized, and the storage and processing demand is large. Therefore, a purpose-built signal preprocessing and event-detection system is used. While the analysis of data from each individual sensor follows established methods, the application of all these sensor types in combination within a field experiment is unique. Furthermore, experience and methods from using such sensors in laboratory settings cannot be readily transferred to the mountain field site setting with its scale and full exposure to

  3. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    Science.gov (United States)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  4. A rapid and efficient DNA extraction method suitable for marine macroalgae.

    Science.gov (United States)

    Ramakrishnan, Gautham Subramaniam; Fathima, Anwar Aliya; Ramya, Mohandass

    2017-12-01

    Macroalgae are a diverse group of organisms. Marine macroalgae, in particular, have numerous medicinal and industrial applications. Molecular studies of macroalgae require suitable concentrations of DNA free of contaminants. At present, numerous protocols exist for DNA extraction from macroalgae. However, they are either time consuming, expensive or work only with few species. The method described in this study is rapid and efficient and applicable to different types of marine macroalgae. This method yields an average of 3.85 µg of DNA per 50 mg of algal tissue, with an average purity of 1.88. The isolated DNA was suitable for PCR amplification of universal plastid region of macroalgae.

  5. Protein function prediction using text-based features extracted from the biomedical literature: the CAFA challenge.

    Science.gov (United States)

    Wong, Andrew; Shatkay, Hagit

    2013-01-01

    Advances in sequencing technology over the past decade have resulted in an abundance of sequenced proteins whose function is yet unknown. As such, computational systems that can automatically predict and annotate protein function are in demand. Most computational systems use features derived from protein sequence or protein structure to predict function. In an earlier work, we demonstrated the utility of biomedical literature as a source of text features for predicting protein subcellular location. We have also shown that the combination of text-based and sequence-based prediction improves the performance of location predictors. Following up on this work, for the Critical Assessment of Function Annotations (CAFA) Challenge, we developed a text-based system that aims to predict molecular function and biological process (using Gene Ontology terms) for unannotated proteins. In this paper, we present the preliminary work and evaluation that we performed for our system, as part of the CAFA challenge. We have developed a preliminary system that represents proteins using text-based features and predicts protein function using a k-nearest neighbour classifier (Text-KNN). We selected text features for our classifier by extracting key terms from biomedical abstracts based on their statistical properties. The system was trained and tested using 5-fold cross-validation over a dataset of 36,536 proteins. System performance was measured using the standard measures of precision, recall, F-measure and overall accuracy. The performance of our system was compared to two baseline classifiers: one that assigns function based solely on the prior distribution of protein function (Base-Prior) and one that assigns function based on sequence similarity (Base-Seq). The overall prediction accuracies of Text-KNN, Base-Prior, and Base-Seq for molecular function classes are 62%, 43%, and 58%, while the overall accuracies for biological process classes are 17%, 11%, and 28%, respectively. Results
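    The Text-KNN idea — proteins represented by text features, classified by a k-nearest-neighbour vote — can be sketched as below. The term vocabulary, feature encoding (simple term presence), and GO-style labels are invented for illustration; the authors' feature selection and dataset are not reproduced.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dist)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# toy term-presence vectors; columns stand for hypothetical key terms
# extracted from abstracts (e.g. "kinase", "membrane", "transcription", "binding")
X = np.array([
    [1, 0, 0, 1],   # kinase-related abstracts
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],   # membrane/transport-related abstracts
    [0, 1, 1, 1],
    [0, 1, 1, 0],
], dtype=float)
y = ["catalytic activity"] * 3 + ["transporter activity"] * 3

pred = knn_predict(X, y, np.array([1, 0, 1, 1.0]))  # → catalytic activity
```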

  6. Protein Function Prediction using Text-based Features extracted from the Biomedical Literature: The CAFA Challenge

    Science.gov (United States)

    2013-01-01

    Background Advances in sequencing technology over the past decade have resulted in an abundance of sequenced proteins whose function is yet unknown. As such, computational systems that can automatically predict and annotate protein function are in demand. Most computational systems use features derived from protein sequence or protein structure to predict function. In an earlier work, we demonstrated the utility of biomedical literature as a source of text features for predicting protein subcellular location. We have also shown that the combination of text-based and sequence-based prediction improves the performance of location predictors. Following up on this work, for the Critical Assessment of Function Annotations (CAFA) Challenge, we developed a text-based system that aims to predict molecular function and biological process (using Gene Ontology terms) for unannotated proteins. In this paper, we present the preliminary work and evaluation that we performed for our system, as part of the CAFA challenge. Results We have developed a preliminary system that represents proteins using text-based features and predicts protein function using a k-nearest neighbour classifier (Text-KNN). We selected text features for our classifier by extracting key terms from biomedical abstracts based on their statistical properties. The system was trained and tested using 5-fold cross-validation over a dataset of 36,536 proteins. System performance was measured using the standard measures of precision, recall, F-measure and overall accuracy. The performance of our system was compared to two baseline classifiers: one that assigns function based solely on the prior distribution of protein function (Base-Prior) and one that assigns function based on sequence similarity (Base-Seq). The overall prediction accuracy of Text-KNN, Base-Prior, and Base-Seq for molecular function classes are 62%, 43%, and 58% while the overall accuracy for biological process classes are 17%, 11%, and 28

  7. Discharges Classification using Genetic Algorithms and Feature Selection Algorithms on Time and Frequency Domain Data Extracted from Leakage Current Measurements

    OpenAIRE

    D. Pylarinos; Theofilatos, K.; K. Siderakis; E. Thalassinakis

    2013-01-01

    A total of 387 waveforms portraying discharges, recorded on 18 different 150 kV post insulators installed at two different substations in Crete, Greece, are considered in this paper. Twenty different features are extracted from each waveform and two feature selection algorithms (t-test and mRMR) are employed. Genetic algorithms are used to classify waveforms into two different classes related to the portrayed discharges. Five different data sets are employed (1. the original feature vector, 2. ti...

  8. A Narrative Methodology to Recognize Iris Patterns By Extracting Features Using Gabor Filters and Wavelets

    Directory of Open Access Journals (Sweden)

    Shristi Jha

    2016-01-01

    Full Text Available Iris pattern recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on images of one or both of the irises of an individual’s eyes, whose complex random patterns are unique, stable, and can be seen from some distance. Iris recognition uses video camera technology with subtle near-infrared illumination to acquire images of the detail-rich, intricate structures of the iris which are visible externally. In this narrative research paper, the input image is captured and, since the success of iris recognition depends on the quality of the image, the captured image is subjected to preliminary image preprocessing techniques such as localization, segmentation, normalization and noise detection, followed by texture and edge feature extraction using Gabor filters and wavelets; the processed image is then matched with templates stored in the database to detect the iris patterns.
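    Gabor-filter texture feature extraction of the kind used in this pipeline can be sketched as follows. The kernel parameters and the mean-squared-response ("energy") feature are illustrative assumptions, not the paper's settings, and the demo uses a synthetic striped texture rather than an iris image.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel: isotropic Gaussian envelope x cosine carrier
    oriented at angle theta."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_energy(img, kernel):
    """Texture feature: mean squared response of the FFT-filtered image."""
    F = np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)
    resp = np.real(np.fft.ifft2(F))
    return float((resp ** 2).mean())

# vertical stripes with an 8-pixel period respond strongly to the
# matching orientation and weakly to the orthogonal one
img = np.tile(np.cos(2 * np.pi * np.arange(64) / 8), (64, 1))
k0 = gabor_kernel(21, 8, 0.0, 4.0)
k90 = gabor_kernel(21, 8, np.pi / 2, 4.0)
e0, e90 = gabor_energy(img, k0), gabor_energy(img, k90)
```

    A bank of such kernels over several orientations and wavelengths yields a feature vector per image region, which is the usual Gabor feature-extraction scheme.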

  9. EMG signals characterization in three states of contraction by fuzzy network and feature extraction

    CERN Document Server

    Mokhlesabadifarahani, Bita

    2015-01-01

    Neuro-muscular and musculoskeletal disorders and injuries highly affect the lifestyle and the motion abilities of an individual. This brief highlights a systematic method for detecting the level of decline in muscle power in musculoskeletal and neuro-muscular disorders. The neuro-fuzzy system is trained with 70 percent of the recorded electromyography (EMG) cut-off window and then used for classification and modeling purposes. The neuro-fuzzy classifier is validated in comparison to some other well-known classifiers in classifying the recorded EMG signals with the three states of contraction corresponding to the extracted features. Different structures of the neuro-fuzzy classifier are also comparatively analyzed to find the optimum structure of the classifier used.

  10. Study of Image Analysis Algorithms for Segmentation, Feature Extraction and Classification of Cells

    Directory of Open Access Journals (Sweden)

    Margarita Gamarra

    2017-08-01

    Full Text Available Recent advances in microscopy and improvements in image processing algorithms have allowed the development of computer-assisted analytical approaches in cell identification. Several applications can be mentioned in this field: cellular phenotype identification, disease detection and treatment, identifying virus entry in cells and virus classification; these applications could help to complement the opinion of medical experts. Although many surveys have been presented on medical image analysis, they focus mainly on tissues and organs, and none of the surveys on cell images considers an analysis that follows the stages of typical image processing: segmentation, feature extraction and classification. The goal of this study is to provide comprehensive and critical analyses of the trends in each stage of cell image processing. In this paper, we present a literature survey on cell identification using different image processing techniques.

  11. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results on wavelet filter-bank based feature extraction and classification in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. This book also brings together the three strands of research (wavelets, iris image analysis, and classifiers). It compares the performance of the presented techniques with state-of-the-art available schemes. This book contains a compilation of basic material on the design of wavelets that avoids reading many different books. Therefore, it provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc. that will...

  12. Rapid solid-phase extraction and analysis of resveratrol and other polyphenols in red wine.

    Science.gov (United States)

    Hashim, Shima N N S; Schwarz, Lachlan J; Boysen, Reinhard I; Yang, Yuanzhong; Danylec, Basil; Hearn, Milton T W

    2013-10-25

    Red wine has long been credited as a good source of health-beneficial antioxidants, including the bioactive polyphenols catechin, quercetin, and (E)-resveratrol. In this paper, we report the application of reusable molecularly imprinted polymers (MIPs) for the selective and robust solid-phase extraction (SPE) and rapid analysis of (E)-resveratrol (LOD = 8.87 × 10−3 mg/L, LOQ = 2.94 × 10−2 mg/L), along with a range of other polyphenols, from an Australian Pinot noir red wine. Optimization of the molecularly imprinted solid-phase extraction (MISPE) protocol resulted in the significant enrichment of (E)-resveratrol and several structurally related polyphenols. These secondary metabolites were subsequently identified by RP-HPLC and μLC-ESI ion trap MS/MS methods. The developed MISPE protocol employed low volumes of environmentally benign solvents selected according to Green Chemistry principles, and resulted in the recovery of 99% of the total (E)-resveratrol present. These results further demonstrate the potential of generic protocols based on tailor-made MIPs for the analysis of target compounds with health-beneficial properties within the food and nutraceutical industries. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Rapid extraction and high-performance liquid chromatographic determination of parthenolide in feverfew (Tanacetum parthenium).

    Science.gov (United States)

    Zhou, J Z; Kou, X; Stevenson, D

    1999-03-01

    A rapid and sensitive method for quantifying parthenolide in feverfew herb (Tanacetum parthenium) was developed that is significantly faster than those reported in the literature. The extraction system consisted of acetonitrile/water (90:10, v/v) in a bottle with stirring for 30 min. Both Soxhlet and bottle-stirring extractions were studied. Samples were analyzed using high-performance liquid chromatography with a Cosmosil C18-AR column (150 × 4.6 mm, 5 μm, 120 Å). The mobile phase consisted of acetonitrile/water (55:45, v/v) with a flow rate of 1.5 mL/min and UV detection at 210 nm. Analysis time was 6 min, with a detection limit of 0.10 ng on column. The calibration curve was linear over a range of 0.160-850 μg/mL parthenolide with R² = 0.9999. Replicate tests indicated good reproducibility of the method, with RSD = 0.88% (n = 10). Spike recovery of parthenolide was found to be 99.3% with RSD = 1.6% (n = 6).

  14. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    OpenAIRE

    Dat Tien Nguyen; Ki Wan Kim; Hyung Gil Hong; Ja Hyung Koo; Min Cheol Kim; Kang Ryoung Park

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has ...

  15. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations, such as sudden changes in basic characteristics of time series like the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, and a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
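    The study's central finding — a deseasonalization feature-extraction step followed by a k-nearest-neighbours mean-distance detector — can be sketched as below. The series length, period, anomaly placement, and function names are invented for the demo; the paper's full multivariate workflow is not reproduced.

```python
import numpy as np

def deseasonalize(x, period):
    """Feature extraction: subtract the mean seasonal cycle."""
    cycle = np.array([x[p::period].mean() for p in range(period)])
    reps = int(np.ceil(len(x) / period))
    return x - np.tile(cycle, reps)[: len(x)]

def knn_mean_distance(X, k=5):
    """Anomaly score: mean distance to the k nearest neighbours
    (larger score = more anomalous)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)    # exclude each point from its own neighbours
    D.sort(axis=1)
    return D[:, :k].mean(axis=1)

# seasonal series with one injected anomaly at index 30
t = np.arange(100)
x = np.sin(2 * np.pi * t / 25)
x[30] += 5.0
resid = deseasonalize(x, period=25)
scores = knn_mean_distance(resid[:, None], k=5)
```

    Without the deseasonalization step the anomalous sample is much harder to separate from ordinary seasonal peaks, which mirrors the paper's point that feature extraction matters more than the detector choice.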

  16. Automated quality control of forced oscillation measurements: respiratory artifact detection with advanced feature extraction.

    Science.gov (United States)

    Pham, Thuy T; Leong, Philip H W; Robinson, Paul D; Gutzler, Thomas; Jee, Adelle S; King, Gregory G; Thamrin, Cindy

    2017-10-01

    The forced oscillation technique (FOT) can provide unique and clinically relevant lung function information with little cooperation required from subjects. However, FOT has higher variability than spirometry, possibly because strategies for quality control and reducing artifacts in FOT measurements have yet to be standardized or validated. Many quality control procedures rely on either simple statistical filters or subjective evaluation by a human operator. In this study, we propose an automated artifact removal approach based on the resistance against flow profile, applied to complete breaths. We report results obtained from data recorded from children and adults, with and without asthma. Our proposed method has 76% agreement with a human operator for the adult data set and 79% for the pediatric data set. Furthermore, we assessed the variability of respiratory resistance measured by FOT using within-session variation (wCV) and between-session variation (bCV). In the asthmatic adults test data set, our method was again similar to that of the manual operator for wCV (6.5 vs. 6.9%) and significantly improved bCV (8.2 vs. 8.9%). Our combined automated breath removal approach based on advanced feature extraction offers better or equivalent quality control of FOT measurements compared with an expert operator and computationally more intensive methods, in terms of accuracy and reducing intrasubject variability.NEW & NOTEWORTHY The forced oscillation technique (FOT) is gaining wider acceptance for clinical testing; however, strategies for quality control are still highly variable and require a high level of subjectivity. We propose an automated, complete breath approach for removal of respiratory artifacts from FOT measurements, using feature extraction and an interquartile range filter. Our approach offers better or equivalent performance compared with an expert operator, in terms of accuracy and reducing intrasubject variability. Copyright © 2017 the American Physiological
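    An interquartile-range filter of the kind mentioned in the abstract can be sketched as follows. The resistance values and the 1.5×IQR fence are invented for illustration; the authors' complete-breath feature extraction is not reproduced here.

```python
import numpy as np

def iqr_artifact_filter(breath_resistance, k=1.5):
    """Keep breaths whose resistance lies within [Q1 - k*IQR, Q3 + k*IQR];
    breaths outside the fences are flagged as artifacts."""
    q1, q3 = np.percentile(breath_resistance, [25, 75])
    iqr = q3 - q1
    keep = (breath_resistance >= q1 - k * iqr) & (breath_resistance <= q3 + k * iqr)
    return keep

# ten plausible breath-level resistance values plus one obvious artifact
rrs = np.array([2.8, 2.9, 3.0, 3.0, 3.1, 3.1, 3.2, 3.3, 3.4, 3.5, 30.0])
keep = iqr_artifact_filter(rrs)   # the 30.0 breath is rejected
```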

  17. Inkjet-printed conductive features for rapid integration of electronic circuits in centrifugal microfluidics

    CSIR Research Space (South Africa)

    Kruger, J

    2015-05-01

    Full Text Available activation is therefore a critical prerequisite to improve wettability of the substrate surface to the ink. Wiping or washing with a solvent cleans loose particles and degreases the surface, while also increasing the surface energy. Solvents... visual appraisal of the printed dogbone structures in Figure 6 indicates feature shapes that are well defined and adherent on the substrate, while edge raggedness is noticeable on the widest line. There appears to be a "coffee-ring" effect where...

  18. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Subhi Al-batah

    2014-01-01

    Full Text Available To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with a multi-input-multioutput structure. The system is capable of classifying cervical cell images into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.

  19. Fatigue Feature Extraction Analysis based on a K-Means Clustering Approach

    Directory of Open Access Journals (Sweden)

    M.F.M. Yunoh

    2015-06-01

    Full Text Available This paper focuses on clustering analysis using a K-means approach for fatigue feature dataset extraction. The aim of this study is to group the scattered dataset as closely as possible (homogeneity). Kurtosis, the wavelet-based energy coefficient and fatigue damage are calculated for all segments after the extraction process using the wavelet transform, and are used as input data for the K-means clustering approach. K-means clustering calculates the average distance of each group from the centroid and gives the objective function values. Based on the results, the maximum value of the objective function, 11.58, occurs with two centroid clusters, while the minimum value, 8.06, is found with five centroid clusters. The lowest objective function value is therefore obtained at five clusters, which is the best clustering for the dataset.
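    The K-means step with the average-distance objective described above can be sketched as below. The feature values and the initialization indices are invented, not the study's data; the three columns mimic the kurtosis, wavelet-based energy and fatigue-damage features named in the abstract.

```python
import numpy as np

def kmeans(X, k, init_idx, n_iter=50):
    """Plain k-means; the objective is the average distance of each
    sample to its cluster centroid, as in the study above."""
    C = X[list(init_idx)].astype(float).copy()
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None] - C[None], axis=-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    objective = np.linalg.norm(X - C[labels], axis=1).mean()
    return labels, objective

# toy feature vectors per segment: [kurtosis, wavelet energy, fatigue damage]
X = np.array([
    [3.1, 0.20, 0.010], [3.0, 0.25, 0.012], [2.9, 0.22, 0.011],
    [8.5, 0.90, 0.090], [9.0, 0.95, 0.100], [8.7, 0.85, 0.095],
])
labels, obj = kmeans(X, k=2, init_idx=[0, 3])
```

    Running this for several values of k and comparing the objective, as the study does, gives an elbow-style criterion for the cluster count.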

  20. Multiple adaptive neuro-fuzzy inference system with automatic features extraction algorithm for cervical cancer recognition.

    Science.gov (United States)

    Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.

  1. Gas purge-microsyringe extraction: a rapid and exhaustive direct microextraction technique of polycyclic aromatic hydrocarbons from plants.

    Science.gov (United States)

    Wang, Juan; Yang, Cui; Li, Huijie; Piao, Xiangfan; Li, Donghao

    2013-12-17

    Gas purge-microsyringe extraction (GP-MSE) is a rapid and exhaustive microextraction technique for volatile and semivolatile compounds. In this study, a theoretical system of GP-MSE was established by directly extracting and analyzing 16 kinds of polycyclic aromatic hydrocarbons (PAHs) from plant samples. On the basis of theoretical consideration, a full factorial experimental design was first used to evaluate the main effects and interactions of the experimental parameters affecting the extraction efficiency. Further experiments were carried out to determine the extraction kinetics and their dependence on desorption temperature. The results indicated that three factors, namely desorption temperature (temperature of sample phase) Td, extraction time t, and gas flow rate u, had a significantly positive effect on the extraction efficiency of GP-MSE for PAHs. Extraction of PAHs from plant samples followed first-order kinetics (correlation coefficients R² of the simulation curves were 0.731-1.000, with an average of 0.958 and 4.06% relative standard deviation), and obviously depended on the desorption temperature. Furthermore, the effect of the matrix was determined from the difference in Eapp,d. Finally, satisfactory recoveries of 16 PAHs were obtained using optimal parameters. The study demonstrated that GP-MSE could provide a rapid and exhaustive means of direct extraction of PAHs from plant samples. The extraction kinetics were similar to those of the inverse process of the desorption kinetics of the sample phase. Copyright © 2013 Elsevier B.V. All rights reserved.
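    A first-order extraction kinetics model of the kind referred to above, m(t) = m_inf·(1 − e^(−kt)), can be fitted by linearization. The function name, rate constant, and synthetic data below are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

def fit_first_order(t, m, m_inf):
    """Estimate the rate constant k of m(t) = m_inf * (1 - exp(-k t))
    by linearizing: ln(1 - m/m_inf) = -k t (least squares through origin)."""
    y = np.log(1.0 - m / m_inf)
    return -np.sum(t * y) / np.sum(t * t)

# synthetic extraction curve with rate constant k = 0.2 (per minute)
t = np.linspace(0.5, 10, 20)
m_inf = 100.0
m = m_inf * (1 - np.exp(-0.2 * t))
k_hat = fit_first_order(t, m, m_inf)
```

    With noisy data one would fit the exponential form directly by nonlinear least squares instead, but the linearized fit shows the structure of the kinetic model.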

  2. Exact feature extraction using finite rate of innovation principles with an application to image super-resolution.

    Science.gov (United States)

    Baboulaz, Loïc; Dragotti, Pier Luigi

    2009-02-01

    The accurate registration of multiview images is of central importance in many advanced image processing applications. Image super-resolution, for example, is a typical application where the quality of the super-resolved image degrades as registration errors increase. Popular registration methods are often based on features extracted from the acquired images. The accuracy of the registration is in this case directly related to the number of extracted features and to the precision at which the features are located: images are best registered when many features are found with a good precision. However, in low-resolution images, only a few features can be extracted and often with a poor precision. By taking a sampling perspective, we propose in this paper new methods for extracting features in low-resolution images in order to develop efficient registration techniques. We consider, in particular, the sampling theory of signals with finite rate of innovation and show that some features of interest for registration can be retrieved perfectly in this framework, thus allowing an exact registration. We also demonstrate through simulations that the sampling model which enables the use of finite rate of innovation principles is well suited for modeling the acquisition of images by a camera. Simulations of image registration and image super-resolution of artificially sampled images are first presented, analyzed and compared to traditional techniques. We finally present favorable experimental results of super-resolution of real images acquired by a digital camera available on the market.

  3. GNAR-GARCH model and its application in feature extraction for rolling bearing fault diagnosis

    Science.gov (United States)

    Ma, Jiaxin; Xu, Feiyun; Huang, Kai; Huang, Ren

    2017-09-01

    Given its simplicity of modeling and sensitivity to condition variations, the time series model is widely used in feature extraction to realize fault classification and diagnosis. However, the nonlinear and nonstationary characteristics common in fault signals of rolling bearings bring challenges to the diagnosis. In this paper, a hybrid model, the combination of a general expression for linear and nonlinear autoregressive (GNAR) model and a generalized autoregressive conditional heteroscedasticity (GARCH) model (i.e., GNAR-GARCH), is proposed and applied to rolling bearing fault diagnosis. An exact expression of the GNAR-GARCH model is given. The maximum likelihood method is used for parameter estimation and a modified Akaike Information Criterion is adopted for structure identification of the GNAR-GARCH model. The main advantage of this novel model over other models is that the combination makes the model suitable for nonlinear and nonstationary signals. This is verified with statistical tests comparing the different time series models. Finally, the GNAR-GARCH model is applied to fault diagnosis by modeling mechanical vibration signals, including simulated and real data. With the estimated parameters taken as feature vectors, the k-nearest neighbor algorithm is utilized to classify the fault status. The results show that the GNAR-GARCH model exhibits higher accuracy and better performance than other models.
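    The general recipe here — fit a time series model per signal, use the estimated parameters as a feature vector, then classify with k-nearest neighbours — can be sketched with a plain least-squares AR model as a simplified stand-in for GNAR-GARCH (all names and parameter choices below are illustrative, not the paper's model).

```python
import numpy as np

def ar_features(x, p=4):
    """Least-squares AR(p) coefficients used as a feature vector
    (a linear stand-in for the GNAR-GARCH parameter estimates)."""
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(1)

def simulate_ar1(a, n=2000):
    """Synthetic vibration-like signal with AR(1) dynamics."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + rng.standard_normal()
    return x

# two "machine conditions" with different dynamics yield separable features
healthy = ar_features(simulate_ar1(0.9), p=2)
faulty = ar_features(simulate_ar1(-0.5), p=2)
```

    Feeding such parameter vectors from many labeled signals into a k-NN classifier reproduces the classification step described in the abstract.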

  4. Automated Tongue Feature Extraction for ZHENG Classification in Traditional Chinese Medicine

    Directory of Open Access Journals (Sweden)

    Ratchadaporn Kanawong

    2012-01-01

    Full Text Available ZHENG, the Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and is thus used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of whom suffered from Cold ZHENG or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate the excellent performance of our proposed system.
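As an illustration of the kind of color-space features involved (the paper's feature set is richer and tuned to tongue color and coating; the per-channel statistics and coarse histogram below are simplified stand-ins):

```python
import numpy as np

def tongue_color_features(img, bins=4):
    """Hypothetical color feature vector for an RGB image region: per-channel
    mean/std plus a coarse normalized joint RGB histogram."""
    px = img.reshape(-1, 3).astype(float) / 255.0
    stats = np.concatenate([px.mean(axis=0), px.std(axis=0)])
    hist, _ = np.histogramdd(px, bins=(bins, bins, bins), range=[(0.0, 1.0)] * 3)
    hist = hist.ravel() / hist.sum()
    return np.concatenate([stats, hist])
```

Feature vectors like this can then be passed to any supervised classifier trained on practitioner-labeled ZHENG types.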

  5. Relevant Feature Integration and Extraction for Single-Trial Motor Imagery Classification

    Directory of Open Access Journals (Sweden)

    Lili Li

    2017-06-01

    Full Text Available Brain computer interfaces provide a novel channel for the communication between brain and output devices. The effectiveness of the brain computer interface is based on the classification accuracy of single trial brain signals. The common spatial pattern (CSP) algorithm is believed to be an effective algorithm for the classification of single trial brain signals. As the amplitude feature for spatial projection applied by this algorithm is based on a broad frequency bandpass filter (mainly 5–30 Hz), in which the frequency band is often selected by experience, the CSP is sensitive to noise and the influence of other irrelevant information in the selected broad frequency band. In this paper, to improve the CSP, a novel relevant feature integration and extraction algorithm is proposed. Before projecting, we integrated the motor relevant information to suppress the interference of noise and irrelevant information, as well as to improve the spatial difference for projection. The algorithm was evaluated with public datasets. It showed significantly better classification performance with single trial electroencephalography (EEG) data, increasing by 6.8% compared with the CSP.
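The CSP step the paper builds on can be sketched as a whitening-plus-diagonalization of the two class covariance matrices (a generic CSP, without the proposed relevant-feature integration):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns. Trials have shape (n_trials, n_channels,
    n_samples); returns 2*n_pairs spatial filters (rows) whose projections
    maximize the variance ratio between the two classes."""
    def mean_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # whiten the composite covariance, then diagonalize class A in that space
    d, u = np.linalg.eigh(ca + cb)
    p = np.diag(d ** -0.5) @ u.T             # whitening transform
    _, b = np.linalg.eigh(p @ ca @ p.T)      # eigenvalues ascend in [0, 1]
    w = b.T @ p                              # rows are spatial filters
    # extreme eigenvalues (first/last rows) are the most discriminative
    idx = np.concatenate([np.arange(n_pairs), np.arange(len(w) - n_pairs, len(w))])
    return w[idx]
```

Log-variances of the filtered trials are then the classification features; the paper's contribution is what happens to the signal before this projection.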

  6. Improved Measurement of Blood Pressure by Extraction of Characteristic Features from the Cuff Oscillometric Waveform

    Directory of Open Access Journals (Sweden)

    Pooi Khoon Lim

    2015-06-01

    Full Text Available We present a novel approach to improve the estimation of systolic (SBP) and diastolic blood pressure (DBP) from oscillometric waveform data using variable characteristic ratios between SBP and DBP with mean arterial pressure (MAP). This was verified in 25 healthy subjects, aged 28 ± 5 years. The multiple linear regression (MLR) and support vector regression (SVR) models were used to examine the relationship between the SBP and the DBP ratio with ten features extracted from the oscillometric waveform envelope (OWE). An automatic algorithm based on relative changes in the cuff pressure and neighbouring oscillometric pulses was proposed to remove outlier points caused by movement artifacts. Substantial reductions in the mean and standard deviation of the blood pressure estimation errors were obtained upon artifact removal. Using the sequential forward floating selection (SFFS) approach, we were able to achieve a significant reduction in the mean and standard deviation of differences between the estimated SBP values and the reference scoring (MLR: mean ± SD = −0.3 ± 5.8 mmHg; SVR: −0.6 ± 5.4 mmHg) with only two features, i.e., Ratio2 and Area3, as compared to the conventional maximum amplitude algorithm (MAA) method (mean ± SD = −1.6 ± 8.6 mmHg). Comparing the performance of both MLR and SVR models, our results showed that the MLR model was able to achieve comparable performance to that of the SVR model despite its simplicity.
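The conventional MAA baseline the paper improves on can be sketched as follows. The characteristic ratios below are fixed illustrative values; the paper's point is precisely that these ratios should be variable, predicted from OWE features by MLR/SVR.

```python
import numpy as np

def maa_blood_pressure(cuff_pressure, envelope, rs=0.55, rd=0.70):
    """Maximum amplitude algorithm on a deflating cuff (pressure decreasing
    with index): MAP at the envelope peak; SBP/DBP where the envelope crosses
    fixed characteristic ratios of the peak amplitude on either side."""
    peak = np.argmax(envelope)
    amax = envelope[peak]
    mapp = cuff_pressure[peak]
    # SBP lies on the high-pressure side of MAP, DBP on the low-pressure side
    above = np.where(envelope[:peak] >= rs * amax)[0]
    below = np.where(envelope[peak:] >= rd * amax)[0] + peak
    sbp = cuff_pressure[above[0]]
    dbp = cuff_pressure[below[-1]]
    return sbp, mapp, dbp
```

With a smooth synthetic envelope this recovers the expected ordering SBP > MAP > DBP; real oscillometric envelopes need the paper's artifact-removal step first.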

  7. Improved Measurement of Blood Pressure by Extraction of Characteristic Features from the Cuff Oscillometric Waveform.

    Science.gov (United States)

    Lim, Pooi Khoon; Ng, Siew-Cheok; Jassim, Wissam A; Redmond, Stephen J; Zilany, Mohammad; Avolio, Alberto; Lim, Einly; Tan, Maw Pin; Lovell, Nigel H

    2015-06-16

    We present a novel approach to improve the estimation of systolic (SBP) and diastolic blood pressure (DBP) from oscillometric waveform data using variable characteristic ratios between SBP and DBP with mean arterial pressure (MAP). This was verified in 25 healthy subjects, aged 28 ± 5 years. The multiple linear regression (MLR) and support vector regression (SVR) models were used to examine the relationship between the SBP and the DBP ratio with ten features extracted from the oscillometric waveform envelope (OWE). An automatic algorithm based on relative changes in the cuff pressure and neighbouring oscillometric pulses was proposed to remove outlier points caused by movement artifacts. Substantial reductions in the mean and standard deviation of the blood pressure estimation errors were obtained upon artifact removal. Using the sequential forward floating selection (SFFS) approach, we were able to achieve a significant reduction in the mean and standard deviation of differences between the estimated SBP values and the reference scoring (MLR: mean ± SD = -0.3 ± 5.8 mmHg; SVR: -0.6 ± 5.4 mmHg) with only two features, i.e., Ratio2 and Area3, as compared to the conventional maximum amplitude algorithm (MAA) method (mean ± SD = -1.6 ± 8.6 mmHg). Comparing the performance of both MLR and SVR models, our results showed that the MLR model was able to achieve comparable performance to that of the SVR model despite its simplicity.

  8. A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan

    2013-02-01

    As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing its results and success on three public databases.

  9. A novel feature extracting method of QRS complex classification for mobile ECG signals

    Science.gov (United States)

    Zhu, Lingyun; Wang, Dong; Huang, Xianying; Wang, Yue

    2007-12-01

    The conventional classification parameters of the QRS complex suffer from the larger activity range of patients and lower signal-to-noise ratio in mobile cardiac telemonitoring systems and cannot meet the identification needs of the ECG signal. Based on an individual sinus heart rhythm template built from mobile ECG signals in a time window, we present a semblance index to extract the classification features of the QRS complex precisely and expeditiously. Relative approximation r2 and absolute error r3 are used as parameters estimating the semblance between a testing QRS complex and the template. The evaluation parameters corresponding to QRS width and types are examined to choose the proper index. The results show that 99.99 percent of the QRS complexes of sinus and supraventricular ECG signals can be distinguished through r2, but its average accuracy ratio is only 46.16%. More than 97.84 percent of QRS complexes are identified using r3, but its accuracy ratio for sinus and supraventricular complexes is no better than that of r2. By the width feature alone, only 42.65 percent of QRS complexes are classified correctly, but its accuracy ratio for ventricular complexes is superior to r2. To combine the respective strengths of the three parameters, a nonlinear weighted computation of QRS width, r2 and r3 is introduced, raising the total classification accuracy to 99.48%.
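The abstract does not give the exact formulas for the two semblance parameters; illustrative definitions that capture the idea of comparing each beat against the sinus template might look like:

```python
import numpy as np

def semblance(test_qrs, template):
    """Illustrative semblance indices between a test QRS and a sinus template:
    r2, a relative approximation (1 = identical), and r3, the mean absolute
    error. The paper's exact definitions may differ."""
    t = np.asarray(template, float)
    x = np.asarray(test_qrs, float)
    r2 = 1.0 - np.sum((x - t) ** 2) / np.sum(t ** 2)
    r3 = np.mean(np.abs(x - t))
    return r2, r3
```

A beat matching the template scores r2 = 1 and r3 = 0; deviations push r2 down and r3 up, and thresholds (or the paper's nonlinear weighting with QRS width) turn these into a class decision.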

  10. Image feature extraction in encrypted domain with privacy-preserving SIFT.

    Science.gov (United States)

    Hsu, Chao-Yung; Lu, Chun-Shien; Pei, Soo-Chang

    2012-11-01

    Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant and capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that the scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. As all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext-only and known-plaintext attacks. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications.
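The additive homomorphism such schemes rely on can be demonstrated with a toy Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted values without seeing them. The primes below are tiny and purely illustrative (the paper's construction and parameters differ; real deployments use moduli of 2048 bits or more).

```python
import math
import random

def paillier_keygen(p=293, q=433):
    """Toy Paillier keypair with fixed small primes (illustration only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    n2 = n * n
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(u) = (u - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: Dec(c1 * c2 mod n^2) = m1 + m2 (mod n)
```

This is the property that lets encrypted-domain pipelines compute the sums and differences SIFT needs (e.g., difference-of-Gaussian responses) on ciphertexts.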

  11. Application of Non-Linear System Model Updating Using Feature Extraction and Parameter Effects Analysis

    Directory of Open Access Journals (Sweden)

    John F. Schultze

    2001-01-01

    Full Text Available This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the 'classic' update metrics to include specific phenomena or characteristics of the response that are critical to model application, extending the familiar linear updating paradigm of utilizing the eigen-parameters or frequency response functions (FRFs) to include such measures as peak acceleration, time of arrival or standard deviation of model error. The next expansion of the updating process is the inclusion of statistics-based parameter analysis to quantify the effects of uncertain or significant-effect parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is an investigation of linear parameter effect screening using a partial factorial variable array for simulation, intended to aid the analyst in eliminating from the investigation those parameters that do not have a significant variation effect on the feature metric. Finally, the ability of the model to replicate the measured response variation is examined.
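Two of the feature metrics mentioned, peak acceleration and time of arrival, are simple to compute from a measured response; a minimal sketch, with a hypothetical 10%-of-peak threshold defining arrival:

```python
import numpy as np

def response_features(t, accel):
    """Example updating features from a response time history: peak absolute
    acceleration and time of arrival (first crossing of an assumed
    10%-of-peak threshold)."""
    peak = np.max(np.abs(accel))
    thresh = 0.1 * peak
    arrival = t[np.argmax(np.abs(accel) >= thresh)]  # first True index
    return {"peak_accel": peak, "time_of_arrival": arrival}
```

In the updating loop, metrics like these replace (or augment) eigen-parameters and FRFs as the quantities the meta-model is fit to.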

  12. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Still images and videos are the most commonly used formats. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  13. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-04-01

    Full Text Available Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we will propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce the parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by ELM in the online phase can be different from those used for training in the offline phase, and can be more robust to deal with the uncertain combination of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.

  14. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    Energy Technology Data Exchange (ETDEWEB)

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of details of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
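The MST-based agglomeration can be sketched as a Kruskal-style union-find over the partition adjacency graph, merging along the cheapest edges until the dissimilarity threshold is exceeded (the weights below stand in for the paper's spectral/textural dissimilarities):

```python
def mst_segments(n_nodes, edges, threshold):
    """Kruskal-style agglomeration: edges are (weight, a, b) tuples between
    partition indices; partitions joined by MST edges with weight <= threshold
    end up in the same segment. Returns a root label per node."""
    parent = list(range(n_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for w, a, b in sorted(edges):
        if w > threshold:
            break                          # remaining edges are too dissimilar
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra                # merge the two segments
    return [find(i) for i in range(n_nodes)]
```

Cutting the MST at a dissimilarity threshold is what turns the fine polygon level into coarser object-oriented segments at each level of the hierarchy.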

  15. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction.

    Science.gov (United States)

    Zhang, Jie; Xiao, Wendong; Zhang, Sen; Huang, Shoudong

    2017-04-17

    Device-free localization (DFL) is becoming one of the new technologies in wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we will propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce the parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by ELM in the online phase can be different from those used for training in the offline phase, and can be more robust to deal with the uncertain combination of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.

  16. Unsupervised feature construction and knowledge extraction from genome-wide assays of breast cancer with denoising autoencoders.

    Science.gov (United States)

    Tan, Jie; Ung, Matthew; Cheng, Chao; Greene, Casey S

    2015-01-01

    Big data bring new opportunities for methods that efficiently summarize and automatically extract knowledge from such compendia. While both supervised learning algorithms and unsupervised clustering algorithms have been successfully applied to biological data, they are either dependent on known biology or limited to discerning the most significant signals in the data. Here we present denoising autoencoders (DAs), which employ a data-defined learning objective independent of known biology, as a method to identify and extract complex patterns from genomic data. We evaluate the performance of DAs by applying them to a large collection of breast cancer gene expression data. Results show that DAs successfully construct features that contain both clinical and molecular information. There are features that represent tumor or normal samples, estrogen receptor (ER) status, and molecular subtypes. Features constructed by the autoencoder generalize to an independent dataset collected using a distinct experimental platform. By integrating data from ENCODE for feature interpretation, we discover a feature representing ER status through association with key transcription factors in breast cancer. We also identify a feature highly predictive of patient survival that is enriched for the FOXM1 signaling pathway. The features constructed by DAs are often bimodally distributed with one peak near zero and another near one, which facilitates discretization. In summary, we demonstrate that DAs effectively extract key biological principles from gene expression data and summarize them into constructed features with convenient properties.
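A minimal denoising autoencoder of the kind described, with masking noise and a single sigmoid hidden layer trained by gradient descent, can be sketched as follows (the study trains larger models on gene expression compendia; this is only the core mechanism):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_denoising_autoencoder(X, n_hidden=4, noise=0.2, lr=1.0, epochs=2000, seed=0):
    """Minimal one-hidden-layer DA: corrupt inputs with masking noise, encode,
    reconstruct the CLEAN input, and descend the squared reconstruction error.
    A sketch of the method, not the paper's exact architecture or training."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n)); b2 = np.zeros(n)
    m = len(X)
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > noise)   # masking corruption
        H = sigmoid(Xc @ W1 + b1)                # encode: the learned features
        R = sigmoid(H @ W2 + b2)                 # reconstruct the clean input
        dR = (R - X) * R * (1.0 - R)             # output-layer error signal
        dH = (dR @ W2.T) * H * (1.0 - H)         # backpropagated hidden error
        W2 -= lr * H.T @ dR / m; b2 -= lr * dR.mean(axis=0)
        W1 -= lr * Xc.T @ dH / m; b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2, b2
```

After training, the hidden activations H computed on uncorrupted samples are the constructed features; the bimodal, near-0/1 distribution the paper reports comes from the sigmoid saturating.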

  17. Droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling for simpler and faster PCR assay using wire-guided manipulations

    Directory of Open Access Journals (Sweden)

    You David J

    2012-09-01

    Full Text Available Abstract A computer numerical control (CNC) apparatus was used to perform droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling on a single superhydrophobic surface and a multi-chambered PCB heater. Droplets were manipulated using a "wire-guided" method (a pipette tip was used in this study). This methodology can be easily adapted to existing commercial robotic pipetting systems, while demonstrating added capabilities such as vibrational mixing, high-speed centrifuging of droplets, simple DNA extraction utilizing the hydrophobicity difference between the tip and the superhydrophobic surface, and rapid thermocycling with a moving droplet, all with wire-guided droplet manipulations on a superhydrophobic surface and a multi-chambered PCB heater (i.e., not on a 96-well plate). Serial dilutions were demonstrated for diluting the sample matrix. Centrifuging was demonstrated by rotating a 10 μL droplet at 2300 rounds per minute, concentrating E. coli by more than 3-fold within 3 min. DNA extraction was demonstrated from an E. coli sample utilizing the disposable pipette tip to attract the extracted DNA from the droplet residing on the superhydrophobic surface, which took less than 10 min. Following extraction, the 1500 bp sequence of Peptidase D from E. coli was amplified using rapid droplet thermocycling, which took 10 min for 30 cycles. The total assay time was 23 min, including droplet centrifugation, droplet DNA extraction and rapid droplet thermocycling. Evaporation from 10 μL droplets was not significant during these procedures, since the longest exposure to air and vibration was less than 5 min (during DNA extraction). The results of these sequentially executed processes were analyzed using gel electrophoresis. Thus, this work demonstrates the adaptability of the system to replace many common laboratory tasks on a single platform (through re-programmability), in rapid succession (using droplets

  18. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    Science.gov (United States)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
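One of the human-engineered baselines mentioned, the local binary pattern, is easy to state precisely; a basic 8-neighbour variant for interior pixels (the paper's comparison also includes HOG and the GP-evolved extractors):

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern codes for the interior pixel grid:
    each neighbour >= centre contributes one bit, clockwise from top-left."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros(c.shape, dtype=np.int64)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.int64) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the actual feature vector."""
    h = np.bincount(lbp_image(img).ravel(), minlength=bins)
    return h / h.sum()
```

The histogram of codes, not the code image itself, is what feeds the SVM; the GP-evolved extractors generalize exactly this cell-then-combine structure.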

  19. Discharges Classification using Genetic Algorithms and Feature Selection Algorithms on Time and Frequency Domain Data Extracted from Leakage Current Measurements

    Directory of Open Access Journals (Sweden)

    D. Pylarinos

    2013-12-01

    Full Text Available A total of 387 discharge-portraying waveforms recorded on 18 different 150 kV post insulators installed at two different substations in Crete, Greece are considered in this paper. Twenty different features are extracted from each waveform and two feature selection algorithms (t-test and mRMR) are employed. Genetic algorithms are used to classify waveforms into two different classes related to the portrayed discharges. Five different data sets are employed (1. the original feature vector, 2. time domain features, 3. frequency domain features, 4. t-test selected features, 5. mRMR selected features). Results are discussed and compared with previous classification implementations on this particular data group.
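The t-test selection step can be sketched as ranking features by a two-sample t-statistic between the two discharge classes (Welch form here; the paper does not specify the variant):

```python
import numpy as np

def t_test_rank(X, y):
    """Rank features by absolute Welch t-statistic between classes y==0 and
    y==1; returns feature indices, most discriminative first."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    t = np.abs(a.mean(axis=0) - b.mean(axis=0)) / se
    return np.argsort(t)[::-1]
```

Keeping only the top-ranked features gives the reduced data set (set 4 in the paper's comparison) that the genetic algorithm then classifies.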

  20. Rapid Screening of Natural Plant Extracts with Calcium Diacetate for Differential Effects Against Foodborne Pathogens and a Probiotic Bacterium.

    Science.gov (United States)

    Colonna, William; Brehm-Stecher, Byron; Shetty, Kalidas; Pometto, Anthony

    2017-12-01

    This study focused on advancing a rapid turbidimetric bioassay to screen antimicrobials using specific cocktails of targeted foodborne bacterial pathogens. Specifically, to show the relevance of this rapid screening tool, the antimicrobial potential of generally recognized as safe calcium diacetate (DAX) and its blends with cranberry (NC) and oregano (OX) natural extracts was evaluated. Furthermore, the same extracts were evaluated against beneficial lactic acid bacteria. The targeted foodborne pathogens evaluated were Escherichia coli O157:H7, Salmonella spp., Listeria monocytogenes, and Staphylococcus aureus, using optimized initial cocktails (∼10⁸ colony-forming units/mL) containing strains isolated from human food outbreaks. Of all extracts evaluated, 0.51% (w/v) DAX in ethanol was the most effective against all four pathogens. However, DAX reduced to 0.26% and blended with ethanol extracts at a DAX:OX ratio of 3:1 slightly outperformed or equalled the same level of DAX alone. Subculture of wells in which no growth occurred after 1 week indicated that all water and ethanol extracts were bacteriostatic against the pathogens tested. None of the targeted antimicrobials had an effect on the probiotic organism Lactobacillus plantarum. The use of such rapid screening methods, combined with multistrain cocktails of targeted foodborne pathogens from outbreaks, will allow rapid large-scale screening of antimicrobials and enable further detailed studies in targeted model food systems.

  1. Optimisation of pressurised liquid extraction (PLE) for rapid and efficient extraction of superficial and total mineral oil contamination from dry foods.

    Science.gov (United States)

    Moret, Sabrina; Scolaro, Marianna; Barp, Laura; Purcaro, Giorgia; Sander, Maren; Conte, Lanfranco S

    2014-08-15

    Pressurised liquid extraction (PLE) represents a powerful technique which can be conveniently used for rapid extraction of mineral oil saturated (MOSH) and aromatic hydrocarbons (MOAH) from dry foods with a low fat content, such as semolina pasta, rice, and other cereals. Two different PLE methods, one for rapid determination of superficial contamination mainly from the packaging, the other for efficient extraction of total contamination from different sources, have been developed and optimised. The two methods presented good performance characteristics in terms of repeatability (relative standard deviation lower than 5%) and recoveries (higher than 95%). To show their potentiality, the two methods have been applied in combination on semolina pasta and rice packaged in direct contact with recycled cardboard. In the case of semolina pasta it was possible to discriminate between superficial contamination coming from the packaging, and pre-existing contamination (firmly enclosed into the matrix). Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Rapid validated HPTLC method for estimation of betulinic acid in Nelumbo nucifera (Nymphaeaceae) rhizome extract.

    Science.gov (United States)

    Mukherjee, Debajyoti; Kumar, N Satheesh; Khatua, Taraknath; Mukherjee, Pulok K

    2010-01-01

    Betulinic acid (pentacyclic triterpenoid) is an important marker component present in Nelumbo nucifera Gaertn. rhizome. N. nucifera rhizome has several medicinal uses, including hypoglycaemic, antidiarrhoeal, antimicrobial, diuretic, antipyretic and psychopharmacological activities. The objective was to establish a simple, sensitive, reliable, rapid and validated high-performance thin-layer chromatography method for estimation of betulinic acid in hydro-alcoholic extract of N. nucifera Gaertn. rhizome. The separation was carried out on a thin-layer chromatography aluminium plate pre-coated with silica gel 60F(254), eluted with chloroform, methanol and formic acid (49 : 1 : 1 v/v). Post-chromatographic derivatisation was done with anisaldehyde-sulphuric acid reagent and densitometric scanning was performed using a Camag TLC scanner III, at 420 nm. The system was found to produce a compact spot for betulinic acid (R(f) = 0.30). A good linear relationship between the concentrations (2-10 µg) and peak areas was obtained, with a correlation coefficient (r) of 0.99698. The limit of detection and limit of quantification of betulinic acid were found to be 0.4 and 2.30 µg per spot, respectively. The percentage of recovery was found to be 98.36%. The percentage relative standard deviations of intra-day and inter-day precisions were 0.82-0.394 and 0.85-0.341, respectively. This validated HPTLC method provides a new and powerful approach to estimate betulinic acid as a phytomarker in the extract. Copyright © 2010 John Wiley & Sons, Ltd.

  3. A framework for automatic feature extraction from airborne light detection and ranging data

    Science.gov (United States)

    Yan, Jianhua

    Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings and trees. Over the past decade, LIDAR has attracted growing interest from researchers in the fields of remote sensing and GIS. Compared with traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for the automated extraction of geometrical information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract different kinds of geometrical objects, such as terrain and buildings, from LIDAR data. These products are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from non-ground measurements using a region growing algorithm based on a plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise, which is caused by irregularly spaced LIDAR points.
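
    The progressive morphological filter described above can be sketched for a rasterised elevation grid: open the surface with growing windows and flag cells whose elevation drop exceeds a threshold that grows with the window. The window sizes, threshold parameters and function name below are illustrative assumptions, not the dissertation's actual settings:

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(z_grid, cell=1.0, windows=(3, 5, 9),
                                     dh0=0.3, slope=0.15):
    """Flag non-ground cells in a rasterised LiDAR elevation grid."""
    surface = z_grid.copy()
    nonground = np.zeros(z_grid.shape, dtype=bool)
    for w in windows:
        # morphological opening removes features narrower than the window
        opened = grey_opening(surface, size=(w, w))
        # the elevation threshold grows with window size to tolerate slopes
        dh = dh0 + slope * (w - 1) * cell
        nonground |= (surface - opened) > dh
        surface = opened
    return nonground
```

Small above-ground objects (vehicles, trees) fall out at small windows, while wide buildings are only removed once the window exceeds their footprint, which is why the threshold must increase progressively.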

  4. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    Science.gov (United States)

    Jafari, Mehdi; Kasaei, Shohreh

    2012-01-01

    Automatic brain tissue segmentation is a crucial task in medical image analysis for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding from H.264/AVC for feature extraction. The extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since H.264/AVC, the newest coding standard, has the highest compression ratio, it reduces the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with lower computational complexity, than other recent works.

  5. Pelvis feature extraction and classification of Cardiff body match rig base measurements for input into a knowledge-based system.

    Science.gov (United States)

    Partlow, Adam; Gibson, Colin; Kulon, Janusz; Wilson, Ian; Wilcox, Steven

    2012-11-01

    The purpose of this paper is to determine whether an automated measurement tool can be used to clinically classify clients who are wheelchair users with severe musculoskeletal deformities, replacing the current process, which relies upon clinical engineers with advanced knowledge and skills. Clients' body shapes were captured using the Cardiff Body Match (CBM) Rig developed by the Rehabilitation Engineering Unit (REU) at Rookwood Hospital in Cardiff. A bespoke feature extraction algorithm was developed that estimates the positions of external landmarks on clients' pelvises so that useful measurements can be obtained. The outputs of the feature extraction algorithms were compared to CBM measurements for which the positions of the client's pelvis landmarks were known. The results show that the extracted features facilitate classification. Qualitative analysis showed that the estimated positions of the landmark points were close enough to their actual positions to be useful to clinicians undertaking clinical assessments.

  6. Understanding the effects of pre-processing on extracted signal features from gait accelerometry signals.

    Science.gov (United States)

    Millecamps, Alexandre; Lowry, Kristin A; Brach, Jennifer S; Perera, Subashan; Redfern, Mark S; Sejdić, Ervin

    2015-07-01

    Gait accelerometry is an important approach for gait assessment. Previous contributions have adopted various pre-processing approaches for gait accelerometry signals, but none have thoroughly investigated the effects of such pre-processing operations on the obtained results. Therefore, this paper investigated the influence of pre-processing operations on signal features extracted from gait accelerometry signals. These signals were collected from 35 participants aged over 65 years: 14 were healthy controls (HC), 10 had Parkinson's disease (PD) and 11 had peripheral neuropathy (PN). The participants walked on a treadmill at their preferred speed. Signal features in the time, frequency and time-frequency domains were computed for both raw and pre-processed signals. The pre-processing stage consisted of applying tilt correction and denoising operations to the acquired signals. We first examined the effects of these operations separately, followed by an investigation of their joint effects. Several important observations were made based on the obtained results. First, the denoising operation alone had almost no effect relative to the trends observed in the raw data. Second, tilt correction affected the reported results to a certain degree, which could lead to a better discrimination between groups. Third, the combination of the two pre-processing operations yielded similar trends to tilt correction alone. These results indicate that while gait accelerometry is a valuable approach for gait assessment, any pre-processing steps must be adopted carefully, as they can alter the observed findings. Copyright © 2015 Elsevier Ltd. All rights reserved.
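
    The two pre-processing operations studied above can be sketched concretely. A minimal sketch, assuming the vertical axis is [0, 0, 1] and using a moving-average filter as the denoiser (the abstract does not specify which denoiser the authors used):

```python
import numpy as np

def tilt_correct(acc):
    """Rotate an (N, 3) accelerometer signal so that its mean (gravity)
    direction maps onto the vertical axis [0, 0, 1], via the Rodrigues
    rotation formula. Assumes the mean acceleration is dominated by gravity."""
    g = acc.mean(axis=0)
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)                    # rotation axis (unnormalised)
    c = float(np.dot(g, z))               # cosine of tilt angle
    s = float(np.linalg.norm(v))          # sine of tilt angle
    if s < 1e-12:                         # already vertical
        return acc.copy()
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)
    return acc @ R.T

def denoise(acc, k=5):
    """Moving-average denoising applied channel by channel."""
    kernel = np.ones(k) / k
    return np.column_stack([np.convolve(acc[:, i], kernel, mode='same')
                            for i in range(acc.shape[1])])
```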

  7. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    Directory of Open Access Journals (Sweden)

    Lingli Cui

    2014-09-01

    Full Text Available This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency of the decomposition stage. Therefore, the optimized composite dictionary single-atom matching pursuit (CD-SaMP) algorithm is proposed. Second, a termination condition for the iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition continually during decomposition to avoid noise. Third, the composite dictionaries are enriched with a modulation dictionary, since modulation is one of the important structural characteristics of gear fault signals. The termination condition settings, sub-feature dictionary selections and computational efficiency of CD-MaMP and CD-SaMP are compared on simulated gear vibration signals with noise. The simulated sensor-based vibration signal results show that the attenuation-coefficient-based termination condition greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has a great advantage in sparsity and efficiency compared with CD-MaMP. Sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and
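
    The core single-atom matching pursuit loop with an attenuation-based stopping rule can be sketched as follows. The dictionary layout, tolerance and function name are illustrative assumptions; the paper's composite (including modulation) dictionary construction is not reproduced:

```python
import numpy as np

def matching_pursuit(signal, dictionary, max_iter=50, atten_tol=1e-3):
    """Greedy single-atom matching pursuit over a dictionary with
    unit-norm atoms as columns. Iteration stops when the residual-energy
    attenuation falls below `atten_tol`, a simplified stand-in for the
    attenuation-coefficient stopping rule described in the abstract."""
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    prev_energy = float(residual @ residual)
    for _ in range(max_iter):
        corr = dictionary.T @ residual        # correlate with every atom
        k = int(np.argmax(np.abs(corr)))      # best single atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
        energy = float(residual @ residual)
        if energy < 1e-12 or prev_energy - energy < atten_tol * prev_energy:
            break                             # attenuation too small: stop
        prev_energy = energy
    return coeffs, residual
```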

  8. Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    Science.gov (United States)

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amounts of personal multimedia data and/or computationally expensive tasks to the cloud, leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and their associated applications may reveal the data owner's private information, such as personal identity, locations or even financial profiles. This observation has recently aroused new research interest in privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves the efficiency and security requirements simultaneously while preserving its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise, and change in 3D viewpoint and illumination.
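
    The "randomly splitting the original image data" idea can be illustrated in its simplest additive form: each server receives one share, and neither share alone reveals the image. This is only a sketch of the sharing step; the paper's interactive protocols for secure multiplication and comparison between the two servers are not reproduced here:

```python
import numpy as np

def split(x, rng):
    """Additively split data into two random shares, one per cloud server.
    Neither share alone reveals x."""
    r = rng.integers(0, 2**16, size=np.shape(x))
    return x - r, r          # share for server 1, share for server 2

def reconstruct(share1, share2):
    """Recombine the two servers' shares to recover the original data."""
    return share1 + share2
```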

  10. ClusTrack: feature extraction and similarity measures for clustering of genome-wide data sets.

    Directory of Open Access Journals (Sweden)

    Halfdan Rydbeck

    Full Text Available Clustering is a popular technique for explorative analysis of data, as it can reveal subgroupings and similarities between data in an unsupervised manner. While clustering is routinely applied to gene expression data, there is a lack of appropriate general methodology for clustering of sequence-level genomic and epigenomic data, e.g. ChIP-based data. We here introduce a general methodology for clustering data sets of coordinates relative to a genome assembly, i.e. genomic tracks. By defining appropriate feature extraction approaches and similarity measures, we allow biologically meaningful clustering to be performed for genomic tracks using standard clustering algorithms. An implementation of the methodology is provided through a tool, ClusTrack, which allows fine-tuned clustering analyses to be specified through a web-based interface. We apply our methods to the clustering of occupancy of the H3K4me1 histone modification in samples from a range of different cell types. The majority of samples form meaningful subclusters, confirming that the definitions of features and similarity capture biological, rather than technical, variation between the genomic tracks. Input data and results are available, and can be reproduced, through a Galaxy Pages document at http://hyperbrowser.uio.no/hb/u/hb-superuser/p/clustrack. The clustering functionality is available as a Galaxy tool, under the menu option "Specialized analyzis of tracks", and the submenu option "Cluster tracks based on genome level similarity", at the Genomic HyperBrowser server: http://hyperbrowser.uio.no/hb/.

  11. Feature extraction and wall motion classification of 2D stress echocardiography with support vector machines

    Science.gov (United States)

    Chykeyuk, Kiryl; Clifton, David A.; Noble, J. Alison

    2011-03-01

    Stress echocardiography is a common clinical procedure for diagnosing heart disease. Clinically, diagnosis of heart wall motion depends mostly on visual assessment, which is highly subjective and operator-dependent. The introduction of automated methods for heart function assessment has the potential to minimise the variance in operator assessment. Automated wall motion analysis consists of two main steps: (i) segmentation of heart wall borders, and (ii) classification of heart function as either "normal" or "abnormal" based on the segmentation. This paper considers automated classification of rest and stress echocardiography. Most previous approaches to the classification of heart function have considered rest or stress data separately, and have only used features extracted from the two main frames (corresponding to end-diastole and end-systole). One previous attempt [1] has been made to combine information from rest and stress sequences using a Hidden Markov Model (HMM), which has proven to be the best performing approach to date. Here, we propose a novel alternative feature selection approach using combined information from rest and stress sequences for motion classification of stress echocardiography, with a Support Vector Machine (SVM) classifier. We describe how the proposed SVM-based method overcomes difficulties that occur with HMM classification. Overall accuracy of the new method for global wall motion classification on datasets from 173 patients is 92.47%, and the accuracy of local wall motion classification is 87.20%, showing that the proposed method outperforms the current state-of-the-art HMM-based approach (for which global and local classification accuracy is 82.15% and 78.33%, respectively).
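
    The combined rest+stress SVM idea can be sketched by concatenating the two sequences' feature vectors into one row per study. The data below are entirely synthetic and the feature layout is a hypothetical stand-in for the paper's actual motion features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 120 studies, 8 motion features per sequence; each
# row concatenates rest-sequence and stress-sequence features.
rng = np.random.default_rng(0)
rest = rng.normal(size=(120, 8))
stress = rest + rng.normal(scale=0.3, size=(120, 8))
X = np.hstack([rest, stress])
y = (rest[:, 0] + stress[:, 0] > 0).astype(int)   # toy "normal/abnormal" label

# standardise features, then fit an RBF-kernel SVM on the first 100 studies
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
clf.fit(X[:100], y[:100])
acc = clf.score(X[100:], y[100:])                 # held-out accuracy
```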

  12. Hybrid genetic algorithm-neural network: feature extraction for unpreprocessed microarray data.

    Science.gov (United States)

    Tong, Dong Ling; Schierz, Amanda C

    2011-09-01

    Suitable techniques for microarray analysis have been widely researched, particularly for the study of marker genes expressed in a specific type of cancer. Most of the machine learning methods that have been applied to significant gene selection focus on the classification ability rather than the selection ability of the method. These methods also require the microarray data to be preprocessed before analysis takes place. The objective of this study is to develop a hybrid genetic algorithm-neural network (GANN) model that emphasises feature selection and can operate on unpreprocessed microarray data. The GANN is a hybrid model in which the fitness value of the genetic algorithm (GA) is based upon the number of samples correctly labelled by a standard feedforward artificial neural network (ANN). The model is evaluated using two benchmark microarray datasets with different array platforms and differing numbers of classes (a 2-class oligonucleotide microarray dataset for acute leukaemia and a 4-class complementary DNA (cDNA) microarray dataset for SRBCTs (small round blue cell tumours)). The underlying concept of the GANN algorithm is to select highly informative genes by co-evolving the GA fitness function and the ANN weights at the same time. The novel GANN selected approximately 50% of the same genes as the original studies. This may indicate that these common genes are more biologically significant than the other genes in the datasets. The remaining 50% of the significant genes identified were used to build predictive models, and for both datasets the models based on the set of genes extracted by the GANN method produced more accurate results. The results also suggest that the GANN method not only can detect genes that are exclusively associated with a single cancer type but can also explore the genes that are differentially expressed in multiple cancer types. The results show that the GANN model has successfully extracted statistically significant genes from the
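
    The selection loop, GA individuals encoding gene subsets whose fitness is classification accuracy, can be sketched as below. The data are synthetic, and for brevity a least-squares linear readout stands in for the co-evolved feedforward ANN of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an unpreprocessed expression matrix: 100 samples x
# 200 genes, where only genes 0-4 carry class signal.
y = rng.integers(0, 2, size=100)
X = rng.normal(size=(100, 200))
X[:, :5] += 2.0 * y[:, None]

def fitness(genes):
    """GA fitness: training accuracy of a classifier on the gene subset.
    A least-squares linear readout replaces the paper's ANN here."""
    A = np.hstack([X[:, genes], np.ones((len(y), 1))])
    w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
    return float(((A @ w > 0).astype(int) == y).mean())

pop_size, subset, n_genes = 30, 10, X.shape[1]
pop = [rng.choice(n_genes, size=subset, replace=False) for _ in range(pop_size)]
for _ in range(20):                                    # generations
    scored = sorted(((fitness(g), g) for g in pop), key=lambda t: -t[0])
    parents = [g for _, g in scored[: pop_size // 2]]  # truncation selection
    children = []
    for p in parents:
        child = p.copy()
        child[rng.integers(subset)] = rng.integers(n_genes)  # point mutation
        children.append(child)
    pop = parents + children

best_acc, best_genes = scored[0]
```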

  13. A simple and rapid infrared-assisted self enzymolysis extraction method for total flavonoid aglycones extraction from Scutellariae Radix and mechanism exploration.

    Science.gov (United States)

    Wang, Liping; Duan, Haotian; Jiang, Jiebing; Long, Jiakun; Yu, Yingjia; Chen, Guiliang; Duan, Gengli

    2017-09-01

    A new, simple, and fast infrared-assisted self-enzymolysis extraction (IRASEE) approach for the extraction of total flavonoid aglycones (TFA), mainly baicalein, wogonin, and oroxylin A, from Scutellariae Radix is presented to enhance extraction yield. The factors affecting the IRASEE procedure (enzymolysis temperature, liquid-to-solid ratio, pH, time, and infrared power) were investigated in a newly designed, temperature-controlled infrared-assisted extraction (TC-IRAE) system to determine the optimum conditions. The results illustrated that IRASEE possesses great advantages in efficiency and time compared with other conventional extraction techniques. Furthermore, the mechanism of IRASEE was preliminarily explored by observing microscopic changes in the sample surface structures, studying changes in the main chemical compositions of the samples before and after extraction, and investigating the kinetics and thermodynamics at three temperature levels during the IRASEE process. These findings revealed that IRASEE can destroy the surface microstructures to accelerate mass transfer and reduce the activation energy to intensify the chemical process. This integrative study presents a simple, rapid, efficient, and environmentally friendly IRASEE method for TFA extraction with promising prospects for other similar herbal medicines.

  14. Extraction of Line Features from Multifidus Muscle of CT Scanned Images with Morphologic Filter Together with Wavelet Multi Resolution Analysis

    OpenAIRE

    Yoichiro Kitajima; Yuichiro Eguchi; Kohei Arai

    2011-01-01

    A method for line feature extraction from the multifidus muscle in Computer Tomography (CT) scanned images using a morphologic filter together with wavelet-based Multi-Resolution Analysis (MRA) is proposed. The contour of the multifidus muscle can be extracted from hip CT images. The area of the multifidus muscle is then estimated and used as an index of belly fat, because there is a high correlation between belly fat and the multifidus muscle. When the area of the multifidus muscle was calculated from the...

  15. Rapid and sensitive diagnosis of fungal keratitis with direct PCR without template DNA extraction.

    Science.gov (United States)

    Zhao, G; Zhai, H; Yuan, Q; Sun, S; Liu, T; Xie, L

    2014-10-01

    This study was aimed at developing a direct PCR assay without template DNA extraction for the rapid and sensitive diagnosis of infectious keratitis. Eighty corneal scrapings from 67 consecutive patients with clinically suspected infectious keratitis were analysed prospectively. Direct PCR was performed on all scrapings, with specific primers for fungi, bacteria, herpes simplex virus-1 (HSV-1) and Acanthamoeba simultaneously. The results were compared with those obtained from culture, smear, and confocal microscopy. Discrepant results were resolved according to the therapeutic effects of the corresponding antimicrobial drugs. The lowest detection limit of direct PCR was ten copies of each pathogen. Sixty-six scrapings yielded positive results with direct PCR, giving a total positive detection rate of 82.5% (66/80). For 34 patients with high suspicion of fungal keratitis, the positive detection rate of direct PCR was 84.8% (39/46). This rate increased to 91.2% (31/34) when repeated scrapings were excluded, and was significantly higher than the rates obtained with culture (35.3%, 12/34) and smear (64.7%, 22/34) (p < 0.05). The detection rates for fungal keratitis with direct PCR and culture were 98.0% and 47.1%, respectively (p < 0.05). Direct PCR thus allows rapid and sensitive diagnosis of fungal keratitis, and it is expected to have an impact on the diagnosis and treatment of infectious keratitis in the future. © 2014 The Authors Clinical Microbiology and Infection © 2014 European Society of Clinical Microbiology and Infectious Diseases.

  16. Recyclable bio-reagent for rapid and selective extraction of contaminants from soil

    Energy Technology Data Exchange (ETDEWEB)

    Lomasney, H.L. [ISOTRON Corp., New Orleans, LA (United States)

    1997-10-01

    This Phase I Small Business Innovation Research program is confirming the effectiveness of a bio-reagent to cost-effectively and selectively extract a wide range of heavy metal and radionuclide contaminants from soil. This bio-reagent solution, developed by ISOTRON® Corporation (New Orleans, LA), is flushed through the soil and recycled after flowing through an electrokinetic separation module, also developed by ISOTRON®. The process is ex situ, and the soil remains in its transport container throughout the decontamination process. The transport container can be a fiberglass box, a bulk bag, or "super sack." Rocks, vegetation, roots, etc. need not be removed, and soils with high clay content are accommodated. The process provides rapid injection of the reagent solution, and when needed, sand is introduced to speed up the heap leach step. The concentrated waste form is eventually solidified. The bio-reagent is essentially a natural product, so any solubilizer residual in the soil is not expected to cause regulatory concern. The Phase I work will confirm the effectiveness of this bio-reagent on a wide range of contaminants, and the engineering parameters needed to carry out a full-scale demonstration of the process. ISOTRON® scientists will work with contaminated soil from Los Alamos National Laboratory. LANL is in the process of decontaminating and decommissioning more than 300 sites within its complex, many of which contain heavy metals or radionuclides; some are mixed wastes containing TCE, PCB, and metals.

  17. Development of a micropulverized extraction method for rapid toxicological analysis of methamphetamine in hair.

    Science.gov (United States)

    Miyaguchi, Hajime; Kakuta, Masaya; Iwata, Yuko T; Matsuda, Hideaki; Tazawa, Hidekatsu; Kimura, Hiroko; Inoue, Hiroyuki

    2007-09-07

    We developed a rapid sample preparation method for the toxicological analysis of methamphetamine and amphetamine (the major metabolite of methamphetamine) in human hair by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS), to facilitate fast screening and quantitation. Two milligrams of hair were mechanically micropulverized for 5 min in a 2-ml plastic tube together with 100 µl of an aqueous solvent containing 10% acetonitrile, 100 mM trifluoroacetic acid and the corresponding deuterium analogues as internal standards. Pulverization thoroughly disintegrated the hair components while simultaneously extracting any drugs present in the hair. After the suspension was filtered with a membrane-filter unit, the clear filtrate was directly analyzed by HPLC-MS/MS. No evaporation steps were required for sample preparation. Method optimization and the validation study were carried out using real-case specimens and fortified samples in which the drugs had been artificially absorbed, respectively. Quantitation ranges were 0.040-125 and 0.040-25 ng/mg for methamphetamine and amphetamine, respectively. Real-case specimens were analyzed by the method presented here and by conventional ones to verify its applicability to real-world analysis. Our method took less than 30 min from a washed hair sample to a set of chromatograms.

  18. An Alternative and Rapid Method for the Extraction of Nucleic Acids from Ixodid Ticks by Potassium Acetate Procedure

    Directory of Open Access Journals (Sweden)

    Islay Rodríguez

    2014-08-01

    Full Text Available Four variants of the potassium acetate procedure for DNA extraction from ixodid ticks at different stages of their life cycle were evaluated and compared with the phenol-chloroform and ammonium hydroxide methods. The most rapid and most efficient variant was validated for DNA extraction from engorged ticks collected from bovines and dogs, as well as from house ticks, for the screening of Borrelia burgdorferi sensu lato, Anaplasma spp. and Babesia spp. The ammonium hydroxide procedure was used for non-engorged ticks. All the variants were efficient and yielded PCR-quality material, as confirmed by specific amplification of a 16S rRNA gene fragment of the original tick. DNA extracted from the ticks under study was tested by multiplex PCR for the screening of tick-borne pathogens. Anaplasma spp. and Babesia spp. amplification products were obtained from 29/48 extracts. The ammonium hydroxide protocol was not efficient for two extracts. Detection of amplification products by PCR indicated that DNA had been successfully extracted. The potassium acetate procedure could be an alternative, rapid, and reliable method for DNA extraction from ixodid ticks, particularly for poorly resourced laboratories.

  19. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    Science.gov (United States)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with applications in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a new, valid, general-purpose feature extraction alternative for various tasks in spectral data analysis.
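
    One common way to train a deep feature extractor layer by layer without iterative optimization is the extreme-learning-machine-style autoencoder, where each layer's weights are solved analytically by least squares. The sketch below follows that interpretation; the paper's exact algorithm may differ:

```python
import numpy as np

def layerwise_features(X, layer_sizes, seed=0):
    """Stack of layers trained one at a time: each layer makes a random
    nonlinear projection of its input, solves a least-squares
    reconstruction problem analytically, and reuses the (transposed)
    reconstruction weights as the encoder for the next layer."""
    rng = np.random.default_rng(seed)
    H = X
    for size in layer_sizes:
        W = rng.standard_normal((H.shape[1], size))
        A = np.tanh(H @ W)                       # random nonlinear projection
        # analytic step: solve A @ B ~= H for reconstruction weights B
        B, *_ = np.linalg.lstsq(A, H, rcond=None)
        H = np.tanh(H @ B.T)                     # features for the next layer
    return H
```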

  20. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 2: frequency domain

    Directory of Open Access Journals (Sweden)

    Hanus Robert

    2016-01-01

    Full Text Available Knowledge of the flow structure is essential for the proper conduct of a number of industrial processes. Two-phase flow regimes can be described by time-series analysis, e.g. in the frequency domain. In this article, classical spectral analysis based on the Fourier Transform (FT) and the Short-Time Fourier Transform (STFT) was applied to the analysis of signals obtained for water-air flow using gamma-ray absorption. The presented method is illustrated with data collected in experiments carried out on a laboratory hydraulic installation with a horizontal pipe of 4.5 m length and 30 mm inner diameter, equipped with two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals. Stochastic signals obtained from the detectors for plug, bubble, and transitional plug-bubble flows were considered in this work. The recorded raw signals were analyzed and several features in the frequency domain were extracted using the autospectral density function (ADF), the cross-spectral density function (CSDF), and the STFT spectrogram. A detailed analysis found that the most promising features for recognizing the flow structure are the maximum value of the CSDF magnitude, the sum of the CSDF magnitudes in a selected frequency range, and the maximum value of the sum of selected amplitudes of the STFT spectrogram.
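
    The three features singled out above can be computed from a pair of detector signals with standard spectral estimators. A minimal sketch; the segment length, frequency band and function name are illustrative, not the authors' settings:

```python
import numpy as np
from scipy.signal import csd, stft

def flow_signal_features(x1, x2, fs, band=(0.5, 5.0)):
    """Frequency-domain features from two gamma-detector signals: the
    maximum CSDF magnitude, the sum of CSDF magnitudes in a selected
    band, and the maximum of the summed STFT amplitudes."""
    f, Pxy = csd(x1, x2, fs=fs, nperseg=256)   # cross-spectral density
    mag = np.abs(Pxy)
    in_band = (f >= band[0]) & (f <= band[1])
    _, _, Z = stft(x1, fs=fs, nperseg=256)     # STFT spectrogram
    return {
        'csd_max': float(mag.max()),
        'csd_band_sum': float(mag[in_band].sum()),
        'stft_amp_max': float(np.abs(Z).sum(axis=1).max()),
    }
```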

  1. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Yun Mo

    2014-01-01

    Full Text Available Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and large error margins, so clustering methods are widely applied as a solution. Traditional clustering methods in positioning systems, however, can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outages of access points can result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability with respect to the asymmetric matching problem.
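
    The Kernel PCA fine-localization step can be sketched on a synthetic radio map: fingerprints of Received Signal Strength values are reduced to a low-dimensional space, and positions are then matched there. All data, the path-loss model, and the parameter values below are hypothetical illustrations:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical radio map: 200 reference fingerprints of RSS from 10
# access points, with known 2-D coordinates. Entirely synthetic data.
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 50.0, size=(200, 2))
proj = rng.normal(size=(2, 10))
rss = -40.0 - 0.2 * np.abs(coords @ proj)      # crude path-loss stand-in
rss += rng.normal(0.0, 1.0, size=rss.shape)    # shadowing noise

# dimensionality reduction of the fingerprints with Kernel PCA
kpca = KernelPCA(n_components=3, kernel='rbf', gamma=1e-3)
low_dim = kpca.fit_transform(rss)

# fine positioning: nearest-neighbour regression in the reduced space
knn = KNeighborsRegressor(n_neighbors=3).fit(low_dim[:150], coords[:150])
pred = knn.predict(kpca.transform(rss[150:]))
err = float(np.linalg.norm(pred - coords[150:], axis=1).mean())
```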

  2. Real-time feature extraction of ECG signals using NI LabVIEW

    Directory of Open Access Journals (Sweden)

    Ayşe Nur AY

    2017-08-01

    Full Text Available This study is based on measuring Electrocardiogram (ECG) signals from the human body in real time with the help of the NI LabVIEW software. Not only the raw ECG signals but also their digitally filtered versions can be displayed in real time by processing the signals with the program's digital filtering tools. The ECG itself provides various diagnostic information, and the NI LabVIEW Biomedical Toolkit offers many tools that help to process the signals and perform feature extraction; this software was therefore preferred for ECG data acquisition. In this project, the heart rate of a patient is calculated by detecting R-R intervals on the ECG tracing using the Teager Energy method. In order to test the system, several experiments were conducted with 12 subjects (6 non-smokers + 6 smokers). Their ECG signals were taken at rest and after running. The experimental results were recorded for graphical and statistical analysis. Based on the results, the effect of smoking on heart rate was discussed.
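
    The R-R detection step can be sketched with the discrete Teager energy operator, psi[n] = x[n]^2 - x[n-1]*x[n+1], on a synthetic ECG; the sampling rate, pulse shape and threshold below are illustrative assumptions, not the study's LabVIEW implementation.

```python
import numpy as np

fs = 250  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)

# Synthetic ECG: narrow Gaussian "R waves" once per second on a noisy baseline
rng = np.random.default_rng(1)
ecg = 0.01 * rng.standard_normal(t.size)
for rt in np.arange(0.5, 10, 1.0):  # true R peaks, i.e. 60 bpm
    ecg += np.exp(-((t - rt) ** 2) / (2 * 0.008 ** 2))

# Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
psi = ecg[1:-1] ** 2 - ecg[:-2] * ecg[2:]

# Simple detection: threshold the Teager energy, one event per 200 ms window
thresh = 0.1 * psi.max()
refractory = int(0.2 * fs)
peaks, last = [], -refractory - 1
for n in np.flatnonzero(psi > thresh):
    if n - last > refractory:
        peaks.append(n)
    last = n

# Heart rate from the mean R-R interval
rr = np.diff(peaks) / fs
bpm = 60.0 / rr.mean()
print(len(peaks), round(bpm))
```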

  3. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Topouzelis

    2008-10-01

    Full Text Available This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect fragile marine and coastal ecosystems and cause both political and scientific concern. The amount of pollutant discharged and its associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for detecting oil spills, as they cover large areas and offer an economical and easy way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills in radar images. In particular, we concentrate on manual and automatic approaches to distinguishing oil spills from other natural phenomena. We discuss the most common techniques for detecting dark formations in SAR images, the features extracted from the detected dark formations, and the most widely used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.
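
    A minimal sketch of the dark-formation detection idea (threshold the backscatter after speckle smoothing, then extract simple geometric features) on a synthetic SAR-like image; the gamma speckle model, filter size and threshold factor are assumptions, not any specific method from the review.

```python
import numpy as np
from scipy import ndimage

# Synthetic SAR-like intensity image: speckled sea clutter with a dark slick
rng = np.random.default_rng(2)
img = rng.gamma(shape=4.0, scale=25.0, size=(128, 128))  # sea, mean ~100
img[40:60, 30:90] *= 0.3  # dark formation (potential oil spill)

# Speckle suppression before thresholding (simple moving-average filter)
smooth = ndimage.uniform_filter(img, size=5)

# Dark formation detection: threshold relative to the mean backscatter
k = 0.6  # assumed threshold factor
mask = smooth < k * smooth.mean()

# Simple features of the detected dark formation
area = int(mask.sum())
rows, cols = np.nonzero(mask)
bbox = (rows.min(), rows.max(), cols.min(), cols.max())
contrast = img[mask].mean() / img[~mask].mean()  # dark-to-sea intensity ratio
print(area, bbox, round(contrast, 2))
```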

  4. Extracting Road Features from Aerial Videos of Small Unmanned Aerial Vehicles

    Science.gov (United States)

    Rajamohan, D.; Rajan, K. S.

    2013-09-01

    With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications like traffic monitoring, map refinement and agricultural data collection is on the rise. But ambitious requirements like real-time geo-referencing of data, support for multiple sensor angles of view, smaller UAV size and lower investment cost have led to challenges in platform stability, sensor noise reduction and increased onboard processing. Especially in small UAVs, the geo-referencing of the collected data is only as good as the quality of their localization sensors. This drives a need for methods that pick up spatial features from the captured video/imagery and aid in geo-referencing. This paper presents one such method to identify road segments and intersections based on traffic flow, and compares it with the accuracy of manual observation. Two test video datasets, one each from a moving and a stationary platform, were used. The results show a promising average percentage difference of 7.01 % and 2.48 % for the road segment extraction process using the moving and stationary platforms, respectively. For the intersection identification process, the moving platform shows an accuracy of 75 %, whereas the stationary platform data reaches an accuracy of 100 %.
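
    One simple way to realize "road segments from traffic flow" is to accumulate vehicle detections into an occupancy grid and threshold it. The sketch below is a synthetic illustration of that idea only, not the authors' algorithm; every coordinate, grid size and threshold is an assumption.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated vehicle detections: traffic along a horizontal road near y = 20
# and a vertical road near x = 30 (units: metres in a georeferenced frame)
xs = rng.uniform(0, 100, 2000)
h_road = np.column_stack([xs, 20 + rng.normal(0, 1.0, 2000)])
ys = rng.uniform(0, 100, 2000)
v_road = np.column_stack([30 + rng.normal(0, 1.0, 2000), ys])
tracks = np.vstack([h_road, v_road])

# Accumulate detections on a grid; frequently travelled cells form road segments
grid, xe, ye = np.histogram2d(tracks[:, 0], tracks[:, 1],
                              bins=50, range=[[0, 100], [0, 100]])
road_mask = grid >= 5  # assumed minimum detections per cell

# Crude intersection cue: the x column and y row carrying the most road cells
row_traffic = road_mask.sum(axis=1)  # road cells per x bin
col_traffic = road_mask.sum(axis=0)  # road cells per y bin
print(int(road_mask.sum()), int(row_traffic.argmax()), int(col_traffic.argmax()))
```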

  5. A Review of Physical and Perceptual Feature Extraction Techniques for Speech, Music and Environmental Sounds

    Directory of Open Access Journals (Sweden)

    Francesc Alías

    2016-05-01

    Full Text Available Endowing machines with sensing capabilities similar to those of humans is a prevalent quest in engineering and computer science. In the pursuit of making computers sense their surroundings, a huge effort has been conducted to allow machines and computers to acquire, process, analyze and understand their environment in a human-like way. Focusing on the sense of hearing, the ability of computers to sense their acoustic environment as humans do goes by the name of machine hearing. To achieve this ambitious aim, the representation of the audio signal is of paramount importance. In this paper, we present an up-to-date review of the most relevant audio feature extraction techniques developed to analyze the most usual audio signals: speech, music and environmental sounds. Besides revisiting classic approaches for completeness, we include the latest advances in the field based on new domains of analysis together with novel bio-inspired proposals. These approaches are described following a taxonomy that organizes them according to their physical or perceptual basis, being subsequently divided depending on the domain of computation (time, frequency, wavelet, image-based, cepstral, or other domains). The description of the approaches is accompanied by recent examples of their application to machine hearing related problems.
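
    As a small example of the physical (signal-based) descriptors such a taxonomy covers, here are three classic features, spectral centroid, spectral rolloff and zero-crossing rate, computed with NumPy on a synthetic tone and white noise; all parameter choices are illustrative.

```python
import numpy as np

fs = 16000  # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)                        # a pure 440 Hz tone
noise = np.random.default_rng(3).standard_normal(t.size)  # white noise

def physical_features(x, fs):
    """Spectral centroid, 85% spectral rolloff and zero-crossing rate."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    centroid = (freqs * spec).sum() / spec.sum()
    cum = np.cumsum(spec ** 2)
    rolloff = freqs[np.searchsorted(cum, 0.85 * cum[-1])]
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2  # crossings per sample
    return centroid, rolloff, zcr

c_tone, r_tone, z_tone = physical_features(tone, fs)
c_noise, r_noise, z_noise = physical_features(noise, fs)
print(round(c_tone), round(c_noise))
```

    The centroid of the tone sits at its pitch, while for white noise it moves toward the middle of the band; the zero-crossing rate separates the two signals in the time domain alone.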

  6. A spatial division clustering method and low dimensional feature extraction technique based indoor positioning system.

    Science.gov (United States)

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-22

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database can cause high computational complexity and large error margins, so clustering methods are widely applied as a solution. Traditional clustering methods in positioning systems, however, can only measure the similarity of the Received Signal Strength without considering the continuity of physical coordinates. Besides, an outage of access points can result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC achieves higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability to the asymmetric matching problem.
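
    The fine-localization stage, low-dimensional feature extraction with Kernel PCA followed by nearest-neighbor matching, can be sketched on a synthetic radio map as follows; the number of access points, the noise level and the kernel parameters are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

# Synthetic radio map: RSS fingerprints (dBm) from 8 access points
# at 6 reference points, 10 noisy readings each
rng = np.random.default_rng(4)
n_ref, n_ap = 6, 8
centers = rng.uniform(-90, -40, size=(n_ref, n_ap))
X = np.repeat(centers, 10, axis=0) + rng.normal(0, 2.0, size=(n_ref * 10, n_ap))
y = np.repeat(np.arange(n_ref), 10)

# Low-dimensional feature extraction with kernel PCA (RBF kernel)
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=1e-3)
Z = kpca.fit_transform(X)

# Online matching: project a query fingerprint, then nearest-neighbor search
knn = KNeighborsClassifier(n_neighbors=3).fit(Z, y)
queries = centers + rng.normal(0, 2.0, size=centers.shape)
acc = (knn.predict(kpca.transform(queries)) == np.arange(n_ref)).mean()
print(acc)
```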

  7. Random Forest Based Coarse Locating and KPCA Feature Extraction for Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Yun Mo

    2014-01-01

    Full Text Available With the rapid development of mobile terminals, positioning techniques based on the fingerprinting method have drawn attention from many researchers and even world-famous companies. To overcome some shortcomings of existing fingerprinting systems and further improve system performance, on the one hand, we propose a coarse positioning method based on random forest, which is able to customize several subregions and classify a test point into the correct region with outstanding accuracy compared with some typical clustering algorithms. On the other hand, through mathematical analysis, the kernel principal component analysis algorithm is applied to radio map processing, which may provide better robustness and adaptability than linear feature extraction methods and manifold learning techniques. We built both a theoretical model and a real environment to verify feasibility and reliability. The experimental results show that the proposed indoor positioning system can achieve 99% coarse locating accuracy and enhance fine positioning accuracy by 15% on average in a strongly noisy environment compared with some typical fingerprinting-based methods.
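
    The coarse-locating stage can be sketched with scikit-learn's RandomForestClassifier on a synthetic fingerprint database; the four subregions, the RSS model and the noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic fingerprint database: 4 subregions, each with its own mean RSS
# pattern (dBm) over 6 access points; all values below are assumptions
rng = np.random.default_rng(5)
region_means = rng.uniform(-85, -45, size=(4, 6))
X = np.vstack([m + rng.normal(0, 3.0, size=(50, 6)) for m in region_means])
y = np.repeat(np.arange(4), 50)

# Coarse locating: a random forest classifies a fingerprint into a subregion
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Held-out test fingerprints from each subregion
tests = np.vstack([m + rng.normal(0, 3.0, size=(10, 6)) for m in region_means])
acc = (rf.predict(tests) == np.repeat(np.arange(4), 10)).mean()
print(acc)
```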

  8. Fault Feature Extraction and Diagnosis of Gearbox Based on EEMD and Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available A gear transmission system is a complex, nonstationary, nonlinear, time-varying coupled system. When faults occur in a gear system, the fault features are difficult to extract. In this paper, a novel fault diagnosis method based on ensemble empirical mode decomposition (EEMD) and a Deep Belief Network (DBN) is proposed to treat the vibration signals measured from a gearbox. The original data are decomposed into a set of intrinsic mode functions (IMFs) using EEMD, and the main IMFs are then chosen to reconstruct the signal and suppress abnormal interference from noise. The reconstructed signals are used as the input of the DBN to identify gearbox working states and fault types. To verify the effectiveness of EEMD-DBN in detecting faults, a series of simulated gear fault experiments at different states was carried out. Results showed that the proposed method, which couples EEMD and DBN, can improve the accuracy of gear fault identification and is applicable to fault diagnosis in practical applications.

  9. Moment Invariant Features Extraction for Hand Gesture Recognition of Sign Language based on SIBI

    Directory of Open Access Journals (Sweden)

    Angga Rahagiyanto

    2017-07-01

    Full Text Available The Myo armband has become an immersive technology to help deaf people communicate with each other. The problem with the Myo sensor is its unstable clock rate, which yields different data lengths for the same period, even for the same gesture. This research proposes the Moment Invariant method to extract features from the Myo sensor data. The method reduces the amount of data and produces data of equal length. This research is user-dependent, in accordance with the characteristics of the Myo armband. The testing process was performed using the alphabet A to Z in SIBI, the Indonesian Sign Language, with static and dynamic finger movements. There are 26 alphabet classes with 10 variants in each class. We use min-max normalization to guarantee a common data range, and the K-Nearest Neighbor method to classify the dataset. Performance analysis with the leave-one-out validation method produced an accuracy of 82.31%. A more advanced classification method is required to improve the detection performance.

  10. Feature extraction from spike trains with Bayesian binning: 'latency is where the signal starts'.

    Science.gov (United States)

    Endres, Dominik; Oram, Mike

    2010-08-01

    The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF), are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in a relatively arbitrary fashion, even though there have been recent attempts to remedy this situation (DiMatteo et al., Biometrika 88(4):1055-1071, 2001; Shimazaki and Shinomoto, Neural Comput 19(6):1503-1527, 2007; Cunningham et al., 2008). We develop an exact Bayesian, generative-model approach to estimating PSTHs. Advantages of our scheme include automatic complexity control and error bars on its predictions. We show how to perform feature extraction on spike trains in a principled way, exemplified through latency and firing-rate posterior distribution evaluations on repeated- and single-trial data. We also demonstrate, using both simulated and real neuronal data, that our approach provides more accurate estimates of the PSTH and the latency than current competing methods. We employ the posterior distributions for an information-theoretic analysis of the neural code comprised of the latency and firing rate of neurons in high-level visual area STSa. A software implementation of our method is available at the machine learning open source software repository ( www.mloss.org , project 'binsdfc').
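
    For contrast with the Bayesian scheme, the classical binned PSTH and a crude latency read-out look like this on simulated spike trains; the fixed 10 ms bin width is exactly the arbitrary choice the paper's method avoids, and the rates, latency and trial count below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated spike trains: 200 trials, 5 Hz baseline switching to 40 Hz
# after an 80 ms response latency (1 ms time resolution, 0.5 s trials)
n_trials, T, latency = 200, 0.5, 0.08
tt = np.arange(0, T, 0.001)
rate = np.where(tt >= latency, 40.0, 5.0)
spike_times = [tt[rng.random(tt.size) < rate * 0.001] for _ in range(n_trials)]

# Classical binned PSTH (10 ms bins): the quantity the Bayesian model estimates
bin_w = 0.01
edges = np.arange(0.0, T + bin_w, bin_w)
counts, _ = np.histogram(np.concatenate(spike_times), bins=edges)
psth = counts / (n_trials * bin_w)  # firing rate in Hz

# Crude latency read-out: first bin whose rate exceeds half of the peak rate
latency_est = edges[:-1][np.argmax(psth > 0.5 * psth.max())]
print(round(latency_est, 2))
```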

  11. An adaptive approach for the segmentation and extraction of planar and linear/cylindrical features from laser scanning data

    Science.gov (United States)

    Lari, Zahra; Habib, Ayman

    2014-07-01

    Laser scanning systems have been established as leading tools for the collection of high-density three-dimensional data over physical surfaces. The collected point cloud does not provide semantic information about the characteristics of the scanned surfaces. Therefore, different processing techniques have been developed for the extraction of useful information from these data, which can be applied to diverse civil, industrial, and military applications. Planar and linear/cylindrical features are among the most important primitives to be extracted from laser scanning data, especially data collected in urban areas. This paper introduces a new approach for the identification, parameterization, and segmentation of these features from laser scanning data while considering the internal characteristics of the utilized point cloud, i.e., local point density variation and noise level in the dataset. In the first step of this approach, a Principal Component Analysis of the local neighborhood of individual points is implemented to identify the points that belong to planar and linear/cylindrical features and to select their appropriate representation model. For the detected planar features, the segmentation attributes are then computed through an adaptive cylinder neighborhood definition. Two clustering approaches are then introduced to segment and extract individual planar features in the reconstructed parameter domain. For the linear/cylindrical features, their directional and positional parameters are utilized as the segmentation attributes. A sequential clustering technique is proposed to isolate the points which belong to individual linear/cylindrical features through directional and positional attribute subspaces. Experimental results from simulated and real datasets demonstrate the feasibility of the proposed approach for the extraction of planar and linear/cylindrical features from laser scanning data.
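
    The identification step, eigenvalue analysis of each point's local neighborhood, can be sketched as follows; the eigenvalue-ratio thresholds of 0.05 are illustrative assumptions, not the paper's adaptive criteria.

```python
import numpy as np

rng = np.random.default_rng(8)

def pca_dimensionality(points):
    """Classify a local neighborhood as linear/cylindrical, planar, or
    volumetric from the eigenvalues of its covariance matrix."""
    lam = np.linalg.eigvalsh(np.cov(points.T))  # ascending order
    l3, l2, l1 = lam
    if l2 / l1 < 0.05:    # one dominant direction
        return "linear"
    if l3 / l2 < 0.05:    # two dominant directions
        return "planar"
    return "volumetric"

# Simulated neighborhoods with mild sensor noise
line = np.column_stack([np.linspace(0, 1, 200),
                        np.zeros(200),
                        np.zeros(200)]) + rng.normal(0, 0.002, (200, 3))
plane = np.column_stack([rng.uniform(0, 1, 200),
                         rng.uniform(0, 1, 200),
                         rng.normal(0, 0.002, 200)])
print(pca_dimensionality(line), pca_dimensionality(plane))
```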

  12. Easily detectable cytomorphological features to evaluate during ROSE for rapid lung cancer diagnosis: from cytology to histology.

    Science.gov (United States)

    Ravaioli, Sara; Bravaccini, Sara; Tumedei, Maria Maddalena; Pironi, Flavio; Candoli, Piero; Puccetti, Maurizio

    2017-02-14

    In lung cancer patients, the only available diagnostic material often comes from biopsy or from cytological samples obtained by fine needle aspiration (FNA). There is a lack of easily detectable cytomorphological features for rapid on-site evaluation (ROSE) to orient lung cancer diagnosis towards a specific tumor histotype. We studied the cytological features evaluated on site to define the tumor histotype and to establish the number of specimens to be taken. Cytological specimens from 273 consecutive patients were analyzed with ROSE: bronchoscopy with transbronchial needle aspiration (TBNA) had been performed in 72 patients and with endobronchial ultrasound (EBUS)-TBNA in 201. Cytomorphological features were correlated with the final diagnosis, and diagnostic accuracy was measured. Analysis of the different cytomorphological parameters showed that the best sensitivity and specificity were obtained for adenocarcinoma by combining the presence of nucleoli and small/medium cell clusters, and for squamous cell carcinoma by considering the presence of necrosis ≥50% and large cell clusters. For small cell carcinoma, the best diagnostic accuracy was obtained by combining moderate necrosis (lung cancers during ROSE using only a few easily identifiable cytomorphological parameters. An accurate diagnosis during ROSE could help endoscopists decide how many tumor samples must be taken, e.g. a higher number of samples is needed for the biomolecular characterization of adenocarcinoma, whereas one sample may be sufficient for squamous cell carcinoma.

  13. Comparison of rapidly synergistic cloud point extraction and ultrasound-assisted cloud point extraction for trace selenium coupled with spectrophotometric determination.

    Science.gov (United States)

    Wen, Xiaodong; Zhang, Yanyan; Li, Chunyan; Fang, Xiang; Zhang, Xiaocan

    2014-04-05

    In this work, rapidly synergistic cloud point extraction (RS-CPE) and ultrasound-assisted cloud point extraction (UA-CPE) were compared for the first time and coupled with a spectrophotometer for selenium preconcentration and detection. The established RS-CPE pretreatment was simple, rapid and highly effective: the extraction time was only 1 min, with no heating step. Under the effect of ultrasound, UA-CPE accomplished extraction efficiently, although the procedure was relatively time-consuming. Coupling these pretreatments with a conventional spectrophotometer expanded their applications and considerably improved the analytical performance of spectrophotometric selenium determination. The factors influencing RS-CPE and UA-CPE were studied in detail. Under the optimal conditions, the limits of detection (LODs) for selenium were 0.2 μg L(-1) for RS-CPE and 0.3 μg L(-1) for UA-CPE, with sensitivity enhancement factors (EFs) of 124 and 103, respectively. The developed methods were applied to the determination of trace selenium in real water samples with satisfactory analytical results. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Applying machine learning and image feature extraction techniques to the problem of cerebral aneurysm rupture

    Directory of Open Access Journals (Sweden)

    Steren Chabert

    2017-01-01

    Full Text Available Cerebral aneurysm is a cerebrovascular disorder characterized by a bulging in a weak area of the wall of an artery that supplies blood to the brain. It is relevant to understand the mechanisms leading to the formation of aneurysms, to their growth and, more importantly, to their rupture. The purpose of this study is to examine the impact on aneurysm rupture of a combination of different parameters, instead of focusing on only one factor at a time as is frequently done in the literature, using machine learning and feature extraction techniques. This question is relevant in the context of the complex decision physicians face when choosing a therapy, as each intervention bears its own risks and requires a complex ensemble of resources (human resources, operating rooms, etc.) in hospitals that are always under a very high workload. This project was conceived in our current working team, composed of an interventional neuroradiologist, a radiologic technologist, informatics engineers and biomedical engineers from the Valparaíso public hospital, Hospital Carlos van Buren, and from Universidad de Valparaíso - Facultad de Ingeniería and Facultad de Medicina. This team has worked together over the last few years and is now participating in the implementation of an "interdisciplinary platform for innovation in health", as part of a bigger project led by Universidad de Valparaíso (PMI UVA1402). It is relevant to emphasize that this project is made feasible by the existence of this network between physicians and engineers, and by the existence of data already registered in an orderly manner, structured and recorded in digital format. The present proposal arises from the description in the current literature that existing indicators, whether based on morphological description of the aneurysm or on characterization of biomechanical or other factors, were shown not to provide sufficient information in order

  15. Tibetan Information Extraction Technology Integrated with Event Feature and Semantic Role Labelling

    Directory of Open Access Journals (Sweden)

    Wan Fucheng

    2017-01-01

    Full Text Available We integrate semantic information, based on syntactic analysis, for Tibetan information extraction. Experimental analysis shows that a syntactic analysis model integrated with semantic information, together with the proposed evaluation program, can be successfully applied to the Tibetan information extraction task.

  16. Meta-optimization of the extended kalman filter's parameters for improved feature extraction on hyper-temporal images

    CSIR Research Space (South Africa)

    Salmon, BP

    2011-07-01

    Full Text Available . This paper proposes a meta-optimization approach for setting the parameters of the non-linear Extended Kalman Filter to rapidly and efficiently estimate the features for the pair of triply modulated cosine functions. The approach is based on a unsupervised...

  17. Ask Me! self-reported features of adolescents experiencing neglect or emotional maltreatment: a rapid systematic review.

    Science.gov (United States)

    Naughton, A M; Cowley, L E; Tempest, V; Maguire, S A; Mann, M K; Kemp, A M

    2017-05-01

    Neglect is often overlooked in adolescence, due in part to assumptions about autonomy and misinterpretation of behaviors as part of normal adolescent development. Emotional maltreatment (abuse or neglect) has a damaging effect throughout the lifespan, but is rarely recognized amongst adolescents. Our review aims to identify the features that adolescents experiencing neglect and/or emotional maltreatment report. A rapid review methodology searched 8 databases (1990-2014), supplemented by hand searching of journals and references, identifying 2,568 abstracts. Two independent reviews of 279 articles were undertaken by trained reviewers using standardised critical appraisal. Eligible studies were primary studies of children aged 13-17 years with substantiated neglect and/or emotional maltreatment, containing self-reported features. 19 publications from 13 studies were included, demonstrating associations between both neglect and emotional maltreatment and internalising features (9 studies), including depression, post-traumatic symptomatology and anxiety. Emotional maltreatment was associated with suicidal ideation, while neglect was not (1 study); neglect was associated with alcohol-related problems (3 studies), substance misuse (2 studies), delinquency in boys (1 study), teenage pregnancy (1 study) and general victimization in girls (1 study), while emotionally maltreated girls reported more externalising symptoms (1 study). Dating violence victimization was associated with both neglect and emotional maltreatment (2 studies), while emotional abuse of boys, but not neglect, was associated with dating violence perpetration (1 study); neither neglect nor emotional maltreatment was associated with low self-esteem (2 studies). Neither neglect nor emotional maltreatment had an effect on school performance (1 study), but neglected boys showed greater school engagement than neglected girls (1 study). If asked, neglected or emotionally maltreated adolescents describe

  18. Fault feature extraction of planet gear in wind turbine gearbox based on spectral kurtosis and time wavelet energy spectrum

    Science.gov (United States)

    Kong, Yun; Wang, Tianyang; Li, Zheng; Chu, Fulei

    2017-09-01

    Planetary transmission plays a vital role in wind turbine drivetrains, and its fault diagnosis has been an important and challenging issue. Owing to the complicated and coupled vibration sources, time-variant vibration transfer path, and heavy background noise masking effect, the vibration signal of the planet gear in wind turbine gearboxes exhibits several unique characteristics: complex frequency components, low signal-to-noise ratio, and weak fault features. In this sense, the periodic impulsive components induced by a localized defect are hard to extract, and the fault detection of the planet gear in wind turbines remains a challenging research task. Aiming to extract the fault features of the planet gear effectively, we propose a novel feature extraction method based on spectral kurtosis and the time wavelet energy spectrum (SK-TWES). Firstly, the spectral kurtosis (SK) and kurtogram of the raw vibration signals are computed and exploited to select the optimal filtering parameters for the subsequent band-pass filtering. Then, band-pass filtering is applied to extract the periodic transient impulses, using the optimal frequency band in which the corresponding SK value is maximal. Finally, time wavelet energy spectrum analysis is performed on the filtered signal, selecting the Morlet wavelet, which closely resembles the impulsive components, as the mother wavelet. Experimental signals collected from a wind turbine gearbox test rig demonstrate that the proposed method is effective for feature extraction and fault diagnosis for a planet gear with a localized defect.
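
    The SK-guided band selection can be sketched as a simple kurtogram-like scan: band-pass filter the signal in candidate bands, keep the band with maximal kurtosis, and inspect the envelope of the filtered signal. Everything below (fault model, band grid, filter order) is an illustrative assumption, and the paper's TWES step is replaced here by a plain Hilbert envelope spectrum.

```python
import numpy as np
from scipy import signal
from scipy.stats import kurtosis

fs = 12000  # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(9)

# Simulated fault signal: impulses every 50 ms (20 Hz fault rate) exciting
# a 3 kHz resonance, buried in broadband noise
x = rng.normal(0, 1.0, t.size)
ring = np.exp(-np.arange(300) / 30.0) * np.sin(2 * np.pi * 3000 * np.arange(300) / fs)
for t0 in np.arange(0.005, 1.0, 0.05):
    n0 = int(t0 * fs)
    x[n0:n0 + 300] += 5 * ring

# Kurtogram-like scan: kurtosis of the band-pass-filtered signal per band
bands = [(500, 1500), (1500, 2500), (2500, 3500), (3500, 4500), (4500, 5500)]
kurts = []
for lo, hi in bands:
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    kurts.append(kurtosis(signal.sosfiltfilt(sos, x)))
best = bands[int(np.argmax(kurts))]

# Filter in the maximum-kurtosis band; read the fault rate off the envelope
sos = signal.butter(4, best, btype="bandpass", fs=fs, output="sos")
env = np.abs(signal.hilbert(signal.sosfiltfilt(sos, x)))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
spec = np.abs(np.fft.rfft(env - env.mean()))
sel = (freqs > 5) & (freqs < 500)
fault_freq = freqs[sel][np.argmax(spec[sel])]
print(best, round(fault_freq, 1))
```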

  19. A Semi-automated Vector Migration Tool Based on Road Feature Extraction from High Resolution Imagery

    Science.gov (United States)

    Haithcoat, T. L.; Song, W.

    2001-05-01

    A major stumbling block to the integration of remotely sensed data into existing GIS database structures is the positional accuracy of the existing line-work within the vector database. This inaccuracy manifests itself when the data are overlain on more positionally consistent imagery. In the example case presented in this paper, the parcel map had a variable accuracy of up to plus or minus 40 ft once the various parcel map tiles were combined. This is the result of data historically being built by hand and remaining un-edgematched between tiles within a mylar mapping system. The investment to convert this base map (the only one widely used) was made, and the sheets were scanned and vectorized by the private sector, which very accurately reproduced the inherent errors of this mapping approach. With the incorporation of GPS and the associated problems of edgematching the tiles into a seamless database, the local government consortium was stymied. This led to the development of an image-based reference for these data layers from the existing DOQQs (1995 vintage) and 1 m Pan IKONOS imagery. A process was developed that uses road features extracted from these imagery sources, as well as road intersections derived from within the parcel map layer, to create a continuum of linearized adjustments. The parcel linework is then degenerated into points and topological relationships, and the positional locations are altered based on the adjustment surface. Once adjusted, the linework is re-built and topology re-established on the adjusted layer. This tool can assist counties and cities in migrating their vector data to the image base while maintaining the integrity and the relative positional accuracy of the vector data.
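
    The "continuum of linearized adjustments" can be illustrated as a displacement surface interpolated from matched control points (e.g. road intersections found in both the parcel vectors and the imagery). The coordinates below are invented, and SciPy's griddata stands in for the tool's own adjustment model.

```python
import numpy as np
from scipy.interpolate import griddata

# Matched control points: intersections located both in the parcel vectors
# (src) and in the road features extracted from imagery (dst); all invented
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
dst = src + np.array([[1.0, 0.5], [0.8, 0.4], [1.2, 0.6], [0.9, 0.3], [1.0, 0.5]])
shifts = dst - src  # displacement observed at each control point

def adjust(points):
    """Warp parcel vertices toward the image base using a displacement
    surface linearly interpolated from the control-point shifts."""
    dx = griddata(src, shifts[:, 0], points, method="linear")
    dy = griddata(src, shifts[:, 1], points, method="linear")
    return points + np.column_stack([dx, dy])

# Degenerate some parcel linework into points, adjust, then rebuild linework
parcel_vertices = np.array([[2.0, 3.0], [7.5, 2.5], [5.0, 8.0]])
adjusted = adjust(parcel_vertices)
print(np.round(adjusted, 2))
```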

  20. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    Science.gov (United States)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
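
    A minimal sketch of the DWT-based pipeline, a decimated Haar transform, selection of high-power coefficients, then linear regression on the reduced features, is given below on simulated single-trial data; the Haar wavelet, the 3 Hz component and the choice of 8 coefficients are assumptions, not the study's exact setup.

```python
import numpy as np

def haar_dwt(x, levels):
    """Decimated Haar DWT: returns detail coefficients plus final approximation."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

# Simulated single-trial "ERPs": a slow 3 Hz component whose amplitude drives
# a behavioral score, plus noise (256 samples at 256 Hz, 100 trials)
rng = np.random.default_rng(10)
fs, n = 256, 256
t = np.arange(n) / fs
amps = rng.uniform(0.5, 2.0, size=100)  # latent amplitudes -> performance
erps = amps[:, None] * np.sin(2 * np.pi * 3 * t) + rng.normal(0, 0.5, (100, n))
score = 2.0 * amps + rng.normal(0, 0.05, 100)

# Feature extraction: keep only the highest-power DWT coefficients
feats = np.array([np.concatenate(haar_dwt(e, 5)) for e in erps])
power = (feats ** 2).mean(axis=0)
top = np.argsort(power)[-8:]  # 8 high-power coefficients vs. 256 raw samples
X = feats[:, top]

# Linear regression on the reduced features
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, score, rcond=None)
r = np.corrcoef(A @ w, score)[0, 1]
print(round(r, 2))
```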

  1. EVALUATION OF THE IMPACT OF THE ECKLONIA MAXIMA EXTRACT ON SELECTED MORPHOLOGICAL FEATURES OF YELLOW PINE, PRICKLY SPRUCE AND THUJA

    Directory of Open Access Journals (Sweden)

    Jacek Sosnowski

    2016-07-01

    Full Text Available The study focused on the impact of an extract of Ecklonia maxima on selected morphological features of yellow pine (Pinus ponderosa Dougl. ex C. Lawson), prickly spruce (Picea pungens Engelm. variety Glauca) and thuja (Thuja occidentalis variety Smaragd). The experiment was established on April 12, 2012 in the forest nursery in Ceranów. On April 15, 2013, the research agent was introduced in the form of spraying with an aqueous solution of the Ecklonia maxima extract sold under the trade name Kelpak SL. The biologically active compounds in the extract are the plant hormones auxin and cytokinin. Increments in plant height, in needle length of yellow pine, and in twig length of prickly spruce and thuja were studied. The measurements of increments in twig and needle length were made in each case on the same, specially marked parts of the plants, and were carried out on the 27th of each month, beginning in May and ending in September. The results were evaluated statistically using analysis of variance. Mean differences were verified by Tukey's test at a significance level of p ≤ 0.05. The study showed that the variation of the features in the experiment depended on the extract, the tree species and the measurement time. The best results after applying the extract were shown by pine and spruce. The seaweed preparation contributed to an increased height increment in pine and spruce, as well as increased needle length in pine and twig length in spruce. The species showing no reaction to the extract was thuja.

  2. Extraction and Recognition of Nonlinear Interval-Type Features Using Symbolic KDA Algorithm with Application to Face Recognition

    Directory of Open Access Journals (Sweden)

    P. S. Hiremath

    2008-01-01

    recognition in the framework of symbolic data analysis. Classical KDA extracts features that are single-valued in nature to represent face images. These single-valued variables may not be able to capture the variation of each feature across all the images of the same subject, which leads to loss of information. The symbolic KDA algorithm extracts the most discriminating nonlinear interval-type features, which optimally discriminate among the classes represented in the training set. The proposed method has been successfully tested for face recognition using two databases, the ORL database and the Yale face database. The effectiveness of the proposed method is shown in terms of comparative performance against popular face recognition methods such as the kernel Eigenface method and the kernel Fisherface method. Experimental results show that symbolic KDA yields an improved recognition rate.

  3. Automated Feature Extraction by Combining Polarimetric SAR and Object-Based Image Analysis for Monitoring of Natural Resource Exploitation

    Science.gov (United States)

    Plank, Simon; Mager, Alexander; Schoepfer, Elizabeth

    2015-04-01

    An automated feature extraction procedure based on the combination of a pixel-based unsupervised classification of polarimetric synthetic aperture radar data (co-co dual-polarimetric TerraSAR-X) and an object-based post-classification is presented. The former is based on the entropy/alpha decomposition and the subsequent unsupervised Wishart classification, while the latter additionally considers feature properties such as shape and area. The feature extraction procedure is developed for monitoring oil field infrastructure. For developing countries, several studies have reported a high correlation between dependence on oil exports and violent conflict. Consequently, to support problem solving, independent monitoring of oil field infrastructure by Earth observation is proposed.

  4. Rapid Mass Spectrometric Analysis of a Novel Fucoidan, Extracted from the Brown Alga Coccophora langsdorfii

    Directory of Open Access Journals (Sweden)

    Stanislav D. Anastyuk

    2014-01-01

    Full Text Available The novel highly sulfated (35%) fucoidan fraction Cf2, which contained, along with fucose, galactose and traces of xylose and uronic acids, was purified from the brown alga Coccophora langsdorfii. Its structural features were determined (by comparison with fragments of known structure) mainly through a rapid mass spectrometric investigation of the low-molecular-weight fragments obtained by “mild” (5 mg/mL) and “exhaustive” (maximal concentration) autohydrolysis. Tandem matrix-assisted laser desorption/ionization mass spectra (MALDI-TOF/TOF MS) of fucooligosaccharides with an even degree of polymerization (DP), obtained by “mild” autohydrolysis, were the same as those observed for fucoidan from Fucus evanescens, which has a backbone of alternating (1→3)- and (1→4)-linked α-L-Fucp residues, sulfated at C-2 and sometimes at C-4 of the 3-linked residues. Fragmentation patterns of oligosaccharides with odd DP indicated sulfation at C-2 and at C-4 of (1→3)-linked α-L-Fucp residues at the reducing terminus. Minor sulfation at C-3 was also suggested. The “exhaustive” autohydrolysis allowed us to observe “mixed” oligosaccharides built up of fucose/xylose and fucose/galactose. Xylose residues were found to occupy both the reducing and nonreducing termini of FucXyl disaccharides. Nonreducing galactose residues in GalFuc disaccharides were found to be linked to fucose residues, possibly by a (1→2)-type linkage, and to be sulfated, most likely at position C-2.

  5. Ball mill assisted rapid mechanochemical extraction method for natural products from plants.

    Science.gov (United States)

    Wang, Man; Bi, Wentao; Huang, Xiaohua; Chen, David Da Yong

    2016-06-03

    A ball mill assisted mechanochemical extraction method was developed to extract natural product (NP) compounds from plants using an ionic liquid (IL). A small-volume ball mill, also known as a FastPrep® homogenizer, which is often used for high-speed lysis of biological samples among other applications, was used to dramatically increase the speed, completeness and reproducibility of the extraction process at room temperature, preserving the chemical integrity of the extracted compounds. In this study, tanshinones were selected as target compounds to evaluate the performance of the extraction method. Factors affecting the extraction efficiency, such as duration, IL concentration and solid/liquid ratio, were systematically optimized using response surface methodology. Under the optimized conditions, the described method was more efficient and much faster than conventional extraction methods such as methanol-based ultrasound assisted extraction (UAE) and heat reflux extraction (HRE), which consume far more organic solvent. In addition, the natural products of interest were enriched by anion metathesis of the ionic liquids, combining extraction and preconcentration in the same process. The extractant was analyzed by HPLC and LC-MS. The reproducibility (RSD, n = 5), correlation coefficient (r²) of the calibration curve, and limit of detection were determined to be in the ranges of 4.7-5.2%, 0.9992-0.9995, and 20-51 ng/mL, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. The BUME method: a new rapid and simple chloroform-free method for total lipid extraction of animal tissue

    Science.gov (United States)

    Löfgren, Lars; Forsberg, Gun-Britt; Ståhlman, Marcus

    2016-06-01

    In this study we present a simple and rapid method for tissue lipid extraction. Snap-frozen tissue (15-150 mg) is collected in 2 ml homogenization tubes. 500 μl of BUME mixture (butanol:methanol [3:1]) is added, and automated homogenization of up to 24 frozen samples at a time is performed in less than 60 seconds, followed by a 5-minute single-phase extraction. After the addition of 500 μl heptane:ethyl acetate (3:1) and 500 μl 1% acetic acid, a 5-minute two-phase extraction is performed. Lipids are recovered from the upper phase by automated liquid handling using a standard 96-tip robot. A second two-phase extraction is performed using 500 μl heptane:ethyl acetate (3:1). Validation of the method showed that the extraction recoveries for the investigated lipids, which included sterols, glycerolipids, glycerophospholipids and sphingolipids, were similar to or better than those of the Folch method. We also applied the method to lipid extraction of liver and heart and compared the lipid species profiles with profiles generated after Folch and MTBE extraction. We conclude that the BUME method is superior to the Folch method in terms of simplicity, throughput, automation, solvent consumption, economy, health and environment, yet delivers lipid recoveries fully comparable to or better than those of the Folch method.

  7. Wearable Sensor-Based Human Activity Recognition Method with Multi-Features Extracted from Hilbert-Huang Transform

    Directory of Open Access Journals (Sweden)

    Huile Xu

    2016-12-01

    Full Text Available Wearable sensor-based human activity recognition enables many useful applications and services in health care, rehabilitation training, elderly monitoring and many other areas of human interaction. Existing works in this field mainly focus on recognizing activities using traditional features extracted with the Fourier transform (FT) or wavelet transform (WT). However, these signal processing approaches are suitable for linear signals but not for nonlinear signals. In this paper, we investigate the characteristics of the Hilbert-Huang transform (HHT) for dealing with activity data that are nonlinear and non-stationary. A multi-feature extraction method based on HHT is then proposed to improve activity recognition. The extracted multi-features include instantaneous amplitude (IA) and instantaneous frequency (IF) obtained by means of empirical mode decomposition (EMD), as well as instantaneous energy density (IE) and marginal spectrum (MS) derived from Hilbert spectral analysis. Experimental studies are performed to verify the proposed approach using the PAMAP2 dataset from the University of California, Irvine repository for wearable sensor-based activity recognition. Moreover, the effect of combining multiple features versus using a single feature is investigated and discussed in the subject-dependent scenario. The experimental results show that combining multiple features further improves the performance measures. Finally, we test the effect of the multi-feature combination in the subject-independent scenario. Our experimental results show recall, precision, F-measure, and accuracy of 0.9337, 0.9417, 0.9353, and 0.9377 respectively, all better than the results of related works.
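
    The IA/IF computation at the core of such HHT pipelines can be sketched as follows (a hedged illustration that skips the EMD step and applies the Hilbert transform directly to a synthetic narrowband signal; the signal parameters are made up, not taken from the paper):

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                        # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
# Amplitude-modulated 5 Hz tone standing in for one intrinsic mode function
x = (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * 5 * t)

analytic = hilbert(x)                     # analytic signal x + j*H{x}
ia = np.abs(analytic)                     # instantaneous amplitude (IA)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency (IF), Hz
ie = ia ** 2                              # instantaneous energy density (IE)
```

    In a full HHT pipeline, EMD would first split the raw accelerometer signal into intrinsic mode functions, and this computation would be applied to each of them.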

  8. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performance at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which can cause distrust in the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.
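
    The RBM-plus-Random-Forest pairing can be sketched with scikit-learn (a toy stand-in: synthetic binary data replaces imaging features, and only the plain Random Forest feature importances are shown, not the paper's joint interpretability strategy):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-in for imaging data: 200 binary "voxel" vectors, 2 classes
X = (rng.random((200, 64)) < 0.3).astype(float)
y = (X[:, :8].sum(axis=1) > 2).astype(int)   # label depends on first 8 inputs

rbm = BernoulliRBM(n_components=16, learning_rate=0.05,
                   n_iter=20, random_state=0)
H = rbm.fit_transform(X)                     # unsupervised hidden-unit features

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(H, y)
# Per-feature importances give a first, global level of interpretation
importances = rf.feature_importances_
```

    Inspecting which learned RBM features the forest relies on is the simplest version of the global interpretation level described in the abstract.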

  9. Extracting salient features for network intrusion detection using machine learning methods

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2014-06-01

    Full Text Available This work presents a data preprocessing and feature selection framework to support data mining and network security experts in minimal feature set selection from intrusion detection data. This process is supported by detailed visualisation and examination of class distributions. Distribution histograms, scatter plots and information gain are presented as supportive feature reduction tools. The feature reduction process applied is based on decision tree pruning and backward elimination. This paper starts with an analysis of the KDD Cup '99 datasets and their potential for feature reduction. The dataset consists of connection records with 41 features whose relevance for intrusion detection is not clear. All traffic is either classified `normal' or into the four attack types denial-of-service, network probe, remote-to-local or user-to-root. Using our custom feature selection process, we show how we can significantly reduce the number of features in the dataset to a few salient features. We conclude by presenting minimal sets with 4--8 salient features for two-class and multi-class categorisation for detecting intrusions, as well as for the detection of individual attack classes; the performance using a static classifier compares favourably to the performance using all available features. The suggested process is of a general nature and can be applied to any similar dataset.
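
    The information-gain ranking used as a supportive tool can be sketched as follows (synthetic binary features stand in for KDD Cup '99 connection-record attributes; the decision-tree pruning and backward elimination stages are not shown):

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a discrete label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y):
    """Information gain of a discrete feature x with respect to labels y."""
    gain = entropy(y)
    for v in np.unique(x):
        mask = x == v
        gain -= mask.mean() * entropy(y[mask])
    return gain

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)                    # 'normal' vs 'attack'
flip = (rng.random(500) < 0.1).astype(int)
informative = y ^ flip                         # mostly tracks the label
noise = rng.integers(0, 2, 500)                # irrelevant feature
```

    Ranking features by `information_gain` and discarding those near zero is one simple way to arrive at a small salient subset.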

  10. A Measurement Method for Large Parts Combining with Feature Compression Extraction and Directed Edge-Point Criterion

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2016-12-01

    Full Text Available High-accuracy surface measurement of large aviation parts is an important guarantee of high-quality aircraft assembly. Boundary measurement results are a significant parameter in aviation-part measurement. This paper proposes a method for accurately measuring the surface and boundary of aviation parts using feature compression extraction and a directed edge-point criterion. To improve the measurement accuracy of both the surface and boundary of large parts, global boundary extraction and local stripe feature analysis are combined. The center feature of the laser stripe is obtained with high accuracy and little computation using a sub-pixel centroid extraction method based on compression processing. This method consists of an image compression step and a judgment criterion for laser stripe centers. An edge-point extraction method based on a directed arc-length criterion is proposed to obtain an accurate boundary. Finally, a high-precision reconstruction of an aerospace part is achieved. Experiments were performed both in a laboratory and in an industrial field. The physical measurements validate that the mean distance deviation of the proposed method is 0.47 mm. The results of the field experiment show the validity of the proposed method.
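
    The column-wise sub-pixel centroid idea behind the stripe-center step can be sketched as follows (a plain intensity centroid on a synthetic Gaussian stripe; the paper's compression processing and directed edge-point criterion are omitted, and the threshold value is illustrative):

```python
import numpy as np

def stripe_centers(img, threshold=10.0):
    """Sub-pixel laser-stripe center per column via intensity centroid.

    img: 2D array (rows x cols) of grey levels with one bright stripe.
    Returns an array of row coordinates (NaN where no stripe is found).
    """
    rows = np.arange(img.shape[0], dtype=float)[:, None]
    w = np.where(img > threshold, img, 0.0)       # suppress background
    mass = w.sum(axis=0)
    centers = np.full(img.shape[1], np.nan)
    ok = mass > 0
    centers[ok] = (rows * w).sum(axis=0)[ok] / mass[ok]
    return centers

# Synthetic stripe with a Gaussian profile centered at row 20.3
r = np.arange(50, dtype=float)[:, None]
img = 255.0 * np.exp(-((r - 20.3) ** 2) / 4.0) * np.ones((1, 8))
c = stripe_centers(img)
```

    The centroid recovers the stripe position with sub-pixel precision even though the image is sampled on an integer grid.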

  11. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    Science.gov (United States)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors, and it is of great value to detect the resulting fault features automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform, which enables fault feature adaptability. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced because it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by maximization of the fault feature ratio, a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without parameters being artificially specified. The proposed method is applied to two engineering cases, with signals collected from a wind turbine and a steel temper mill, to verify its effectiveness. The results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings than the Fourier transform, the direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.
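
    The Hilbert-demodulation step that ESW builds on can be sketched on a synthetic bearing-like signal (the fault and resonance frequencies here are made up for illustration; the TQWT optimization itself is not shown):

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
f_fault = 30.0          # hypothetical bearing fault frequency (Hz)
carrier = np.sin(2 * np.pi * 400 * t)                 # structural resonance
impacts = (1 + np.sign(np.sin(2 * np.pi * f_fault * t))) / 2
x = impacts * carrier + 0.1 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(x))                         # Hilbert demodulation
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spec)]                         # expected near f_fault
```

    The envelope spectrum exposes the low-frequency impact repetition rate that is hidden under the high-frequency resonance in the raw spectrum, which is the quantity ESW's fault feature ratio is built around.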

  12. Feature extraction using first and second derivative extrema (FSDE) for real-time and hardware-efficient spike sorting.

    Science.gov (United States)

    Paraskevopoulou, Sivylla E; Barsakcioglu, Deren Y; Saberi, Mohammed R; Eftekhar, Amir; Constandinou, Timothy G

    2013-04-30

    Next-generation neural interfaces aspire to achieve real-time multi-channel systems by integrating spike sorting on chip to overcome limitations in communication channel capacity. The feasibility of this approach relies on developing highly efficient algorithms for feature extraction and clustering with the potential for low-power hardware implementation. We propose a feature extraction method, requiring no calibration, based on first- and second-derivative features of the spike waveform. The accuracy and computational complexity of the proposed method are quantified and compared against commonly used feature extraction methods through simulation across four datasets (with different single units) at multiple noise levels (ranging from 5 to 20% of the signal amplitude). The average classification error is shown to be below 7% with a computational complexity of 2N-3, where N is the number of sample points of each spike. Overall, this method presents a good trade-off between accuracy and computational complexity and is thus particularly well suited for hardware-efficient implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
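
    A minimal sketch of derivative-extrema features (the exact feature set used in the paper may differ; the toy spike waveform is illustrative). Only neighbouring-sample differences are needed, which is what keeps the operation count near 2N-3 for an N-sample spike:

```python
import numpy as np

def fsde_features(spike):
    """First and second derivative extrema (FSDE) features of one spike."""
    d1 = np.diff(spike)          # first derivative, N-1 samples
    d2 = np.diff(d1)             # second derivative, N-2 samples
    return np.array([d1.max(), d1.min(), d2.max(), d2.min()])

# Toy spike: sharp depolarization followed by a slow recovery phase
t = np.linspace(0, 1, 48)
spike = (np.exp(-((t - 0.2) ** 2) / 0.002)
         - 0.4 * np.exp(-((t - 0.5) ** 2) / 0.02))
feats = fsde_features(spike)
```

    The resulting low-dimensional feature vectors would then be passed to a lightweight clustering stage for sorting.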

  13. Evaluation of feature extraction techniques on event-related potentials for detection of attention-deficit/hyperactivity disorder.

    Science.gov (United States)

    Castro-Cabrera, P; Gomez-Garcia, J; Restrepo, F; Moscoso, O; Castellanos-Dominguez, G

    2010-01-01

    Event-related potentials (ERPs) are one of the most informative and dynamic methods of monitoring cognitive processes, widely used in clinical research on a variety of psychiatric and neurological disorders such as attention-deficit/hyperactivity disorder (ADHD). This work proposes a feature extraction and selection methodology for discriminating between normal subjects and patients with ADHD using ERPs. Three different sets of features (morphological, wavelet-based, and nonlinear) are analyzed in search of the best classification accuracy. The results show that the wavelet features provide good discriminative capability, which improves further when all feature sets are combined and a feature selection algorithm is applied, reaching a maximum accuracy rate of 91.3%.

  14. A rapid and reliable procedure for extraction of cellular polyamines and inorganic ions from plant tissues

    Science.gov (United States)

    Rakesh Minocha; Walter C. Shortle; Stephanie L. Long; Subhash C. Minocha

    1994-01-01

    A fast and reliable method for the extraction of cellular polyamines and major inorganic ions (Ca, Mg, Mn, K, and P) from several plant tissues is described. The method involves repeated freezing and thawing of samples instead of homogenization. The efficiency of extraction of both the polyamines and inorganic ions by these two methods was compared for 10 different...

  15. A rapid and low-cost DNA extraction method for isolating ...

    African Journals Online (AJOL)

    The price of commercial DNA extraction methods makes the routine use of polymerase chain reaction (PCR) based methods rather costly for scientists in developing countries. A guanidinium thiocyanate-based DNA extraction method was investigated in this study for the isolation of Escherichia coli (E. coli) DNA ...

  16. Terra-Kleen Response Group, Inc. Solvent Extraction Technology Rapid Commercialization Initiative Report

    Science.gov (United States)

    Terra-Kleen Response Group Inc. (Terra-Kleen), has commercialized a solvent extraction technology that uses a proprietary extraction solvent to transfer organic constituents from soil to a liquid phase in a batch process at ambient temperatures. The proprietary solvent has a rel...

  17. UAS-SfM for coastal research: Geomorphic feature extraction and land cover classification from high-resolution elevation and optical imagery

    Science.gov (United States)

    Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel

    2017-01-01

    The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.

  18. Multi-step infrared macro-fingerprint features of ethanol extracts from different Cistanche species in China combined with HPLC fingerprint

    Science.gov (United States)

    Xu, Rong; Sun, Suqin; Zhu, Weicheng; Xu, Changhua; Liu, Yougang; Shen, Liang; Shi, Yue; Chen, Jun

    2014-07-01

    The genus Cistanche comprises four species in China: C. deserticola (CD), C. tubulosa (CT), C. salsa (CS) and C. sinensis (CSN), among which CD and CT are the official herbal sources of Cistanche Herba (CH). To clarify the sources of CH and ensure clinical efficacy and safety, a multi-step IR macro-fingerprint method was developed to analyze and evaluate ethanol extracts of the four species. With this method, the four species were clearly distinguished, and the main active components, phenylethanoid glycosides (PhGs), were estimated rapidly according to the fingerprint features in the original IR spectra, second derivative spectra, correlation coefficients and 2D-IR correlation spectra. The exclusive IR fingerprints, including the positions, shapes and numbers of peaks, indicated that the constituents of CD were the most abundant and that CT had the highest level of PhGs. The results deduced from the macroscopic features of the IR fingerprints agreed with the HPLC fingerprint of PhGs from the four species, though it should be noted that IR provided more chemical information than HPLC. In conclusion, with the advantages of high resolution, cost effectiveness and speed, the macroscopic IR fingerprint method is a promising analytical technique for discriminating extremely similar herbal medicines, monitoring and tracing the constituents of different extracts, and even quality control of complex systems such as TCM.

  19. UAS-SfM for Coastal Research: Geomorphic Feature Extraction and Land Cover Classification from High-Resolution Elevation and Optical Imagery

    Directory of Open Access Journals (Sweden)

    Emily J. Sturdivant

    2017-10-01

    Full Text Available The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.

  20. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    Science.gov (United States)

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.

  1. Towards a method of rapid extraction of strontium-90 from urine: urine pretreatment and alkali metal removal

    Energy Technology Data Exchange (ETDEWEB)

    Hawkins, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Dietz, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Kaminski, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Mertz, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Shkrob, I. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-01

    A technical program to support the Centers for Disease Control and Prevention is being developed to provide an analytical method for rapid extraction of Sr-90 from urine, with the intent of assessing the general population's exposure during an emergency response to a radiological terrorist event. Results are presented on progress in the urine sample preparation and chemical separation steps that provide accurate and quantitative detection of Sr-90 based upon an automated column separation sequence and a liquid scintillation assay. Batch extractions were used to evaluate the urine pretreatment and the column separation efficiency and loading capacity based upon commercial, extractant-loaded resins. An efficient pretreatment process for decolorizing and removing organics from urine without measurable loss of radiostrontium from the sample was demonstrated. In addition, the Diphonix® resin shows promise for the removal of high concentrations of common strontium interferents in urine as a first separation step for Sr-90 analysis.

  2. Extracted magnetic resonance texture features discriminate between phenotypes and are associated with overall survival in glioblastoma multiforme patients.

    Science.gov (United States)

    Chaddad, Ahmad; Tanougast, Camel

    2016-11-01

    Glioblastoma multiforme (GBM) is a markedly heterogeneous brain tumor consisting of three main volumetric phenotypes identifiable on magnetic resonance imaging: necrosis (vN), active tumor (vAT), and edema/invasion (vE). The goal of this study is to identify the three GBM phenotypes using a texture-based gray-level co-occurrence matrix (GLCM) approach and determine whether the texture features of the phenotypes are related to patient survival. MR imaging data from 40 GBM patients were analyzed. Phenotypes vN, vAT, and vE were segmented in a preprocessing step using 3D Slicer tools, after rigid registration of T1-weighted images and the corresponding fluid attenuation inversion recovery images. Texture features were extracted from the GLCM of each GBM phenotype, and the Kruskal-Wallis test was employed to select significant features. Robust predictive GBM features were identified and subjected to several classifier analyses to distinguish phenotypes. Kaplan-Meier analysis was also performed to determine the relationship, if any, between phenotype texture features and survival rate. The results showed that 22 texture features were significant (p < 0.05). GBM phenotype discrimination based on texture features achieved a best accuracy, sensitivity, and specificity of 79.31, 91.67, and 98.75%, respectively. Three texture features derived from the active tumor parts, difference entropy, information measure of correlation, and inverse difference, were statistically significant predictors of survival, with log-rank p values of 0.001, 0.001, and 0.008, respectively. Thus, among the 22 features examined, three texture features can predict overall survival for GBM patients, demonstrating the utility of GLCM analyses in both the diagnosis and prognosis of this patient population.
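
    The GLCM texture computation at the heart of such studies can be sketched in plain NumPy (a random array stands in for a segmented phenotype ROI; the offset, grey-level count and feature subset are illustrative):

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=8):
    """Normalized, symmetric gray-level co-occurrence matrix for one offset."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # count co-occurring pairs
    m = m + m.T                               # make symmetric
    return m / m.sum()

rng = np.random.default_rng(0)
roi = rng.integers(0, 8, (32, 32))            # stand-in for a phenotype ROI

p = glcm(roi)
k = np.arange(8)
i, j = np.meshgrid(k, k, indexing='ij')
contrast = np.sum(p * (i - j) ** 2)
# Difference entropy: entropy of the grey-level difference distribution
p_diff = np.array([p[np.abs(i - j) == d].sum() for d in k])
diff_entropy = -np.sum(p_diff[p_diff > 0] * np.log2(p_diff[p_diff > 0]))
```

    In practice one would average such features over several offsets and angles and then feed them to the statistical tests and classifiers the abstract describes.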

  3. A Novel Feature Extraction Approach Using Window Function Capturing and QPSO-SVM for Enhancing Electronic Nose Performance

    Directory of Open Access Journals (Sweden)

    Xiuzhen Guo

    2015-06-01

    Full Text Available In this paper, a novel feature extraction approach referred to as moving window function capturing (MWFC) is proposed to analyze signals of an electronic nose (E-nose) used for detecting types of infectious pathogens in rat wounds. Meanwhile, a quantum-behaved particle swarm optimization (QPSO) algorithm is implemented in conjunction with a support vector machine (SVM) to realize a synchronized optimization of the sensor array and the SVM model parameters. The results prove the efficacy of the proposed method for E-nose feature extraction, which leads to a higher classification accuracy rate than other established techniques. It is interesting to note that different classification results can be obtained by changing the types, widths or positions of the windows. By selecting the optimum window function for the sensor response, the performance of an E-nose can be enhanced.
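
    The window-capturing idea can be sketched as follows (the window positions, widths and summary statistics are illustrative choices; in the paper these are tuned by QPSO, which is not shown here):

```python
import numpy as np

def window_features(response, start, width):
    """Summarize one captured window of a sensor transient."""
    w = response[start:start + width]
    return np.array([w.max(), w.mean(), w.sum()])  # sum ~ area under window

t = np.linspace(0, 10, 500)
response = 1 - np.exp(-t / 2)            # idealized E-nose sensor transient
feats = np.concatenate([window_features(response, s, 50)
                        for s in (0, 150, 300)])   # three window positions
```

    Because different windows capture different phases of the transient, shifting or resizing them changes the feature vector and hence the downstream SVM classification, which is exactly the effect the abstract notes.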

  4. Entropy-based texture analysis and feature extraction of urban street trees in the spatial frequency domain

    Science.gov (United States)

    Zhao, Haohao; Feng, Xuezhi; Chen, Yan; Zhao, Shuhe; Xiao, Pengfeng

    2009-10-01

    A method of texture analysis and feature extraction of urban street trees in the spatial frequency domain is described in this paper. A QuickBird image of Nanjing acquired in July 2007 was used. The image was first transformed by the 2-D discrete Fourier transform, and the energy of each spatial-frequency component was calculated. Entropy within a 7×7 window was used to evaluate the energy distribution of the image. A Gabor filter was designed to extract texture features of street trees using the radius and angle information of the entropy image. The precision of the segmentation result is 79.96%. An odd Gabor filter was designed to detect the edges of street trees, and the experimental results were excellent.
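
    The 2-D DFT energy and windowed-entropy steps can be sketched like this (a random array stands in for the QuickBird band; the Gabor filtering stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                    # stand-in for one image band

# Energy of the spatial-frequency components via the 2-D DFT
F = np.fft.fftshift(np.fft.fft2(img))
energy = np.abs(F) ** 2

def local_entropy(e, win=7):
    """Entropy of the energy distribution inside each win x win window."""
    h = win // 2
    out = np.zeros_like(e)
    for r in range(h, e.shape[0] - h):
        for c in range(h, e.shape[1] - h):
            block = e[r - h:r + h + 1, c - h:c + h + 1]
            p = block / block.sum()
            out[r, c] = -np.sum(p * np.log2(p + 1e-12))
    return out

ent = local_entropy(energy)
```

    The resulting entropy image summarizes how evenly spectral energy is spread locally; its radius and angle structure is what the paper's Gabor filters are then tuned to.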

  5. On the use of LDA performance as a metric of feature extraction methods for a P300 BCI classification task

    Science.gov (United States)

    Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Medina Bañuelos, Verónica; Rufiner, Leonardo

    2011-12-01

    Brain-computer interfaces (BCIs) translate brain activity into computer commands. To enhance the performance of a BCI, it is necessary to improve the feature extraction techniques applied to decode the user's intentions. Objective comparison methods are needed to analyze different feature extraction techniques; one possibility is to use classifier performance as the comparative measure. In this work, the effect of several variables on the behaviour of linear discriminant analysis (LDA) was studied when LDA is used to distinguish between electroencephalographic signals with and without event-related potentials (ERPs). The error rate (ER) and the area under the receiver operating characteristic curve (AUC) were used as performance estimators of LDA. The results show that the number of features, the degree of balance of the training set and the number of averaged trials affect the classifier's performance and must therefore be considered in the design of the integrated system.
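    A minimal sketch of using classifier performance as the metric, assuming a two-class Fisher LDA and the rank-sum formulation of the AUC (the paper's exact LDA variant and estimator details are not given in the abstract):

    ```python
    import numpy as np

    def lda_scores(Xtr, ytr, Xte):
        """Two-class Fisher LDA projection: w = Sw^-1 (mu1 - mu0)."""
        mu0 = Xtr[ytr == 0].mean(axis=0)
        mu1 = Xtr[ytr == 1].mean(axis=0)
        Sw = np.cov(Xtr[ytr == 0], rowvar=False) + np.cov(Xtr[ytr == 1], rowvar=False)
        # small ridge term for numerical stability (an assumption)
        w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
        return Xte @ w

    def auc(scores, y):
        """AUC via the Mann-Whitney rank-sum statistic (no ties assumed)."""
        ranks = np.empty(len(scores))
        ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
        n1, n0 = (y == 1).sum(), (y == 0).sum()
        return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
    ```

    Re-running this with different feature counts, class balances, or numbers of averaged trials is the kind of comparison the abstract describes.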

  6. A System with Intelligent Editing for Extracting Ridge and Ravine Terrain Features

    National Research Council Canada - National Science Library

    Schmidt, Greg; Swan, J. E., II; Rosenblum, Lawrence; Tomlin, Erik B; Overby, Derek

    2005-01-01

    We describe a system for extracting ridges and ravines from elevation data. The application context is a map-based military planning tool, which allows users to select ridges and ravines by simple mouse clicks...

  7. Semi-automated identification and extraction of geomorphological features using digital elevation data

    NARCIS (Netherlands)

    Seijmonsbergen, A.C.; Hengl, T.; Anders, N.S.; Smith, M.J.; Paron, P.; Griffiths, J.S.

    2011-01-01

    Geomorphological maps that are automatically extracted from digital elevation data are gradually replacing classical geomorphological maps. Commonly, digital mapping projects are based upon statistical techniques, object-based protocols or both. In addition to digital elevation data, expert

  8. Segmentation-based filtering and object-based feature extraction from airborne LiDAR point cloud data

    Science.gov (United States)

    Chang, Jie

    Three-dimensional (3D) information about ground and above-ground features such as buildings and trees is important for many urban and environmental applications. Recent developments in Light Detection And Ranging (LiDAR) technology provide promising alternatives to conventional techniques for acquiring such information. The focus of this dissertation research is to effectively and efficiently filter massive airborne LiDAR point cloud data and to extract the main above-ground features, such as buildings and trees, in urban areas. A novel segmentation algorithm for point cloud data, the 3D k mutual nearest neighborhood (kMNN) segmentation algorithm, was developed by improving the kMNN clustering algorithm to use distances in 3D space to define mutual nearest neighborhoods. A set of optimization strategies, including dividing the dataset into multiple blocks and small grids and applying distance thresholds in x and y, was implemented to improve the efficiency of the segmentation algorithm. A segmentation-based filtering method was then employed to filter the generated segments: it first generates segment boundaries using Voronoi polygon and dissolve operations, and then labels the segments as ground and above-ground based on their size and relati
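    The mutual-nearest-neighborhood idea can be sketched as follows; this is a naive O(n²) illustration assuming a mutual-kNN graph joined by union-find, not the blocked and gridded implementation the dissertation describes:

    ```python
    import numpy as np

    def kmnn_segments(points, k=5, max_dist=2.0):
        """Label points by connected components of the mutual-kNN graph,
        keeping only edges shorter than max_dist (distances in 3D space)."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        knn = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors of each point
        parent = list(range(n))              # union-find forest
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i in range(n):
            for j in knn[i]:
                if i in knn[j] and d[i, j] <= max_dist:  # mutual neighbors only
                    parent[find(i)] = find(int(j))
        return np.array([find(i) for i in range(n)])
    ```

    Points whose neighborhood relation is not mutual, or whose separation exceeds the threshold, fall into different segments, which is what makes the subsequent segment-based ground/above-ground labeling possible.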