WorldWideScience

Sample records for automatable method extract

  1. Automatic segmentation of brain images: selection of region extraction methods

    Science.gov (United States)

    Gong, Leiguang; Kulikowski, Casimir A.; Mezrich, Reuben S.

    1991-07-01

    In automatically analyzing brain structures from an MR image, the choice of low-level region extraction methods depends on the characteristics of both the target object and the surrounding anatomical structures in the image. The authors have experimented with local thresholding, global thresholding, and other techniques, using various types of MR images for extracting the major brain landmarks and different types of lesions. This paper describes specifically a local-binary thresholding method and a new global-multiple thresholding technique developed for MR image segmentation and analysis. The initial testing results on their segmentation performance are presented, followed by a comparative analysis of the two methods and their ability to extract different types of normal and abnormal brain structures -- the brain matter itself, tumors, regions of edema surrounding lesions, multiple sclerosis lesions, and the ventricles of the brain. The analysis and experimental results show that the global multiple thresholding techniques are more than adequate for extracting regions that correspond to the major brain structures, while local binary thresholding is helpful for more accurate delineation of small lesions such as those produced by MS, and for the precise refinement of lesion boundaries. The detection of other landmarks, such as the interhemispheric fissure, may require other techniques, such as line-fitting. These experiments have led to the formulation of a set of generic computer-based rules for selecting the appropriate segmentation packages for particular types of problems, on the basis of which an innovative knowledge-based, goal-directed biomedical image analysis framework is being developed. The system will carry out the selection automatically for a given specific analysis task.
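    A minimal sketch of the two thresholding styles contrasted above, using scikit-image as a stand-in for the authors' implementation (the function choices and parameter values are illustrative assumptions, not taken from the 1991 paper):

    ```python
    # Sketch: global multiple thresholding vs. local binary thresholding.
    # Illustration only; not the authors' original implementation.
    import numpy as np
    from skimage import data
    from skimage.filters import threshold_multiotsu, threshold_local

    image = data.camera()  # stand-in for an MR slice

    # Global multiple thresholding: partition intensities into several classes,
    # suitable for large structures (brain matter, ventricles).
    thresholds = threshold_multiotsu(image, classes=3)
    regions = np.digitize(image, bins=thresholds)

    # Local binary thresholding: a per-neighborhood threshold, suitable for
    # delineating small lesions and refining boundaries.
    local_thresh = threshold_local(image, block_size=35, offset=5)
    binary_local = image > local_thresh
    ```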

  2. Automatic extraction of candidate nomenclature terms using the doublet method

    Directory of Open Access Journals (Sweden)

    Berman Jules J

    2005-10-01

    Results: A 31+ megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with a CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. Conclusion: The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
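    The doublet idea described above reduces to a few lines: collect every adjacent word pair (doublet) from the reference nomenclature, then accept a phrase as a candidate term when all of its doublets are already known. The following is an illustrative Python reduction of the method, not the Perl implementation distributed with the article:

    ```python
    # Minimal sketch of the doublet method: a candidate phrase qualifies if
    # every adjacent word pair it contains occurs in the reference nomenclature.
    def doublets(words):
        return set(zip(words, words[1:]))

    def build_doublet_set(nomenclature_terms):
        ref = set()
        for term in nomenclature_terms:
            ref |= doublets(term.lower().split())
        return ref

    def is_candidate(phrase, ref_doublets):
        words = phrase.lower().split()
        return len(words) > 1 and doublets(words) <= ref_doublets

    ref = build_doublet_set(["malignant fibrous histiocytoma", "fibrous dysplasia"])
    print(is_candidate("malignant fibrous dysplasia", ref))  # True: both doublets known
    ```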

  3. A Semi-automatic Method Based on Statistic for Mandarin Semantic Structures Extraction in Specific Domains

    Institute of Scientific and Technical Information of China (English)

    熊英; 朱杰; 孙静

    2004-01-01

    This paper proposes a new method for the semi-automatic extraction of semantic structures from unlabelled corpora in specific domains. The approach is statistical in nature. The extracted structures can be used for shallow parsing and semantic labeling. By iteratively extracting new words and clustering words, we obtain an initial semantic lexicon that groups words of the same semantic meaning together as a class. After that, a bootstrapping algorithm is adopted to extract semantic structures. The semantic structures are then used to extract new key words and augment the semantic lexicon. The resulting semantic structures are interpreted by human reviewers and are amenable to hand-editing for refinement. In the experiments, the semi-automatically extracted structures (SSA) achieve a recall rate of 84.5%.

  4. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    Science.gov (United States)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. The two components of each individual tree - a trunk and a crown - can be extracted by the dual growing method. This method consists of coarse classification, through which most artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the common manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown in constrained growing regions; and a refining process that draws a singular trunk out of the other objects interlaced with it. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.

  5. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.
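    For illustration, a generic single presentation step of the Fuzzy ART algorithm mentioned above (complement coding, choice function, vigilance test, learning); the parameter values and the list-of-arrays representation are assumptions, not the paper's implementation:

    ```python
    # Sketch of one Fuzzy ART input presentation. weights is a list of category
    # weight vectors; rho is vigilance, alpha the choice parameter, beta learning rate.
    import numpy as np

    def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
        """I: input vector scaled to [0,1]^d."""
        I = np.concatenate([I, 1.0 - I])  # complement coding
        # visit categories in decreasing order of the choice function
        # T_j = |I ^ w_j| / (alpha + |w_j|), emulating ART's search-and-reset
        for w in sorted(weights, key=lambda w: -np.minimum(I, w).sum() / (alpha + w.sum())):
            if np.minimum(I, w).sum() / I.sum() >= rho:  # vigilance test
                w[:] = beta * np.minimum(I, w) + (1 - beta) * w  # resonance: learn
                return w
        weights.append(I.copy())  # no category resonates: commit a new one
        return weights[-1]
    ```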

  6. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.

  7. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text and in a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are taken as the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction.
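    The shuffle-and-compare idea can be sketched as follows. This illustration scores a word by the change in a box-counting dimension estimate of its occurrence positions; it conveys the idea but differs in detail from the paper's index:

    ```python
    # Sketch: estimate a box-counting dimension for one word type's positions,
    # in the original word array and in a shuffled copy, and score the word by
    # the difference. Scales and the scoring detail are illustrative assumptions.
    import random
    import numpy as np

    def box_dimension(positions, n_words, scales=(2, 4, 8, 16, 32)):
        counts = []
        for s in scales:
            boxes = set(p * s // n_words for p in positions)  # occupied boxes at scale s
            counts.append(len(boxes))
        slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
        return slope

    def degree_of_fractality(words, target):
        pos = [i for i, w in enumerate(words) if w == target]
        if len(pos) < 3:
            return 0.0
        shuffled = words[:]
        random.shuffle(shuffled)
        pos_sh = [i for i, w in enumerate(shuffled) if w == target]
        return abs(box_dimension(pos, len(words)) - box_dimension(pos_sh, len(words)))
    ```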

  8. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Full Text Available Abstract Background: Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this information efficiently. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results: We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions: We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
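    As an illustration of phase (2), a simple recognizer for candidate primer/probe sequences: the paper uses finite state machines, and a regular expression is an equivalent formalism for this kind of pattern (the length cutoff below is an assumption):

    ```python
    # Sketch of a candidate DNA-sequence recognizer: runs of nucleotide codes
    # (A/C/G/T/U plus IUPAC ambiguity letters), at least 15 letters, word-bounded.
    import re

    CANDIDATE = re.compile(r"\b[ACGTURYSWKMBDHVN]{15,}\b")

    text = "The forward primer 5'-ACGTGCTAGCTAGGCTAAC-3' amplified the target."
    print(CANDIDATE.findall(text.upper()))  # ['ACGTGCTAGCTAGGCTAAC']
    ```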

  9. An automatic recognition and parameter extraction method for structural planes in borehole image

    Science.gov (United States)

    Wang, Chuanying; Zou, Xianjian; Han, Zengqiang; Wang, Yiteng; Wang, Jinchao

    2016-12-01

    As a breakthrough in borehole imaging technology, digital panoramic borehole camera technology has been widely employed. High-resolution panoramic borehole images can accurately reproduce the geometric features of structural planes. However, the detection of these features is usually done manually, which is time-consuming and introduces human error. To solve this problem, this paper presents a method for the automatic recognition and parameter extraction of structural planes in borehole camera images. In this method, the image's gray and gradient levels, and their projections onto the depth axis, are used to identify the locations of structural planes. Afterwards, iterative matching with a sinusoidal-function template is employed to search for structural planes in the identified image blocks. Finally, optimal sine curves are selected as the feature curves of the structural planes, and their parameters are converted into the structural plane parameters required in engineering, such as position, dip direction, dip angle and fracture width. The method can automatically identify all structural planes throughout the whole borehole camera image in a continuous and rapid manner, and obtain the corresponding structural parameters. It has proven highly reliable, accurate and efficient.
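    The template-matching step can be illustrated as follows: in an unrolled borehole image, a planar fracture traces a sinusoid, so fitting depth(theta) = d + A*sin(theta + phi) yields the plane parameters. The borehole radius and sign conventions below are assumptions for illustration, not the paper's values:

    ```python
    # Sketch: fit a sinusoid to a fracture trace, then convert amplitude and
    # phase to dip angle and dip direction. r is an assumed borehole radius (m).
    import numpy as np
    from scipy.optimize import curve_fit

    def sine(theta, d, A, phi):
        return d + A * np.sin(theta + phi)

    def plane_parameters(theta, depth, r=0.05):
        (d, A, phi), _ = curve_fit(sine, theta, depth, p0=[depth.mean(), 1e-3, 0.0])
        dip_angle = np.degrees(np.arctan(abs(A) / r))
        # the deepest point of the trace points toward the dip direction
        dip_direction = np.degrees((np.pi / 2 - phi) % (2 * np.pi))
        return d, dip_direction, dip_angle
    ```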

  10. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [Los Alamos National Laboratory; Soille, Pierre [EC - JRC

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images - in a robust way. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Geodesic propagation from a given seed with this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
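    The directional ingredient - the eigen-decomposition of the gradient structure tensor - can be sketched with scikit-image as a stand-in for the authors' code; the smoothing scale is illustrative:

    ```python
    # Sketch: structure tensor and its eigenvalues; the eigenvector of the
    # smaller eigenvalue gives the local line direction, and the eigenvalue
    # gap measures anisotropy, from which an anisotropic metric can be built.
    from skimage import data
    from skimage.feature import structure_tensor, structure_tensor_eigenvalues

    image = data.camera().astype(float)  # stand-in for a satellite/retina image
    Axx, Axy, Ayy = structure_tensor(image, sigma=2.0)
    eig1, eig2 = structure_tensor_eigenvalues([Axx, Axy, Ayy])  # eig1 >= eig2
    anisotropy = (eig1 - eig2) / (eig1 + eig2 + 1e-12)  # high along thin lines
    ```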

  11. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonality condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.

  12. A multi-scale method for automatically extracting the dominant features of cervical vertebrae in CT images

    Directory of Open Access Journals (Sweden)

    Tung-Ying Wu

    2013-07-01

    Full Text Available Localization of the dominant points of cervical spines in medical images is important for improving medical automation in clinical head and neck applications. In order to automatically identify the dominant points of cervical vertebrae in neck CT images with precision, we propose a method based on multi-scale contour analysis to analyze the deformable shape of the spines. To extract the spine contour, we introduce a method to automatically generate the initial contour of the spine shape, from which the distance field for level set active contour iterations can also be deduced. In the shape analysis stage, we first coarsely segment the extracted contour at zero-crossing points of the curvature, with the spine shape modeled by curvature scale space analysis. Then, each segmented curve is analyzed geometrically based on the turning angle property at different scales, and the local extreme points are extracted and verified as the dominant feature points. The vertices of the shape contour are approximately derived by the analysis at coarse scale and then adjusted precisely at fine scale. The experimental results show that we achieve a success rate of 93.4% and an accuracy of 0.37 mm compared with manual results.

  13. Image mining and Automatic Feature extraction from Remotely Sensed Image (RSI) using Cubical Distance Methods

    Directory of Open Access Journals (Sweden)

    S.Sasikala

    2013-04-01

    Full Text Available Information processing and decision support using image mining techniques are advancing rapidly with the huge availability of remote sensing images (RSI). RSI describes inherent properties of objects by recording their natural reflectance in the electro-magnetic spectral (EMS) region. Information on such objects can be gathered from their color properties or their spectral values in various EMS ranges in the form of pixels. The present paper explains a method of such information extraction using the cubical distance method, along with the subsequent results. This method is among the simpler in its approach and considers grouping of pixels on the basis of equal distance from a specified point in the image, or from a selected pixel having definite attribute values (DN) in different spectral layers of the RSI. The color distance and the pixel occurrence distance play a vital role in determining similar objects as clusters and aid in extracting features in the RSI domain.

  14. AUTOMATIC TEXT EXTRACTION FROM COMPLEX COLORED IMAGES USING GAMMA CORRECTION METHOD

    Directory of Open Access Journals (Sweden)

    C. P. Sumathi

    2014-01-01

    Full Text Available The aim of this study is to propose a new methodology for text region extraction and non-text region removal from colored images with complex backgrounds. This study presents a new approach based on gamma correction, determining a gamma value that enhances the foreground details in an image. The approach also uses gray-level co-occurrence matrices, texture measures and threshold concepts. The proposed method is a useful preprocessing technique to remove non-text regions and expose the text regions in the image. Experiments were conducted on various images from the datasets collected and tagged by the ICDAR robust reading dataset collection team. Experimental results show that the proposed method performs well in extracting text regions from an image.
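    The core enhancement step reduces to a few lines. Here a fixed gamma is assumed for illustration, whereas the paper determines the gamma value from the image itself:

    ```python
    # Sketch of gamma correction: raise normalized intensities to 1/gamma to
    # enhance foreground detail before texture analysis. gamma=1.8 is assumed.
    import numpy as np

    def gamma_correct(image, gamma=1.8):
        x = image.astype(float) / 255.0
        return (x ** (1.0 / gamma) * 255.0).astype(np.uint8)
    ```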

  15. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography.

    Science.gov (United States)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin; Miró, Manuel

    2014-10-01

    This paper describes the improvement and comparison of analytical methods for the simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron hydroxide and manganese dioxide co-precipitation, and evaporation, were compared, and the applicability of the different techniques was discussed in order to evaluate and establish the optimal method for an in vivo radioassay program. The analytical results indicate that the various sample pre-concentration approaches afford dissimilar method performances and that specific experimental parameters must be controlled to improve chemical yields. The best analytical performance in terms of turnaround time (6 h) and chemical yields for plutonium (88.7 ± 11.6%) and neptunium (94.2 ± 2.0%) was achieved by manganese dioxide co-precipitation. The need for dry ashing (≥ 7 h) in calcium phosphate co-precipitation and long-term aging (5 d) in iron hydroxide co-precipitation rendered those protocols time-consuming. Although evaporation is also somewhat time-consuming (1.5 d), it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of the different pre-concentration techniques proposed previously in the literature, the main challenge behind the relevant method development is the release of plutonium and neptunium associated with organic compounds in real urine assays. In this work, different protocols for decomposing organic matter in urine were investigated, of which potassium persulfate (K2S2O8) treatment provided the highest chemical yield of neptunium in the iron hydroxide co-precipitation step; yet, the occurrence of sulfur compounds in the processed sample deteriorated the analytical performance of the ensuing extraction chromatographic separation with chemical

  16. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Full Text Available Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, in which the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction in 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied to several other applications for shape feature extraction in medical image analysis and in computer graphics generally.

  17. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    Science.gov (United States)

    Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually related to tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content of the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform bank of filters (WPT), and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used which, in each fold, chooses as the validation set a pair of cases, one including normal sounds and the other including wheezing sounds. Experimental results were reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, a C-weighted SVM with MFCC, achieve a balanced accuracy of 82.1%, the best result for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even using the same feature extraction methods.
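    A minimal sketch of the best-performing pipeline above (MFCC features plus a class-weighted SVM), using librosa and scikit-learn as stand-ins for the authors' code; the file paths and labels are assumed placeholders:

    ```python
    # Sketch: per-recording MFCC summary vector + class-weighted SVM.
    import librosa
    import numpy as np
    from sklearn.svm import SVC

    def mfcc_features(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)  # one fixed-length vector per recording

    # class_weight='balanced' re-weights C per class to cope with unbalanced data
    clf = SVC(kernel="rbf", class_weight="balanced")
    # clf.fit(np.array([mfcc_features(f) for f in wav_files]), labels)  # placeholders
    ```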

  18. Fluidized-bed column method for automatic dynamic extraction and determination of trace element bioaccessibility in highly heterogeneous solid wastes.

    Science.gov (United States)

    Rosende, María; Miró, Manuel; Cerdà, Víctor

    2010-01-18

    Dynamic flow-through extraction/fractionation methods have recently drawn much attention as appealing alternatives to their batchwise steady-state counterparts for evaluating the environmentally available pools of potentially hazardous trace elements in solid matrices. The most critical weakness of flow-based column approaches lies in the small amount of solid that can be handled, whereby their applicability has to date been limited to the extraction of trace elements in highly homogeneous solid substrates; otherwise the representativeness of the test portion might not be assured. To tackle this limitation, we have devised an automated flow-through system incorporating a specially designed extraction column with a large volume capacity, wherein up to 2 g of solid sample can be handled without undue backpressure. The assembled flow setup was exploited for fast screening of potentially hazardous trace elements (namely, Cd, Cr, Cu, Pb, and Zn) in highly inhomogeneous municipal solid waste incineration (MSWI) bottom ashes. The pools of readily mobilizable metal forms were ascertained using the Toxicity Characteristic Leaching Procedure (TCLP), based on the use of 0.1 mol L(-1) CH(3)COOH as leachant and analysis of the extracts by inductively coupled plasma optical emission spectrometry. The application of a two-level full factorial (screening) design revealed that sample fluidization primarily, but also other experimental factors such as the solid-to-liquid ratio and the extractant flow rate, significantly influenced the leachability of the given elements in raw bottom ashes at the 0.05 significance level. The analytical performance of the novel flow-based method capitalizing on fluidized-bed extraction was evaluated in terms of accuracy, through mass balance validation, reproducibility and operational time, as compared to batchwise extraction and earlier flow injection/sequential injection microcolumn-based leaching tests.

  19. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this purpose, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated, a methodology was developed to automate each production phase - generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network) - and finally production was launched. The key points of this work have been managing a big data environment (more than 160,000 LiDAR data files), the infrastructure to store (up to 40 TB between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and human resources management have also been important. The result of this production has been an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  20. Automatic extraction of legal concepts and definitions

    NARCIS (Netherlands)

    R. Winkels; R. Hoekstra

    2012-01-01

    In this paper we present the results of an experiment in automatic concept and definition extraction from written sources of law using relatively simple natural language and standard semantic web technology. The software was tested on six laws from the tax domain.

  1. Automatically extracting class diagrams from spreadsheets

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2010-01-01

    The use of spreadsheets to capture information is widespread in industry. Spreadsheets can thus be a wealthy source of domain information. We propose to automatically extract this information and transform it into class diagrams. The resulting class diagram can be used by software engineers to understand the domain captured in the spreadsheet.

  2. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    Directory of Open Access Journals (Sweden)

    Xiao Yu

    2015-11-01

    Full Text Available Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), which selects salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide the entire HHT marginal spectrum (HMS) into window spectrums, after which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis model is constructed, termed by its elements HHT-WMSC-SVM (support vector machine). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with an SVM. Second, with Gaussian white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM exceeds 95% for a Pmin range of 500-800 and an m range of 50-300 on the REB defect dataset with Gaussian white noise added at a Signal Noise Ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy.
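    The window-selection idea can be sketched as follows, with scikit-learn's adjusted Rand index standing in for the RI criterion and KMeans for the clustering step; the array shapes and parameters are assumptions:

    ```python
    # Sketch: slide a window over precomputed HHT marginal spectra, cluster the
    # samples using only that window's bins, and keep the windows whose
    # clustering best agrees with the known fault labels.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    def select_windows(spectra, labels, width=50, step=25, top_k=5):
        """spectra: (n_samples, n_bins) marginal spectra; labels: fault classes."""
        scores = []
        for start in range(0, spectra.shape[1] - width + 1, step):
            window = spectra[:, start:start + width]
            pred = KMeans(n_clusters=len(set(labels)), n_init=10).fit_predict(window)
            scores.append((adjusted_rand_score(labels, pred), start))
        return sorted(scores, reverse=True)[:top_k]  # best characteristic bands
    ```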

  3. Automatic Extraction of JPF Options and Documentation

    Science.gov (United States)

    Luks, Wojciech; Tkachuk, Oksana; Buschnell, David

    2011-01-01

    Documenting existing Java PathFinder (JPF) projects or developing new extensions is a challenging task. JPF provides a platform for creating new extensions and relies on key-value properties for their configuration. Keeping track of all possible options and extension mechanisms in JPF can be difficult. This paper presents jpf-autodoc-options, a tool that automatically extracts JPF project options and other documentation-related information, which can greatly help both JPF users and developers of JPF extensions.

  4. A semi-automatic method to extract canal pathways in 3D micro-CT images of Octocorals.

    Directory of Open Access Journals (Sweden)

    Alfredo Morales Pinzón

    Full Text Available The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely the acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve - if possible - technical problems related to specimen conditioning, to determine the best acquisition parameters and to develop the necessary image-processing algorithms. The extracted pathways are expected to facilitate the structural analysis of the colonies, namely to help in observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolutions ranging from 4.5 to 25 micrometers. Success mainly depended on specimen immobilization. More than [Formula: see text] of the canals were successfully detected and tracked by the image-processing method developed. The three-dimensional representation of the canal network thus obtained was generated for the first time without the need for histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or "turned" into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight into the coral ultrastructure and helps in understanding the organization of the canal network. Advanced image-processing techniques greatly

  5. A method for automatic extraction of key frames

    Institute of Scientific and Technical Information of China (English)

    刘善磊; 赵银娣; 王光辉; 李英成; 薛艳丽; 李建军

    2012-01-01

    In this paper, two main lens distortions and camera calibration are first introduced. A formula for the forward overlap of key frames is then derived from the intrinsic parameters of the camera and the video frame rate. On this basis, key frames with a specified forward overlap are automatically extracted from a file source or a real-time source: a key-frame positioning algorithm is used for extraction from a file source, and a timed automatic extraction algorithm is used for a real-time source. Finally, the key frames are corrected with the calibration parameters. Real data were used to test the developed method, and the results show that the technique obtains corrected key frames with the specified forward overlap efficiently and accurately.

  6. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.

  7. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    Directory of Open Access Journals (Sweden)

    Chuqing Cao

    2014-09-01

    Full Text Available Road centerline extraction from imagery constitutes a key element in numerous geospatial applications and has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data. The proposed method combines road color features with road GPS data to detect road centerline seed points. After global alignment of the road GPS data, a novel road centerline extraction algorithm is developed to extract each individual road centerline in local regions. Through road connection, the road centerline network is generated as the final output. Extensive experiments demonstrate that our proposed method can rapidly and accurately extract road centerlines from remotely sensed imagery.

  8. An automatic face contour extraction method

    Institute of Scientific and Technical Information of China (English)

    李昕昕; 龚勋; 夏冉

    2013-01-01

    Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, and expression recognition. Many applications assume that the faces in an image or an image sequence have already been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. This problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, color, and texture. The purpose of this paper is to provide a relatively robust method for face segmentation in images based on a curve evolution methodology. Since face images often have blurred boundaries and small gradient changes, the region segmentations obtained by the original Chan-Vese model are generally unsatisfactory and require a large amount of calculation. To achieve more accurate facial contour extraction and face segmentation, a new face segmentation scheme based on a curve evolution model is proposed, which combines the Chan-Vese model, a sparse-field algorithm, face detection, and mathematical morphology operators. Experimental results show that the improved algorithm not only improves computational efficiency but can also effectively detect locally blurred or broken boundaries; the evolving curve does not break, and good face segmentation results are obtained.

  9. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [PNNL; Soille, Pierre [EC JRC

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts from mathematical morphology and hydrology. The method exploits both the geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow compared with other objects in the image. While this approach fully exploits the local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Geodesic propagation from a given network seed with this metric is then combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.

  10. Automatic Extraction of Protein Interaction in Literature

    OpenAIRE

    Liu, Peilei; Wang, Ting

    2014-01-01

    Protein-protein interaction extraction is a key precondition for the construction of protein knowledge networks, and it is very important for research in biomedicine. This paper extracts directional protein-protein interactions from biological text using an SVM-based method. Experiments were evaluated on the LLL05 corpus with good results. The results show that dependency features are important for protein-protein interaction extraction and features related to the interaction w...

  11. Automatic Railway Power Line Extraction Using Mobile Laser Scanning Data

    Science.gov (United States)

    Zhang, Shanxin; Wang, Cheng; Yang, Zhuang; Chen, Yiping; Li, Jonathan

    2016-06-01

    Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power lines and the trajectory, the significant data is segmented piecewise. Then, a self-adaptive spatial region-growing method is used to extract the power lines running parallel to the rails. Finally, PCA (Principal Component Analysis) combined with information entropy theory is used to judge whether a section of a power line is a junction and, if so, which type of junction it is. A least-squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.

  12. Automatic Urban Vegetation Extraction Method Using High Resolution Imagery

    Institute of Scientific and Technical Information of China (English)

    姚方方; 骆剑承; 沈占锋; 董迪; 杨珂含

    2016-01-01

    With the progress of sustained urbanization over the past decade, accurate information on urban vegetation cover has become essential for the study of both regional climate and the urban energy balance. High spatial-resolution remote sensing imagery provides an important tool for automatic mapping and monitoring of urban vegetation cover due to its broad coverage and high spatial resolution. We propose an automatic urban vegetation extraction methodology, named hyperplanes for plant extraction methodology (HPEM), based on vegetation spectral feature analysis of ZY-3 multi-spectral imagery over different cities in the Yangtze River Delta. The results showed that: first, vegetation pixels and non-vegetation pixels with low NDVI values can be well separated in the false color composite reflectance space, while vegetation pixels and non-vegetation pixels with high NDVI values can be well separated in the true color composite reflectance space; second, HPEM could effectively suppress the errors of commission that come from built-up pixels, which are often misclassified by the NDVI method. HPEM's performance was better than NDVI at the optimal threshold, with the kappa coefficient increasing from 0.85 to 0.90 and the total errors of omission and commission reduced from 21.15% to 14.18%. Compared to the NDVI method, HPEM also avoids the tedious trial-and-error procedure of searching for the optimal threshold. Therefore, HPEM can effectively improve the accuracy of automatic urban vegetation mapping, and the resulting urban vegetation products are more reliable for further urban environment research.

  13. Semi-automatic methods for landslide features and channel network extraction in a complex mountainous terrain: new opportunities but also challenges from high resolution topography

    Science.gov (United States)

    Tarolli, Paolo; Sofia, Giulia; Pirotti, Francesco; Dalla Fontana, Giancarlo

    2010-05-01

    In recent years, remotely sensed technologies such as airborne and terrestrial laser scanning have improved the detail of analysis, providing high-resolution and high-quality topographic data over large areas better than other technologies. A new generation of high-resolution (~1 m) Digital Terrain Models (DTMs) is now available for different landscapes. These data call for the development of a new generation of methodologies for the objective extraction of geomorphic features, such as channel heads, channel networks, bank geometry, landslide scars, and service roads. The most important benefit of a high-resolution DTM is the detailed recognition of surface features. It is possible to recognize in detail divergent-convex landforms, associated with the dominance of hillslope processes, and convergent-concave landforms, associated with fluvial-dominated erosion. In this work, we test the performance of new methodologies for the objective extraction of geomorphic features related to landsliding and channelized processes, in order to provide a semi-automatic method for channel network and landslide feature recognition in complex mountainous terrain. The methodologies are based on the detection of thresholds derived from statistical analysis of the variability of surface curvature. We considered a study area located in the eastern Italian Alps where a high-quality set of LiDAR data is available and where channel heads, the related channel network, and landslides have been mapped in the field by DGPS. In the analysis we derived 1 m DTMs from bare-ground LiDAR points, and we used different smoothing factors for the curvature calculation in order to set the most suitable curvature maps for the recognition of the selected features. Our analyses suggest that: (i) the scale for curvature calculations has to be a function of the scale of the features to be detected; (ii) rougher curvature maps are not optimal as they do not explore a sufficient range at which features occur, while smoother

  14. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms are not particularly effective on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and a weight is designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates.
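    The scoring step can be sketched with scikit-learn; the re-weighting here is a simple stand-in for the paper's weight design, and English stand-in documents replace the Chinese-segmented course material:

    ```python
    # Sketch: TF-IDF over course documents; top-scoring terms are proposed as
    # candidate knowledge points. Documents and the cutoff are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "pointers and arrays in C",
        "for loops and while loops in C",
        "recursion and functions in C",
    ]
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(docs)
    scores = tfidf.max(axis=0).toarray().ravel()  # best score of each term
    terms = vec.get_feature_names_out()
    top = sorted(zip(scores, terms), reverse=True)[:5]
    print(top)  # candidate knowledge points
    ```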

  15. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs.

    Science.gov (United States)

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms are not particularly effective on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and a weight is designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates.

  16. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project whose aim is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for the automatic merging of terminological data from various existing sources, as well as methods for target-group-oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.

  17. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of a sheet-metal part, whose characteristics are used for classification and graph-based representation of the sheet-metal features in order to extract the features embodied in the part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  18. Automatic extraction of corollaries from semantic structure of text

    Science.gov (United States)

    Nurtazin, Abyz T.; Khisamiev, Zarif G.

    2016-11-01

    The aim of this study is to develop an algorithm for the automatic representation of a natural language text as a formal system, for the subsequent automatic extraction of reasonable answers to deep questions in the context of the text, as well as of the deep logical consequences of the text and of the related areas of knowledge to which it refers. The most universal method of constructing algorithms for the automatic processing of text for a particular purpose is the representation of knowledge in the form of a graph expressing the semantic values of the text. This paper presents an algorithm for automatically representing a text and its associated knowledge as a formal logic programming theory, for sufficiently strict texts such as legal texts. The representation is semantic-syntactic, as the causal-investigatory relationships between the various parts are both logical and semantic. This representation of the text makes it possible to resolve questions about the causal-investigatory relationships of the concepts present, with the methods of the theory and practice of logic programming as well as the methods of model theory. In particular, these means of classical branches of mathematics can be used to address such issues as the definition and determination of consequences and questions of the consistency of the theory.

  19. An Automatic Collocation Extraction from Arabic Corpus

    Directory of Open Access Journals (Sweden)

    Abdulgabbar M. Saif

    2011-01-01

    Full Text Available Problem statement: The identification of collocations is a very important part of natural language processing applications that require some degree of semantic interpretation, such as machine translation, information retrieval and text summarization. Because of the complexities of Arabic, collocations undergo variations - morphological, graphical, and syntactic - that make their identification difficult. Approach: We used a hybrid method for extracting collocations from an Arabic corpus, based on linguistic information and association measures. Results: This method extracted bi-gram candidates for Arabic collocations from the corpus and evaluated the association measures using the n-best evaluation method. We report the precision values for each association measure in each n-best list. Conclusion: The experimental results showed that the log-likelihood ratio is the best association measure, achieving the highest precision.
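    The association-measure step can be sketched with NLTK, whose log-likelihood ratio stands in for the paper's implementation (which additionally applies Arabic-specific linguistic filtering); the toy sentence is an English placeholder:

    ```python
    # Sketch: rank bigram collocation candidates by log-likelihood ratio.
    from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

    tokens = ("the prime minister held a press conference "
              "before the prime minister left").split()
    measures = BigramAssocMeasures()
    finder = BigramCollocationFinder.from_words(tokens)
    print(finder.nbest(measures.likelihood_ratio, 3))  # n-best candidate bigrams
    ```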

  1. Fast Hough transform for automatic bridge extraction

    Science.gov (United States)

    Hao, Qiwei; Chen, Xiaomei; Ni, Guoqiang; Zhang, Huaili

    2008-03-01

    In this paper, a new method to recognize bridges against a complicated background is presented. The algorithm takes full advantage of the characteristics of bridge images. Firstly, the image is preprocessed and the object edges are extracted. Then, addressing the limitations of the traditional Hough transform (HT), the HT-based extraction of line-segment characteristics is improved: spurious peaks are eliminated on the basis of global and local thresholds, the positional relation between two straight line segments is discriminated, and segments with nearby endpoints are merged. Experiments show that this algorithm is more precise and efficient than the traditional HT; moreover, it can provide a complete description of a bridge against a complicated background.

  2. Automatic Waterline Extraction from Smartphone Images

    Science.gov (United States)

    Kröhnert, M.

    2016-06-01

    Considering worldwide increasing and devastating flood events, the issue of flood defence and prediction becomes more and more important. Conventional methods for observing water levels, for instance gauging stations, provide reliable information. However, they are rather expensive to purchase, install and maintain, and hence are mostly limited to monitoring large streams. Thus, small rivers with noticeably increasing flood hazard are often neglected. State-of-the-art smartphones with powerful camera systems may act as affordable, mobile measuring instruments, and reliable, effective image processing methods may allow smartphone-taken images to be used for mobile shoreline detection and thus for water level monitoring. The paper focuses on automatic methods for determining waterlines by spatio-temporal texture measures. Besides the considerable challenge of dealing with a wide range of smartphone cameras providing different hardware components, resolution, image quality and programming interfaces, there are several limits on mobile device processing power. For test purposes, an urban river in Dresden, Saxony was observed. The results show the potential of deriving the waterline with subpixel accuracy by a column-by-column four-parameter logistic regression and polynomial spline modelling. After a transformation into object space via suitable landmarks (which is not addressed in this paper), this corresponds to an accuracy in the order of a few centimetres when processing mobile device images taken from small rivers at typical distances.
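
    The column-wise localisation can be sketched as fitting a four-parameter logistic function to each column's intensity profile and reading off its inflection point; the synthetic profile below stands in for real image data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic4(y, a, b, c, d):
        # four-parameter logistic: intensity step from water (a) to land (d)
        return d + (a - d) / (1.0 + np.exp(b * (y - c)))

    rows = np.arange(200, dtype=float)
    column = logistic4(rows, 40, 0.3, 87.4, 180) + np.random.normal(0, 3, 200)

    p0 = [column.min(), 0.1, len(rows) / 2, column.max()]   # rough initial guess
    params, _ = curve_fit(logistic4, rows, column, p0=p0, maxfev=5000)
    print(f"waterline at row {params[2]:.2f}")              # subpixel inflection point
    ```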

  3. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    Science.gov (United States)

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  4. Automatic Melody Generation System with Extraction Feature

    Science.gov (United States)

    Ida, Kenichi; Kozuki, Shinichi

    In this paper, we propose a melody generation system based on the analysis of existing melodies, together with a device that takes the user's preferences into account. Melodies are generated by arranging pitches optimally on a given rhythm. The optimality criterion is determined from feature elements extracted from existing music by the proposed method, and the user's preferences are reflected in the criterion by letting users manipulate some of these feature elements. A genetic algorithm (GA) then optimizes the pitch array against this criterion.

  5. Image feature meaning for automatic key-frame extraction

    Science.gov (United States)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, being required in several applications, has directed a number of researchers to automatic video analysis techniques. Automatic video analysis is based on recognizing shots (short sequences of contiguous frames that describe the same scene) and key frames representing the salient content of each shot. Since effective shot-boundary detection techniques already exist in the literature, in this paper we focus our attention on key-frame extraction techniques, identifying the low-level visual features of frames that best represent shot content. To evaluate feature performance, key frames automatically extracted using these features are compared with human operator video annotations.
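
    One simple low-level baseline, assumed here rather than taken from the paper, represents each frame by a colour histogram and selects as key frame the one closest to the shot's mean histogram; the video file is a placeholder.

    ```python
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("shot.mp4")      # placeholder: one pre-segmented shot
    hists = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        hists.append(cv2.normalize(h, None).flatten())
    cap.release()

    hists = np.array(hists)
    mean_hist = hists.mean(axis=0)          # "average content" of the shot
    key_idx = int(np.argmin(np.linalg.norm(hists - mean_hist, axis=1)))
    print(f"key frame index: {key_idx}")
    ```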

  6. Temporally rendered automatic cloud extraction (TRACE) system

    Science.gov (United States)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods for extracting cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and the 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE maximum flexibility in its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows video capture boards to be interchanged without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with the manual method is included in this paper.

  7. Automatically extracting functionally equivalent proteins from SwissProt

    Directory of Open Access Journals (Sweden)

    Martin Andrew CR

    2008-10-01

    Full Text Available Abstract Background There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot, and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However, there has been no resource allowing the easy, automatic extraction of groups of FEPs: for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot, which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results Large-scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance. A more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation. Most of these would have presented equal difficulties for manual interpretation of annotations. We discuss limitations and possible future extensions to FOSTA, and

  8. Automatic Statistics Extraction for Amateur Soccer Videos

    NARCIS (Netherlands)

    Gemert, J.C. van; Schavemaker, J.G.M.; Bonenkamp, C.W.B.

    2014-01-01

    Amateur soccer statistics have interesting applications such as providing insights to improve team performance, individual coaching, monitoring team progress and personal or team entertainment. Professional soccer statistics are extracted with labor intensive expensive manual effort which is not rea

  9. Automatic moving object extraction toward compact video representation

    Science.gov (United States)

    Fan, Jianping; Fujita, Gen; Furuie, Makoto; Onoye, Takao; Shirakawa, Isao; Wu, Lide

    2000-02-01

    An automatic object-oriented video segmentation and representation algorithm is proposed, in which local variance contrast and frame-difference contrast are jointly exploited for meaningful moving object extraction, because these two visual features efficiently indicate the spatial homogeneity of the gray levels and the temporal coherence of the motion fields. The 2D entropic thresholding technique and the watershed transformation method are further developed to determine the global feature thresholds adaptively according to the variation of the video components. The obtained video components are first represented coarsely by a group of 4 X 4 blocks, and the meaningful moving objects are then generated by an iterative region-merging procedure according to a spatiotemporal similarity measure. A temporal tracking procedure is further proposed to obtain more semantic moving objects across frames. The proposed automatic moving object extraction algorithm can therefore detect the appearance of new objects as well as the disappearance of existing objects efficiently, because the correspondence of video objects among frames is also established. Moreover, an object-oriented video representation and indexing approach is suggested, in which both camera operations (i.e., changes of viewpoint) and the birth or death of individual objects are exploited to detect breakpoints in the video data and to select key frames adaptively.
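
    A toy sketch of the two features the method combines, per-pixel local variance contrast and frame-difference contrast, computed with a uniform filter; the joint thresholding below is a crude placeholder for the paper's adaptive entropic thresholds.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(img, size=5):
        # E[x^2] - E[x]^2 over a size x size neighbourhood
        mean = uniform_filter(img, size)
        return uniform_filter(img * img, size) - mean * mean

    prev = np.random.rand(120, 160)          # placeholder frames
    curr = np.random.rand(120, 160)

    var_contrast = local_variance(curr)
    diff_contrast = uniform_filter(np.abs(curr - prev), 5)

    # joint mask: spatially textured AND temporally changing regions
    mask = (var_contrast > var_contrast.mean()) & (diff_contrast > diff_contrast.mean())
    print(f"moving-object pixels: {mask.sum()}")
    ```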

  10. A Novel Automatic Extraction Method of Lung Texture Tree from HRCT Images

    Institute of Scientific and Technical Information of China (English)

    刘军伟; 冯焕清; 周颖玥; 李传富

    2009-01-01

    Computed tomography (CT) is the primary imaging modality for investigating lung function and lung diseases. High-resolution CT slice images of the chest contain a great deal of texture information, which provides powerful datasets for research on computer-aided diagnosis (CAD) systems. However, the extraction of lung tissue textures is a challenging task. In this paper, we introduce a novel, automatic and effective method based on level sets to extract the lung tissue texture tree. First, we propose an improved implicit active contour model driven by local binary fitting energy, whose parameters are dynamic and modulated by image gradient information. Second, a new technique of painting the background, based on intensity nonlinear mapping, is brought forward to remove the influence of the background during the evolution of a single level set function. Finally, a number of contrast experiments are performed, and the results of 3D surface reconstruction show that our method is efficient and powerful for segmenting fine lung tree texture structures.

  11. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, an algorithm based on the superpixel density of cluster centers is proposed for automatic image classification and outlier identification. Pixel location coordinates and gray values are used to compute density and distance, from which images are classified automatically and outliers are extracted. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention and classifies images faster than the density clustering algorithm, performing automated classification and outlier extraction effectively.
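
    The density-and-distance rule resembles density-peaks clustering: each point gets a local density rho and the distance delta to the nearest point of higher density, and points with large rho*delta become cluster centers. A minimal sketch on placeholder (x, y, gray) vectors:

    ```python
    import numpy as np

    feats = np.random.rand(300, 3)           # placeholder (x, y, gray) per superpixel
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)

    dc = np.percentile(d, 2)                 # density cutoff (heuristic)
    rho = (d < dc).sum(axis=1) - 1           # local density

    delta = np.empty(len(feats))
    for i in range(len(feats)):
        higher = np.where(rho > rho[i])[0]   # points of higher density
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()

    gamma = rho * delta                      # decision value
    centers = np.argsort(gamma)[-4:]         # pick e.g. 4 cluster centers
    print("cluster centers:", centers)
    ```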

  12. Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms

    OpenAIRE

    Turroni, Francesco

    2012-01-01

    The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerp...

  13. Automatic Extraction of Mangrove Vegetation from Optical Satellite Data

    Science.gov (United States)

    Agrawal, Mayank; Sushma Reddy, Devireddy; Prasad, Ram Chandra

    2016-06-01

    Mangroves, the intertidal halophytic vegetation, form one of the most significant and diverse ecosystems in the world. They protect the coast from sea erosion and other natural disasters such as tsunamis and cyclones. In view of their increased destruction and degradation in the current scenario, mapping this vegetation is a priority. Globally, researchers have mapped mangrove vegetation using visual interpretation, digital classification, or a combination of both (hybrid) approaches on varied spatial and spectral data sets. In the recent past, techniques have been developed to extract this coastal vegetation automatically using various algorithms. In the current study we delineate mangrove vegetation using LISS III and Landsat 8 data sets for selected locations of the Andaman and Nicobar islands. Towards this, we attempted a segmentation method, which characterizes the mangrove vegetation by tone and texture, and a pixel-based classification method, in which mangroves are identified by their pixel values. The results obtained from both approaches were validated against available maps of the selected region, with good delineation accuracy. The main focus of this paper is the simplicity of the methods and the ready availability of the data (Landsat) for many regions. Our methods are very flexible and can be applied to any region.

  14. Method of infusion extraction

    Science.gov (United States)

    Chang-Diaz, Franklin R. (Inventor)

    1989-01-01

    Apparatus and method of removing desirable constituents from an infusible material by infusion extraction, where a piston operating in a first chamber draws a solvent into the first chamber where it may be heated, and then moves the heated solvent into a second chamber containing the infusible material, and where infusion extraction takes place. The piston then moves the solvent containing the extract through a filter into the first chamber, leaving the extraction residue in the second chamber.

  15. A Method of Automatic Coastline Extraction for Xiamen Island

    Institute of Scientific and Technical Information of China (English)

    齐宇; 任航科

    2012-01-01

    Remote sensing methods for monitoring coastline change, extracting coastlines and analyzing coastal landscapes offer wide coverage, high precision and dynamic monitoring. Because coastal zones differ in type, different extraction methods yield different coastline results. Taking the coastline of Xiamen Island as an example, this paper uses TM remote sensing imagery and two coastline extraction methods to obtain two automatically extracted coastline positions, covering two coast types: sandy beach and artificial beach. Coast types were confirmed by field survey, and accuracy was analyzed by overlaying a high-spatial-resolution SPOT image. The paper discusses how to select an automatic coastline extraction method according to the type of coastal zone.

  16. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    Science.gov (United States)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from a rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against field measurements. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, highly accurate, and able to meet engineering needs.
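
    Step (3) can be sketched as follows: repeatedly sample three points, fit a plane, count inliers, and convert the winning normal to dip direction and dip angle; the point array and tolerance are placeholders.

    ```python
    import numpy as np

    def ransac_plane(pts, n_iter=500, tol=0.02):
        best_normal, best_inliers = None, 0
        for _ in range(n_iter):
            p = pts[np.random.choice(len(pts), 3, replace=False)]
            n = np.cross(p[1] - p[0], p[2] - p[0])
            if np.linalg.norm(n) < 1e-9:          # degenerate sample
                continue
            n = n / np.linalg.norm(n)
            dist = np.abs((pts - p[0]) @ n)       # point-to-plane distances
            inliers = (dist < tol).sum()
            if inliers > best_inliers:
                best_normal, best_inliers = n, inliers
        return best_normal

    pts = np.random.rand(1000, 3)                 # placeholder discontinuity cluster
    n = ransac_plane(pts)
    n = n if n[2] >= 0 else -n                    # orient normal upward
    dip = np.degrees(np.arccos(n[2]))             # dip angle
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360  # dip direction (azimuth)
    print(f"dip/dip-direction: {dip:.1f}/{dip_dir:.1f}")
    ```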

  17. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    Science.gov (United States)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text, and the ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction

  18. Development of Automatic Extraction Weld for Industrial Radiographic Negative Inspection

    Institute of Scientific and Technical Information of China (English)

    张晓光; 林家骏; 李浴; 卢印举

    2003-01-01

    In industrial X-ray inspection, extracting the weld from the image is an important step for subsequent processing if weld defects are to be identified automatically, the identification ratio raised, and the processing of complex backgrounds avoided. According to the characteristics of weld radiograph images, a median filter is adopted to reduce high-frequency noise; the relative gray scale of the image is then chosen as the fuzzy characteristic, a gray-scale fuzzy matrix is constructed, and a suitable membership function is selected to describe the edge characteristic. A fuzzy algorithm is adopted to enhance the radiograph image. Based on the intensity distribution characteristic of welds, a weld extraction methodology is then designed. This paper describes the whole weld extraction methodology, including noise reduction, fuzzy enhancement and the weld extraction process. To prove its effectiveness, the methodology was tested on 64 weld negative images available for this study. The experimental results show that it is very effective for extracting linear welds.

  19. Automatic calibration method for plenoptic camera

    Science.gov (United States)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images in the white image are searched for and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even multifocus plenoptic cameras, plenoptic cameras with arbitrarily arranged microlenses, or plenoptic cameras with different sizes of microlenses. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method is more automated than the methods published before.

  20. Painful Bile Extraction Methods

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    It was only in the past 20 years that countries in Asia began to search for an alternative to protect moon bears from being killed for their bile and other body parts. In the early 1980s, a new method of extracting bile from living bears was developed in North Korea. In 1983, Chinese scientists imported this technique from North Korea. According to the Animals Asia Foundation, the original method of bile extraction was to embed a latex catheter, a narrow rubber

  1. Automatic extraction of forward stroke volume using dynamic 11C-acetate PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik;

    … potentially introducing bias if measured with a separate modality. The aim of this study was to develop and validate methods for automatically extracting FSV directly from the dynamic PET used for measuring oxidative metabolism. Methods: 16 subjects underwent a dynamic 27 min PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner after bolus injection of 399±27 MBq of 11C-acetate. The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was derived by automatic extrapolation of the down-slope of the TAC. FSV was then calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold-standard FSV was measured in the left ventricular outflow tract by cardiovascular magnetic resonance using phase-contrast velocity mapping within two weeks of PET imaging. Results…
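
    The FSV formula itself is simple: FSV = injected dose / (heart rate x AUC of the first-pass peak). A sketch on a synthetic time-activity curve, with the first pass isolated by mono-exponential extrapolation of the down-slope (the assumed standard indicator-dilution treatment):

    ```python
    import numpy as np

    t = np.linspace(0, 60, 601)                        # seconds, placeholder grid
    tac = 80 * (t / 10) * np.exp(-t / 10) \
          + 8 / (1 + np.exp(-(t - 35) / 3))            # synthetic LV-aortic TAC
    dose = 400.0                                       # injected dose, MBq
    hr = 65 / 60.0                                     # heart rate, beats per second

    peak = int(np.argmax(tac))
    down = slice(peak + 5, peak + 60)                  # early down-slope samples
    b, a = np.polyfit(t[down], np.log(tac[down]), 1)   # ln(TAC) ~ a + b*t

    # first-pass curve: measured up to the fit window, extrapolated beyond it
    first_pass = tac.copy()
    tail = t > t[down.stop]
    first_pass[tail] = np.exp(a + b * t[tail])

    auc = np.trapz(first_pass, t)
    fsv = dose / (hr * auc)                            # FSV = D / (HR * AUC)
    print(f"FSV ~ {fsv:.2f} (calibration-dependent units)")
    ```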

  2. AUTOMATIC EXTRACTION OF BUILDING OUTLINE FROM HIGH RESOLUTION AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2016-06-01

    Full Text Available In this paper, a new approach for automated extraction of building boundary from high resolution imagery is proposed. The proposed approach uses both geometric and spectral properties of a building to detect and locate buildings accurately. It consists of automatic generation of high quality point cloud from the imagery, building detection from point cloud, classification of building roof and generation of building outline. Point cloud is generated from the imagery automatically using semi-global image matching technology. Buildings are detected from the differential surface generated from the point cloud. Further classification of building roof is performed in order to generate accurate building outline. Finally classified building roof is converted into vector format. Numerous tests have been done on images in different locations and results are presented in the paper.

  4. ANALYSIS METHOD OF AUTOMATIC PLANETARY TRANSMISSION KINEMATICS

    Directory of Open Access Journals (Sweden)

    Józef DREWNIAK

    2014-06-01

    Full Text Available In the present paper, planetary automatic transmissions are modeled by means of contour graphs. The goals of modeling are versatile: calculating ratios via algorithmic equation generation, and analyzing velocities and accelerations. Exemplary gear trains are analyzed, with several drives/gears consecutively taken into account, discussing the functional schemes, the assigned contour graphs, and the generated systems of equations and their solutions. The advantages of the method are its algorithmic approach and its generality, particular drives being special cases of the general model. Moreover, the method allows for further analysis and synthesis tasks, e.g. checking the isomorphism of design solutions.

  5. Automatic Foreground Extraction Based on Difference of Gaussian

    Directory of Open Access Journals (Sweden)

    Yubo Yuan

    2014-01-01

    Full Text Available A novel algorithm for automatic foreground extraction based on difference of Gaussian (DoG is presented. In our algorithm, DoG is employed to find the candidate keypoints of an input image in different color layers. Then, a keypoints filter algorithm is proposed to get the keypoints by removing the pseudo-keypoints and rebuilding the important keypoints. Finally, Normalized cut (Ncut is used to segment an image into several regions and locate the foreground with the number of keypoints in each region. Experiments on the given image data set demonstrate the effectiveness of our algorithm.
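
    Candidate keypoint detection from a difference-of-Gaussian response can be sketched with scipy: blur at two scales, subtract, and keep local extrema; the keypoint filtering and Ncut stages of the paper are not reproduced, and the image is a placeholder.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    img = np.random.rand(200, 200)                 # placeholder gray color layer
    dog = gaussian_filter(img, 1.6) - gaussian_filter(img, 3.2)

    # local maxima of |DoG| above a small threshold are candidate keypoints
    resp = np.abs(dog)
    local_max = (resp == maximum_filter(resp, size=7)) & (resp > 0.01)
    ys, xs = np.nonzero(local_max)
    print(f"{len(xs)} candidate keypoints")
    ```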

  6. Method for automatic detection of wheezing in lung sounds

    Directory of Open Access Journals (Sweden)

    R.J. Riella

    2009-07-01

    Full Text Available The present report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. The method is based on extracting and processing spectral information from the respiratory cycle and using these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed to normalize its spectral information, and its spectrogram is computed. The spectrogram image is then processed by a two-dimensional convolution filter and a half-threshold, in order to increase the contrast and isolate its highest-amplitude components, respectively. To generate more compact data for automatic recognition, the spectral projection of the processed spectrogram is computed and stored as an array. The highest-magnitude values of the array and their respective spectral values are then located and used as inputs to a multi-layer perceptron artificial neural network, which yields an automatic indication of the presence of wheezes. The methodology was validated with lung sounds recorded from three different repositories. The results show that the proposed technique achieves 84.82% accuracy in detecting wheezing in an isolated respiratory cycle and 92.86% accuracy when detection is carried out on groups of respiratory cycles obtained from the same person. The system also presents the original recorded sound and the post-processed spectrogram image so that the user can draw his own conclusions from the data.
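
    The spectral-projection step might look as follows with scipy: compute the spectrogram of one cycle, smooth and half-threshold it, then sum over time; the audio is synthetic and the filter sizes are assumptions.

    ```python
    import numpy as np
    from scipy.signal import spectrogram
    from scipy.ndimage import uniform_filter

    fs = 8000
    t = np.arange(0, 2.0, 1 / fs)                     # one synthetic cycle
    audio = np.sin(2 * np.pi * 400 * t) + 0.3 * np.random.randn(len(t))

    f, tt, S = spectrogram(audio, fs=fs, nperseg=256)
    S = uniform_filter(S, size=3)                     # 2D smoothing filter
    S[S < 0.5 * S.max()] = 0.0                        # half-threshold

    projection = S.sum(axis=1)                        # spectral projection array
    print("dominant frequency:", f[projection.argmax()], "Hz")
    ```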

  7. Pavement crack identification based on automatic threshold iterative method

    Science.gov (United States)

    Lu, Guofeng; Zhao, Qiancheng; Liao, Jianguo; He, Yongbiao

    2017-01-01

    Crack detection is an important issue for concrete infrastructure. First, the accuracy with which crack geometry parameters are measured is directly affected by the extraction accuracy, as is the accuracy of the detection system as a whole; because cracks are unpredictable, random and irregular, it is difficult to establish a recognition model for them. Second, various kinds of image noise, caused by irregular lighting conditions, dark spots, freckles and bumps, influence the crack detection accuracy. In this paper the peak threshold selection method is improved, and enhancement, smoothing and denoising are performed before iterative threshold selection, so that the threshold value can be selected automatically, in real time and stably.
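
    The classical iterative (isodata) threshold selection that the paper improves on: start from the mean gray level, split the histogram, and move the threshold to the midpoint of the two class means until it converges. A minimal numpy sketch on a placeholder image:

    ```python
    import numpy as np

    def iterative_threshold(img, eps=0.5):
        t = img.mean()                       # initial guess: global mean
        while True:
            low, high = img[img <= t], img[img > t]
            t_new = 0.5 * (low.mean() + high.mean())
            if abs(t_new - t) < eps:
                return t_new
            t = t_new

    img = np.random.randint(0, 256, (256, 256)).astype(float)  # placeholder image
    t = iterative_threshold(img)
    crack_mask = img < t                     # cracks are darker than background
    print(f"threshold = {t:.1f}, crack pixels = {crack_mask.sum()}")
    ```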

  8. Overview of Automatic Keyphrase Extraction

    Institute of Scientific and Technical Information of China (English)

    姚尧

    2015-01-01

    Automatic keyphrase extraction is a key step in information technologies such as knowledge extraction and information retrieval. Although it has been studied extensively for many years, the performance of existing extraction algorithms remains low compared with that of many other natural language processing tasks. This paper surveys automatic keyphrase extraction methods and looks ahead to future research and development, providing a useful reference for the further automatic extraction of high-quality keyphrases.

  9. Investigation on the automatic parameters extraction of pulse signals based on wavelet transform

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper analyses a key problem in the quantification of pulse diagnosis. Because pulse diagnosis is subjective and fuzzy, quantitative methods are needed. To extract the parameters of pulse signals, the prerequisite is to detect the corners of the pulse signals correctly. Up to now, pulse parameters have mostly been acquired by marking the pulse corners manually, which is an obstacle to modernizing pulse diagnosis. Therefore, a new approach for automatically extracting pulse signal parameters using the wavelet transform is presented. The results testify that the proposed method is feasible and effective and detects the corners of pulse signals accurately, which can be expected to facilitate the modernization of pulse diagnosis.
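
    A rough sketch of wavelet-based corner detection with PyWavelets: take a stationary wavelet detail band (which stays aligned with the samples) and treat local maxima of its magnitude as candidate corners; the scale choice and thresholds are assumptions, and the pulse wave is synthetic.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    fs = 200
    t = np.arange(0, 5, 1 / fs)                       # 1000 samples (even length)
    pulse = np.abs(np.sin(np.pi * t * 1.2)) ** 3      # synthetic pulse wave

    # single-level stationary wavelet transform keeps sample alignment
    cA, cD = pywt.swt(pulse, "db4", level=1)[0]
    peaks, _ = find_peaks(np.abs(cD), height=0.1 * np.abs(cD).max(),
                          distance=fs // 4)
    print("candidate corner indices:", peaks[:10])
    ```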

  10. Automatic Authorship Detection Using Textual Patterns Extracted from Integrated Syntactic Graphs

    Science.gov (United States)

    Gómez-Adorno, Helena; Sidorov, Grigori; Pinto, David; Vilariño, Darnes; Gelbukh, Alexander

    2016-01-01

    We apply the integrated syntactic graph feature extraction methodology to the task of automatic authorship detection. This graph-based representation allows different levels of language description to be integrated into a single structure. We extract textual patterns based on features obtained from shortest-path walks over integrated syntactic graphs and apply them to determine the authors of documents. On average, our method outperforms state-of-the-art approaches and gives consistently high results across different corpora, unlike existing methods. Our results show that our textual patterns are useful for the task of authorship attribution. PMID:27589740

  11. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær;

    2015-01-01

    Background The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. Methods 35 subjects underwent a dynamic 11C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15O-water PET and 11C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically from PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold-standard FSV was measured using phase…

  12. Motion states extraction with optical flow for rat-robot automatic navigation.

    Science.gov (United States)

    Zhang, Xinlu; Sun, Chao; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    The real-time acquisition of precise motion states is important and difficult for the automatic navigation of bio-robots. In this paper, we propose a real-time video-tracking algorithm that uses optical flow to extract the motion states of rat-robots in complex environments. The rat-robot's motion states, including location, speed and motion trend, are acquired accurately in real time. Compared with traditional methods based on single-frame images, our algorithm, which uses consecutive frames, provides more exact and richer motion information for the automatic navigation of bio-robots. Video from manual navigation experiments on rat-robots in an eight-arm maze was used to test the algorithm. The average computation time is 25.76 ms, which is shorter than the image acquisition interval. The results show that our method extracts motion states with good accuracy and time consumption.
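
    Dense optical flow between consecutive frames, from which speed and heading can be read off, can be sketched with OpenCV's Farnebäck implementation; the frame files and the motion threshold are placeholders.

    ```python
    import cv2
    import numpy as np

    prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
    curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    moving = mag > 1.0                       # pixels of the moving rat-robot
    speed = mag[moving].mean() if moving.any() else 0.0
    heading = np.degrees(ang[moving]).mean() if moving.any() else 0.0
    print(f"speed ~ {speed:.2f} px/frame, heading ~ {heading:.1f} deg")
    ```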

  13. Investigation of Procedures for Automatic Resonance Extraction from Noisy Transient Electromagnetics Data. Volume III. Translation of Prony’s Original Paper and Bibliography of Prony’s Method

    Science.gov (United States)

    1981-08-17

    Van Blaricum, "On the Source of Parameter Bias in Prony's Method," 1980 NEM Conference, Disneyland Hotel, August 1980. Auton, J.R., "An Unbiased Method for the Estimation of the SEM Parameters of an Electromagnetic System," 1980 NEM Conference, Disneyland Hotel, August 1980. Auton, J.R. and M.L. …, "…," 1980 NEM Conference, Disneyland Hotel, August 5-7, 1980. Chuang, C.W. and D.L. Moffatt, "Complex Natural Resonances of Radar Targets via Prony's Method," …
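
    Prony's method itself fits a sum of damped complex exponentials to uniformly sampled data: solve a linear-prediction least-squares system for the characteristic polynomial, take its roots as the modes, then solve a second least-squares problem for the amplitudes. A compact numpy sketch:

    ```python
    import numpy as np

    def prony(x, p):
        """Fit x[n] ~ sum_k a_k * z_k**n with p modes (classical Prony)."""
        N = len(x)
        # linear prediction: x[n] = -c1*x[n-1] - ... - cp*x[n-p]
        A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
        c, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
        z = np.roots(np.concatenate(([1.0], c)))      # modes z_k = exp(s_k * T)
        V = np.vander(z, N, increasing=True).T        # Vandermonde of the modes
        a, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
        return z, a

    n = np.arange(100)
    x = 1.5 * 0.97**n * np.cos(0.3 * n) + 0.8 * 0.95**n   # 3 modes in total
    z, a = prony(x, 3)
    print("extracted poles:", np.round(z, 3))
    ```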

  14. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms: Bias in DNA extraction methods; Variation in DNA extraction methods. Definition: The variability in extraction methods is defined as differences in the quality and quantity of DNA observed using various extraction protocols, leading to differences in outcome of microbial community composition as

  15. Template Guided Live Wire and Its Application on Automatic Extraction of Tongue in Digital Image

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-jie; YANG Jie; ZHOU Yue

    2005-01-01

    In this paper, we propose a novel automatic object extraction algorithm, named Template Guided Live Wire, based on the popular live-wire techniques, and discuss in detail its application to tongue extraction in digital images. Guided by a given template curve that approximates the tongue's shape, our method can extract the tongue without any human intervention. The paper also discusses in detail how the template guides the live wire, and why our method functions more effectively than other boundary-based segmentation methods, especially the snake algorithm. Experimental results on tongue images are provided as well, showing our method's better accuracy and robustness compared with the snake algorithm.

  16. Definition extraction for glossary creation : a study on extracting definitions for semi-automatic glossary creation in Dutch

    NARCIS (Netherlands)

    Westerhout, E.N.

    2010-01-01

    The central topic of this thesis is the automatic extraction of definitions from text. Definition extraction can play a role in various applications including the semi-automatic development of glossaries in an eLearning context, which constitutes the main focus of this dissertation. A glossary provi

  17. Automatic building extraction and segmentation directly from lidar point clouds

    Science.gov (United States)

    Jiang, Jingjue; Ming, Ying

    2006-10-01

    This paper presents an automatic approach for building extraction and segmentation directly from Lidar point clouds, without previous rasterization or triangulation. The algorithm works in the following sequential steps. First, a filtering algorithm capable of preserving steep terrain features is applied to the raw Lidar point cloud, separating points that belong to the bare earth from those that belong to buildings. Second, the building points, which may include some vegetation and other objects due to noise and the distribution of points, are segmented further using a Riemannian graph. Building segments are then recognized by considering size and roughness. Finally, each segment can be treated as a building roof plane. Experimental results show that the algorithm is very promising.

  18. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    Directory of Open Access Journals (Sweden)

    Ed Baker

    2013-09-01

    Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions, such as Adobe Bridge, and online tools, such as Flickr, also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to re-enter this information manually if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form, using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, so it is possible to upload hundreds of images easily, with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
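
    Reading embedded EXIF tags, the kind of extraction the module automates, can be sketched in Python with Pillow; the file name is a placeholder and only EXIF (not XMP or IPTC) is shown.

    ```python
    from PIL import Image, ExifTags

    img = Image.open("specimen.jpg")          # placeholder image file
    exif = img.getexif()

    # map numeric tag IDs to readable names and print a few common fields
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in exif.items()}
    for field in ("Make", "Model", "DateTime", "Artist", "Copyright"):
        print(field, "->", named.get(field, "<absent>"))
    ```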

  19. Automatic Metadata Extraction - The High Energy Physics Use Case

    CERN Document Server

    Boyd, Joseph; Rajman, Martin

    Automatic metadata extraction (AME) of scientific papers has been described as one of the hardest problems in document engineering. Heterogeneous content, varying style, and unpredictable placement of article components render the problem inherently indeterministic. Conditional random fields (CRF), a machine learning technique, can be used to classify document metadata amidst this uncertainty, annotating document contents with semantic labels. High energy physics (HEP) papers, such as those written at CERN, have unique content and structural characteristics, with scientific collaborations of thousands of authors altering article layouts dramatically. The distinctive qualities of these papers necessitate the creation of specialised datasets and model features. In this work we build an unprecedented training set of HEP papers and propose and evaluate a set of innovative features for CRF models. We build upon state-of-the-art AME software, GROBID, a tool coordinating a hierarchy of CRF models in a full document ...

  20. Automatization for development of HPLC methods.

    Science.gov (United States)

    Pfeffer, M; Windt, H

    2001-01-01

    Within the frame of in-process analytics for the synthesis of pharmaceutical drugs, many HPLC methods are required for checking the quality of intermediates and drug substances. The methods have to be developed for optimal selectivity, a low limit of detection, minimal running time and chromatographic robustness. The goal was to shorten the method development process. Therefore, the screening of stationary phases was automated by means of switching modules equipped with 12 HPLC columns. Mobile phase and temperature could be optimized by using Drylab to evaluate the chromatograms of automatically performed gradient elutions. The column switching module was applied to more than three dozen substances, e.g. steroidal intermediates. Resolution (especially of isomers), peak shape and the number of peaks turned out to be the criteria for selecting the appropriate stationary phase. On the basis of the "best" column, the composition of the "best" eluent was usually defined rapidly and with little effort. This approach cuts the manpower required by more than one third. Overnight, impurity profiles of the intermediates were obtained, yielding robust HPLC methods with high selectivity and minimized elution time.

  1. Automatic Road Extraction Based on Integration of High Resolution LIDAR and Aerial Imagery

    Science.gov (United States)

    Rahimi, S.; Arefi, H.; Bahmanyar, R.

    2015-12-01

    In recent years, the rapid increase in the demand for road information, together with the availability of large volumes of high resolution Earth Observation (EO) images, has drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. In these methods the focus is usually on improving road network detection, while the precise delineation of roads has received less attention. In this paper, we propose a new unsupervised fully-automatic road extraction method based on the integration of high resolution LiDAR and aerial images of a scene using Principal Component Analysis (PCA). This method discriminates the existing roads in a scene and then precisely delineates them. The Hough transform is applied to the integrated information to extract straight lines, which are further used to segment the scene and discriminate the existing roads. The roads' edges are then precisely localized using a projection-based technique, and the round corners are further refined. Experimental results demonstrate that our proposed method extracts and delineates the roads with high accuracy.

  2. Automatic Rotation Recovery Algorithm for Accurate Digital Image and Video Watermarks Extraction

    Directory of Open Access Journals (Sweden)

    Nasr addin Ahmed Salem Al-maweri

    2016-11-01

    Full Text Available Research in digital watermarking has evolved rapidly in the current decade. This evolution has brought various methods and algorithms for watermarking digital images and videos. The methods introduced in the field vary from weak to robust, according to how well the method keeps the watermark intact in the presence of attacks. Rotation attacks applied to the watermarked media are among the serious attacks which many, if not most, algorithms cannot survive. In this paper, a new automatic rotation recovery algorithm is proposed. This algorithm can be plugged into the extraction component of any image or video watermarking algorithm. Its main job is to detect the geometrical distortion applied to the watermarked image or image sequence, recover the distorted scene to its original state in a blind and automatic way, and then pass it on to the extraction procedure. The work is limited for now to recovering zero-padded rotations; images cropped after rotation are left as future work. The proposed algorithm was tested on top of an extraction component. Both the recovery accuracy and the extracted watermark accuracy showed a high performance level.

  3. Automatic Key-Frame Extraction from Optical Motion Capture Data

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qiang; YU Shao-pei; ZHOU Dong-sheng; WEI Xiao-peng

    2013-01-01

    Optical motion capture is an increasingly popular animation technique. In the last few years, plenty of methods have been proposed for key-frame extraction from motion capture data, and extracting key frames using quaternions is a common approach. One main difficulty is that previous algorithms often need various parameters to be set manually; in particular, it is problematic to predefine an appropriate threshold without knowing the data content. In this paper, we present a novel adaptive-threshold extraction method in which key frames are found according to quaternion distance. We propose a simple and efficient algorithm to extract key frames from a motion sequence based on an adaptive threshold, which is convenient because no parameters must be predefined to meet a certain compression ratio. Experimental results on many motion captures with different traits demonstrate the good performance of the proposed algorithm. Our experiments show that the extraction process can typically be cut down from several minutes to a couple of seconds.
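
    A sketch of key-frame selection by quaternion distance: measure the geodesic distance between each joint's rotations in consecutive frames and keep frames where the summed change exceeds an adaptive threshold; the mean-plus-one-standard-deviation rule and the data are placeholders.

    ```python
    import numpy as np

    def quat_dist(q1, q2):
        # geodesic distance between unit quaternions (sign-invariant)
        d = np.abs(np.sum(q1 * q2, axis=-1)).clip(0.0, 1.0)
        return 2.0 * np.arccos(d)

    frames = np.random.randn(500, 20, 4)            # placeholder: 20 joints, wxyz
    frames /= np.linalg.norm(frames, axis=-1, keepdims=True)

    # per-frame motion change: summed quaternion distance over all joints
    change = quat_dist(frames[1:], frames[:-1]).sum(axis=1)
    threshold = change.mean() + change.std()        # adaptive threshold
    key_frames = np.where(change > threshold)[0] + 1
    print(f"{len(key_frames)} key frames of {len(frames)}")
    ```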

  4. Fast title extraction method for business documents

    Science.gov (United States)

    Katsuyama, Yutaka; Naoi, Satoshi

    1997-04-01

    Conventional electronic document filing systems are inconvenient because the user must specify keywords for each document for later searches. To solve this problem, automatic keyword extraction methods using natural language processing and character recognition have been developed. However, these methods are slow, especially for Japanese documents. To develop a practical electronic document filing system, we focused on extracting keyword areas from a document by image processing. Our fast title extraction method can automatically extract titles as keywords from business documents. All character strings are scored for title similarity using rating points for four items: character string size, position of the character string, relative position among character strings, and string attributes. Finally, the character string with the highest rating is selected as the title area, and character recognition is carried out on the selected area only. This is fast because only a small number of patterns in the restricted area must be recognized, rather than the entire document. In an examination of 100 Japanese business documents, the method achieved an accuracy of about 91 percent with a processing time of 1.8 s.

  5. Hierarchical Feature Extraction and Selection Method and Its Applications in Automatic Target Recognition Systems

    Institute of Scientific and Technical Information of China (English)

    梅雪; 张继法; 许松松; 巩建鸣

    2012-01-01

    The discrimination of similarly shaped targets is frequently encountered in automatic target recognition systems for remote sensing images and weapon guidance. To improve recognition speed and recognition rate, a hierarchical shape-based recognition method is proposed. Borrowing from human visual perception, multi-scale features are extracted: global features are used at a large scale for quick coarse classification, and local features are used at a small scale to distinguish similarly shaped targets. Fuzzy rules are then applied to select among the extracted features, reducing the feature dimensionality and speeding up the target matching process. Experimental results show that the method recognizes similarly shaped targets quickly and effectively, and that feature selection improves the average recognition rate by 6.9% compared with no selection.

  6. Lung Lesion Extraction Using a Toboggan Based Growing Automatic Segmentation Approach.

    Science.gov (United States)

    Song, Jiangdian; Yang, Caiyun; Fan, Li; Wang, Kun; Yang, Feng; Liu, Shiyuan; Tian, Jie

    2016-01-01

    The accurate segmentation of lung lesions from computed tomography (CT) scans is important for lung cancer research and can offer valuable information for clinical diagnosis and treatment. However, achieving fully automatic lesion detection and segmentation with acceptable accuracy is challenging due to the heterogeneity of lung lesions. Here, we propose a novel toboggan-based growing automatic segmentation approach (TBGA) with a three-step framework: automatic initial seed point selection, multi-constraint 3D lesion extraction, and final lesion refinement. The new approach does not require any human interaction or training dataset for lesion detection, yet it provides high lesion detection sensitivity (96.35%) and segmentation accuracy comparable to manual segmentation (P > 0.05), as demonstrated by a series of assessments using the LIDC-IDRI dataset (850 lesions) and an in-house clinical dataset (121 lesions). We also compared TBGA with the commonly used level set and skeleton graph cut methods, and the results indicated a significant improvement in segmentation accuracy. Furthermore, the average time consumption for segmenting one lesion was under 8 s using our new method. In conclusion, we believe that the novel TBGA achieves robust, efficient and accurate lung lesion segmentation in CT images automatically.

  7. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik;

    Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated, is in theory independent of the tracer used, and may therefore be added to any dynamic cardiac PET protocol. The aim of this study was to validate automated methods for extracting FSV directly from dynamic PET studies for two different tracers and to examine potential scanner hardware bias. Methods: 21 subjects underwent a dynamic 27 min 11C-acetate PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner (scanner I). In addition, 8 subjects underwent a dynamic 6 min 15O-water PET scan followed by a 27 min 11C-acetate PET scan on a GE Discovery ST PET/CT scanner (scanner II). The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was isolated by automatic…

  8. MINUTIAE EXTRACTION BASED ON ARTIFICIAL NEURAL NETWORKS FOR AUTOMATIC FINGERPRINT RECOGNITION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Necla ÖZKAYA

    2007-01-01

    Full Text Available Automatic fingerprint recognition systems are utilised for personal identification using comparisons of local ridge characteristics and their relationships. A critical stage in personal identification is to extract features automatically, quickly and reliably from the input fingerprint images. In this study, a new approach based on artificial neural networks for extracting minutiae from fingerprint images is developed and introduced. The results show that the artificial neural networks achieve minutiae extraction from fingerprint images with high accuracy.
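
    For orientation, the classical (non-neural) rule that such extractors learn to approximate is the crossing number on a thinned ridge image: half the sum of 0/1 transitions around a pixel's 8-neighbourhood, with CN = 1 marking a ridge ending and CN = 3 a bifurcation. A minimal sketch, assuming a binary skeleton as input:

    ```python
    import numpy as np

    def minutiae(skel):
        """skel: binary ridge skeleton (1 = ridge). Returns endings, bifurcations."""
        ends, bifs = [], []
        # 8-neighbourhood traversed in circular order
        ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        for y in range(1, skel.shape[0] - 1):
            for x in range(1, skel.shape[1] - 1):
                if not skel[y, x]:
                    continue
                vals = [skel[y + dy, x + dx] for dy, dx in ring]
                cn = sum(abs(vals[i] - vals[(i + 1) % 8]) for i in range(8)) // 2
                if cn == 1:
                    ends.append((x, y))
                elif cn == 3:
                    bifs.append((x, y))
        return ends, bifs

    skel = (np.random.rand(64, 64) > 0.8).astype(int)   # placeholder skeleton
    ends, bifs = minutiae(skel)
    print(len(ends), "endings,", len(bifs), "bifurcations")
    ```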

  9. Feature Extraction and Automatic Material Classification of Underground Objects from Ground Penetrating Radar Data

    Directory of Open Access Journals (Sweden)

    Qingqing Lu

    2014-01-01

    Full Text Available Ground penetrating radar (GPR) is a powerful tool for detecting objects buried underground. However, interpreting the acquired signals remains a challenging task, since an experienced user is required to manage the entire operation. Particularly difficult is classifying the material type of underground objects in noisy environments. This paper proposes a new feature extraction method. First, the discrete wavelet transform (DWT) transforms the A-scan data and the approximation coefficients are extracted. Then, the fractional Fourier transform (FRFT) transforms the approximation coefficients into the fractional domain, where features are extracted. The features are supplied to support vector machine (SVM) classifiers to identify the material of underground objects automatically. Experimental results show that in noisy environments the proposed feature-based SVM system achieves better classification accuracy than SVM systems based on statistical or frequency-domain features, and that its classification accuracy depends little on the choice of SVM model.
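
    A condensed sketch of the pipeline with PyWavelets and scikit-learn; the FRFT stage is omitted (no standard library implementation exists), so the DWT approximation coefficients feed the SVM directly, and the A-scans and labels are synthetic placeholders.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def dwt_features(a_scan):
        # approximation coefficients of a 3-level DWT as the feature vector
        coeffs = pywt.wavedec(a_scan, "db4", level=3)
        return coeffs[0]

    X = np.array([dwt_features(s) for s in np.random.randn(200, 512)])  # placeholder A-scans
    y = np.random.randint(0, 3, 200)                                    # placeholder materials

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))
    ```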

  10. An Extended Keyword Extraction Method

    Science.gov (United States)

    Hong, Bao; Zhen, Deng

    Among the numerous Chinese keyword extraction methods, the characteristics of Chinese have received little consideration, which works against improving the precision of Chinese keyword extraction. An extended term-frequency-based method (Extended TF) is proposed in this paper which combines Chinese linguistic characteristics with the basic TF method. Unary, binary and ternary grammars for candidate keyword extraction, as well as other linguistic features, are all taken into account. The method establishes a classification model using a support vector machine. Tests show that the proposed extraction method improves keyword precision and recall significantly. We applied the keywords extracted by the Extended TF method to text file classification; the results show that they contribute greatly to raising the precision of text file classification.
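
    Generating unary, binary and ternary candidates and ranking them by term frequency, the backbone the Extended TF method builds on, can be sketched with scikit-learn; the mini-corpus is a placeholder, and the linguistic features and SVM stage are left out.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["automatic keyword extraction improves retrieval",
            "keyword extraction with term frequency methods"]   # placeholder corpus

    # candidate keywords: all 1-, 2- and 3-grams with their corpus frequencies
    vec = CountVectorizer(ngram_range=(1, 3))
    tf = vec.fit_transform(docs).sum(axis=0).A1
    candidates = sorted(zip(vec.get_feature_names_out(), tf),
                        key=lambda p: -p[1])
    print(candidates[:8])
    ```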

  11. Research on Automatic Summarization Methods

    Institute of Scientific and Technical Information of China (English)

    卫佳君; 宋继华

    2011-01-01

    This paper summarizes the main research methods and strategies for automatic summarization and divides them into three major categories: extraction-based summarization, summarization based on information extraction, and summarization based on understanding. Extraction-based methods select important sentences from an article to form a digest; methods based on information extraction fill a prepared frame with information extracted from the article and output the content through templates; summarization based on understanding uses natural language processing techniques to generate the abstract. The paper focuses on extraction-based summarization of single-topic and multi-topic articles and, after comparing the advantages and disadvantages of a variety of algorithms, proposes a new multi-topic segmentation method.

  12. Method of marbling image feature extraction and automatic classification for beef

    Institute of Scientific and Technical Information of China (English)

    周彤; 彭彦昆

    2013-01-01

    Light intensity was regulated through a light controller, and the distance between the camera lens and the beef samples was adjusted through translation stages in the image acquisition device. Collected images were automatically stored in the computer for further processing. First, methods such as image denoising, background removal, and image enhancement were adopted to preprocess the image and obtain a region of interest (ROI); in this step, the image was cropped to separate the beef from the background. Then, an iteration method was used to segment the beef area and obtain the marbling and fat areas. The redundant fat area was removed to extract an effective rib-eye region. Ten characteristic parameters of beef marbling (the rate of marbling area in the rib-eye region; the numbers of large, medium, small, and total fat grains; the densities of large, medium, small, and total fat grains; and the evenness of fat distribution in the rib-eye region) reflect the amount of marbling and its distribution, so they were used to establish a principal component regression (PCR) model. The PCR model yielded a correlation coefficient (Rv) of 0.88 and a standard error of prediction (SEP) of 0.56, and showed that the rate of marbling area in the rib-eye region had the greatest effect on the grade of beef marbling. Fisher discriminant functions were constructed based on the PCR model results to classify the grade of beef marbling. Experimental results showed classification accuracies of 97.0% in the calibration set and 91.2% in the prediction set. On this basis, a software system was developed for the automatic grading of beef marbling, together with a corresponding hardware device controlled by the software for real-time application. The speed and accuracy of the algorithm were verified with theoretical analysis and a practical test. Through tests, the average
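
    The grading model described here, principal component regression over the ten marbling features, can be sketched with scikit-learn as below; the feature matrix and grade values are hypothetical placeholders, not the paper's measured data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # X: ten marbling features per sample (area rate, grain counts/densities,
    # evenness of distribution); y: expert marbling grade. Placeholder data here.
    rng = np.random.default_rng(0)
    X = rng.random((60, 10))
    y = X[:, 0] * 3 + rng.normal(scale=0.3, size=60)  # area rate dominates

    pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)
    print("R^2 on training data:", pcr.score(X, y))
    ```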

  13. Historical Patterns Based on Automatically Extracted Data: the Case of Classical Composers

    DEFF Research Database (Denmark)

    Borowiecki, Karol; O'Hagan, John

    2012-01-01

    application that automatically extracts and processes information was developed to generate data on the birth location, occupations and importance (using word count methods) of over 12,000 composers over six centuries. Quantitative measures of the relative importance of different types of music...... and of the different music instruments over the centuries were also generated. Finally quantitative indicators of the importance of different cities over the different centuries in the lives of these composers are constructed. A range of interesting findings emerge in relation to all of these aspects of the lives...

  14. Quality assessment of automatically extracted data from GPs' EPR.

    Science.gov (United States)

    de Clercq, Etienne; Moreels, Sarah; Van Casteren, Viviane; Bossuyt, Nathalie; Goderis, Geert; Bartholomeeusen, Stefaan

    2012-01-01

    There are many secondary benefits to collecting routine primary care data, but we first need to understand some of the properties of these data. In this paper we describe the method used to assess the positive predictive value (PPV) and sensitivity of data extracted from Belgian GPs' electronic patient records (EPR), covering diagnoses, drug prescriptions, referrals, and certain parameters, using data collected through an electronic questionnaire as a gold standard. We describe the results of the ResoPrim phase 2 project, which involved 4 software systems and 43 practices (10,307 patients). This method of assessment could also be applied to other research networks.

  15. A General Method for Module Automatic Testing in Avionics Systems

    Directory of Open Access Journals (Sweden)

    Li Ma

    2013-05-01

    The traditional Automatic Test Equipment (ATE) systems are insufficient to cope with the challenges of testing more and more complex avionics systems. In this study, we propose a general method for automatic module testing in an avionics test platform based on the PXI bus. We apply virtual instrument technology to realize automatic testing and fault reporting of signal performance. Taking the avionics bus ARINC429 as an example, we introduce the architecture of the automatic test system as well as the implementation of the algorithms in LabVIEW. Comprehensive experiments show that the proposed method can effectively accomplish automatic testing and fault reporting of signal performance, and it greatly improves the generality and reliability of ATE in avionics systems.

  16. Automatic Extraction of Destinations, Origins and Route Parts from Human Generated Route Directions

    Science.gov (United States)

    Zhang, Xiao; Mitra, Prasenjit; Klippel, Alexander; Maceachren, Alan

    Researchers from the cognitive and spatial sciences are studying text descriptions of movement patterns in order to examine how humans communicate and understand spatial information. In particular, route directions offer a rich source of information on how cognitive systems conceptualize movement patterns by segmenting them into meaningful parts. Route directions are composed using a plethora of cognitive spatial organization principles: changing levels of granularity, hierarchical organization, incorporation of cognitively and perceptually salient elements, and so forth. Identifying such information in text documents automatically is crucial for enabling machine understanding of human spatial language. The benefits are: a) creating opportunities for large-scale studies of human linguistic behavior; b) extracting and georeferencing salient entities (landmarks) that are used by human route direction providers; c) developing methods to translate route directions into sketches and maps; and d) enabling queries on large corpora of crawled/analyzed movement data. In this paper, we introduce our approach and implementations that bring us closer to the goal of automatically processing linguistic route directions. We report on research directed at one part of the larger problem: extracting the three most critical parts of route directions and movement patterns in general, namely the origin, the destination, and the route parts. We use machine-learning-based algorithms to extract these parts of routes, including, for example, destination names and types. We demonstrate the effectiveness of our approach in several experiments using hand-tagged corpora.

  17. A method of automatic control procedures cardiopulmonary resuscitation

    Science.gov (United States)

    Bureev, A. Sh.; Zhdanov, D. S.; Kiseleva, E. Yu.; Kutsov, M. S.; Trifonov, A. Yu.

    2015-11-01

    This study presents the results of work on creating methods for the automatic control of cardiopulmonary resuscitation (CPR) procedures. A method is presented that controls the CPR procedure by evaluating acoustic data on the dynamics of blood flow at the bifurcation of the carotid arteries and the dynamics of air flow in the trachea, in accordance with current CPR guidelines. The patient is evaluated by analyzing respiratory noise and blood flow in the intervals between chest compressions and artificial pulmonary ventilation. The operating algorithm of a device for automatic control of CPR procedures and its block diagram have been developed.

  18. Automatic cell object extraction of red tide algae in microscopic images

    Science.gov (United States)

    Yu, Kun; Ji, Guangrong; Zheng, Haiyong

    2017-03-01

    Extracting the cell objects of red tide algae is the most important step in the construction of an automatic microscopic image recognition system for harmful algal blooms. This paper describes a set of composite methods for the automatic segmentation of cells of red tide algae from microscopic images. Depending on the existence of setae, we classify the common marine red tide algae into non-setae algae species and Chaetoceros, and design segmentation strategies for these two categories according to their morphological characteristics. In view of the varied forms and fuzzy edges of non-setae algae, we propose a new multi-scale detection algorithm for algal cell regions based on border-correlation, and further combine this with morphological operations and an improved GrabCut algorithm to segment single-cell and multi-cell objects. In this process, similarity detection is introduced to eliminate pseudo cellular regions. For Chaetoceros, owing to the weak grayscale information of their setae and the low contrast between the setae and background, we propose a cell extraction method based on a gray surface orientation angle model. This method constructs a gray surface vector model and executes gray mapping of the orientation angles. The obtained gray values are then reconstructed and linearly stretched. Finally, appropriate morphological processing is conducted to preserve the orientation information and tiny features of the setae. Experimental results demonstrate that the proposed methods can effectively remove noise and accurately extract both categories of algae cell objects possessing a complete shape, regular contour, and clear edge. Compared with other advanced segmentation techniques, our methods are more robust when considering images with different appearances and achieve more satisfactory segmentation effects.
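
    The GrabCut refinement stage of such a pipeline can be sketched with OpenCV as below; the image path and the rectangle around a candidate cell region are hypothetical, and the paper's multi-scale detection and similarity filtering are not reproduced.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("algae.png")              # hypothetical microscopic image
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)

    # Rectangle around a detected candidate cell region (hypothetical coords)
    rect = (50, 50, 200, 200)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

    # Pixels marked as definite or probable foreground form the cell object
    cell = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    cv2.imwrite("cell_mask.png", cell.astype(np.uint8))
    ```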

  19. Template-based automatic extraction of the joint space of foot bones from CT scan

    Science.gov (United States)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying joint anatomy for measuring the spacing between bones. However, separation of coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is common practice, and this segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model within a region of interest (ROI) identified by a template of 3D bone structures. The template includes an encoded articular surface, which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region containing two types of tissue, the object extraction problem is reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, hard constraints are set by initial seeds that are automatically generated from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular, and cuboid).
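
    A minimal graph-cut binary segmentation in this spirit can be sketched with the PyMaxflow library, assuming a 2D CT slice; the seed thresholds, smoothness weight, and data terms below are illustrative assumptions, and the template-driven ROI and seed generation are omitted.

    ```python
    import numpy as np
    import maxflow

    slice2d = np.random.rand(64, 64)  # hypothetical CT slice, intensities in [0, 1]

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(slice2d.shape)
    g.add_grid_edges(nodes, 1.0)  # smoothness term between 4-neighbors

    # Data terms with hard seeds: very bright pixels -> bone, very dark -> joint space
    source_cap = np.where(slice2d > 0.7, 1e9, -np.log(1.0 - slice2d + 1e-6))
    sink_cap = np.where(slice2d < 0.3, 1e9, -np.log(slice2d + 1e-6))
    g.add_grid_tedges(nodes, source_cap, sink_cap)

    g.maxflow()
    bone_mask = g.get_grid_segments(nodes)  # boolean mask separating the two classes
    print("bone pixels:", bone_mask.sum())
    ```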

  20. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
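
    The reweighting idea, a linear combination of a triple's frequency with statistical association scores, can be sketched as follows; the toy triples and the single PMI-style metric stand in for the paper's corpus counts and four metrics.

    ```python
    import math
    from collections import Counter

    # Hypothetical candidate triples extracted from parsed text
    triples = [("car", "require", "petrol"), ("car", "be", "fast"),
               ("car", "require", "petrol"), ("car", "cause", "pollution")]

    tri_counts = Counter(triples)
    concept_counts = Counter(t[0] for t in triples)
    feature_counts = Counter(t[2] for t in triples)
    total = len(triples)

    def score(triple, alpha=0.5):
        freq = tri_counts[triple] / total
        # PMI between concept and feature as one example association metric
        pmi = math.log((tri_counts[triple] / total) /
                       ((concept_counts[triple[0]] / total) *
                        (feature_counts[triple[2]] / total)))
        return alpha * freq + (1 - alpha) * pmi

    for t in sorted(tri_counts, key=score, reverse=True):
        print(t, round(score(t), 3))
    ```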

  1. Evaluation of DNA and RNA extraction methods.

    Science.gov (United States)

    Edwin Shiaw, C S; Shiran, M S; Cheah, Y K; Tan, G C; Sabariah, A R

    2010-06-01

    This study was done to evaluate various DNA and RNA extraction methods for archival FFPE tissues. A total of 30 FFPE blocks from the years 2004 to 2006 were assessed with each modified and adapted method. The DNA extraction protocols evaluated were a modified enzymatic method (Method A), a Chelex-100 method (Method B), heat-induced retrieval in alkaline solution (Methods C and D), and one commercial FFPE DNA extraction kit (Qiagen, Crawley, UK). For RNA extraction, 2 protocols were evaluated: an enzymatic method (Method 1) and a Chelex-100 method (Method 2). Results show that the modified enzymatic method (Method A) is an efficient DNA extraction protocol, while the enzymatic method (Method 1) and the Chelex-100 method (Method 2) are equally efficient RNA extraction protocols.

  2. An Automatic Eye Detection Method for Gray Intensity Facial Images

    Directory of Open Access Journals (Sweden)

    M Hassaballah

    2011-07-01

    Eyes are the most salient and stable features in the human face, and hence automatic extraction or detection of eyes is often considered the most important step in many applications, such as face identification and recognition. This paper presents a method for eye detection in still grayscale images. The method is based on two facts: eye regions exhibit unpredictable local intensity, so entropy in eye regions is high; and the center of the eye (iris) is a dark circle (low intensity) compared to the neighboring regions. A score based on the entropy of the eye region and the darkness of the iris is used to detect the eye center coordinates. Experimental results on two databases, FERET with variations in views and BioID with variations in gaze direction and uncontrolled conditions, show that the proposed method is robust against gaze direction, view variation, and a variety of illuminations. It achieves correct detection rates of 97.8% and 94.3% on sets containing 2500 images from the FERET and BioID databases, respectively. Moreover, in cases with glasses and severe conditions, the performance is still acceptable.
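
    A score map in this spirit, combining local entropy with local darkness so that textured, dark neighborhoods score high, can be sketched with scikit-image; the window radii, the equal-weight combination, and the image path are assumptions rather than the paper's exact formulation.

    ```python
    import numpy as np
    from skimage import io, img_as_ubyte
    from skimage.filters.rank import entropy, mean
    from skimage.morphology import disk

    face = img_as_ubyte(io.imread("face.png", as_gray=True))  # hypothetical image

    ent = entropy(face, disk(9))          # high in unpredictable (eye) regions
    darkness = 255 - mean(face, disk(4))  # high where the iris is dark

    score = ent / ent.max() + darkness / 255.0
    y, x = np.unravel_index(np.argmax(score), score.shape)
    print("candidate eye center:", (x, y))
    ```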

  3. Automatic Classification of Marine Mammals with Speaker Classification Methods.

    Science.gov (United States)

    Kreimeyer, Roman; Ludwig, Stefan

    2016-01-01

    We present an automatic acoustic classifier for marine mammals based on human speaker classification methods, as an element of a passive acoustic monitoring (PAM) tool. This work is part of the Protection of Marine Mammals (PoMM) project under the framework of the European Defense Agency (EDA), joined by the Research Department for Underwater Acoustics and Geophysics (FWG), Bundeswehr Technical Centre (WTD 71), and Kiel University. The automatic classification should support sonar operators with reliable classification results in the risk mitigation process before and during sonar exercises.

  4. Multilevel spatial semantic model for urban house information extraction automatically from QuickBird imagery

    Science.gov (United States)

    Guan, Li; Wang, Ping; Liu, Xiangnan

    2006-10-01

    After introducing the characteristics and construction flow of spatial semantic models, the feature space and context of house information in high-resolution remote sensing imagery are analyzed, and a house semantic network model for QuickBird imagery is constructed. The accuracy and practicability of the spatial semantic model are then verified by extracting house information automatically from QuickBird imagery, after candidate semantic nodes have been extracted from the image using a grey-level division method, a window threshold method, and the Hough transform. Sample results indicate type coherence, shape coherence, and area coherence of 96.75%, 89.5%, and 88%, respectively. The extraction of houses with rectangular roofs performs best, and that of houses with herringbone and polygonal roofs is acceptable; however, the extraction of houses with round roofs is not satisfactory, and the semantic model needs further refinement for these cases to reach higher applied value.

  5. Automatic Extraction and Regularization of Building Outlines from Airborne LIDAR Point Clouds

    Science.gov (United States)

    Albers, Bastian; Kada, Martin; Wichmann, Andreas

    2016-06-01

    Building outlines are needed for various applications like urban planning, 3D city modelling, and cadastre updating. Their automatic reconstruction, e.g. from airborne laser scanning data, as regularized shapes is therefore of high relevance. Today's airborne laser scanning technology can produce dense 3D point clouds with high accuracy, which makes it a suitable data source for reconstructing 2D building outlines or even 3D building models. In this paper, we propose an automatic building outline extraction and regularization method that implements a trade-off between enforcing strict shape restrictions and allowing flexible angles, using an energy minimization approach. The proposed procedure can be summarized for each building as follows: (1) an initial building outline is created from a given set of building points with the alpha shape algorithm; (2) a Hough transform is used to determine the main directions of the building and to extract line segments which are oriented accordingly; (3) the alpha shape boundary points are then repositioned to follow these segments but also to respect their original location, favoring long line segments and certain angles. The energy function that guides this trade-off is evaluated with the Viterbi algorithm.

  6. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    Science.gov (United States)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.
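
    A toy version of such a learned-feature hotspot classifier, a small convolutional network over rasterized layout clips, can be sketched in PyTorch; the clip size, architecture, and random tensors are all illustrative assumptions, not the paper's network.

    ```python
    import torch
    import torch.nn as nn

    class HotspotNet(nn.Module):
        """Tiny CNN that learns layout features and scores hotspot vs. non-hotspot."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 16 * 16, 2)

        def forward(self, x):
            x = self.features(x)         # automatically extracted layout features
            return self.classifier(x.flatten(1))

    clips = torch.randn(4, 1, 64, 64)    # hypothetical rasterized layout clips
    logits = HotspotNet()(clips)
    print(logits.shape)                  # (4, 2): hotspot vs. non-hotspot scores
    ```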

  7. Statistical Analysis of Automatic Seed Word Acquisition to Improve Harmful Expression Extraction in Cyberbullying Detection

    Directory of Open Access Journals (Sweden)

    Suzuha Hatakeyama

    2016-04-01

    We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for the automatic acquisition of seed words to improve the performance of the original cyberbullying detection method by Nitta et al. [1]. We conducted an experiment in exactly the same settings and found that the method, based on a Web mining technique, has lost over 30 percentage points of its performance since being proposed in 2013. We therefore hypothesize on the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We found that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words were collected and filtered.

  8. Image Processing Method for Automatic Discrimination of Hoverfly Species

    Directory of Open Access Journals (Sweden)

    Vladimir Crnojević

    2014-01-01

    An approach to automatic hoverfly species discrimination based on the detection and extraction of vein junctions in the wing venation patterns of insects is presented in this paper. The dataset used in our experiments consists of high-resolution microscopic wing images of several hoverfly species collected over a relatively long period of time at different geographic locations. Junctions are detected using a combination of the well-known HOG (histograms of oriented gradients) and a robust version of the recently proposed CLBP (complete local binary pattern). These features are used to train an SVM classifier to detect junctions in wing images. Once the junctions are identified, they are used to extract statistics characterizing the constellations of these points. Such simple features can be used to automatically discriminate four selected hoverfly species with a polynomial-kernel SVM and achieve high classification accuracy.
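
    The junction-detection features can be approximated with scikit-image's HOG descriptor feeding a polynomial-kernel SVM from scikit-learn, as sketched below; the patches and labels are hypothetical placeholders, and the CLBP component is omitted.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical 32x32 grayscale patches: 1 = vein junction, 0 = background
    patches = rng.random((40, 32, 32))
    labels = rng.integers(0, 2, size=40)

    X = np.array([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                  for p in patches])
    clf = SVC(kernel="poly", degree=3).fit(X, labels)
    print("junction?", clf.predict(X[:3]))
    ```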

  9. Automatic Pole and Q-Value Extraction for RF Structures

    Energy Technology Data Exchange (ETDEWEB)

    C. Potratz, H.-W. Glock, U. van Rienen, F. Marhauser

    2011-09-01

    The experimental characterization of RF structures such as accelerating cavities often demands measuring the resonant frequencies of eigenmodes and the corresponding (loaded) Q-values over a wide spectral range. A common procedure to determine the Q-values is the -3 dB method, which works well for isolated poles but may not be directly applicable in the case of multiple poles residing in close proximity (e.g., adjacent transverse modes differing in polarization). Although alternative methods may be used in such cases, this often comes at the expense of inherent systematic errors. We have developed an automation algorithm which not only speeds up the measurement significantly but is also able to extract eigenfrequencies and Q-values both for well-isolated and for overlapping poles; as a major benefit, the measurement accuracy may be improved at the same time. To utilize this procedure, merely complex scattering parameters have to be recorded for the spectral range of interest. In this paper we present the proposed algorithm applied to experimental data recorded for superconducting higher-order-mode damped multi-cell cavities as an application of high importance.
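
    A standard way to get a pole's frequency and loaded Q from recorded scattering parameters, shown here as a generic illustration rather than the authors' algorithm, is to fit a Lorentzian resonance curve to the measured power response with SciPy; the synthetic trace and initial guesses are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(f, f0, q, a):
        # Power response of a single resonance with loaded quality factor q
        return a / (1.0 + (2.0 * q * (f - f0) / f0) ** 2)

    # Synthetic |S21|^2 trace around a 1.3 GHz mode (illustrative only)
    f = np.linspace(1.2999e9, 1.3001e9, 801)
    s21_sq = lorentzian(f, 1.3e9, 2e4, 1.0) + 0.01 * np.random.rand(f.size)

    popt, _ = curve_fit(lorentzian, f, s21_sq, p0=(1.3e9, 1e4, 1.0))
    print("f0 = %.6g Hz, loaded Q = %.3g" % (popt[0], popt[1]))
    ```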

  10. Multiple damage identification and imaging in an aluminum plate using effective Lamb wave response automatic extraction technology

    Science.gov (United States)

    Ouyang, Qinghua; Zhou, Li; Liu, Xiaotong

    2016-04-01

    In order to identify multiple damage in a structure, a method of multiple damage identification and imaging based on an effective Lamb wave response automatic extraction algorithm is proposed. In this method, the key inspected area of the structure is divided into a number of subregions, and the effective response signals containing the structural damage information are automatically extracted from the entire Lamb wave responses received by the piezoelectric sensors. The damage index value of every subregion, based on the correlation coefficient, is then calculated from the effective response signals. Finally, damage identification and imaging are performed using the reconstruction algorithm for probabilistic inspection of damage (RAPID) technique. Experimental research was conducted on an aluminum plate; the results show that the proposed method can quickly and effectively identify single or multiple damage and image the damage clearly in the inspected area.
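
    A correlation-coefficient damage index of the kind used per subregion compares a baseline signal with the current response, e.g. DI = 1 - rho; below is a small numpy sketch with synthetic signals standing in for the piezoelectric sensor data.

    ```python
    import numpy as np

    def damage_index(baseline, current):
        # DI = 1 - correlation coefficient; 0 for identical signals,
        # approaching 1 as damage distorts the Lamb wave response
        rho = np.corrcoef(baseline, current)[0, 1]
        return 1.0 - rho

    t = np.linspace(0.0, 1e-4, 2000)
    baseline = np.sin(2 * np.pi * 2e5 * t) * np.exp(-t / 3e-5)
    current = baseline + 0.1 * np.random.randn(t.size)  # synthetic damage scatter

    print("DI =", round(damage_index(baseline, current), 4))
    ```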

  11. Feature-point-extracting-based automatically mosaic for composite microscopic images

    Institute of Scientific and Technical Information of China (English)

    YIN YanSheng; ZHAO XiuYang; TIAN XiaoFeng; LI Jia

    2007-01-01

    Image mosaicking is a crucial step in the three-dimensional reconstruction of composite materials, aligning the serial images. A novel method is adopted to mosaic two SiC/Al microscopic images with a magnification of 1000x. The two images are denoised with a Gaussian model, and feature points are extracted using the Harris corner detector and then filtered through the Canny edge detector. A 40x40 feature template is chosen by sowing a seed in the overlapped area of the reference image, and the homologous region in the floating image is acquired automatically by correlation analysis. The feature points in the matched templates are used as feature point sets. Using the transformation parameters acquired by the SVD-ICP method, the two images are transformed into universal coordinates and merged into the final mosaic image.
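
    The Harris-plus-correlation matching step can be sketched with OpenCV: detect corners in the reference image, cut a 40x40 template around one, and locate its homologous region in the floating image by normalized cross-correlation. The file names are hypothetical, and the Canny filtering and SVD-ICP stages are omitted.

    ```python
    import cv2
    import numpy as np

    ref = cv2.imread("ref_slice.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
    flt = cv2.imread("float_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

    # Harris response; take the strongest corner (assumed away from the border)
    harris = cv2.cornerHarris(np.float32(ref), blockSize=2, ksize=3, k=0.04)
    y, x = np.unravel_index(np.argmax(harris), harris.shape)

    # 40x40 template seeded at that corner
    tpl = ref[y - 20:y + 20, x - 20:x + 20]

    # Correlation analysis over the floating image
    res = cv2.matchTemplate(flt, tpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    print("homologous region at", loc, "correlation", round(score, 3))
    ```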

  12. Automatic extraction of gene ontology annotation and its correlation with clusters in protein networks

    Directory of Open Access Journals (Sweden)

    Mazo Ilya

    2007-07-01

    Background: Uncovering the cellular roles of a protein is a task of tremendous importance and complexity that requires dedicated experimental work as well as often sophisticated data mining and processing tools. Protein functions, often referred to as its annotations, are believed to manifest themselves through the topology of the networks of inter-protein interactions. In particular, there is a growing body of evidence that proteins performing the same function are more likely to interact with each other than with proteins with other functions. However, since functional annotation and protein network topology are often studied separately, the direct relationship between them has not been comprehensively demonstrated. In addition to having general biological significance, such a demonstration would further validate the data extraction and processing methods used to compose protein annotation and protein-protein interaction datasets. Results: We developed a method for automatic extraction of protein functional annotation from scientific text based on Natural Language Processing (NLP) technology. For the protein annotation extracted from the entire PubMed, we evaluated the precision and recall rates, and compared the performance of the automatic extraction technology to that of the manual curation used in public Gene Ontology (GO) annotation. In the second part of our presentation, we report a large-scale investigation into the correspondence between communities in the literature-based protein networks and GO annotation groups of functionally related proteins. We found a comprehensive two-way match: proteins within biological annotation groups form significantly denser linked network clusters than expected by chance and, conversely, densely linked network communities exhibit a pronounced non-random overlap with GO groups. We also expanded the publicly available GO biological process annotation using the relations extracted by our NLP technology

  13. Sensitive, automatic method for the determination of diazepam and its five metabolites in human oral fluid by online solid-phase extraction and liquid chromatography with tandem mass spectrometry

    DEFF Research Database (Denmark)

    Jiang, Fengli; Rao, Yulan; Wang, Rong;

    2016-01-01

    A novel and simple online solid-phase extraction liquid chromatography-tandem mass spectrometry method was developed and validated for the simultaneous determination of diazepam and its five metabolites, including nordazepam, oxazepam, temazepam, oxazepam glucuronide, and temazepam glucuronide...... in human oral fluid. Human oral fluid was obtained using the Salivette® collection device, and 100 μL of oral fluid samples were loaded onto a HySphere Resin GP cartridge for extraction. Analytes were separated on a Waters Xterra C18 column and quantified by liquid chromatography with tandem mass

  14. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    Science.gov (United States)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data, while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach to finding the principal direction of the points' distribution. This is done by forming and analysing a distribution matrix whose elements are the ranges of the points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster, since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as the diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Checkpoint analysis was performed by manually cropping all points on insulators; its results show that the accuracy, precision, and recall of insulator recognition are 98%, 86%, and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our developed approach to determining the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations compared to PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects, as it was successfully applied for
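
    The range-based principal direction idea can be sketched in numpy: project the points onto a fixed set of directions and take the direction with the largest projected range (a zero-order statistic), instead of computing covariance eigenvectors as PCA does. The nine sample directions below are an assumption for illustration.

    ```python
    import numpy as np

    def principal_direction(points):
        """Approximate the principal direction of a 3D point cluster as the
        fixed direction along which the projected range of points is largest."""
        dirs = []
        for theta in (0.0, np.pi / 3, 2 * np.pi / 3):       # azimuth samples
            for phi in (np.pi / 6, np.pi / 3, np.pi / 2):   # elevation samples
                dirs.append((np.cos(theta) * np.sin(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(phi)))
        dirs = np.array(dirs)                               # 9 unit directions
        proj = points @ dirs.T                              # (n_points, 9)
        ranges = proj.max(axis=0) - proj.min(axis=0)        # zero-order statistics
        return dirs[np.argmax(ranges)]

    pts = np.random.randn(500, 3) * np.array([0.05, 0.05, 1.0])  # pole-like cluster
    print("principal direction ~", principal_direction(pts).round(2))
    ```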

  15. Automatic Extraction of Tunnel Lining Cross-Sections from Terrestrial Laser Scanning Point Clouds.

    Science.gov (United States)

    Cheng, Yun-Jian; Qiu, Wenge; Lei, Jin

    2016-10-06

    Tunnel lining (bare-lining) cross-sections play an important role in analyzing deformations of tunnel linings. The goal of this paper is to develop an automatic method for extracting bare-lining cross-sections from terrestrial laser scanning (TLS) point clouds. First, the combination of a 2D projection strategy and an angle criterion is used for tunnel boundary point detection, from which we estimate the two boundary lines in the X-Y plane. The initial direction of the cross-sectional plane is defined to be orthogonal to one of the two boundary lines. In order to compute the final cross-sectional plane, the direction is adjusted twice, with the total least squares method and Rodrigues' rotation formula, respectively. Nearby points are projected onto the adjusted plane to generate tunnel cross-sections. Finally, we present a filtering algorithm (similar in idea to morphological erosion) to remove the non-lining points in the cross-section. The proposed method was implemented on railway tunnel data collected in Sichuan, China. Compared with an existing method of cross-section extraction, the proposed method offers high accuracy and more reliable cross-sectional modeling. We also evaluated the Type I and Type II errors of the proposed filter, which at the same time gave suggestions on the parameter selection of the filter.

  17. Automatic GCP extraction with high resolution COSMO-SkyMed products

    Science.gov (United States)

    Nitti, Davide Oscar; Morea, Alberto; Nutricato, Raffaele; Chiaradia, Maria Teresa; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio

    2016-10-01

    High-resolution Synthetic Aperture Radar (SAR) data represent an essential resource for the extraction of Ground Control Points (GCPs) with sub-metric accuracy without in situ measurement campaigns. Conceptually, SAR-based GCP extraction consists of the following two steps: (i) identification of the same local feature in several SAR images and determination of its range/azimuth coordinates; (ii) retrieval of the 3D spatial position from the 2D radar coordinates, through spatial triangulation (stereo analysis) and inversion methods. In order to boost the geolocation accuracy, SAR images must be acquired from different lines of sight, with intersection angles typically wider than 10 degrees, or even in opposite looking directions. In the present study, we present an algorithm specifically designed to ensure robustness and accuracy in the fully automatic detection of bright isolated targets (steel light poles or towers), even when dealing with opposite-looking data takes. In particular, the popular Harris algorithm has been selected as the detector because it is the most stable and noise-robust algorithm for corner detection in SAR images. We outline the designed algorithmic solution and discuss the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight (ES) stereo images are available, making it an optimal test site for an assessment of the performance of the processing chain. The experimental analysis proves that, assuming timing has been properly recalibrated, we are capable of automatically extracting GCPs from CSK ES data takes consisting of a very limited number of images.

  18. Automatic methods for generating seismic intensity maps

    OpenAIRE

    Brillinger, David R.; Chiann, Chang; Irizarry, Rafael A.; Pedro A. Morettin

    2001-01-01

    For many years the modified Mercalli (MM) scale has been used to describe earthquake damage and effects observed at scattered locations. In the next stage of an analysis involving MM data, isoseismal lines based on the observations have been added to maps by hand, i.e. subjectively. However, a few objective methods have been proposed (e.g. by De Rubeis et al., Brillinger, Wald et al., and Pettenati et al.). The work presented here develops objective methods further. In part...

  19. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries, and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full-frame decompression. The detection, extraction, and analysis of embedded captions help to capture the highlights of visual content in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching, and retrieval of relevant clips.

  20. Automatic extraction of semantic relations between medical entities: a rule based approach

    Directory of Open Access Journals (Sweden)

    Ben Abacha Asma

    2011-10-01

    Background: Information extraction is a complex task which is necessary to develop high-precision information retrieval tools. In this paper, we present the platform MeTAE (Medical Texts Annotation and Exploration). MeTAE allows (i) extracting and annotating medical entities and relationships from medical texts and (ii) exploring semantically the produced RDF annotations. Results: Our annotation approach relies on linguistic patterns and domain knowledge and consists of two steps: (i) recognition of medical entities and (ii) identification of the correct semantic relation between each pair of entities. The first step is achieved by an enhanced use of MetaMap, which improves the precision obtained by MetaMap by 19.59% in our evaluation. The second step relies on linguistic patterns built semi-automatically from a corpus selected according to semantic criteria. We evaluate our system's ability to identify medical entities of 16 types. We also evaluate the extraction of treatment relations between a treatment (e.g. medication) and a problem (e.g. disease), obtaining 75.72% precision and 60.46% recall. Conclusions: According to our experiments, using an external sentence segmenter and noun phrase chunker may improve the precision of MetaMap-based medical entity recognition. Our pattern-based relation extraction method obtains good precision and recall with respect to related work; a more precise comparison with related approaches remains difficult, however, given the differences in corpora and in the exact nature of the extracted relations. The selection of MEDLINE articles through queries related to known drug-disease pairs enabled us to obtain a corpus of relevant examples of treatment relations more focused than a general MEDLINE query would give.
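
    A rule-based treatment-relation extractor in this style can be illustrated with a single linguistic pattern; the mini-lexicons, the pattern, and the sentence below are hypothetical stand-ins for MeTAE's MetaMap-driven entity recognition and its semi-automatically built patterns.

    ```python
    import re

    # Hypothetical mini-lexicons; MeTAE uses MetaMap for entity recognition instead
    TREATMENTS = {"aspirin", "metformin", "ibuprofen"}
    PROBLEMS = {"headache", "diabetes", "fever"}

    # Linguistic pattern: "<treatment> is used to treat <problem>" and variants
    PATTERN = re.compile(
        r"\b(?P<treat>\w+)\s+(?:is used to treat|treats|is effective against)"
        r"\s+(?P<prob>\w+)",
        re.IGNORECASE,
    )

    def extract_treatment_relations(sentence):
        for m in PATTERN.finditer(sentence):
            t, p = m.group("treat").lower(), m.group("prob").lower()
            if t in TREATMENTS and p in PROBLEMS:
                yield (t, "treats", p)

    print(list(extract_treatment_relations("Metformin is used to treat diabetes.")))
    ```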

  1. Method for Extracting Product Information from TV Commercial

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2011-09-01

    Television (TV) commercials contain important product information that is displayed for only a few seconds. People who need this information have insufficient time to note it down, or even just to read it. This research work focuses on automatically detecting text and extracting important information from TV commercials, to provide the information in real time and for video indexing. We propose a method for product information extraction from TV commercials using a knowledge-based system with a pattern-matching, rule-based method. Implementation and experiments on 50 commercial screenshot images achieved highly accurate results in text extraction and information recognition.

  2. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

    for the alignment of a faceted solar dish concentrator. The isosceles-triangle configuration of a facet's footholds determines a fixed relation between light-spot displacements and foothold movements, which allows automatic determination of the amount of adjustment. Tests on a 25 kW Stirling Energy Systems dish concentrator verify the feasibility, accuracy, and efficiency of our method.

  3. The Automatic Start Method of Application Program Using API

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper introduces a method for the automatic start of an application program. By defining Registry entries through API functions, a specified application program is started automatically when Windows 98 boots. This facilitates many computer application tasks.
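
    On a modern Windows system, the same Registry-based autostart can be reached from Python's standard winreg module, which wraps the Win32 Registry API; the application name and path below are hypothetical.

    ```python
    import winreg  # Windows-only: standard-library wrapper over the Registry API

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    def register_autostart(name, exe_path):
        # Programs listed under HKCU\...\Run are launched at user logon
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, name, 0, winreg.REG_SZ, exe_path)

    register_autostart("MyApp", r"C:\Program Files\MyApp\myapp.exe")  # hypothetical
    ```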

  4. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted by a scale-invariant feature transform (SIFT) algorithm from the overlapped region only, on both the reference and mosaic images. Then, the RANSAC method is used to match the feature points of both images robustly. Finally, the two images are fused into a seamless panoramic image by the simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
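
    The matching stage maps directly onto OpenCV, as sketched below: SIFT keypoints, descriptor matching with a ratio test, and a RANSAC-filtered homography. The file names and the ratio-test threshold are assumptions; in the paper, detection would be restricted to the overlapped rectangle.

    ```python
    import cv2
    import numpy as np

    ref = cv2.imread("ref_tile.png", cv2.IMREAD_GRAYSCALE)     # hypothetical
    mos = cv2.imread("mosaic_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(mos, None)

    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    print("homography:\n", H)
    ```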

  5. An efficient method for parallel CRC automatic generation

    Institute of Scientific and Technical Information of China (English)

    陈红胜; 张继承; 王勇; 陈抗生

    2003-01-01

    The State Transition Equation (STE) based method to automatically generate parallel CRC circuits for any generator polynomial or required amount of parallelism is presented. The parallel CRC circuit so generated is partially optimized before being fed to synthesis tools, and works properly in our LAN transceiver. Compared with the cascading method, the proposed method gives better timing results and, in particular, significantly reduces the synthesis time.
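
    In software the same many-bits-per-step idea can be illustrated with a byte-wise, table-driven CRC: the lookup table plays the role of the precomputed state-transition function that the STE method unrolls into parallel XOR equations in hardware. CRC-8 with the generator polynomial 0x07 is used purely as an example.

    ```python
    # Byte-wise (8-bits-per-step) CRC-8: the lookup table is the precomputed
    # state-transition function, analogous to the unrolled XOR network in hardware.
    POLY = 0x07  # illustrative CRC-8 generator polynomial x^8 + x^2 + x + 1

    def make_table(poly):
        table = []
        for byte in range(256):
            crc = byte
            for _ in range(8):  # advance the LFSR state by one bit, eight times
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
            table.append(crc)
        return table

    TABLE = make_table(POLY)

    def crc8(data, crc=0x00):
        for byte in data:   # one table lookup consumes eight input bits at once
            crc = TABLE[crc ^ byte]
        return crc

    print(hex(crc8(b"123456789")))  # CRC-8 check value
    ```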

  6. Automatic cloud detection for high resolution satellite stereo images and its application in terrain extraction

    Science.gov (United States)

    Wu, Teng; Hu, Xiangyun; Zhang, Yong; Zhang, Lulin; Tao, Pengjie; Lu, Luping

    2016-11-01

    The automatic extraction of terrain from high-resolution satellite optical images is very difficult under cloudy conditions. Therefore, accurate cloud detection is necessary to make full use of the cloud-free parts of images for terrain extraction. This paper addresses automated cloud detection by introducing an image-matching-based method under a stereo vision framework, together with optimized usage of non-cloudy areas in stereo matching and in the generation of digital surface models (DSMs). Given that clouds are usually well separated from the terrain surface, cloudy areas are extracted by integrating a dense-matching DSM, a worldwide digital elevation model (DEM) (i.e., the shuttle radar topography mission, SRTM), and gray information from the images. This process consists of the following steps: an image-based DSM is first generated through a multiple-primitive multi-image matcher. Once it is aligned with the reference DEM based on common features, places with significant height differences between the DSM and the DEM suggest potential cloud cover. Detecting clouds at these places in the images then enables precise cloud delineation. In the final step, elevations of the reference DEM within the cloud cover are assigned to the corresponding region of the DSM to generate a cloud-free DEM. The proposed approach is evaluated with panchromatic images of the Tianhui satellite and has been successfully used in its daily operation. The cloud detection accuracy for images without snow is as high as 95%. Experimental results demonstrate that the proposed method can significantly improve the usage of cloudy panchromatic satellite images for terrain extraction.
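
    The core cue, large positive height differences between the image-derived DSM and a reference DEM, reduces to a few lines of numpy once the two grids are co-registered; the threshold and synthetic arrays below are illustrative.

    ```python
    import numpy as np

    # Co-registered grids (hypothetical): DSM from stereo matching, reference DEM
    rng = np.random.default_rng(1)
    dem = rng.normal(500.0, 20.0, (256, 256))  # terrain elevation, meters
    dsm = dem + np.where(rng.random((256, 256)) > 0.97, 2000.0, 0.0)  # cloud tops

    HEIGHT_THRESHOLD = 150.0  # meters; clouds sit far above the terrain surface
    cloud_mask = (dsm - dem) > HEIGHT_THRESHOLD

    # Replace cloud-covered DSM cells with reference DEM heights (cloud-free DEM)
    dsm_cloudfree = np.where(cloud_mask, dem, dsm)
    print("cloud fraction:", cloud_mask.mean())
    ```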

  7. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract, and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results show that this tool is very effective in extracting the required data from web pages.

  8. Apparatus and methods for hydrocarbon extraction

    Science.gov (United States)

    Bohnert, George W.; Verhulst, Galen G.

    2016-04-26

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  10. Features and Ground Automatic Extraction from Airborne LIDAR Data

    Science.gov (United States)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of this research has been to develop and implement an algorithm for the automated extraction of features from LiDAR scenes with varying terrain and coverage types. It applies the moments of third order (skewness) and fourth order (kurtosis): the first is applied to produce an initial filtering and data classification, while the second, through the introduction of weights for the measures, provides the desired result, namely a finer and less noisy classification. The process has been carried out in Matlab, but to reduce the processing time, given the large data density, the analysis has been limited to a moving window; subscenes were therefore produced in order to cover the entire area. The performance of the algorithm confirms its robustness and the goodness of the results. Employment of effective processing strategies to improve automation is key to the implementation of this algorithm. The results of this work will serve the increased demand for automation in 3D information extraction from remotely sensed large datasets. After obtaining the geometric features from the LiDAR data, we intend to complete the research by creating an algorithm to vectorize the features and extract the DTM.
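
    A classical way to use the third-order moment for ground filtering is skewness balancing: repeatedly discard the highest points until the elevation distribution is no longer right-skewed. The sketch below illustrates the idea with synthetic elevations and SciPy's moment functions; the stopping threshold of 0.1 is an assumption, not the paper's weighting scheme.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    # Hypothetical window of LiDAR elevations: flat ground plus object returns
    rng = np.random.default_rng(3)
    z = rng.normal(100.0, 0.05, 400)        # ground points
    z[:30] += rng.uniform(2.0, 10.0, 30)    # vegetation/building returns

    ground = np.sort(z)
    while ground.size > 10 and skew(ground) > 0.1:
        ground = ground[:-1]                # drop the current highest point

    print("skew before:", round(skew(z), 2), "-> kept", ground.size, "points")
    print("kurtosis of kept points:", round(kurtosis(ground), 2))
    ```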

  11. Automatic Extraction of DTM from Low Resolution Dsm by Twosteps Semi-Global Filtering

    Science.gov (United States)

    Zhang, Yanfeng; Zhang, Yongjun; Zhang, Yi; Li, Xin

    2016-06-01

    Automatically extracting a DTM from DSM or LiDAR data by distinguishing non-ground points from ground points is an important issue. Many algorithms have been developed for it; however, most of them are targeted at processing dense LiDAR data and lack the ability to derive a DTM from a low-resolution DSM. This is caused by the decreased distinction in elevation variation between steep terrain and surface objects. In this paper, a method called two-step semi-global filtering (TSGF) is proposed to extract a DTM from a low-resolution DSM. Firstly, the DSM slope map is calculated and smoothed by SGF (semi-global filtering), then binarized and used as a mask of flat terrain. Secondly, the DSM is segmented under the restriction of the flat-terrain mask. Lastly, each segment is filtered with the semi-global algorithm in order to remove non-ground points, which produces the final DTM. The first SGF is based on the global distribution characteristic of large slopes, which distinguishes steep terrain from flat terrain. The second SGF filters non-ground points of the DSM within flat-terrain segments. Non-ground points are thereby removed robustly in two SGF steps, while the shape of steep terrain is kept. Experiments on DSMs generated from ZY3 imagery with a resolution of 10-30 m demonstrate the effectiveness of the proposed method.

  12. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Taking architectural plans as the data source, we propose a method which automatically generates indoor map spatial data. First, referring to the spatial data demands of indoor maps, we analyze the basic characteristics of architectural plans and introduce the concepts of wall segment, adjoining node, and adjoining wall segment, on which the basic flow of automatic indoor map spatial data generation is established. Then, according to the adjoining relation between wall lines at intersections with columns, we construct a repair method for wall connectivity in relation to columns. Utilizing gradual expansion and graphic reasoning to judge the local wall symbol feature type on both sides of a door or window, and updating the enclosing rectangle of the door or window, we develop a repair method for wall connectivity in relation to doors and windows, together with a method to transform doors and windows into indoor map point features. Finally, on the basis of the geometric relation between adjoining wall segment median lines, a wall centerline extraction algorithm is presented. Taking one exhibition hall's architectural plan as an example, we performed experiments; the results show that the proposed methods deal well with various complex situations and realize automatic extraction of indoor map spatial data effectively.

  13. An improved, SSH-based method to automatically identify mesoscale eddies in the ocean

    Institute of Scientific and Technical Information of China (English)

    WANG Xin; DU Yun-yan; ZHOU Cheng-hu; FAN Xing; YI Jia-wei

    2013-01-01

    Mesoscale eddies are an important component of oceanic features. How to automatically identify these mesoscale eddies from available data has become an important research topic. Through careful examination of existing methods, we propose an improved, SSH-based automatic identification method. Using the inclusion relation of enclosed SSH contours, the mesoscale eddy boundary and core(s) can be automatically identified. The time evolution of eddies can be examined by a threshold search algorithm and a similarity-based tracking algorithm. Sea-surface height (SSH) data from the Naval Research Laboratory Layered Ocean Model (NLOM) and sea-level anomaly (SLA) data from altimetry are used in numerous experiments in which different automatic identification methods are compared. Our results indicate that the improved method extracts the mesoscale eddy boundary more precisely, retaining the multiple-core structure. In combination with the tracking algorithm, this method can capture complete mesoscale eddy processes and thus provide reliable information for further study of eddy dynamics, merging, splitting, and the evolution of multi-core structures.
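
    The enclosed-contour test at the heart of such methods can be sketched with scikit-image: extract iso-contours of the SSH (or SLA) field at several levels and keep those that close on themselves; the synthetic Gaussian eddy and the contour levels below are illustrative.

    ```python
    import numpy as np
    from skimage import measure

    # Synthetic SSH field with a single Gaussian "eddy" (illustrative only)
    y, x = np.mgrid[0:128, 0:128]
    ssh = 0.3 * np.exp(-(((x - 64) ** 2 + (y - 64) ** 2) / (2 * 12.0 ** 2)))

    closed = []
    for level in np.arange(0.05, 0.30, 0.05):
        for c in measure.find_contours(ssh, level):
            if np.allclose(c[0], c[-1]):   # contour closes on itself
                closed.append((level, c))

    # Innermost closed contour encloses the eddy core; outermost is the boundary
    print("closed contours found:", len(closed))
    ```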

  14. Automatic extraction of ontological relations from Arabic text

    Directory of Open Access Journals (Sweden)

    Mohammed G.H. Al Zamil

    2014-12-01

    The proposed methodology has been designed to analyze Arabic text using lexical semantic patterns of the Arabic language according to a set of features. Next, the features are abstracted and enriched with formal descriptions for the purpose of generalizing the resulting rules. The rules then form a classifier that accepts Arabic text, analyzes it, and displays the related concepts labeled with their designated relationships. Moreover, to resolve the ambiguity of homonyms, a set of machine translation, text mining, and part-of-speech tagging algorithms has been reused. We performed extensive experiments to measure the effectiveness of our proposed tools; the results indicate that our methodology is promising for automating the process of extracting ontological relations.

  15. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Subhi Al-batah

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods to screen for cervical cancer (i.e., Pap smear and liquid-based cytology, LBC) are time-consuming and dependent on the skill of the cytopathologist, and thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, producing more accurate results. The developed system consists of two stages. In the first stage, the automatic feature extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in a parallel combination to produce a model with a multi-input multi-output structure. The system is capable of classifying a cervical cell image into one of three groups: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces good classification performance with 94.2% accuracy.

  16. Automatic indicator dilution curve extraction in dynamic-contrast enhanced imaging using spectral clustering

    Science.gov (United States)

    Saporito, Salvatore; Herold, Ingeborg HF; Houthuizen, Patrick; van den Bosch, Harrie CM; Korsten, Hendrikus HM; van Assen, Hans C.; Mischi, Massimo

    2015-07-01

    Indicator dilution theory provides a framework for the measurement of several cardiovascular parameters. Recently, dynamic imaging and contrast agents have been proposed to apply the method in a minimally invasive way. However, the use of contrast-enhanced sequences requires the definition of regions of interest (ROIs) in the dynamic image series, a time-consuming and operator-dependent task that is commonly performed manually. In this work, we propose a method for the automatic extraction of indicator dilution curves (IDCs), exploiting the time-domain correlation between pixels belonging to the same region. Individual time-intensity curves are projected into a low-dimensional subspace using principal component analysis; subsequently, clustering is performed to identify the different ROIs. The method was assessed on clinically available DCE-MRI and DCE-US recordings, comparing the derived IDCs with those obtained manually. The robustness of the proposed approach to noise was shown on simulated data. The tracer-kinetic parameters derived on real images were in agreement with those obtained from manual annotation. The presented method is a clinically useful preprocessing step prior to further ROI-based cardiac quantifications.
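
    Grouping pixels by the similarity of their time-intensity curves can be sketched with scikit-learn: a PCA projection followed by spectral clustering, as below. The synthetic curve matrix and the cluster count are illustrative placeholders for real DCE recordings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import SpectralClustering

    # Hypothetical time-intensity curves: n_pixels x n_frames
    rng = np.random.default_rng(2)
    t = np.linspace(0, 30, 60)
    curve_a = np.exp(-((t - 10) ** 2) / 20)   # early-enhancing region
    curve_b = np.exp(-((t - 18) ** 2) / 20)   # late-enhancing region
    curves = np.vstack([curve_a + 0.05 * rng.standard_normal((200, 60)),
                        curve_b + 0.05 * rng.standard_normal((200, 60))])

    low_dim = PCA(n_components=5).fit_transform(curves)
    labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(low_dim)
    print("cluster sizes:", np.bincount(labels))
    ```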

  17. Automatic Vertebral Column Extraction by Whole-Body Bone SPECT Scan

    Directory of Open Access Journals (Sweden)

    Sheng-Fang Huang

    2013-01-01

    Full Text Available Bone extraction and division can enhance the accuracy of diagnoses based on whole-body bone SPECT data. This study developed a method for using conventional SPECT for automatic recognition of the vertebral column. A novel feature of the proposed approach is a "bone graph" image description method that represents the connectivity between image regions to facilitate manipulation of morphological relationships in the skeleton before surgery. By tracking the paths shown on the bone graph, skeletal structures can be identified by performing morphological operations. The performance of the method was evaluated quantitatively and qualitatively by two experienced nuclear medicine physicians. Datasets for whole-body bone SPECT scans in 46 lung cancer patients with bone metastasis were obtained with Tc-99m MDP. The algorithm successfully segmented vertebrae in the thoracolumbar spine. The quantitative assessment shows that the segmentation method achieved average TP, FP, and FN rates of 95.1%, 9.1%, and 4.9%, respectively. The qualitative evaluation shows an average acceptance rate of 83%, where the data for the acceptable and unacceptable groups had a Cronbach's alpha value of 0.718, which indicated reasonable internal consistency and reliability.

  18. Automatic Open Space Area Extraction and Change Detection from High Resolution Urban Satellite Images

    CERN Document Server

    Kodge, B G

    2011-01-01

    In this paper, we study an efficient and reliable automatic extraction algorithm for finding open space areas in high-resolution urban satellite imagery, and for detecting changes in the extracted open space area across 2003, 2006 and 2008. This automatic extraction and change detection algorithm applies a set of filters, segmentation and grouping to the satellite images. The resultant images may be used to calculate the total available open space area and the built-up area. They may also be used to compare the difference between present and past open space areas using historical urban satellite images of the same projection, which is an important geospatial data management application.

  19. Automatic landslide and mudflow detection method via multichannel sparse representation

    Science.gov (United States)

    Chao, Chen; Zhou, Jianjun; Hao, Zhuo; Sun, Bo; He, Jun; Ge, Fengxiang

    2015-10-01

    Landslide and mudflow detection is an important application of aerial images and high-resolution remote sensing images, which is crucial for national security and disaster relief. Since high-resolution images are often large in size, it is necessary to develop an efficient algorithm for landslide and mudflow detection. Based on the theory of sparse representation, we propose a novel automatic landslide and mudflow detection method in this paper, which combines multi-channel sparse representation and an eight-neighbor judgment method. The whole detection process is fully automatic. We evaluated the method on a high-resolution image of the Zhouqu district of Gansu province, China, from August 2010, and obtained a promising result that demonstrates the effectiveness of sparse representation for landslide and mudflow detection.

  20. Automatic registration method for mobile LiDAR data

    Science.gov (United States)

    Wang, Ruisheng; Ferrie, Frank P.

    2015-01-01

    We present an automatic mutual information (MI) registration method for mobile LiDAR and panoramas collected from a driving vehicle. The suitability of MI for registration of aerial LiDAR and aerial oblique images has been demonstrated under the assumption that minimization of joint entropy (JE) is a sufficient approximation of maximization of MI. We show that this assumption is invalid for ground-level data: the entropy of a LiDAR image cannot be regarded as approximately constant for small perturbations. Instead of minimizing the JE, we directly maximize MI to estimate corrections of camera poses. Our method automatically registers mobile LiDAR with spherical panoramas over an approximately 4-km drive, and is the first example we are aware of that tests MI registration in a large-scale context.
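
    A minimal sketch of the objective being maximized: the mutual information between a rendered LiDAR intensity image and a panorama crop, evaluated for each candidate pose correction (the rendering and pose search themselves are outside the abstract and not reproduced here).

    ```python
    # Histogram-based mutual information between two equally sized grayscale images.
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist / hist.sum()                    # joint distribution
        px = pxy.sum(axis=1, keepdims=True)        # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                               # avoid log(0)
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    ```

    The registration loop would then keep the pose correction whose rendered LiDAR image maximizes this quantity against the panorama.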

  1. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    Science.gov (United States)

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms using raw speech, extracted features and frequency-domain forms is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time.

  2. An Automatic High Efficient Method for Dish Concentrator Alignment

    OpenAIRE

    Yong Wang; Song Li; Jinshan Xu; Yijiang Wang; Xu Cheng; Changgui Gu; Shengyong Chen; Bin Wan

    2014-01-01

    Alignment of dish concentrator is a key factor to the performance of solar energy system. We propose a new method for the alignment of faceted solar dish concentrator. The isosceles triangle configuration of facet’s footholds determines a fixed relation between light spot displacements and foothold movements, which allows an automatic determination of the amount of adjustments. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our...

  3. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten

    2014-01-01

    BACKGROUND: There are challenges when extracting bacterial DNA from specimens for molecular diagnostics, since fecal samples also contain DNA from human cells and many different substances derived from food, cell residues and medication that can inhibit downstream PCR. The purpose of the study was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy l'Etoile, France) and a manual one (QIAamp DNA Stool Mini Kit, Qiagen, Hilden, Germany) were tested on stool samples collected from 3 patients with Inflammatory Bowel Disease (IBD) and 5 healthy individuals. DNA extracts obtained by the QIAamp DNA Stool Mini Kit yield a higher amount of DNA compared to DNA extracts obtained...

  4. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    Full Text Available The sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep processes (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user-dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interests, which hampers direct comparisons and meta-analyses. In this review, the sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.
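
    An illustrative detector in the spirit of the amplitude-threshold family reviewed here (not any specific published algorithm): band-pass the EEG in the sigma band, smooth the rectified signal, and keep supra-threshold runs of plausible spindle duration. Band limits, threshold and durations are assumptions to be tuned.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def detect_spindles(eeg, fs, band=(11.0, 16.0), min_dur=0.5, max_dur=2.0):
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
        sigma = filtfilt(b, a, eeg)                       # sigma-band activity
        win = max(int(0.1 * fs), 1)
        envelope = np.convolve(np.abs(sigma), np.ones(win) / win, 'same')
        above = envelope > 3.0 * np.median(envelope)      # crude adaptive threshold
        edges = np.diff(above.astype(int))                # assumes recording starts/ends below threshold
        starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
        return [(s / fs, e / fs) for s, e in zip(starts, ends)
                if min_dur <= (e - s) / fs <= max_dur]
    ```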

  5. A New Method for Automatically Labeling Aircrafts in Airport Video

    Directory of Open Access Journals (Sweden)

    Luo Xiao

    2015-01-01

    Full Text Available To address the problem that airport video monitoring provides only image information, without label information such as the flight number, a new method for automatically labeling aircraft in airport video through the fusion of video and ADS-B data is proposed. First, the image coordinates of aircraft are obtained through image tracking in the video. Then, the homography matrix between the two projection planes is calculated from four or more point and line correspondences selected from the airport map and the video image, respectively, in order to convert the image coordinates into map coordinates. Finally, the aircraft in the video can be automatically labeled through the fusion of image tracking data and ADS-B monitoring data. Because an image coordinate measurement error exists when selecting points from the image, the resulting coordinate conversion error is derived and the impact of the geometric layout of the point correspondences on the coordinate mapping error is analyzed. Experiments have been conducted based on actual data from Chengdu Shuangliu International Airport. The results show that the method can automatically label aircraft in video in an effective way.
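
    A sketch of the plane-to-plane mapping step: estimate a homography from four or more point correspondences between the video image and the airport map, then map tracked image coordinates into map coordinates. The correspondence values below are placeholders, not data from the paper.

    ```python
    import numpy as np
    import cv2

    # Placeholder correspondences: image pixels <-> map coordinates.
    img_pts = np.array([[102, 334], [640, 210], [1180, 395], [655, 700]], np.float32)
    map_pts = np.array([[30.558, 103.946], [30.561, 103.950],
                        [30.559, 103.955], [30.555, 103.951]], np.float32)

    H, _ = cv2.findHomography(img_pts, map_pts, cv2.RANSAC)

    def image_to_map(pt, H):
        x, y, w = H @ np.array([pt[0], pt[1], 1.0])
        return x / w, y / w        # map coordinates of the tracked aircraft
    ```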

  6. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard package IC-CAP is used to extract our leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data allows us to obtain the model parameter values and the correlations/relations between parameters. The model and parameter extraction techniques have been used to study the impact of parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. The same holds for the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and available measured gate leakage current characteristics of Trigate MOSFETs shows good agreement.

  7. Automatic extraction of faults and fractal analysis from remote sensing data

    Directory of Open Access Journals (Sweden)

    R. Gloaguen

    2007-01-01

    Full Text Available Object-based classification is a promising technique for image classification. Unlike pixel-based methods, which only use the measured radiometric values, object-based techniques can also use shape and context information of scene textures. These extra degrees of freedom provided by the objects allow the automatic identification of geological structures. In this article, we present an evaluation of object-based classification in the context of the extraction of geological faults. Digital elevation models and radar data of an area near Lake Magadi (Kenya) have been processed. We then determine the statistics of the fault populations. The fractal dimensions of the fault populations are similar to fractal dimensions directly measured on remote sensing images of the study area using power spectra (PSD) and variograms. These methods allow unbiased statistics of faults and help us to understand the evolution of fault systems in extensional domains. Furthermore, the direct analysis of image texture is a good indicator of the fault statistics and allows us to classify the intensity and type of deformation. We propose that extensional fault networks can be modeled by iterated function systems (IFS).

  8. Automatic In-Syringe Dispersive Microsolid Phase Extraction Using Magnetic Metal-Organic Frameworks.

    Science.gov (United States)

    Maya, Fernando; Palomino Cabello, Carlos; Estela, Jose Manuel; Cerdà, Víctor; Turnes Palomino, Gemma

    2015-08-04

    A novel automatic strategy for the use of micro- and nanomaterials as sorbents for dispersive microsolid phase extraction (D-μ-SPE) based on the lab-in-syringe concept is reported. Using the developed technique, the implementation of magnetic metal-organic framework (MOF) materials for automatic solid-phase extraction has been achieved for the first time. A hybrid material based on submicrometric MOF crystals containing Fe3O4 nanoparticles was prepared and retained on the surface of a miniature magnetic bar. The magnetic bar was placed inside the syringe of an automatic bidirectional syringe pump, enabling dispersion and subsequent magnetic retrieval of the MOF hybrid material by automatic activation/deactivation of magnetic stirring. Using malachite green (MG) as a model adsorption analyte, a limit of detection of 0.012 mg/L and a linear working range of 0.04-2 mg/L were obtained for a sample volume equal to the syringe volume (5 mL). MG preconcentration was linear up to a volume of 40 mL, giving an enrichment factor of 120. The analysis throughput is 18 h(-1), and up to 3000 extractions/g of material can be performed. Recoveries ranging between 95 and 107% were obtained for the analysis of MG in different types of water and in trout fish samples. The developed automatic D-μ-SPE technique is a safe alternative for the use of small-sized materials for sample preparation, is readily extendable to other magnetic materials independent of their size and shape, and can be easily hyphenated to the majority of detectors and separation techniques.

  9. Automatic diet monitoring: a review of computer vision and wearable sensor-based methods.

    Science.gov (United States)

    Hassannejad, Hamid; Matrella, Guido; Ciampolini, Paolo; De Munari, Ilaria; Mordonini, Monica; Cagnoni, Stefano

    2017-01-31

    Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial base for developing methods and services to promote a healthy lifestyle and improve personal and national health economics. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical. Thus, several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing strengths and weaknesses. In particular, the article reviews two approaches to this problem, accounting for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second one relies on wearable sensors and has the detection of eating behaviours as its main goal.

  10. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation-invariant features based on Zernike moments are extracted from digit characters and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm has optimal recognition precision and working speed, with an average reading accuracy of 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  11. Semi-automatic method for routine evaluation of fibrinolytic components.

    Science.gov (United States)

    Collen, D; Tytgat, G; Verstraete, M

    1968-11-01

    A semi-automatic method for the routine evaluation of fibrinolytic activity is described. The principle is based upon graphic recording by a multichannel voltmeter of tension drops over a potentiometer, caused by variations in the influence of light upon a light-dependent resistance, resulting from modifications in the composition of the fibrin fibres by lysis. The method is applied to the assessment of certain fibrinolytic factors with widespread fibrinolytic endpoints, and the results are compared with simultaneously obtained visual data on the plasmin assay, the plasminogen assay, and on the euglobulin clot lysis time.

  12. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally employed, which adversely affects detection performance. Often, buildings that are small in size, under shadows or partly occluded are discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point cloud and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting the image lines in the building regularisation process. Detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with a correctness of above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets, while on the ISPRS benchmark it does better than or equal to its counterparts.

  13. Semi-automatic term extraction for the African languages, with special reference to Northern Sotho

    OpenAIRE

    Elsabé Taljard; Gilles-Maurice de Schryver

    2002-01-01

    Abstract: Worldwide, semi-automatically extracting terms from corpora is becoming the norm for the compilation of terminology lists, term banks or dictionaries for special purposes. If African-language terminologists are willing to take their rightful place in the new millennium, they must not only take cognisance of this trend but also be ready to implement the new technology. In this article it is advocated that the best way to do the latter two at this stage is to opt for computat...

  14. Semi-Automatically Extracting FAQs to Improve Accessibility of Software Development Knowledge

    CERN Document Server

    Henß, Stefan; Mezini, Mira

    2012-01-01

    Frequently asked questions (FAQs) are a popular way to document software development knowledge. As creating such documents is expensive, this paper presents an approach for automatically extracting FAQs from sources of software development discussion, such as mailing lists and Internet forums, by combining techniques of text mining and natural language processing. We apply the approach to popular mailing lists and carry out a survey among software developers to show that it is able to extract high-quality FAQs that may be further improved by experts.

  15. A method for automatically constructing the initial contour of the common carotid artery

    Directory of Open Access Journals (Sweden)

    Yara Omran

    2013-10-01

    Full Text Available In this article we propose a novel method to automatically set the initial contour used by the active contours algorithm. The proposed method exploits accumulative intensity profiles to locate points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to recognize them from the profiles of sections that do not intersect the artery walls. The proposed method is applied to ultrasound images of the transverse section of the common carotid artery, but it can be extended to images of the longitudinal section. The intensity profiles are classified using the support vector machine algorithm, and the results of different kernels are compared. The extracted features used for the classification are basically statistical features of the intensity profiles. The echogenicity of the arterial lumen gives the profiles that intersect the artery a special shape that helps distinguish these profiles from other general profiles. The outlining of the arterial walls may seem a classic task in image processing; however, most of the methods used to outline the artery start from a manual, or semi-automatic, initial contour. The proposed method is highly valuable in automating the entire process of automatic artery detection and segmentation.
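
    A sketch of the profile-classification step under assumed feature choices: statistical features of each accumulated intensity profile are fed to an SVM, and different kernels are compared by cross-validation, mirroring the comparison described above.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def profile_features(profile):
        # Illustrative statistical features of one intensity profile.
        return [profile.mean(), profile.std(), profile.min(),
                profile.max(), np.argmin(profile) / len(profile)]

    def compare_kernels(profiles, labels):      # labels: 1 = intersects the artery
        X = np.array([profile_features(p) for p in profiles])
        for kernel in ('linear', 'rbf', 'poly'):
            scores = cross_val_score(SVC(kernel=kernel), X, labels, cv=5)
            print(kernel, scores.mean())
    ```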

  16. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  17. A novel spectral index to automatically extract road networks from WorldView-2 satellite imagery

    Directory of Open Access Journals (Sweden)

    Kaveh Shahi

    2015-06-01

    Full Text Available This research develops a spectral index, named the road extraction index (REI), to automatically extract asphalt road networks. The index uses WorldView-2 (WV-2) imagery, which has high spatial resolution and is multispectral. To determine the best bands of WV-2, field spectral data were collected using a field spectroradiometer and analyzed statistically. The bands were selected through stepwise discriminant analysis, and the appropriate WV-2 bands were distinguished from one another according to the significant wavelengths. The proposed index is based on this selection. By applying the REI to WV-2 imagery, asphalt roads can be extracted accurately. Results demonstrate that the REI is automated, transferable, and efficient in asphalt road extraction from high-resolution satellite imagery.
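
    The abstract does not give the REI formula, so the sketch below shows only a generic normalized band-ratio index of the same family, computed on two WV-2 bands; the band indices and threshold are placeholders to be tuned per scene.

    ```python
    import numpy as np

    def band_ratio_index(img, b1, b2, eps=1e-9):
        # img: (H, W, bands) reflectance array; returns values in [-1, 1].
        a, b = img[..., b1].astype(float), img[..., b2].astype(float)
        return (a - b) / (a + b + eps)

    # road_mask = band_ratio_index(wv2_image, b1=0, b2=6) > 0.1   # placeholder bands/threshold
    ```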

  18. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.
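
    IC-CAP's extraction routines are proprietary and not reproduced here; this sketch illustrates the same principle with SciPy: fit compact-model parameters by minimizing the discrepancy between simulated and measured characteristics. The toy `model_id` and its parameters are hypothetical stand-ins, not the paper's surface-potential model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model_id(vg, vth, k, theta):                 # toy drain-current model
        vov = np.maximum(vg - vth, 0.0)
        return k * vov**2 / (1.0 + theta * vov)

    # Synthetic "measured" data standing in for transistor characteristics.
    vg_meas = np.linspace(0.0, 1.2, 25)
    id_meas = model_id(vg_meas, 0.35, 2e-3, 0.4) * (1 + 0.02 * np.random.randn(25))

    popt, pcov = curve_fit(model_id, vg_meas, id_meas, p0=[0.3, 1e-3, 0.1])
    perr = np.sqrt(np.diag(pcov))                    # parameter standard errors
    ```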

  19. Automatic Extraction of Figures from Scientific Publications in High-Energy Physics

    Directory of Open Access Journals (Sweden)

    Piotr Adam Praczyk

    2013-12-01

    Full Text Available Plots and figures play an important role in the process of understanding a scientific publication, providing overviews of large amounts of data or ideas that are difficult to present intuitively using only text. The state of the art in digital libraries, serving as gateways to knowledge encoded in scholarly writings, does not take full advantage of the graphical content of documents. Enabling machines to automatically unlock the meaning of scientific illustrations would allow immense improvements in the way scientists work and knowledge is processed. In this paper we present a novel solution for the initial problem of processing graphical content: obtaining figures from scholarly publications stored in PDF format. Our method relies on the vector properties of documents and, as such, does not introduce the additional errors characteristic of methods based on raster image processing. Emphasis has been placed on correctly processing documents in High Energy Physics. The described approach makes a distinction between different classes of objects appearing in PDF documents and uses spatial clustering techniques to group objects into larger logical entities. A number of heuristics allow the rejection of incorrect figure candidates and the extraction of different types of metadata.

  20. UNCERTAIN TRAINING DATA EDITION FOR AUTOMATIC OBJECT-BASED CHANGE MAP EXTRACTION

    Directory of Open Access Journals (Sweden)

    S. Hajahmadi

    2013-09-01

    Full Text Available Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and to assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. The available maps of the study area are one of the most important sources for this purpose. Although old databases and maps are a great resource, it is more than likely that training data extracted from them contain errors, which affects the classification procedure; as a result, editing the training samples is essential. Due to the urban nature of the area studied and the problems of pixel-based methods, object-based classification is applied. To this end, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Due to the age of the map, these samples are uncertain and contain wrong data. To handle this issue, an editing process is proposed based on the K-nearest neighbour and k-means algorithms. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step, the classified image is compared with the existing map and the changed areas are detected.
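
    A sketch of the sample-editing idea: drop training objects whose label disagrees with the majority of their k nearest neighbours in feature space (a classical edited-nearest-neighbour filter; the paper combines this with k-means). Integer class labels are assumed.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def edit_training_set(X, y, k=5):
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: each point finds itself
        _, idx = nn.kneighbors(X)
        keep = []
        for i, neighbours in enumerate(idx):
            votes = y[neighbours[1:]]                     # exclude the point itself
            if np.bincount(votes).argmax() == y[i]:       # keep if label matches majority
                keep.append(i)
        return X[keep], y[keep]
    ```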

  1. A method for closed loop automatic tuning of PID controllers

    Directory of Open Access Journals (Sweden)

    Tor S. Schei

    1992-07-01

    Full Text Available A simple method for the automatic tuning of PID controllers in closed loop is proposed. A limit cycle is generated through a nonlinear feedback path from the process output to the controller reference signal. The frequency of this oscillation is above the crossover frequency and below the critical frequency of the loop transfer function. The amplitude and frequency of the oscillation are estimated and the control parameters are adjusted iteratively such that the closed loop transfer function from the controller reference to the process output attains a specified amplitude at the oscillation frequency.
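
    A sketch of the measurement step only: estimating the amplitude and frequency of the induced limit cycle from a recorded output segment. The iterative adjustment of the controller parameters depends on the chosen design rule and is not reproduced here; the recording is assumed to contain several full oscillation cycles.

    ```python
    import numpy as np

    def estimate_oscillation(y, fs):
        y = y - y.mean()
        amplitude = 0.5 * (y.max() - y.min())
        crossings = np.where(np.diff(np.signbit(y)))[0]   # sign-change indices
        period = 2.0 * np.mean(np.diff(crossings)) / fs   # two crossings per cycle
        return amplitude, 1.0 / period
    ```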

  2. Progressive Concept Evaluation Method for Automatically Generated Concept Variants

    Directory of Open Access Journals (Sweden)

    Woldemichael Dereje Engida

    2014-07-01

    Full Text Available Conceptual design is one of the most critical and important phases of the design process, yet it has the least computer support. The conceptual design support tool (CDST) is a system developed to automatically generate concepts for each subfunction in a functional structure. The automated concept generation process results in a large number of concept variants, which require a thorough evaluation process to select the best design. To address this, a progressive concept evaluation technique consisting of absolute comparison, concept screening and a weighted decision matrix using the analytic hierarchy process (AHP) is proposed to eliminate infeasible concepts at each stage. The software implementation of the proposed method is demonstrated.
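
    A minimal sketch of the AHP weighting used in the final stage: criterion weights are obtained as the normalized principal eigenvector of a pairwise-comparison matrix. The example matrix values are placeholders.

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        vals, vecs = np.linalg.eig(pairwise)
        w = np.real(vecs[:, np.argmax(np.real(vals))])    # principal eigenvector
        return w / w.sum()                                # normalize to sum to 1

    # Example: 3 criteria, entry (i, j) = importance of criterion i relative to j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    print(ahp_weights(A))                                 # criterion weights
    ```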

  3. Combining Multiple Methods for the Automatic Construction of Multilingual WordNets

    CERN Document Server

    Atserias, J; Farreres, X; Rigau, G; Rodríguez, H; Atserias, Jordi; Climent, Salvador; Farreres, Xavier; Rigau, German; Rodriguez, Horacio

    1997-01-01

    This paper explores the automatic construction of a multilingual Lexical Knowledge Base from preexisting lexical resources. First, a set of automatic and complementary techniques for linking Spanish words collected from monolingual and bilingual MRDs to English WordNet synsets is described. Second, we show how the resulting data provided by each method are combined to produce a preliminary version of a Spanish WordNet with an accuracy over 85%. Applying these combinations yields a 40% increase in the extracted connections without losing accuracy. Both coarse-grained (class level) and fine-grained (synset assignment level) confidence ratios are used and evaluated. Finally, the results for the whole process are presented.

  4. An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available An automatic detection model adopting pattern recognition technology is proposed in this paper; it can realize the measurement of nanocomposite film elements. Gray level co-occurrence matrix (GLCM) features are extracted from different types of surface morphology images of the film; after that, dimension reduction is handled by principal component analysis (PCA). It is then possible to identify the film element using the AdaBoost M1 algorithm, a strong classifier built from ten decision tree classifiers. The experimental results show that this model is superior to SVM (support vector machine), NN and BayesNet models. The proposed method can be widely applied to the automatic detection not only of nanocomposite film elements but also of other nanocomposite material elements.
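
    A sketch of the recognition chain under assumed parameters: GLCM texture features per image, PCA for dimension reduction, then AdaBoost over decision trees. scikit-image >= 0.19 is assumed (`graycomatrix`; older releases spell it `greycomatrix`), and images are assumed to be uint8 grayscale.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.pipeline import make_pipeline

    def glcm_features(img8):                          # img8: uint8 grayscale image
        glcm = graycomatrix(img8, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ('contrast', 'homogeneity', 'energy', 'correlation')
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # Ten shallow decision trees, echoing the strong classifier described above.
    clf = make_pipeline(
        PCA(n_components=8),
        AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=10))
    # clf.fit(np.array([glcm_features(im) for im in train_imgs]), train_labels)
    ```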

  5. Automatic localization of pupil using eccentricity and iris using gradient based method

    Science.gov (United States)

    Khan, Tariq M.; Aurangzeb Khan, M.; Malik, Shahzad A.; Khan, Shahid A.; Bashir, Tariq; Dar, Amir H.

    2011-02-01

    This paper presents a novel approach for the automatic localization of the pupil and iris. The pupil and iris are nearly circular regions, which are surrounded by sclera, eyelids and eyelashes. The localization of both pupil and iris is extremely important in any iris recognition system. In the proposed algorithm, the pupil is localized using an eccentricity-based bisection method which looks for the region that has the highest probability of containing the pupil, while iris localization is carried out in two steps. In the first step, the iris image is directionally segmented and a noise-free region (region of interest) is extracted. In the second step, angular lines in the region of interest are extracted and the edge points of the iris outer boundary are found through the gradient of these lines. The proposed method is tested on the CASIA ver 1.0 and MMU iris databases. Experimental results show that this method is comparatively accurate.

  6. Automatic Extraction and Size Distribution of Landslides in Kurdistan Region, NE Iraq

    Directory of Open Access Journals (Sweden)

    Arsalan A. Othman

    2013-05-01

    Full Text Available This study aims to assess the localization and size distribution of landslides using automatic remote sensing techniques in (semi-)arid, non-vegetated, mountainous environments. The study area is located in the Kurdistan region (NE Iraq), within the Zagros orogenic belt, which is characterized by the High Folded Zone (HFZ), the Imbricated Zone and the Zagros Suture Zone (ZSZ). The available reference inventory includes 3,190 landslides mapped from sixty QuickBird scenes using manual delineation. The landslide types involve rock falls, translational slides and slumps, which occurred in different lithological units. Two hundred and ninety of these landslides lie within the ZSZ, representing a cumulated surface of 32 km2. The HFZ contains 2,900 landslides with an overall coverage of about 26 km2. We first analyzed cumulative landslide number-size distributions using the inventory map. We then propose a very simple and robust algorithm for automatic landslide extraction using specific band ratios selected according to the spectral signatures of bare surfaces, together with a posteriori slope and normalized difference vegetation index (NDVI) thresholds. The index is based on the contrast between landslides and their background, as landslides reflect strongly in the green and red bands. We applied the slope threshold map to remove low-slope areas, which also have high reflectance in the red and green bands. The algorithm was able to detect ~96% of the recent landslides known from the reference inventory on a test site. The cumulative landslide number-size distribution of the automatically extracted landslides is very similar to that based on visual mapping. The automatic extraction is therefore suited to the quantitative analysis of landslides and can thus contribute to the assessment of hazards in similar regions.
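
    A sketch of the rule described above: flag pixels that are bright in the green/red bands relative to their background, have low NDVI, and lie on sufficiently steep slopes. Band inputs and the three thresholds are assumptions to be tuned per scene.

    ```python
    import numpy as np

    def landslide_mask(green, red, nir, slope_deg,
                       ratio_thr=1.1, ndvi_thr=0.15, slope_thr=15.0):
        ndvi = (nir - red) / (nir + red + 1e-9)
        brightness = (green + red) / 2.0
        bare = brightness > ratio_thr * np.median(brightness)  # bright bare surfaces
        return bare & (ndvi < ndvi_thr) & (slope_deg > slope_thr)
    ```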

  7. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    Science.gov (United States)

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or differing morphological characteristics, and their accuracy is difficult to validate through conventional measurement methods. To address these problems, and to enable tree crown projection area and crown volume to be extracted automatically by a computer program, this paper proposes an automatic non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a plane scattered data point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. Experiments were conducted on 22 common tree species of Beijing, China. The results show that the correlation coefficient between the crown projection area Av calculated by the new method and A4 obtained by the conventional method reaches 0.964 (p < 0.01). Based on the 3D LIDAR point cloud data of individual trees, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection area and volume of individual trees were extracted by this automatic non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.
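
    A sketch of the two geometric steps named above: the crown projection area as the area of the convex hull of crown points projected onto the xy-plane, and the crown volume by slicing the crown along z and accumulating slice hull area times thickness. Slice thickness is an assumed parameter.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def crown_projection_area(pts):                  # pts: (N, 3) crown points
        return ConvexHull(pts[:, :2]).volume         # 2-D hull "volume" is its area

    def crown_volume(pts, dz=0.2):
        z0, z1 = pts[:, 2].min(), pts[:, 2].max()
        vol = 0.0
        for z in np.arange(z0, z1, dz):
            sl = pts[(pts[:, 2] >= z) & (pts[:, 2] < z + dz)]
            if len(sl) >= 3:                         # a planar hull needs >= 3 points
                vol += ConvexHull(sl[:, :2]).volume * dz
        return vol
    ```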

  8. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    Science.gov (United States)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining the gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because the covariance descriptor lies on a non-Euclidean manifold, kernel sparse coding theory is used to address this problem. We verify the efficacy of the proposed algorithm in terms of the confusion matrices on real images consisting of seven categories of infrared vehicle targets.
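
    A minimal sketch of the descriptor itself: stack per-pixel features (intensity and gradients) over the target patch and take their covariance matrix. The resulting symmetric positive-definite matrix lives on a non-Euclidean manifold, which is why the paper resorts to a kernel treatment; the feature set below is an assumed, illustrative choice.

    ```python
    import numpy as np

    def covariance_descriptor(patch):
        gy, gx = np.gradient(patch.astype(float))    # row and column gradients
        mag = np.hypot(gx, gy)
        feats = np.stack([patch.ravel(), gx.ravel(), gy.ravel(), mag.ravel()])
        return np.cov(feats)                         # 4 x 4 symmetric descriptor
    ```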

  9. AUTOMATIC EXTRACTION OF BUILDING ROOF PLANES FROM AIRBORNE LIDAR DATA APPLYING AN EXTENDED 3D RANDOMIZED HOUGH TRANSFORM

    OpenAIRE

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-01-01

    This study aims to automatically extract building roof planes from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed applying extension...

  10. A region-growing approach for automatic outcrop fracture extraction from a three-dimensional point cloud

    Science.gov (United States)

    Wang, Xin; Zou, Lejun; Shen, Xiaohua; Ren, Yupeng; Qin, Yi

    2017-02-01

    Conventional manual surveys of rock mass fractures usually require large amounts of time and labor; yet, they provide a relatively small set of data that cannot be considered representative of the study region. Terrestrial laser scanners are increasingly used for fracture surveys because they can efficiently acquire large area, high-resolution, three-dimensional (3D) point clouds from outcrops. However, extracting fractures and other planar surfaces from 3D outcrop point clouds is still a challenging task. No method has been reported that can be used to automatically extract the full extent of every individual fracture from a 3D outcrop point cloud. In this study, we propose a method using a region-growing approach to address this problem; the method also estimates the orientation of each fracture. In this method, criteria based on the local surface normal and curvature of the point cloud are used to initiate and control the growth of the fracture region. In tests using outcrop point cloud data, the proposed method identified and extracted the full extent of individual fractures with high accuracy. Compared with manually acquired field survey data, our method obtained better-quality fracture data, thereby demonstrating the high potential utility of the proposed method.
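
    A simplified sketch of normal/curvature-driven region growing in the spirit of the approach above: regions are seeded at low-curvature points and absorb neighbours whose normals deviate by less than a threshold. Per-point normals and curvature are assumed precomputed, and the thresholds are placeholders.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def grow_regions(pts, normals, curvature, angle_thr_deg=10.0, curv_thr=0.05, k=15):
        tree = cKDTree(pts)
        cos_thr = np.cos(np.radians(angle_thr_deg))
        labels = np.full(len(pts), -1)
        order = np.argsort(curvature)                # flattest points seed first
        region = 0
        for seed in order:
            if labels[seed] != -1:
                continue
            queue = [seed]
            labels[seed] = region
            while queue:
                i = queue.pop()
                for j in tree.query(pts[i], k=k)[1]:
                    if labels[j] == -1 and abs(normals[i] @ normals[j]) > cos_thr:
                        labels[j] = region
                        if curvature[j] < curv_thr:  # only smooth points keep growing
                            queue.append(j)
            region += 1
        return labels
    ```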

  11. Automatic numerical integration methods for Feynman integrals through 3-loop

    Science.gov (United States)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

    2015-05-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.

  12. Method for automatic measurement of second language speaking proficiency

    Science.gov (United States)

    Bernstein, Jared; Balogh, Jennifer

    2005-04-01

    Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically-scored and primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.

  13. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

    Full Text Available Automatic cloud detection for optical satellite remote sensing images is a significant step in the production system of satellite products. For the browse images cataloged by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection. The image is divided into sub-images and their features are extracted to perform classification between cloud and ground. However, due to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features have great limitations. In view of this problem, this paper puts forward an enhancement preprocessing of the original sub-images before classification, to widen the texture difference between clouds and surfaces. Afterwards, using the second moment and first difference of the images, the feature vectors are extended in multi-scale space, and the cloud proportion in the image is estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied to the ZY-3 application system project, and practical experimental results indicate that this algorithm is capable of improving the accuracy of cloud detection significantly.

  14. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images

    Directory of Open Access Journals (Sweden)

    Kimori Yoshitaka

    2010-07-01

    Full Text Available Abstract Background A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. Results A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. Spots in real microscope images can also be quantified, confirming that the method is applicable in practice. Conclusions Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
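
    A sketch of the core operation under assumed parameters: open the image with a line segment at several orientations (implemented here by rotating the image against a fixed horizontal line), take the pixelwise maximum of the openings as the unified background, and subtract it so that small spots survive.

    ```python
    import numpy as np
    from scipy import ndimage

    def rotated_tophat(img, line_len=15, n_angles=12):
        # img: 2-D grayscale array.
        opened_max = np.zeros_like(img, dtype=float)
        for ang in np.linspace(0, 180, n_angles, endpoint=False):
            rot = ndimage.rotate(img, ang, reshape=False, mode='reflect')
            opened = ndimage.grey_opening(rot, size=(1, line_len))  # horizontal line SE
            back = ndimage.rotate(opened, -ang, reshape=False, mode='reflect')
            opened_max = np.maximum(opened_max, back)
        return img - opened_max            # structures smaller than the line survive
    ```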

  15. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    Science.gov (United States)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

    There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which show the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is then compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into their strengths and shortcomings. The best-performing methods (local binary pattern + SVM, wavelet transformation + SVM) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in understanding the condition of their body. In addition, this work should help to accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.
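
    A sketch of one of the compared pipelines (local binary pattern + SVM) under assumed parameter choices; the other feature/classifier combinations follow the same fit/predict pattern.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(gray, P=8, R=1.0):
        codes = local_binary_pattern(gray, P, R, method='uniform')
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist                                   # P + 2 bins for uniform LBP

    # X = np.array([lbp_histogram(im) for im in slit_lamp_images])
    # clf = SVC(kernel='rbf').fit(X, labels)
    ```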

  16. Extracting Noun Phrases from Large-Scale Texts A Hybrid Approach and Its Automatic Evaluation

    CERN Document Server

    Chen, K; Chen, Kuang-hua; Chen, Hsin-Hsi

    1994-01-01

    Acquiring noun phrases from running text is useful for many applications, such as word grouping, terminology indexing, etc. The reported literature adopts either a purely probabilistic approach or a purely rule-based noun phrase grammar to tackle this problem. In this paper, we apply a probabilistic chunker to decide the implicit boundaries of constituents and utilize linguistic knowledge to extract the noun phrases with a finite-state mechanism. The test texts are from the SUSANNE Corpus and the results are evaluated by automatic comparison with the parse field of the SUSANNE Corpus. The results of this preliminary experiment are encouraging.
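
    A sketch of the rule-based half of such a hybrid: a finite-state NP grammar over POS tags, here via NLTK's RegexpParser (the probabilistic chunker that first proposes constituent boundaries is not reproduced). The standard NLTK tokenizer and tagger models are assumed to be downloaded.

    ```python
    import nltk

    grammar = "NP: {<DT>?<JJ.*>*<NN.*>+}"   # optional determiner, adjectives, nouns
    parser = nltk.RegexpParser(grammar)

    tokens = nltk.pos_tag(nltk.word_tokenize(
        "The probabilistic chunker decides implicit boundaries."))
    tree = parser.parse(tokens)
    noun_phrases = [" ".join(w for w, t in st.leaves())
                    for st in tree.subtrees() if st.label() == "NP"]
    print(noun_phrases)   # e.g. ['The probabilistic chunker', 'implicit boundaries']
    ```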

  17. A Novel Automatic Method for Removal of Flicker in Video

    Institute of Scientific and Technical Information of China (English)

    ZHOU Lei; NI Qiang; WANG Xing-dong; ZHOU Yuan-hua

    2005-01-01

    Intensity flicker is a common form of degradation in archived film. Most algorithms for this distortion are complicated and uncontrolled. This paper presents a discrete mathematical model of flicker and designs a block-based estimation method for the model's parameters according to the features of intensity variation over large areas. With this estimation result, it constructs a compensation model to repair the current frame. This restoration approach is fully automatic, and the repair process for the current frame does not need information from the frames behind it. The algorithm was implemented to establish a simple and adjustable repair system. The experimental results show that the proposed algorithm can remove most intensity flicker and preserve the wanted effects.
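
    A sketch of a block-based linear flicker model in the same spirit (the paper's estimator and compensation model are more elaborate): per block, frame intensities are assumed related to a reference frame by y = a*x + b; a and b follow from matching block means and standard deviations, and the correction applies them.

    ```python
    import numpy as np

    def deflicker_block(cur, ref, bs=32):
        out = cur.astype(float).copy()
        for r in range(0, cur.shape[0], bs):
            for c in range(0, cur.shape[1], bs):
                x = cur[r:r+bs, c:c+bs].astype(float)
                y = ref[r:r+bs, c:c+bs].astype(float)
                a = y.std() / (x.std() + 1e-6)          # gain from std matching
                b = y.mean() - a * x.mean()             # offset from mean matching
                out[r:r+bs, c:c+bs] = a * x + b
        return np.clip(out, 0, 255).astype(cur.dtype)
    ```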

  18. Semi-automatic Extraction Method for Gait Data from Infrared Digital Measurements of Parkinson's Disease Patients

    Institute of Scientific and Technical Information of China (English)

    于昌琳; 沈林勇; 胡小吾; 钱晋武; 吴曦

    2014-01-01

    Many neurological diseases and bone-damage diseases can cause movement disorders leading to abnormal gait, such as Parkinson's disease. Quantitative evaluation of gait parameters, instead of qualitative evaluation by a doctor's visual inspection, allows more accurate rehabilitation assessment. At present, the common method for quantitative gait analysis is to collect the three-dimensional coordinates of the human body through motion capture devices, and then to extract the gait characteristics from these coordinates. During the extraction process, fully automatic selection is difficult because of the numerous cases of demarcation points in clinical gait, while fully manual processing is overly complex due to the large volume of raw data. In this study, combining the advantages of multiple software packages, we use Matlab to select demarcation points manually and then automatically extract the gait characteristics, displaying the results in a friendly interface, in order to realize semi-automatic processing of the three-dimensional coordinates. This makes it possible to extract the gait parameters efficiently and to accurately reflect the individual characteristics of clinical gait in Parkinson's disease.

  19. Automatic detection of microaneurysms using microstructure and wavelet methods

    Indian Academy of Sciences (India)

    M Tamilarasi; K Duraiswamy

    2015-06-01

    Retinal microaneurysms are one of the earliest signs in diabetic retinopathy diagnosis. This paper develops an approach to automate the detection of microaneurysms using a wavelet-based Gaussian mixture model and microstructure texture feature extraction. First, the green channel of the colour retinal fundus image is extracted and pre-processed using various enhancement techniques such as bottom-hat filtering and gamma correction. Second, microstructures are extracted as Gaussian profiles in the wavelet domain using a three-level generative model. Multiscale Gaussian kernels are obtained and histogram-based features are extracted from the best kernel. Using the Markov Chain Monte Carlo method, microaneurysms are classified using the optimal feature set. The proposed approach is tested on the DIARETDB0 and DIARETDB1 datasets using a classifier based on the multi-layer perceptron procedure. For the DIARETDB0 dataset, the proposed algorithm obtains a sensitivity of 98.32% and a specificity of 97.59%. In the case of the DIARETDB1 dataset, a sensitivity of 98.91% and a specificity of 97.65% have been achieved. The accuracies achieved by the proposed algorithm are 97.86% and 98.33% on the DIARETDB0 and DIARETDB1 datasets, respectively. Based on ground truth validation, good segmentation results are achieved when compared to existing algorithms such as local relative entropy-based thresholding, inverse adaptive surface thresholding, inverse segmentation, and dark object segmentation.
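
    A sketch of the two named preprocessing steps applied to the green channel: bottom-hat filtering to emphasize small dark lesions, followed by gamma correction. The kernel size and gamma value are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.morphology import black_tophat, disk
    from skimage.exposure import adjust_gamma

    def preprocess_fundus(rgb):
        green = rgb[..., 1]                         # green channel of the fundus image
        enhanced = black_tophat(green, disk(7))     # dark spots become bright
        return adjust_gamma(enhanced, gamma=0.8)
    ```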

  20. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    Science.gov (United States)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  1. CART IV: improving automatic camouflage assessment with assistance methods

    Science.gov (United States)

    Müller, Thomas; Müller, Markus

    2010-04-01

    In order to facilitate systematic, computer-aided improvement of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007, SPIE 2008 and SPIE 2009 [1], [2], [3]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, and evaluation via user-defined feature extractors. The conspicuity of camouflaged objects due to their movement can be assessed with a purpose-built processing method named the MTI snail track algorithm. This paper presents the enhancements of the past year and addresses procedures to assist the camouflage assessment of moving objects in image data with strong noise or image artefacts, which extends the evaluation methods to a significantly broader application range. For example, some noisy infrared image data can be evaluated for the first time by applying the presented methods, which exploit the correlations between camouflage assessment, MTI (moving target indication) and dedicated noise filtering.

  2. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to automatic facial feature extraction from a still frontal posed image, and to classification and recognition of facial expressions and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expression of a supplied face into one of seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments carried out on the JAFFE facial expression database give good performance: 100% accuracy on the training set and 95.26% accuracy on the test set.

  3. The Automatic Generation of Chinese Outline Font Based on Stroke Extraction

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    A new method to obtain a spline outline description of Chinese fonts based on stroke extraction is presented. It has two primary advantages: (1) the quality of the Chinese character output is greatly improved; (2) the memory requirement is reduced. The method for stroke extraction is discussed in detail and experimental results are presented.

  4. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    Science.gov (United States)

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method for detection of an optimal initial liver region from multiple seed points selected by the user, and a threshold-based level-set method for extraction of the actual liver region based on the initial liver region. The performance of the hybrid method was compared with that of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed a significantly higher accuracy in liver extraction (similarity index, SI = 97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE = 2.5 ± 0.8%; average symmetric surface distance, ASD = 1.4 ± 0.5 mm) than the 2D region growing method (SI = 94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE = 6.5 ± 3.7%; ASD = 6.7 ± 3.8 mm). The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than that of the 2D region growing method (575 ± 136 s). The interaction time per CT dataset between the user and the computer for the hybrid method (28 ± 4 s) is significantly shorter than for the 2D region growing method (484 ± 126 s). The proposed hybrid method was found preferable for liver segmentation in preoperative virtual liver surgery planning.
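
    A minimal sketch of the two-stage idea (fast marching from multiple user seeds, then a threshold-based level set), assuming SimpleITK's filter interfaces; the seed points, stopping value, and intensity thresholds are illustrative assumptions, not the authors' settings.

```python
import SimpleITK as sitk

def segment_liver(ct_path, seeds, stop=200, lo=60, hi=160):
    """Two-stage liver extraction sketch: fast marching grows an
    initial region from user seed points; a threshold-based level
    set then refines it on the CT intensities."""
    ct = sitk.Cast(sitk.ReadImage(ct_path), sitk.sitkFloat32)

    # Stage 1: fast-marching level set from multiple seed points
    fm = sitk.FastMarchingImageFilter()
    for s in seeds:                       # seeds: index tuples as expected by AddTrialPoint
        fm.AddTrialPoint(s)
    fm.SetStoppingValue(stop)
    arrival = fm.Execute(ct)
    init = sitk.BinaryThreshold(arrival, 0, stop / 2)   # initial liver region (assumed cutoff)

    # Stage 2: threshold-based level set refines the initial region
    ls = sitk.ThresholdSegmentationLevelSetImageFilter()
    ls.SetLowerThreshold(lo)              # intensity bounds, assumed values
    ls.SetUpperThreshold(hi)
    ls.SetNumberOfIterations(300)
    # Initial level set must be negative inside the region (ITK convention)
    refined = ls.Execute(0.5 - sitk.Cast(init, sitk.sitkFloat32), ct)
    return refined < 0                    # binary liver mask
```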

  5. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to obtain automatically and effectively the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. In contrast to traditional optimization methods, two extra steps, one that determines parameter sensitivity and another that chooses the optimum initial values of sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method improves the model's overall performance by 9%. The proposed methodology and software framework can easily be applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
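
    The final downhill simplex step can be sketched with SciPy's Nelder-Mead implementation; the objective below is a toy stand-in for the paper's comprehensive evaluation metric, and the parameter names and initial values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def skill_score(params):
    """Stand-in objective: lower is better.  In the paper's setting this
    would run the GCM with the candidate parameters and aggregate a
    comprehensive evaluation metric over many fields."""
    entrainment, autoconversion = params
    return (entrainment - 0.7) ** 2 + (autoconversion - 1.3) ** 2  # toy surface

# Step 3 of the methodology: downhill simplex from a good initial guess
x0 = np.array([0.5, 1.0])   # optimum initial values chosen in step 2 (assumed)
result = minimize(skill_score, x0, method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 200})
print("optimum parameters:", result.x)
```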

  6. On A Semi-Automatic Method for Generating Composition Tables

    CERN Document Server

    Liu, Weiming

    2011-01-01

    Originating from Allen's Interval Algebra, composition-based reasoning has been widely acknowledged as the most popular reasoning technique in qualitative spatial and temporal reasoning. Given a qualitative calculus (i.e. a relation model), the first thing we should do is to establish its composition table (CT). In the past three decades, such work has usually been done manually, which is undesirable and error-prone given that a calculus may contain tens or hundreds of basic relations. Computing the correct CT was identified by Tony Cohn as a challenge for computer scientists in 1995. This paper addresses the problem and introduces a semi-automatic method to compute the CT by randomly generating triples of elements. For several important qualitative calculi, our method can establish the correct CT in a reasonably short time, as illustrated by applications to the Interval Algebra, the Region Connection Calculus RCC-8, the INDU calculus, and the Oriented Point Relation Algebras. Our method can also be us...

  7. Automatic Extraction of Tongue Coatings from Digital Images: A Traditional Chinese Medicine Diagnostic Tool

    Institute of Scientific and Technical Information of China (English)

    Linda Yunlu BAI; SHI Yundi; WU Jia; ZHANG Yonghong; WONG Weiliang; WU Yu; BAI Jing

    2009-01-01

    In traditional Chinese medicine, the coating on the tongue is considered to be a reflection of various pathologic factors. However, the conventional method of examining the tongue lacks an accepted standard and does not provide a means for sharing information. This paper describes a segmentation method to extract tongue coatings. First, the tongue body is extracted from the original image using the watershed transform. Then, a threshold method is applied to the image to eliminate the light from the camera flash. Finally, a threshold method using the Otsu model in combination with a splitting-merging method is used in the red, green, and blue (RGB) space to extract the thin coating, and the combination of the two methods is applied in the hue, saturation, and value (HSV) space to extract the thick coating. The feasibility of the method is tested by experiments, and the accuracy of segmentation is 95.9%.

  8. An automatic segmentation method for multi-tomatoes image under complicated natural background

    Science.gov (United States)

    Yin, Jianjun; Mao, Hanping; Hu, Yongguang; Wang, Xinzhong; Chen, Shuren

    2006-12-01

    Distinguishing mature fruits from complicated backgrounds and determining their three-dimensional location is fundamental to intelligent fruit-picking. Various methods for fruit identification can be found in the literature. However, surprisingly little attention has been paid to image segmentation of multiple fruits whose growth states are separated, connected, overlapped, or partially covered by branches and leaves of the plant under natural illumination conditions. In this paper we present an automatic segmentation method that comprises three main steps. First, the Red and Green component images are extracted from the RGB colour image, and subtracting the Green component from the Red component gives the R-G chromatic aberration grey-level image, in which the grey-level difference between objects and background is obvious. Exploiting this feature, Otsu's threshold method is applied for adaptive segmentation of the R-G image. Next, marker-controlled watershed segmentation based on morphological greyscale reconstruction is applied to the Red component image to search the boundaries of connected or overlapped tomatoes. Finally, the results of the above two steps are intersected to obtain the binary image of the final segmentation. Tests show that the automatic segmentation method performs satisfactorily on multi-tomato images in various growth states under natural illumination conditions, and that it is robust across multi-tomato images of different maturity.
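
    A simplified sketch of the R-G/Otsu step followed by a marker-controlled watershed, assuming OpenCV; the markers here are built from a distance transform rather than the paper's morphological greyscale reconstruction.

```python
import cv2
import numpy as np

def segment_tomatoes(path):
    img = cv2.imread(path)                              # BGR image
    b, g, r = cv2.split(img)
    rg = cv2.subtract(r, g)                             # R-G chromatic aberration image
    # Adaptive Otsu threshold on the R-G image
    _, mask = cv2.threshold(rg, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Markers from the distance transform (a simplification of the
    # morphological-reconstruction markers used in the paper)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
    n, markers = cv2.connectedComponents(sure_fg.astype(np.uint8))
    markers = markers + 1                               # background becomes label 1
    markers[(mask > 0) & (sure_fg == 0)] = 0            # unknown region to be flooded
    cv2.watershed(img, markers)                         # split touching/overlapping fruits
    return (markers > 1) & (mask > 0)                   # binary map of tomato regions
```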

  9. Automatic heating and cooling system in a gas purge microsyringe extraction.

    Science.gov (United States)

    Piao, Xiangfan; Bi, Jinhu; Yang, Cui; Wang, Xiaoping; Wang, Juan; Li, Donghao

    2011-10-30

    The gas purge microsyringe extraction (GP-MSE) technique offers quantitative and simultaneous extraction, and rapid gas chromatographic-mass spectrometric determination of volatile and semivolatile chemicals is possible. To simplify the application, a new automatic temperature control system was developed here. Stable heating and cooling over a wide range of temperatures were achieved using a micro-heater and thermoelectric cooler under varying gas flow conditions. Temperatures could be accurately controlled in the range 20-350°C (heating) and 20 to -4°C (cooling). Temperature effects on the extraction performance of the GP-MSE were experimentally investigated by comparing the recoveries of polycyclic aromatic hydrocarbons (PAHs) under various experimental conditions. A sample treatment was completed within 3 min, which is much less than the time required for chromatographic analysis. The recovery of chemicals determined ranged from 81 to 96%. High reproducibility data (RSD ≤ 5%) were obtained for direct extraction of various analytes in spiked complex plant and biological samples. The data show that the heating and cooling system has potential applications in GP-MSE system for the direct determination of various kinds of volatile and semivolatile chemicals from complex matrices without any, or only minor, sample pretreatment.

  10. A Method for Determining Sedimentary Micro-Facies Belts Automatically

    Institute of Scientific and Technical Information of China (English)

    Linfu Xue; Qitai Mei; Quan Sun

    2003-01-01

    It is important to understand the distribution of sedimentary facies, especially the distribution of sand bodies, which are the key to oil production and exploration. Secondary oil recovery requires analyzing a great deal of data accumulated over decades of oil field development, and in many cases sedimentary micro-facies maps need to be reconstructed and redrawn frequently, which is time-consuming and laborious. This paper presents an integrated approach for determining the distribution of sedimentary micro-facies, tracing micro-facies boundaries, and drawing the map of sedimentary micro-facies belts automatically by computer. The approach is based on the division and correlation of strata across multiple wells as well as analysis of sedimentary facies. It includes transformation, gridding, interpolation, superposition, boundary searching, and drawing of the map of sedimentary facies belts, and it employs a spatial interpolation method and a "worm" interpolation method to determine the distribution of sedimentary micro-facies, including sand ribbons and/or sand blankets. The computer software developed on this basis provides a tool for quickly visualizing and understanding the distribution of sedimentary micro-facies and reservoirs. Satisfactory results have been achieved by applying the technique to the Putaohua Oil Field in the Songliao Basin, China.
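
    The gridding-and-interpolation step can be sketched with SciPy; the well locations, thickness values, and belt thresholds below are illustrative assumptions, and the paper's own "worm" interpolation method is not reproduced.

```python
import numpy as np
from scipy.interpolate import griddata

# Illustrative well data: (x, y) locations and a sand-thickness value per well
wells = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 0.9], [0.8, 0.7], [0.5, 0.4]])
sand_thickness = np.array([2.0, 5.5, 1.0, 4.2, 3.1])

# Grid the study area and interpolate between wells
gx, gy = np.mgrid[0:1:100j, 0:1:100j]
grid = griddata(wells, sand_thickness, (gx, gy), method="linear")
grid = np.where(np.isnan(grid), 0.0, grid)   # cells outside the convex hull get zero

# Classify grid cells into micro-facies belts by thickness; belt boundaries
# can then be traced as contours of this classified grid
belts = np.digitize(grid, bins=[2.0, 4.0])   # 0: mud, 1: sand ribbon, 2: sand blanket
```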

  11. A Novel and Efficient Method for Iris Automatic Location

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2007-01-01

    An efficient and robust iris location algorithm plays a very important role in a real iris recognition system, and a novel and efficient automatic iris location method is presented in this study. It mainly includes two steps: pupil location and iris outer boundary location. For pupil location, the digital eye image is divided into many small rectangular blocks of fixed size, and the block with the smallest average intensity is selected as a reference area. Image binarization is then performed taking the average intensity of the reference area as the threshold. Finally, the centre coordinates and radius of the pupil are estimated by extending the reference area to the pupil's boundaries in the binary iris image. For the iris outer boundary, two local parts of the eye image are selected and transformed from Cartesian to polar coordinates. In order to detect the fainter outer boundary of the iris quickly, a novel edge detector is used to locate the boundaries of the two parts. The centre coordinates and radius of the iris outer boundary are estimated by fusing the location results of the two local parts with the location information of the pupil. The algorithm was tested on the CASIA v1.0 and MMU v1.0 digital eye image databases, and the experimental results show that the proposed method has satisfying performance and good robustness.
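
    A minimal sketch of the pupil-location step on a greyscale eye image held as a NumPy array; the block size is illustrative, and a production version would refine the estimate by extending the reference area to the pupil boundary as described above.

```python
import numpy as np

def locate_pupil(eye, block=16):
    """Find the darkest fixed-size block, threshold the image at that
    block's mean intensity, and estimate the pupil centre and radius."""
    h, w = eye.shape
    # Mean intensity of each block; the darkest block seeds the pupil
    best, by, bx = 255.0, 0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            m = eye[y:y + block, x:x + block].mean()
            if m < best:
                best, by, bx = m, y, x
    # Binarise with the reference block's mean as threshold
    binary = eye <= best
    # Crude centre/radius estimate from the thresholded dark pixels; a
    # connected-component filter around the reference block would be more robust
    ys, xs = np.nonzero(binary)
    cy, cx = ys.mean(), xs.mean()
    r = np.sqrt(binary.sum() / np.pi)            # radius from pupil area
    return (cy, cx), r
```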

  12. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called “NaDev” (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called “NaDevEx” (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and a list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as correct identification, i.e., loose agreement (in many cases, appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with the results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39–73%); precision, however, is better (75–97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers.

  13. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Science.gov (United States)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information about the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of derived data for geographical information systems (GIS), mapping, and navigation. Regardless of what the scan data will be used for, an automatic process is required to handle the large amount of data collected, because manual processing is time-consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects, and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower parts of each building in an urban scene, which are needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains the roofs, roads, and ground used in the second phase of classification. A second algorithm segments the uniform surfaces into building roofs, roads, and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. It led to successful classification of the building, vegetation, and road classes.

  14. Framework for automatic information extraction from research papers on nanocrystal devices.

    Science.gov (United States)

    Dieb, Thaer M; Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C

    2015-01-01

    To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called "NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers.

  15. Comparing and Combining Methods for Automatic Query Expansion

    CERN Document Server

    Pérez-Agüera, José R

    2008-01-01

    Query expansion is a well-known method to improve the performance of information retrieval systems. In this work we have tested different approaches to extract the candidate query terms from the top-ranked documents returned by the first-pass retrieval. One of them is the co-occurrence approach, based on measures of co-occurrence of the candidate and the query terms in the retrieved documents. The other, the probabilistic approach, is based on the probability distribution of terms in the collection and in the top-ranked set. We compare the retrieval improvement achieved by expanding the query with terms obtained with different methods belonging to both approaches. Besides, we have developed a naïve combination of both kinds of method, with which we have obtained results that improve those obtained with either of them separately. This result confirms that the information provided by each approach is of a different nature and, therefore, can be used in a combined manner.
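
    A toy sketch of the co-occurrence approach: candidate terms from the top-ranked documents are scored by how strongly they co-occur with the query terms. The scoring function below is a simple illustrative choice, not the paper's exact measure.

```python
from collections import Counter

def expansion_terms(query, top_docs, k=5):
    """Score candidate terms from the first-pass top-ranked documents
    by their co-occurrence with the original query terms."""
    query = set(query)
    score = Counter()
    for doc in top_docs:                      # doc: list of tokens
        tokens = set(doc)
        overlap = len(tokens & query)
        if not overlap:
            continue
        for term in tokens - query:
            score[term] += overlap            # reward co-occurrence with query terms
    return [t for t, _ in score.most_common(k)]

docs = [["cheap", "flights", "airline", "tickets"],
        ["flights", "booking", "airline"],
        ["hotel", "booking", "rooms"]]
print(expansion_terms(["flights"], docs))     # e.g. ['airline', 'tickets', ...]
```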

  16. A NOVEL METHOD FOR ARABIC MULTI-WORD TERM EXTRACTION

    Directory of Open Access Journals (Sweden)

    Hadni Meryem

    2014-10-01

    Arabic Multiword Terms (AMWTs) are relevant strings of words in text documents. Once extracted automatically, they can be used to increase the performance of Arabic text mining applications such as categorization, clustering, information retrieval, machine translation, and summarization. The proposed methods for AMWT extraction fall into three main approaches: linguistic-based, statistic-based, and hybrid. Existing methods present drawbacks that limit their use: they can only deal with bi-gram terms, and they do not yield good accuracy. In this paper, to overcome these drawbacks, we propose a new and efficient method for AMWT extraction based on a hybrid approach, composed of two main filtering steps: a linguistic filter and a statistical one. The linguistic filter uses our proposed Part-Of-Speech (POS) tagger and a sequence identifier as patterns in order to extract candidate AMWTs, while the statistical filter incorporates contextual information and a new association measure, based on termhood and unithood estimation, named NTC-Value. To evaluate and illustrate the efficiency of the proposed method for AMWT extraction, a comparative study has been conducted on the Kalimat corpus using nine experimental schemes: in the linguistic filter, we used three POS taggers (Taani's rule-based method, an HMM-based statistical method, and our recently proposed hybrid tagger), while in the statistical filter we used three statistical measures (C-Value, NC-Value, and our proposed NTC-Value). The obtained results demonstrate the efficiency of our proposed method for AMWT extraction: it outperforms the others and can deal correctly with tri-gram terms.

  17. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    Science.gov (United States)

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  18. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  19. Automatic segmentation of coronary artery tree based on multiscale Gabor filtering and transition region extraction

    Science.gov (United States)

    Wang, Fang; Wang, Guozhu; Kang, Lie; Wang, Juan

    2011-11-01

    This paper presents a novel segmentation method for extracting the coronary artery tree from angiograms, based on multiscale Gabor filtering and transition region extraction. First, the enhanced image is obtained by multiscale Gabor filtering; then the transition region of the enhanced image is extracted using a local complexity algorithm and the final segmentation threshold is calculated; finally, the image segmentation is performed. To evaluate the performance of the proposed approach, we carried out experiments on various sets of angiographic images and compared its results with those of the improved top-hat segmentation method. The experiments indicate that the proposed method outperforms the latter in terms of better extraction of small vessels, more thorough background elimination, better visualization of the coronary artery tree, and continuity of the vessels.
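
    A minimal sketch of the multiscale Gabor filtering stage, assuming OpenCV; the scale and orientation grids and kernel parameters are illustrative choices.

```python
import cv2
import numpy as np

def gabor_enhance(gray, scales=(7, 11, 15), n_orient=8):
    """Filter the angiogram with a bank of Gabor kernels over several
    scales and orientations, keeping the maximum response per pixel,
    which enhances line-like (vessel) structures."""
    gray = gray.astype(np.float32) / 255.0
    response = np.zeros_like(gray)
    for ksize in scales:                          # multiscale kernels
        for i in range(n_orient):
            theta = i * np.pi / n_orient          # orientation of the kernel stripes
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 4.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            kern -= kern.mean()                   # zero-mean to suppress flat regions
            response = np.maximum(response,
                                  cv2.filter2D(gray, cv2.CV_32F, kern))
    return response
```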

  20. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children, and pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords clinicians the opportunity to evaluate protocol adherence rapidly and electronically. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be created automatically from the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols.

  1. A chest-shape target automatic detection method based on Deformable Part Models

    Science.gov (United States)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction at home and abroad; they need to accomplish fast searching for the object to be shot under complex backgrounds, so fast detection of a given target is the foundation of further tasks. Since the chest-shape target is a common target in shooting practice, this paper takes the chest-shape target as the object of study and develops an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target on the HOG feature pyramid with a sliding window. The running time for extracting the HOG pyramid can be shortened by 36% with a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++ code, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime libraries for image pyramids and convolution on the DM642 and other hardware, our detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in actual systems.
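
    A hedged sketch of the generic mechanism only (HOG features scored by a linear model in a sliding window), using scikit-image; it omits the DPM's root/part decomposition and latent training, and the window size and threshold are assumptions.

```python
import numpy as np
from skimage.feature import hog

def sliding_window_detect(image, weights, bias, win=(64, 64), step=16, thr=0.5):
    """Score every window of a greyscale image with a linear model on
    HOG features and return the windows above threshold.  `weights`
    and `bias` would come from a trained (latent) linear SVM and must
    match the HOG feature length for the chosen window size."""
    h, w = image.shape
    hits = []
    for y in range(0, h - win[0] + 1, step):
        for x in range(0, w - win[1] + 1, step):
            patch = image[y:y + win[0], x:x + win[1]]
            feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
            score = float(np.dot(weights, feat) + bias)
            if score > thr:
                hits.append((x, y, score))       # window origin and its score
    return hits
```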

  2. A Method for Extracting Important Segments from Documents Using Support Vector Machines

    Science.gov (United States)

    Suzuki, Daisuke; Utsumi, Akira

    In this paper we propose an extraction-based method for automatic summarization. The proposed method consists of two processes: important segment extraction and sentence compaction. The process of important segment extraction classifies each segment in a document as important or not by Support Vector Machines (SVMs). The process of sentence compaction then determines grammatically appropriate portions of a sentence for a summary according to its dependency structure and the classification result by SVMs. To test the performance of our method, we conducted an evaluation experiment using the Text Summarization Challenge (TSC-1) corpus of human-prepared summaries. The result was that our method achieved better performance than a segment-extraction-only method and the Lead method, especially for sentences only a part of which was included in human summaries. Further analysis of the experimental results suggests that a hybrid method that integrates sentence extraction with segment extraction may generate better summaries.

  3. A Review of Methods of Instance-based Automatic Image Annotation

    Directory of Open Access Journals (Sweden)

    Morad Derakhshan

    2016-12-01

    Today, automatic image annotation is widely used to fill the semantic gap between the low-level features of images and the understanding of their information in the retrieval process. Since automatic image annotation is crucial to understanding digital images, several methods have been proposed to annotate an image automatically; among the most important of these are instance-based methods. As these methods are widely used, this paper analyzes the most important instance-based image annotation methods. First, the main components of instance-based automatic image annotation are analyzed. Then, the main instance-based methods are reviewed and compared with respect to various features. Finally, the most important challenges and open problems in instance-based image annotation are discussed.

  4. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern-matching method are increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code from the obfuscated viruses. Therefore, this paper proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, and unknown compression formats can also be unpacked. The method is effective against unknown viruses when combined with an existing known-virus detection system such as a virus filter based on Paul Graham's Bayesian theorem.

  5. Fully Automatic Method for 3D T1-Weighted Brain Magnetic Resonance Images Segmentation

    Directory of Open Access Journals (Sweden)

    Bouchaib Cherradi

    2011-05-01

    Accurate segmentation of brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artefacts, intrinsic tissue variation and partial volume effects, brain extraction and tissue segmentation remain challenging tasks. In this paper, a fully automatic method for the segmentation of anatomical 3D brain MR images is proposed. The method consists of several steps. First, noise reduction by median filtering is done; second, segmentation of brain/non-brain tissue is performed using a Threshold Morphologic Brain Extraction method (TMBE). Then initial centroid estimation by grey-level histogram analysis is executed; this stage yields a Modified version of the Fuzzy C-Means algorithm (MFCM) that is used for MRI tissue segmentation. Finally, 3D visualization of the three clusters (CSF, GM and WM) is performed. The efficiency of the proposed method is demonstrated by extensive segmentation experiments using simulated and real MR images, and the method has been compared with similar methods from the literature through different performance measures. The MFCM for tissue segmentation introduces a gain in speed of convergence of about 70%.

  6. An automatic artifact detection method for tomographic uniformity control in SPECT; Metodo automatico de deteccion de artefactos para el control de la uniformidad tomografica en SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Reynes Llompart, G.; Puchal, R.

    2013-07-01

    The objective of this work is to find an automatic method for the detection and classification of artifacts produced in tomographic uniformity, extracting the characteristics necessary to apply a classification algorithm using pattern recognition techniques. The method has been trained and validated with synthetic images and tested with real images. (Author)

  7. Automatization techniques for processing biomedical signals using machine learning methods

    OpenAIRE

    Artés Rodríguez, Antonio

    2008-01-01

    The Signal Processing Group (Department of Signal Theory and Communications, University Carlos III, Madrid, Spain) offers the expertise of its members in the automatic processing of biomedical signals. The main advantages in this technology are the decreased cost, the time saved and the increased reliability of the results. Technical cooperation for the research and development with internal and external funding is sought.

  8. METHOD FOR AUTOMATIC ANALYSIS OF WHEAT STRAW PULP CELL TYPES

    Directory of Open Access Journals (Sweden)

    Mikko Karjalainen,

    2012-01-01

    Agricultural residues are receiving increasing interest in the study of renewable raw materials for industrial use. Residues, generally referred to as nonwood materials, are usually complex materials. Wheat straw is one of the most abundant agricultural residues around the world and is therefore available for extensive industrial use. However, more information on its cell types is needed to utilize wheat straw efficiently in pulp and papermaking. The pulp cell types and particle dimensions of wheat straw were studied using an optical microscope and an automatic optical fibre analyzer, and the role of the various cell types in wheat straw pulp and papermaking is discussed. Wheat straw pulp components were categorized according to particle morphology, and categorization with the automatic optical analyzer was used to determine wheat straw pulp cell types. The results from automatic optical analysis were compared to those from microscopic analysis, and a good correlation was found. Automatic optical analysis was found to be a promising tool for the in-depth analysis of wheat straw pulp cell types.

  9. Automatic detecting method of LED signal lamps on fascia based on color image

    Science.gov (United States)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    The instrument display panel is one of the most important parts of an automobile, and automatic detection of LED signal lamps is critical to ensuring the reliability of automobile systems. In this paper, an automatic detection method was developed that covers three aspects: the shape of the LED lamps, the color of the LED lamps, and defect spots inside the lamps. Hundreds of fascias were tested with the automatic detection algorithm. The algorithm is fast enough to satisfy the real-time requirements of the system, and the detection results proved stable and accurate.

  10. Method for analyzing solvent extracted sponge core

    Energy Technology Data Exchange (ETDEWEB)

    Ellington, W.E.; Calkin, C.L.

    1988-11-22

    For use in solvent-extracted sponge core measurements of the oil saturation of earth formations, a method is described for quantifying the volume of oil in the fluids resulting from such extraction. The method consists of: (a) separating the solvent/oil mixture from the water in the extracted fluids, (b) distilling at least a portion of the solvent from the solvent/oil mixture substantially without co-distillation or loss of the light hydrocarbons in the mixture, (c) determining the volume contribution of the solvent remaining in the mixture, and (d) determining the volume of oil removed from the sponge by subtracting the determined remaining solvent volume.

  11. AUTOMATIC ROAD EXTRACTION FROM SATELLITE IMAGES USING EXTENDED KALMAN FILTERING AND EFFICIENT PARTICLE FILTERING

    Directory of Open Access Journals (Sweden)

    Jenita Subash

    2011-12-01

    Users of geospatial data in government, military, industry, research, and other sectors need accurate display of roads and other terrain information in areas where there are ongoing operations or locations of interest. Hence, road extraction that is significantly more automated than the employment of costly and scarce human resources has become a challenging technical issue for the geospatial community. Automatic road extraction from satellite images based on Extended Kalman Filtering (EKF) and a variable-structure multiple-model particle filter (VS-MMPF) is addressed. EKF traces the median axis of a single road segment, while VS-MMPF traces all road branches initialized at an intersection. In the case of the Local Linearization Particle Filter (LLPF), a large number of particles is used, and therefore high computational expense is usually required to attain a certain accuracy and robustness. The basic idea is to reduce the whole sampling space of the multiple-model system to the mode subspace by marginalization over the target subspace and to choose a better importance function for mode-state sampling. The core of the system is based on profile matching: during estimation, new reference profiles are generated and stored in the road template memory for future correlation analysis, thus covering the space of road profiles.

  12. AUTOMATIC EXTRACTION OF ROAD SURFACE AND CURBSTONE EDGES FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    A. Miraliakbari

    2015-05-01

    We present a procedure for automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and bounded by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences, the other based on histograms of the height data. Region growing algorithms are proposed which work on the irregular laser point cloud. Two- and four-neighbourhood growing strategies utilize the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone; road boundaries with lower or no jumps will not stop the region growing process, whereas objects on the road can terminate it. Therefore further processing, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges, is necessary. Road boundaries are finally approximated by splines. Experiments are carried out on a roughly 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. Completeness and correctness values of the region growing result between 92% and 95% are achieved.
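
    A minimal sketch of the four-neighbourhood growing strategy on a rasterized height grid, using NumPy; the minimum curbstone height is an illustrative parameter, and the actual procedure works on the irregular laser point cloud.

```python
import numpy as np
from collections import deque

def grow_road(height, seed, min_curb=0.07):
    """Grow the road surface from a seed cell over a height grid,
    stopping wherever the height jump to a neighbour exceeds the
    assumed minimum curbstone height (in metres)."""
    road = np.zeros(height.shape, dtype=bool)
    queue = deque([seed])                 # seed: (row, col) on the road
    road[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbourhood
            ny, nx = y + dy, x + dx
            if (0 <= ny < height.shape[0] and 0 <= nx < height.shape[1]
                    and not road[ny, nx]
                    and abs(height[ny, nx] - height[y, x]) < min_curb):
                road[ny, nx] = True       # smooth neighbour: still road surface
                queue.append((ny, nx))
    return road
```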

  13. [Research advances on DNA extraction methods from peripheral blood mononuclear cells].

    Science.gov (United States)

    Wang, Xiao-Ying; Yu, Chen-Xi

    2014-10-01

    DNA extraction is a basic technique of molecular biology. The purity and structural integrity of the extracted DNA are necessary for various gene-engineering experiments. Since peripheral blood mononuclear cells are commonly used materials in clinical testing, fast and efficient isolation and extraction of genomic DNA from them is very important for the inspection and analysis of clinical blood. At present, there are many methods for extracting DNA, such as the phenol-chloroform method, the salting-out method, and centrifugal adsorption column chromatography (manual methods), magnetic beads (a semi-automatic method), and DNA extraction kits. This article briefly reviews the principles, specific steps, and assessment of the existing blood DNA extraction methods.

  14. Challenges for automatically extracting molecular interactions from full-text articles

    Directory of Open Access Journals (Sweden)

    Curran James R

    2009-09-01

    Background: The increasing availability of full-text biomedical articles will allow more biomedical knowledge to be extracted automatically with greater reliability. However, most Information Retrieval (IR) and Extraction (IE) tools currently process only abstracts. The lack of corpora has limited the development of tools that are capable of exploiting the knowledge in full-text articles. As a result, there has been little investigation into the advantages of full-text document structure, and the challenges developers will face in processing full-text articles. Results: We manually annotated passages from full-text articles that describe interactions summarised in a Molecular Interaction Map (MIM). Our corpus tracks the process of identifying facts to form the MIM summaries and captures any factual dependencies that must be resolved to extract the fact completely. For example, a fact in the results section may require a synonym defined in the introduction. The passages are also annotated with negated and coreference expressions that must be resolved. We describe the guidelines for identifying relevant passages and possible dependencies. The corpus includes 2162 sentences from 78 full-text articles. Our corpus analysis demonstrates the necessity of full-text processing; identifies the article sections where interactions are most commonly stated; and quantifies the proportion of interaction statements requiring coherent dependencies. Further, it allows us to report on the relative importance of identifying synonyms and resolving negated expressions. We also experiment with an oracle sentence retrieval system using the corpus as a gold-standard evaluation set. Conclusion: We introduce the MIM corpus, a unique resource that maps interaction facts in a MIM to annotated passages within full-text articles. It is an invaluable case study providing guidance to developers of biomedical IR and IE systems, and can be used as a gold-standard evaluation set.

  15. Complementary methods for extracting road centerlines from IKONOS imagery

    Science.gov (United States)

    Haverkamp, Donna S.; Poulsen, Rick

    2003-03-01

    We present both semi-automated and automated methods for road extraction using IKONOS imagery. The automated method extracts straight-line, gridded road networks by inferring a local grid structure from initial information and then filling in missing pieces using hypothesization and verification. This can be followed by the semi-automated road tracker tool to approximate curvilinear roads and to fill in some of the remaining missing road structure. After a panchromatic texture analysis, our automated method incorporates an object-level processing phase which enables the algorithm to avoid problems arising from interference such as crosswalks and vehicles. It is limited, however, in that the logic is designed for reasoning concerning intersecting grid patterns of straight road segments. Many suburban areas are characterized by curving streets which may not be well-approximated using this automatic method. In these areas, missing content can be filled in using a semi-automated tool which tracks between user-supplied points. The semi-automated algorithm is based on measures derived from both the panchromatic and multispectral bands of IKONOS. We will discuss both of these algorithms in detail and how they fit into our overall solution strategy for road extraction. A presentation of current experimentation and test results will be followed by a discussion of advantages, shortcomings, and directions for future research and improvements.

  16. Automatic extraction of mandibular bone geometry for anatomy-based synthetization of radiographs.

    Science.gov (United States)

    Antila, Kari; Lilja, Mikko; Kalke, Martti; Lötjönen, Jyrki

    2008-01-01

    We present an automatic method for segmenting Cone-Beam Computerized Tomography (CBCT) volumes and synthetizing orthopantomographic, anatomically aligned views of the mandibular bone. The model-based segmentation method was developed with the characteristics of dental CBCT in mind: severe metal artefacts, relatively high noise, and high variability of the mandibular bone shape. First, we applied the segmentation method to delineate the bone. Second, we aligned a model resembling the geometry of orthopantomographic imaging to the segmented surface. Third, we estimated the tooth orientations based on the local shape of the segmented surface. These results were used in determining the geometry of the synthetized radiograph. Segmentation gave excellent results: with 14 samples we reached a mean distance of 0.57 ± 0.16 mm from the hand-drawn reference. The estimation of tooth orientations was accurate, with an error of 0.65 ± 8.0 degrees. An example of these results used in synthetizing panoramic radiographs is presented.

  17. Automatic features recognition and extraction of digital shoe last model

    Institute of Scientific and Technical Information of China (English)

    胡小春; 翟亚磊; 周福静

    2011-01-01

    To build a feature-based parametric model of a shoe last, the geometric features of the shoe last data model must be recognized and extracted automatically. Corresponding automatic extraction methods and algorithms are established for the different geometric features. Successive small-angle coordinate rotation is used to recognize and extract the toe-tip height, the heel height, and the central curve of the shoe last, and a cutting-plane method is used to extract the three girth contours of the shoe last model. The whole process is implemented with MATLAB, the C language, and Pro/E. Finally, the following shoe last features are extracted automatically: toe-tip height, heel height, central curve, bottom outline, and the three girth contours.

  18. Quantitative evaluation of an automatic segmentation method for 3D reconstruction of intervertebral scoliotic disks from MR images

    Directory of Open Access Journals (Sweden)

    Claudia Chevrefils

    2012-08-01

    Background: For some scoliotic patients spinal instrumentation is inevitable. Among these patients, those with a stiff curvature will need thoracoscopic disk resection. The removal of the intervertebral disk with only thoracoscopic images is a tedious and challenging task for the surgeon. With computer-aided surgery and 3D visualisation of the intervertebral disk during surgery, surgeons will have access to additional information such as the remaining disk tissue or the distance of surgical tools from critical anatomical structures like the aorta or the spinal canal. We hypothesized that automatically extracting 3D information on the intervertebral disk from MR images would help surgeons evaluate the remaining disk and would add a safety factor for the patient during thoracoscopic disk resection. Methods: This paper presents a quantitative evaluation of an automatic segmentation method for 3D reconstruction of intervertebral scoliotic disks from MR images. The automatic segmentation method is based on the watershed technique and morphological operators. The 3D Dice Similarity Coefficient (DSC) is the main statistical metric used to validate the automatically detected preoperative disk volumes. The automatic detections of intervertebral disks in real clinical MR images are compared to manual segmentations done by clinicians. Results: Results show that, depending on the type of MR acquisition sequence, the 3D DSC can be as high as 0.79 (±0.04). These 3D results are also supported by a 2D quantitative evaluation as well as by robustness and variability evaluations. The mean discrepancy (in 2D) between the manual and automatic segmentations for regions around the spinal canal is 1.8 (±0.8) mm. The robustness study shows that among the five factors evaluated, only the type of MRI acquisition sequence affects the segmentation results. Finally, the variability of the automatic segmentation method is lower than the variability associated with manual segmentation.
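
    The 3D Dice Similarity Coefficient used for validation compares the automatic segmentation A against the manual one B as DSC = 2|A ∩ B| / (|A| + |B|); a minimal NumPy version follows.

```python
import numpy as np

def dice_similarity(auto_mask, manual_mask):
    """3D Dice Similarity Coefficient between two binary volumes:
    DSC = 2|A ∩ B| / (|A| + |B|), where 1.0 means perfect overlap."""
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(manual_mask, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```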

  19. Extraction of handwritten areas from colored image of bank checks by an hybrid method

    CERN Document Server

    Haboubi, Sofiene

    2011-01-01

    One of the first steps in the realization of an automatic check-recognition system is the extraction of the handwritten areas. We propose in this paper a hybrid method to extract these areas, based on digit recognition by Fourier descriptors and several colored-image processing steps. It requires recognizing the bank from its code, which is located in the check marking band, as well as recognizing the handwriting color by the method of histogram differences. The area extraction is then carried out using some mathematical morphology tools.

  20. An automatic method for retrieving and indexing catalogues of biomedical courses.

    Science.gov (United States)

    Maojo, Victor; de la Calle, Guillermo; García-Remesal, Miguel; Bankauskaite, Vaida; Crespo, Jose

    2008-11-06

    Although wide-ranging information about biomedical informatics education and courses is available on different websites, this information is usually not exhaustive and is difficult to keep up to date. We propose a new methodology based on information retrieval techniques for automatically extracting, indexing, and retrieving information about educational offers. A web application has been developed to make such information available in an inventory of courses and educational offers.

  1. A HYBRID METHOD FOR AUTOMATIC COUNTING OF MICROORGANISMS IN MICROSCOPIC IMAGES

    OpenAIRE

    2016-01-01

    Microscopic image analysis is an essential process enabling the automatic enumeration and quantitative analysis of microbial images. Several systems are available for enumerating microbial growth, but some existing methods are inefficient at accurately counting overlapped microorganisms. Therefore, in this paper we propose an efficient method for automatic segmentation and counting of microorganisms in microscopic images. The method uses a hybrid approach based on...

  2. Automatic extraction of myocardial mass and volumes using parametric images from dynamic non-gated PET

    DEFF Research Database (Denmark)

    Harms, Hans; Hansson, Nils Henrik Stubkjær; Tolbod, Lars Poulsen;

    2016-01-01

    …-gated dynamic cardiac PET. METHODS: Thirty-five patients with aortic-valve stenosis and 10 healthy controls (HC) underwent a 27-min 11C-acetate PET/CT scan and cardiac magnetic resonance imaging (CMR). HC were scanned twice to assess repeatability. Parametric images of uptake rate K1 and the blood pool were … LV and WT only and an overestimation for LVEF at lower values. Intra- and inter-observer correlations were >0.95 for all PET measurements. PET repeatability accuracy in HC was comparable to CMR. CONCLUSION: LV mass and volumes are accurately and automatically generated from dynamic 11C-acetate PET without ECG-gating. This method can be incorporated in a standard routine without any additional workload and can, in theory, be extended to other PET tracers.

  3. Automatic Object-Oriented, Spectral-Spatial Feature Extraction Driven by Tobler’s First Law of Geography for Very High Resolution Aerial Imagery Classification

    Directory of Open Access Journals (Sweden)

    Zhiyong Lv

    2017-03-01

    Full Text Available Aerial image classification has become popular and has attracted extensive research efforts in recent decades. The main challenge lies in its very high spatial resolution but relatively insufficient spectral information. To this end, spatial-spectral feature extraction is a popular strategy for classification. However, parameter determination for that feature extraction is usually time-consuming and depends excessively on experience. In this paper, an automatic spatial feature extraction approach based on image raster and segmental vector data cross-analysis is proposed for the classification of very high spatial resolution (VHSR) aerial imagery. First, multi-resolution segmentation is used to generate strongly homogeneous image objects and extract corresponding vectors. Then, to automatically explore the region of a ground target, two rules, which are derived from Tobler’s First Law of Geography (TFL) and a topological relationship of vector data, are integrated to constrain the extension of a region around a central object. Third, the shape and size of the extended region are described. A final classification map is achieved through a supervised classifier using shape, size, and spectral features. Experiments on three real aerial images of VHSR (0.1 to 0.32 m) are conducted to evaluate the effectiveness and robustness of the proposed approach. Comparisons to state-of-the-art methods demonstrate the superiority of the proposed method in VHSR image classification.

  4. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads using spectral, texture and linear features have certain limitations. Also, many methods need human intervention to obtain the road seeds (semi-automatic extraction), which makes them strongly human-dependent and inefficient. A road-extraction method is proposed in this paper that uses image segmentation based on the principle of local gray consistency and integrates shape features. First, the image is segmented, and then linear and curved roads are obtained by using several object shape features, thereby correcting methods that extract only linear roads. Second, road extraction is carried out based on region growing: the road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In the experiments, images featuring roads with relatively uniform gray levels as well as poorly illuminated road surfaces were chosen, and the results show that the proposed method is promising.
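
    A generic seeded region-growing sketch built on the local gray-consistency idea; note the paper selects road seeds automatically, whereas this toy version takes the seed as an input and accepts 4-neighbours close to the running region mean:

```python
import numpy as np
from collections import deque

def region_grow(gray: np.ndarray, seed: tuple[int, int], tol: float = 10.0) -> np.ndarray:
    """Grow a region from `seed`, accepting 4-neighbours whose gray value is
    within `tol` of the running region mean (local gray consistency)."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(gray[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(gray[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(gray[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# img = some 2-D uint8 array; road_mask = region_grow(img, seed=(120, 45))
```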

  5. Calibration of three rainfall simulators with automatic measurement methods

    Science.gov (United States)

    Roldan, Margarita

    2010-05-01

    The rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and so to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the necessity of studying the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema, G.F. and Riezebos, H.Th., 1983). Rainfall simulators are one of the most used tools in erosion studies and are used to determine fall velocity and drop size. Rainfall simulators allow repeated and multiple measurements. The main reason for the use of rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. But on many occasions when simulated rain is compared with natural rain, there is a lack of correspondence between the two, and this can cast some doubt on the validity of the data because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley, D., 2008). Many rainfall simulations have high rain rates that do not resemble natural rain events, and such measurements are not comparable. Besides, the intensity is related to the kinetic energy which

  6. Automatic dynamic mask extraction for PIV images containing an unsteady interface, bubbles, and a moving structure

    Science.gov (United States)

    Dussol, David; Druault, Philippe; Mallat, Bachar; Delacroix, Sylvain; Germain, Grégory

    2016-07-01

    When performing Particle Image Velocimetry (PIV) measurements in complex fluid flows with moving interfaces and a two-phase flow, it is necessary to develop a mask to remove non-physical measurements. This is the case when studying, for example, the complex bubble sweep-down phenomenon observed in oceanographic research vessels. Indeed, in such a configuration, the presence of an unsteady free surface, of a solid-liquid interface and of bubbles in the PIV frame generates numerous laser reflections and therefore spurious velocity vectors. In this note, an image masking process is developed to successively identify the boundaries of the ship and the free surface interface. As the presence of the solid hull surface induces laser reflections, the hull edge contours are simply detected in the first PIV frame and dynamically estimated for consecutive ones. As for the unsteady surface determination, a specific process is implemented as follows: i) edge detection of the gradient magnitude in the PIV frame; ii) extraction of the particles by filtering high-intensity large areas related to the bubbles and/or hull reflections; iii) extraction of the rough region containing these particles and their reflections; iv) removal of these reflections. The unsteady surface is finally obtained with a fifth-order polynomial interpolation. The resulting free surface is successfully validated by Fourier analysis and by visualizing selected PIV images containing numerous spurious high-intensity areas. This paper demonstrates how this data-analysis process leads to a PIV image database without reflections and to automatic detection of both the free surface and the rigid body. An application of this new mask is finally detailed, allowing a preliminary analysis of the hydrodynamic flow.
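
    The final interpolation step is easy to illustrate: given the surface points surviving steps i) to iv), a fifth-order polynomial is fitted per frame. The synthetic detections below stand in for real ones; column/row coordinates and noise levels are invented:

```python
import numpy as np

# Hypothetical detected surface points: x pixel columns and the row y of the
# free surface found in each column after reflection removal.
rng = np.random.default_rng(1)
x = np.linspace(0, 1023, 200)
y_noisy = 300 + 40 * np.sin(x / 180.0) + rng.normal(0, 3, x.size)

# Fifth-order polynomial fit, as used for the unsteady free surface.
coeffs = np.polyfit(x, y_noisy, deg=5)
surface = np.polyval(coeffs, x)

# Pixels above `surface` (smaller row index) would be masked out of the PIV frame.
```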

  7. Exploring the Potential for Automatic Extraction of Vegetation Phenological Metrics from Traffic Webcams

    Directory of Open Access Journals (Sweden)

    Karon L. Smith

    2013-05-01

    Full Text Available Phenological metrics are of potential value as direct indicators of climate change. Usually they are obtained via either satellite imaging or ground based manual measurements; both are bespoke and therefore costly and have problems associated with scale and quality. An increase in the use of camera networks for monitoring infrastructure offers a means of obtaining images for use in phenological studies, where the only necessary outlay would be for data transfer, storage, processing and display. Here a pilot study is described that uses image data from a traffic monitoring network to demonstrate that it is possible to obtain usable information from the data captured. There are several challenges in using this network of cameras for automatic extraction of phenological metrics, not least, the low quality of the images and frequent camera motion. Although questions remain to be answered concerning the optimal employment of these cameras, this work illustrates that, in principle, image data from camera networks such as these could be used as a means of tracking environmental change in a low cost, highly automated and scalable manner that would require little human involvement.

  8. Assessing the Utility of Automatic Cancer Registry Notifications Data Extraction from Free-Text Pathology Reports.

    Science.gov (United States)

    Nguyen, Anthony N; Moore, Julie; O'Dwyer, John; Philpot, Shoni

    2015-01-01

    Cancer Registries record cancer data by reading and interpreting pathology cancer specimen reports. For some Registries this can be a manual process, which is labour and time intensive and subject to errors. A system for automatic extraction of cancer data from HL7 electronic free-text pathology reports has been proposed to improve the workflow efficiency of the Cancer Registry. The system is currently processing an incoming trickle feed of HL7 electronic pathology reports from across the state of Queensland in Australia to produce an electronic cancer notification. Natural language processing and symbolic reasoning using SNOMED CT were adopted in the system; Queensland Cancer Registry business rules were also incorporated. A set of 220 unseen pathology reports selected from patients with a range of cancers was used to evaluate the performance of the system. The system achieved overall recall of 0.78, precision of 0.83 and F-measure of 0.80 over seven categories, namely, basis of diagnosis (3 classes), primary site (66 classes), laterality (5 classes), histological type (94 classes), histological grade (7 classes), metastasis site (19 classes) and metastatic status (2 classes). These results are encouraging given the large cross-section of cancers. The system allows for the provision of clinical coding support as well as indicative statistics on the current state of cancer, which is not otherwise available.
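
    The recall, precision and F-measure quoted above follow directly from true-positive, false-positive and false-negative counts; a minimal sketch, with synthetic counts chosen only to land near the reported figures (they are not the study's actual tallies):

```python
def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall and F-measure from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Illustrative counts approximating the reported 0.83 / 0.78 / 0.80.
p, r, f = prf(tp=780, fp=160, fn=220)
print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")
```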

  9. Forest point processes for the automatic extraction of networks in raster data

    Science.gov (United States)

    Schmidt, Alena; Lafarge, Florent; Brenner, Claus; Rottensteiner, Franz; Heipke, Christian

    2017-04-01

    In this paper, we propose a new stochastic approach for the automatic detection of network structures in raster data. We represent a network as a set of trees with acyclic planar graphs. We embed this model in the probabilistic framework of spatial point processes and determine the most probable configuration of trees by stochastic sampling. That is, different configurations are constructed randomly by modifying the graph parameters and by adding or removing nodes and edges to/from the current trees. Each configuration is evaluated based on the probabilities for these changes and an energy function describing the conformity with a predefined model. By using the Reversible jump Markov chain Monte Carlo sampler, an approximation of the global optimum of the energy function is reached iteratively. Although our main target application is the extraction of rivers and tidal channels in digital terrain models, experiments with other types of networks in images show the transferability to further applications. Qualitative and quantitative evaluations demonstrate the competitiveness of our approach with respect to existing algorithms.

  10. Automatic detection and extraction of ultra-fine bright structure observed with new vacuum solar telescope

    Science.gov (United States)

    Deng, Linhua

    2017-02-01

    Solar magnetic structures exhibit a wealth of different spatial and temporal scales. The solar magnetic element is currently believed to be the ultra-fine magnetic structure of the lower solar atmosphere, and the diffraction limit of the largest-aperture solar telescope of China (the New Vacuum Solar Telescope, NVST) is close to the spatial scale of magnetic elements. This implies that modern solar observations have entered an era of resolution better than 0.2 arc-seconds. Since 2011, the NVST has been operating successfully and has accumulated a large volume of observational data; the ultra-fine magnetic structures rooted in the dark intergranular lanes can now be readily resolved. The observational characteristics and physical mechanism of magnetic bright points form one of the most important topics in solar physics, so it is very important to determine their statistical and physical parameters with feature-extraction techniques and numerical analysis approaches. For identifying such ultra-fine magnetic structures, an automatic and effective detection algorithm, employing the Laplacian transform and the morphological dilation technique, is proposed and examined. Statistical parameters such as the typical diameter, the area distribution, the eccentricity, and the intensity contrast are then obtained. Finally, the scientific value of investigating the physical parameters of magnetic bright points is discussed, especially for understanding how solar magnetic energy is transferred from the photosphere to the corona.
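
    A rough sketch of the Laplacian-plus-dilation detection idea on a photospheric intensity image; the percentile threshold and the median-based background test are illustrative choices, not the paper's tuned values:

```python
import numpy as np
from scipy import ndimage

def detect_bright_points(image: np.ndarray) -> np.ndarray:
    """Candidate magnetic bright points via Laplacian response and dilation."""
    # A negative Laplacian responds strongly to small, locally bright features.
    response = -ndimage.laplace(image.astype(float))
    seeds = response > np.percentile(response, 99.0)  # strongest 1% (illustrative)
    # Morphological dilation grows the point seeds to cover each whole feature.
    mask = ndimage.binary_dilation(seeds, iterations=2)
    # Keep only features brighter than the global median, to suppress responses
    # from noise inside the dark intergranular lanes.
    return mask & (image > np.median(image))

# granulation = np.random.default_rng(3).normal(1.0, 0.05, (512, 512))
# bp_mask = detect_bright_points(granulation)
```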

  11. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Science.gov (United States)

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building

  12. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    Science.gov (United States)

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building

  13. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Directory of Open Access Journals (Sweden)

    Fasahat Ullah Siddiqui

    2016-07-01

    Full Text Available Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state

  14. Semi-automatic extraction of lineaments from remote sensing data and the derivation of groundwater flow-paths

    Directory of Open Access Journals (Sweden)

    U. Mallast

    2011-01-01

    Full Text Available We describe a semi-automatic method to objectively and reproducibly extract lineaments based on the global one arc-second ASTER GDEM. The combined method of linear filtering and object-based classification ensures a high degree of accuracy, resulting in a lineament map. Subsequently, lineaments are differentiated into geological and morphological lineaments to assign a probable origin and hence a hydro-geological significance. In the western catchment area of the Dead Sea (Israel), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. The authors demonstrate that a strong correlation between lineaments and structural features exists, influenced by the Syrian Arc paleostress field, the Dead Sea stress field, or both. Subsequently, we analyse the distances between lineaments and wells, thereby creating an assessment criterion for the hydraulic significance of detected lineaments. Derived from this analysis, the authors suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus provides significant information for groundwater analysis. We validate the flow-path delineation by comparison with existing groundwater model results based on well data.

  15. An efficient algorithm for automatically generating multivariable fuzzy systems by Fourier series method.

    Science.gov (United States)

    Chen, Liang; Tokuda, N

    2002-01-01

    By exploiting the Fourier series expansion, we have developed a new constructive method of automatically generating a multivariable fuzzy inference system from any given sample set with the resulting multivariable function being constructed within any specified precision to the original sample set. The given sample sets are first decomposed into a cluster of simpler sample sets such that a single input fuzzy system is constructed readily for a sample set extracted directly from the cluster independent of the other variables. Once the relevant fuzzy rules and membership functions are constructed for each of the variables completely independent of the other variables, the resulting decomposed fuzzy rules and membership functions are integrated back into the fuzzy system appropriate for the original sample set requiring only a moderate cost of computation in the required decomposition and composition processes. After proving two basic theorems which we need to ensure the validity of the decomposition and composition processes of the system construction, we have demonstrated a constructive algorithm of a multivariable fuzzy system. Exploiting an implicit error bound analysis available at each of the construction steps, the present Fourier method is capable of implementing a more stable fuzzy system than the power series expansion method of ParNeuFuz and PolyNeuFuz, covering and implementing a wider range of more robust applications.

  16. Automatic extraction of complex surface models from range images using a trimmed-rational Bezier surface

    Science.gov (United States)

    Boulanger, Pierre; Sekita, Iwao

    1993-08-01

    This paper presents a new method for the extraction of a rational Bezier surface from a set of data points. The algorithm is divided into four parts. First, a least median square fitting algorithm is used to extract a Bezier surface from the data set. Second, from this initial surface model an analysis of the data set is performed to eliminate outliers. Third, the algorithm then improves the fit over the residual points by modifying the weights of a rational Bezier surface using a non-linear optimization method. A further improvement of the fit is achieved using a new intrinsic parameterization technique. Fourth, an approximation of the region boundary is performed using a NURB with knots. Experimental results show that the current algorithm is robust and can precisely approximate complex surfaces.

  17. Automatic Shape-Based Target Extraction for Close-Range Photogrammetry

    Science.gov (United States)

    Guo, X.; Chen, Y.; Wang, C.; Cheng, M.; Wen, C.; Yu, J.

    2016-06-01

    In order to perform precise identification and location of artificial coded targets in natural scenes, a novel design of circle-based coded target and a corresponding coarse-to-fine extraction algorithm are presented. The designed target separates the target box and coding box completely and has the advantage of rotation invariance. Based on the original target, templates are prepared by three geometric transformations and are used as the input of shape-based template matching. Finally, region growing and parity check methods are used to extract the coded targets as final results. No human involvement is required except for the preparation of templates and the adjustment of thresholds at the beginning, which is conducive to the automation of close-range photogrammetry. The experimental results show that the proposed recognition method for the designed coded target is robust and accurate.

  18. Recent developments in automatic solid-phase extraction with renewable surfaces exploiting flow-based approaches

    DEFF Research Database (Denmark)

    Miró, Manuel; Hartwell, Supaporn Kradtap; Jakmunee, Jaroon;

    2008-01-01

    Solid-phase extraction (SPE) is the most versatile sample-processing method for removal of interfering species and/or analyte enrichment. Although significant advances have been made over the past two decades in automating the entire analytical protocol involving SPE via flow-injection approaches...... chemical-derivatization reactions, and it pinpoints the most common instrumental detection techniques utilized. We present and discuss in detail relevant environmental and bioanalytical applications reported in the past few years....

  19. Feature Extraction and Automatic Material Classification of Underground Objects from Ground Penetrating Radar Data

    OpenAIRE

    Qingqing Lu; Jiexin Pu; Zhonghua Liu

    2014-01-01

    Ground penetrating radar (GPR) is a powerful tool for detecting objects buried underground. However, the interpretation of the acquired signals remains a challenging task, since an experienced user is required to manage the entire operation. Particularly difficult is the classification of the material type of underground objects in noisy environments. This paper proposes a new feature extraction method. First, the discrete wavelet transform (DWT) transforms the A-Scan data and the approximation coefficient...

  20. Automatic segmentation of the bone and extraction of the bone cartilage interface from magnetic resonance images of the knee

    Science.gov (United States)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2007-03-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat-suppressed spoiled gradient recalled images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  1. Automatic extraction of lunar impact craters from Chang'E images based on Hough transform and RANSAC

    Science.gov (United States)

    Luo, Zhongfei; Kang, Zhizhong

    2016-03-01

    This article proposes an algorithm combining the Hough transform and the RANSAC algorithm for automatic extraction of lunar craters: (1) the images are filtered to suppress noise; (2) image edges are extracted, and false edge points are subsequently eliminated by constraining the gradient direction and the area of connected components; (3) the edge images are segmented through the Hough transform, gathering the edge points of each crater together; (4) the segmented edges are fitted using the RANSAC algorithm, yielding high-precision parameters. The high precision of the algorithm was verified in experiments on images acquired by the Chang'E-1 satellite.
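
    A minimal circle-detection sketch in the spirit of steps (1) to (3), using OpenCV's gradient-based circular Hough transform on a synthetic tile; the RANSAC refinement of step (4) is omitted, and all parameter values are guesses rather than the paper's settings:

```python
import cv2
import numpy as np

# Synthetic stand-in for a Chang'E-1 image tile: dark terrain with crater rims.
img = np.full((256, 256), 80, np.uint8)
cv2.circle(img, (90, 120), 30, 150, 2)
cv2.circle(img, (180, 60), 18, 150, 2)
img = cv2.medianBlur(img, 5)  # step (1): noise suppression

# Steps (2)-(3): edge detection and grouping of co-circular edge points are
# handled internally by the gradient-based circular Hough transform.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=20, minRadius=10, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate crater: centre=({x}, {y}), radius={r}")
```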

  2. A HYBRID METHOD FOR AUTOMATIC COUNTING OF MICROORGANISMS IN MICROSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    P.Kalavathi

    2016-03-01

    Full Text Available Microscopic image analysis is an essential process for the automatic enumeration and quantitative analysis of microbial images. Several systems are available for enumerating microbial growth, but some existing methods are inefficient at accurately counting overlapping microorganisms. Therefore, in this paper we propose an efficient method for automatic segmentation and counting of microorganisms in microscopic images. This method uses a hybrid approach based on morphological operations, an active contour model, and counting by a region-labelling process. The colony count obtained by the proposed method is compared with the manual count and with the count obtained from an existing method.

  3. Identifying Structures in Social Conversations in NSCLC Patients through the Semi-Automatic extraction of Topical Taxonomies

    Directory of Open Access Journals (Sweden)

    Giancarlo Crocetti

    2016-01-01

    Full Text Available The exploration of social conversations for addressing patients' needs is an important analytical task to which many scholarly publications are contributing, filling the knowledge gap in this area. The main difficulty remains the inability to turn such contributions into pragmatic processes that the pharmaceutical industry can leverage in order to generate insight from social media data, which can be considered one of the most challenging sources of information available today due to its sheer volume and noise. This study is based on the work by Scott Spangler and Jeffrey Kreulen and applies it to identify structure in social media through the extraction of a topical taxonomy able to capture the latent knowledge in social conversations on health-related sites. A mechanism for automatically identifying and generating a taxonomy from social conversations is developed and pressure-tested using public data from media sites focused on the needs of cancer patients and their families. Moreover, a novel method for generating category labels and determining an optimal number of categories is presented, which extends Spangler and Kreulen's research in a meaningful way. We assume the reader is familiar with taxonomies, what they are and how they are used.

  4. Optical Methods For Automatic Rating Of Engine Test Components

    Science.gov (United States)

    Pritchard, James R.; Moss, Brian C.

    1989-03-01

    In recent years, increasing commercial and legislative pressure on automotive engine manufacturers, including increased oil drain intervals, cleaner exhaust emissions and high specific power outputs, has led to increasing demands on lubricating oil performance. Lubricant performance is defined by bench engine tests run under closely controlled conditions. After testing, engines are dismantled and the parts rated for wear and accumulation of deposit. This rating must be carried out consistently in laboratories throughout the world in order to ensure that lubricant quality meets the specified standards. To this end, rating technicians evaluate components following closely defined procedures. This process is time-consuming, inaccurate and subject to drift, requiring regular recalibration of raters by means of international rating workshops. This paper describes two instruments for automatic rating of engine parts. The first uses a laser to determine the degree of polishing of the engine cylinder bore caused by the reciprocating action of the piston. This instrument has been developed to prototype stage by the NDT Centre at Harwell under contract to Exxon Chemical, and is planned for production within the next twelve months. The second instrument uses red and green filtered light to determine the type, quality and position of deposits formed on the piston surfaces. The latter device has undergone a feasibility study, but no prototype exists.

  5. Automatic Morphological Sieving: Comparison between Different Methods, Application to DNA Ploidy Measurements

    Directory of Open Access Journals (Sweden)

    Christophe Boudry

    1999-01-01

    Full Text Available The aim of the present study is to propose automatic alternatives to the time-consuming interactive sorting of elements for DNA ploidy measurements. One archival brain tumour and two archival breast carcinomas were studied, corresponding to 7120 elements (3764 nuclei, 3356 debris and aggregates). Three automatic classification methods were tested for eliminating debris and aggregates from DNA ploidy measurements: mathematical morphology (MM), multiparametric analysis (MA) and neural network (NN). Performances were evaluated by reference to interactive sorting. The percentages of debris and aggregates automatically removed reach 63, 75 and 85% for the MM, MA and NN methods, respectively, with false positive rates of 6, 21 and 25%. Information about DNA ploidy abnormalities was globally preserved after automatic elimination of debris and aggregates by the MM and MA methods, as opposed to the NN method, showing that automatic classification methods can offer alternatives to tedious interactive elimination of debris and aggregates for DNA ploidy measurements of archival tumours.

  6. Influence of extraction method on protein profile of soybeans

    OpenAIRE

    Pavlićević Milica Ž.; Stanojević Slađana P.; Vucelić-Radović Biljana V.

    2013-01-01

    A comparison of the protein profiles of soybean obtained by commonly used extraction methods (Tris buffer and Tris-urea buffer) with those of methods used to extract plant proteins for 2D PAGE analysis (direct solubilization in IEF buffer, acetone extraction, phenol extraction, extraction with urea solubilization buffer and thiourea-urea extraction) was carried out. 2D profiles of samples extracted directly in IEF buffer, in urea solubilization buffer and in acetone were characterized ...

  7. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
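
    As a rough illustration of the meta-knowledge idea, the sketch below replaces the paper's multi-label classifier with a simple nearest-neighbour lookup; the data set names, meta-features and kernel labels are all invented for the example:

```python
import numpy as np

# Toy meta-knowledge base: per data set, a meta-feature vector (here just
# n_samples, n_features, class entropy) and the kernels that performed well.
META_DB = {
    "iris-like":   (np.array([150.0, 4.0, 1.58]),     {"rbf", "polynomial"}),
    "text-like":   (np.array([5000.0, 20000.0, 0.9]), {"linear"}),
    "signal-like": (np.array([800.0, 64.0, 1.0]),     {"rbf", "laplace"}),
}

def recommend_kernels(meta_features: np.ndarray, k: int = 1) -> set[str]:
    """Recommend the kernel label set(s) of the nearest data set(s) in meta space."""
    scale = np.array([1e4, 1e4, 2.0])  # crude per-feature normalisation
    dists = {name: np.linalg.norm((vec - meta_features) / scale)
             for name, (vec, _) in META_DB.items()}
    out: set[str] = set()
    for name in sorted(dists, key=dists.get)[:k]:
        out |= META_DB[name][1]
    return out

print(recommend_kernels(np.array([200.0, 10.0, 1.2])))  # -> {'rbf', 'polynomial'}
```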

  8. Automatic Seamless Stitching Method for CCD Images of Chang'E-I Lunar Mission

    Institute of Scientific and Technical Information of China (English)

    Mengjie Ye; Jian Li; Yanyan Liang; Zhanchuan Cai; Zesheng Tang

    2011-01-01

    A novel automatic seamless stitching method is presented. Compared to the traditional method, it speeds up processing and minimizes the human resources needed to produce a global lunar map. Meanwhile, a new global image map of the Moon with a spatial resolution of ~120 m has been completed by the proposed method from Chang'E-1 CCD image data.

  9. Computer Domain Term Automatic Extraction and Hierarchical Structure Building

    Institute of Scientific and Technical Information of China (English)

    林源; 陈志泊; 孙俏

    2011-01-01

    This paper presents a computer domain term automatic extraction method based on rules and statistics. It uses computer book titles from the Amazon.com website as the corpus; the data are preprocessed by word segmentation and by filtering stop words and special characters. Terms are extracted by a set of rules together with frequency statistics and inserted into a word tree derived from the ODP to build the hierarchical structure. Experimental results show that, compared with manually tagged terms, the automatically extracted terms achieve high precision and recall.

  10. Auto-OBSD: Automatic parameter selection for reliable Oscillatory Behavior-based Signal Decomposition with an application to bearing fault signature extraction

    Science.gov (United States)

    Huang, Huan; Baddour, Natalie; Liang, Ming

    2017-03-01

    Bearing signals are often contaminated by in-band interferences and random noise. Oscillatory Behavior-based Signal Decomposition (OBSD) is a new technique which decomposes a signal according to its oscillatory behavior, rather than frequency or scale. Due to the low oscillatory transients of bearing fault-induced signals, the OBSD can be used to effectively extract bearing fault signatures from a blurred signal. However, the quality of the result highly relies on the selection of method-related parameters. Such parameters are often subjectively selected and a systematic approach has not been reported in the literature. As such, this paper proposes a systematic approach to automatic selection of OBSD parameters for reliable extraction of bearing fault signatures. The OBSD utilizes the idea of Morphological Component Analysis (MCA) that optimally projects the original signal to low oscillatory wavelets and high oscillatory wavelets established via the Tunable Q-factor Wavelet Transform (TQWT). In this paper, the effects of the selection of each parameter on the performance of the OBSD for bearing fault signature extraction are investigated. It is found that some method-related parameters can be fixed at certain values due to the nature of bearing fault-induced impulses. To adaptively tune the remaining parameters, index-guided parameter selection algorithms are proposed. A Convergence Index (CI) is proposed and a CI-guided self-tuning algorithm is developed to tune the convergence-related parameters, namely, penalty factor and number of iterations. Furthermore, a Smoothness Index (SI) is employed to measure the effectiveness of the extracted low oscillatory component (i.e. bearing fault signature). It is shown that a minimum SI implies an optimal result with respect to the adjustment of relevant parameters. Thus, two SI-guided automatic parameter selection algorithms are also developed to specify two other parameters, i.e., Q-factor of high-oscillatory wavelets and
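
    The abstract leans on a Smoothness Index (SI) without defining it; one common definition in the bearing-diagnosis literature is the ratio of the geometric mean to the arithmetic mean of the signal envelope (the paper's exact definition may differ), sketched here with SciPy's Hilbert transform:

```python
import numpy as np
from scipy.signal import hilbert

def smoothness_index(x: np.ndarray) -> float:
    """Smoothness index of a signal's envelope: geometric mean / arithmetic mean.

    Values near 1 indicate a smooth, featureless envelope; values well below 1
    indicate impulsive content such as a bearing fault signature, so parameter
    search would seek the minimum SI, as the abstract describes.
    """
    env = np.abs(hilbert(x))
    env = env[env > 0]  # guard the logarithm
    geo_mean = np.exp(np.mean(np.log(env)))
    return geo_mean / np.mean(env)

t = np.linspace(0, 1, 4096)
impulsive = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 8 * t) > 0.99)
print(smoothness_index(impulsive))  # much lower than for a plain sine
```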

  11. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example the VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, we can analyze test task scheduling to prevent deadlock or resource conflicts. Finally, this paper analyzes the feasibility of the method.

  12. A method for improving the accuracy of automatic indexing of Chinese-English mixed documents

    Institute of Scientific and Technical Information of China (English)

    Yan; ZHAO; Hui; SHI

    2012-01-01

    Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents. Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and cybernetics theory, we proposed an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of our proposed method. Findings: This method distinguishes Chinese and English documents by their grammatical structures and word formation rules. Through the implementation of this method in the three phases of automatic indexing for Chinese-English mixed documents, the results were encouraging. The precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%. Research limitations: The indexing method is relatively complicated and the whole indexing process requires substantial human intervention. Due to pattern matching based on a brute-force (BF) approach, the indexing efficiency is reduced to some extent. Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit not only the indexing of life science documents but also the indexing of documents in other subject areas. Originality/value: So far, few studies have been published about methods for increasing the accuracy of multilingual automatic indexing. This study provides insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.

  13. Method for Extracting and Sequestering Carbon Dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Rau, Gregory H.; Caldeira, Kenneth G.

    2005-05-10

    A method and apparatus to extract and sequester carbon dioxide (CO2) from a stream or volume of gas wherein said method and apparatus hydrates CO2, and reacts the resulting carbonic acid with carbonate. Suitable carbonates include, but are not limited to, carbonates of alkali metals and alkaline earth metals, preferably carbonates of calcium and magnesium. Waste products are metal cations and bicarbonate in solution or dehydrated metal salts, which when disposed of in a large body of water provide an effective way of sequestering CO2 from a gaseous environment.
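
    The underlying chemistry, written here for the calcium carbonate case, is standard carbonate dissolution; this worked equation pair is an editorial illustration of the two steps named in the abstract (hydration of CO2, then reaction of the carbonic acid with carbonate), not a quotation from the patent:

```latex
\begin{align*}
  \mathrm{CO_2 + H_2O} &\longrightarrow \mathrm{H_2CO_3} \\
  \mathrm{H_2CO_3 + CaCO_3} &\longrightarrow \mathrm{Ca^{2+} + 2\,HCO_3^{-}}
\end{align*}
```

    The net effect is that gaseous CO2 ends up as dissolved bicarbonate, matching the abstract's description of the waste products as metal cations and bicarbonate in solution.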

  14. AsteriX: a Web server to automatically extract ligand coordinates from figures in PDF articles.

    Science.gov (United States)

    Lounnas, V; Vriend, G

    2012-02-27

    Coordinates describing the chemical structures of small molecules that are potential ligands for pharmaceutical targets are used at many stages of the drug design process. The coordinates of the vast majority of ligands can be obtained from either publicly accessible or commercial databases. However, interesting ligands sometimes are only available from the scientific literature, in which case their coordinates need to be reconstructed manually--a process that consists of a series of time-consuming steps. We present a Web server that helps reconstruct the three-dimensional (3D) coordinates of ligands for which a two-dimensional (2D) picture is available in a PDF file. The software, called AsteriX, analyses every picture contained in the PDF file and attempts to determine automatically whether or not it contains ligands. Areas in pictures that may contain molecular structures are processed to extract connectivity and atom type information that allow coordinates to be subsequently reconstructed. The AsteriX Web server was tested on a series of articles containing a large diversity of graphical representations. In total, 88% of 3249 ligand structures present in the test set were identified as chemical diagrams. Of these, about half were interpreted correctly as 3D structures, and a further one-third required only minor manual corrections. It is in principle impossible to always correctly reconstruct 3D coordinates from pictures, because there are many different protocols for drawing a 2D image of a ligand and, more importantly, a wide variety of semantic annotations are possible. The AsteriX Web server therefore includes facilities that allow the users to augment partial or partially correct 3D reconstructions. All 3D reconstructions submitted, checked, and corrected by the users remain at the server and are freely available for everybody. The coordinates of the reconstructed ligands are made available in a series of formats commonly used in drug design research. The

  15. Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments

    Directory of Open Access Journals (Sweden)

    Xiaolong Shi

    2016-05-01

    Full Text Available Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken pre and post an earthquake or flood). Traditional registration methods based on local intensity probably cannot maintain steady performances, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours can hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), which was proposed by Akinlar et al. in 2011, is used to extract line segments from two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and the transformation parameters between the reference and sensed images can be determined. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is

  16. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    Directory of Open Access Journals (Sweden)

    Christine Sapienza

    2005-06-01

    Full Text Available Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW) and the Itakura-Saito (IS) distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
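
    Since the proposed measure builds on dynamic time warping, a minimal DTW distance in pure NumPy may help; the per-frame cost here is a squared difference, standing in for the Itakura-Saito spectral distortion the authors combine it with:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two 1-D feature sequences.

    Classic O(n*m) dynamic program over a cumulative cost matrix; each cell
    extends the cheapest of the insertion, deletion and match predecessors.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

print(dtw_distance(np.array([0.0, 1.0, 2.0, 1.0]),
                   np.array([0.0, 1.0, 1.0, 2.0, 1.0])))  # 0.0: same shape, warped
```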

  17. Determination of Artificial Sweetener 4-Ethoxyphenylurea in Succade by Automatic Solid-phase Extraction and High Performance Chromatography with Fluorescence Method

    Institute of Scientific and Technical Information of China (English)

    陈章捷; 陈金凤; 张艳燕; 钟坚海; 魏晶晶

    2014-01-01

    High performance liquid chromatography is applied to the determination of the artificial sweetener 4-ethoxyphenylurea in succade. The sample is ultrasonically extracted with an acetic acid/ammonium acetate buffer solution and purified by automatic solid-phase extraction. The extract is separated on an SB-C18 reversed-phase column and detected with a fluorescence detector. The linear correlation coefficient over the range 0 to 10 mg/L is 0.9987, and the limit of quantitation (S/N = 10) is less than 0.1 mg/kg. Using three blank succade samples as matrices, recovery was tested at three spiking levels; recoveries ranged from 81.7% to 92.4%, with RSDs (n = 6) between 2.4% and 6.8%.

  18. A new method for an automatic analysis of rotation patterns

    Science.gov (United States)

    Heuer, A.

    A method that reliably and quickly analyzes even poorly defined crowded rotation patterns is presented. It is based on a systematic search in the non-Euclidean 3D parameter space of the harmonics combined with the application of the method of interval bisecting.

  19. A cell extraction method for oily sediments

    Directory of Open Access Journals (Sweden)

    Michael eLappé

    2011-11-01

    Full Text Available Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are also an important economic resource; through natural seepage or accidental release they can also be major pollutants. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence and thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix. In principle, this technique can also be used to separate cells from oily sediments, but it is not optimized for this application. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from samples treated according to our new protocol are significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and, in samples containing more biodegraded oils, methanol delivered the best results. However, as solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which hydrocarbon extraction is maximized and cell lysis minimized. A ratio between slurry and solvent of 1:2 to 1:5 delivered the highest cell counts without lysing too many cells. The method provided reproducibly good results on samples from very different environments, both marine and terrestrial.

  20. Unsupervised Threshold for Automatic Extraction of Dolphin Dorsal Fin Outlines from Digital Photographs in DARWIN (Digital Analysis and Recognition of Whale Images on a Network)

    CERN Document Server

    Hale, Scott A

    2012-01-01

    At least two software packages---DARWIN, Eckerd College, and FinScan, Texas A&M---exist to facilitate the identification of cetaceans---whales, dolphins, porpoises---based upon the naturally occurring features along the edges of their dorsal fins. Such identification is useful for biological studies of population, social interaction, migration, etc. The process whereby fin outlines are extracted in current fin-recognition software packages is manually intensive and represents a major user input bottleneck: it is both time consuming and visually fatiguing. This research aims to develop automated methods (employing unsupervised thresholding and morphological processing techniques) to extract cetacean dorsal fin outlines from digital photographs, thereby reducing manual user input. Ideally, automatic outline generation will improve the overall user experience and improve the ability of the software to correctly identify cetaceans. Various transformations from color to gray space were examined to determine whi...
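
    The unsupervised thresholding step at the heart of this approach is typically Otsu's method; a self-contained sketch on an 8-bit grayscale image follows (the morphological post-processing described above is omitted, and the final masking line assumes a dark fin against lighter water):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Otsu's unsupervised threshold on an 8-bit grayscale image.

    Chooses the gray level maximising the between-class variance
    sigma_b^2(t) = [mu_T * omega(t) - mu(t)]^2 / [omega(t) * (1 - omega(t))].
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

gray = np.random.default_rng(7).integers(0, 256, (240, 320), dtype=np.uint8)
t = otsu_threshold(gray)
fin_mask = gray <= t  # dark fin pixels (assumption for illustration)
```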

  1. An Improved Dynamic Programming Method for Automatic Stratigraphic Correlation

    Institute of Scientific and Technical Information of China (English)

    Yan Hanjie; Yan Hong; Xiang Zhucong; Wang Yanjiang

    2003-01-01

    An improved dynamic programming algorithm is proposed for reducing possible layer mismatching in multi-well correlation. Compared with the standard dynamic programming algorithm, this method restricts the search range during layer matching. It can not only avoid possible mismatching between sample and target layers, but also reduce the time spent on layer correlation. Applying the improved method to data previously processed by the standard method indicates that it is more effective and time-saving for multi-well correlation than the conventional dynamic programming algorithm.

  2. Automatic extraction of reference gene from literature in plants based on texting mining.

    Science.gov (United States)

    He, Lin; Shen, Gengyu; Li, Fei; Huang, Shuiqing

    2015-01-01

    Real-Time Quantitative Polymerase Chain Reaction (qRT-PCR) is widely used in biological research. Selecting a stable reference gene is key to the validity of a qRT-PCR experiment. However, selecting an appropriate reference gene usually requires rigorous biological experiments for verification, at high cost. The scientific literature has accumulated many results on the selection of reference genes. Therefore, mining reference genes for specific experimental conditions from the literature can provide reliable candidates for similar qRT-PCR experiments, with the advantages of reliability, economy and efficiency. An auxiliary reference gene discovery method from literature is proposed in this paper which integrates machine learning, natural language processing and text mining approaches. The validity tests showed that this new method achieves better precision and recall in extracting reference genes and their experimental contexts.

  3. Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games

    CERN Document Server

    Tencé, Fabien

    2010-01-01

    Classic evaluation methods for believable agents are time-consuming because they involve many humans judging the agents. They are well suited to validating work on new believable-behaviour models. However, during implementation, numerous experiments can help to improve agents' believability. We propose a method that aims at assessing how much an agent's behaviour looks like humans' behaviours. By representing behaviours with vectors, we can store data computed for humans and then evaluate as many agents as needed without further need for humans. We present a test experiment which shows that even a simple evaluation following our method can reveal differences between quite believable agents and humans. This method seems promising although, as shown in our experiment, analysis of the results can be difficult.
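
    A minimal sketch of the vector idea, under the assumption that behaviour feature vectors have already been computed: human data are stored once, and any agent can then be scored against them without further human judges. The file name and z-score distance are illustrative choices.

        import numpy as np

        human_vectors = np.load("human_behaviours.npy")  # placeholder: stored human data
        human_mean = human_vectors.mean(axis=0)
        human_std = human_vectors.std(axis=0) + 1e-9

        def believability_score(agent_vector):
            """Smaller distance to the human distribution = more human-like."""
            z = (agent_vector - human_mean) / human_std
            return float(np.linalg.norm(z))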

  4. Virgin almond oil: Extraction methods and composition

    Directory of Open Access Journals (Sweden)

    Roncero, J. M.

    2016-09-01

    Full Text Available In this paper the extraction methods for virgin almond oil and its chemical composition are reviewed. The most common methods for obtaining the oil are solvent extraction, extraction with supercritical fluids (CO2), and pressure systems (hydraulic and screw presses). The best industrial yield, but also the worst oil quality, is achieved by using solvents. Oils obtained by this method cannot be considered virgin oils, as they are obtained by chemical treatments. Supercritical fluid extraction results in higher quality oils but at a very high price. Extraction by pressing thus becomes the best option for achieving high quality oils at an affordable price. With regard to chemical composition, almond oil is characterized by its low content of saturated fatty acids and the predominance of monounsaturated ones, especially oleic acid. Furthermore, almond oil contains antioxidants and fat-soluble bioactive compounds that make it an oil with interesting nutritional and cosmetic properties.

  5. Automatic Sampling with the Ratio-of-uniforms Method

    OpenAIRE

    Leydold, Josef

    1999-01-01

    Applying the ratio-of-uniforms method for generating random variates results in very efficient, fast, and easy-to-implement algorithms. However, parameters for every particular type of density must be precalculated analytically. In this paper we show that the ratio-of-uniforms method is also useful for the design of a black-box algorithm suitable for a large class of distributions, including all those with log-concave densities. Using polygonal envelopes and squeezes results in an algorithm that is ...
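
    For concreteness, here is a basic ratio-of-uniforms sampler for the (unnormalized) standard normal density exp(-x^2/2); the black-box algorithm described above generalizes this rejection scheme with polygonal envelopes and squeezes.

        import math
        import random

        def rou_normal():
            """Sample N(0, 1) via ratio-of-uniforms: accept (u, v) with
            v^2 <= -4 u^2 ln(u), then return x = v / u."""
            v_bound = math.sqrt(2.0 / math.e)       # |v| <= sqrt(2/e) for this density
            while True:
                u = random.random()                  # u ~ U(0, 1)
                v = random.uniform(-v_bound, v_bound)
                if u > 0 and v * v <= -4.0 * u * u * math.log(u):
                    return v / u                     # accepted: v/u has density f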

  6. Automatic Knowledge Extraction from Chinese Natural Language Documents

    Institute of Scientific and Technical Information of China (English)

    车海燕; 冯铁; 张家晨; 陈伟; 李大利

    2013-01-01

    Automatic knowledge extraction methods can recognize and extract factual knowledge matching an ontology from Web documents automatically. Such factual knowledge can be used to implement knowledge-based services and also provides the semantic content necessary to realize the vision of the Semantic Web. However, it is very difficult to deal with natural language documents, especially Chinese natural language documents. This paper proposes a new knowledge extraction method (AKE) based on Semantic Web theory and Chinese natural language processing (NLP) technologies. The method uses aggregated knowledge concepts to describe N-ary relation knowledge in an ontology and can automatically extract both explicit and implicit simple factual knowledge as well as N-ary complex factual knowledge from Chinese natural language documents, without using large-scale linguistic knowledge bases or synonym tables. Experimental results show that this method outperforms other known methods.

  7. Automatic methods for the refinement of system models from the specification to the implementation

    CERN Document Server

    Seiter, Julia; Drechsler, Rolf

    2017-01-01

    This book provides a comprehensive overview of automatic model refinement, which helps readers close the gap between an initial textual specification and its desired implementation. The authors enable readers to follow two "directions" of refinement: vertical refinement, which adds detail and precision to a single description of a given model, and horizontal refinement, which considers several views on one level of abstraction, refining the system specification with dedicated descriptions of structure or behavior. The discussion includes several methods that support designers of electronic systems in this refinement process, including verification methods to check automatically whether a refinement has been conducted as intended.

  8. Methods for automatic cloud classification from MODIS data

    Science.gov (United States)

    Astafurov, V. G.; Kuriyanovich, K. V.; Skorokhodov, A. V.

    2016-12-01

    In this paper, different texture-analysis methods are used to describe different cloud types in MODIS satellite images. A universal technique is suggested for forming efficient sets of textural features, using an algorithm of truncated feature scanning for different classifiers based on neural networks and cluster-analysis methods. Efficient sets of textural features are given for the considered classifiers, and the cloud-image classification results are discussed. The classification methods used in this work are described: probabilistic neural network, K-nearest neighbors, self-organizing Kohonen network, fuzzy C-means, and a density-based clustering algorithm. It is shown that the algorithm based on a probabilistic neural network is the most efficient: it provides the best classification reliability for 25 cloud types and allows the recognition of 11 cloud types with a probability greater than 0.7. As an example, cloud classification results are given for the Tomsk region. The classifications were carried out using full-size satellite cloud images and the different methods; the results agree with each other and agree well with the observational data from ground-based weather stations.

  9. Analysis of Fiber deposition using Automatic Image Processing Method

    Science.gov (United States)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They are able to penetrate deep into the human lung, deposit there, and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways at an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After delivery, the deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase-contrast microscope. The results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and the deposition fraction and deposition efficiency were calculated afterwards.

  10. Analysis of Fiber deposition using Automatic Image Processing Method

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Full Text Available Fibers are a permanent threat to human health. They are able to penetrate deep into the human lung, deposit there, and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways at an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After delivery, the deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase-contrast microscope. The results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and the deposition fraction and deposition efficiency were calculated afterwards.

  11. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Science.gov (United States)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure automatically identifies discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually characterize trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter, and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared to those obtained by performing a manual sampling on the orthophotograph of the same rock face.

  12. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    Simulation results show that the optimum combination of these parameters, determined using this method, is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.

  13. Comparison of modified automatic Dumas method and the traditional Kjeldahl method for nitrogen determination in infant food.

    Science.gov (United States)

    Bellomonte, G; Costantini, A; Giammarioli, S

    1987-01-01

    This study compares two methods for determining nitrogen and protein in various types of infant food: the Kjeldahl method, developed in 1883, which is time-consuming and labor-intensive, and a newer automatic method based on the Dumas method. In each category of infant food considered, the results obtained by the two methods are shown to be comparable; however, the modified Dumas method is quicker, easier, and does not pollute the laboratory environment.

  14. A cell extraction method for oily sediments

    Science.gov (United States)

    Lappé, M.; Kallmeyer, J.

    2012-04-01

    Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are an important economic resource and, through natural seepage or accidental release, they can be major pollutants. Oil sands from Alberta, Canada, and samples from the seafloor of the Gulf of Mexico represent typical examples of either natural or anthropogenically affected oily sediments. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence and thereby severely hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix, producing a sediment-free cell extract that can then be used for subsequent staining and cell enumeration under a fluorescence microscope. In principle, this technique can also be used to separate cells from oily sediments, but it was not originally optimized for this application and does not provide satisfactory results. Here we present a modified extraction method in which the hydrocarbons are removed by a solvent treatment prior to cell extraction. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from oily samples treated according to our new protocol were significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and – in samples containing more biodegraded oils – methanol delivered the best results. Because solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio at which the positive effect of hydrocarbon extraction overcomes the negative effect of cell lysis. A volumetric ratio of 1:2 to 1:5 between a formalin-fixed sediment slurry and solvent delivered the highest cell counts. Extraction ...

  15. A New Automatic Method to Adjust Parameters for Object Recognition

    Directory of Open Access Journals (Sweden)

    Issam Qaffou

    2012-09-01

    Full Text Available To recognize an object in an image, the user must apply a combination of operators, where each operator has a set of parameters. These parameters must be "well" adjusted in order to reach good results. Usually, this adjustment is made manually by the user. In this paper we propose a new method to automate the process of parameter adjustment for an object recognition task. Our method is based on reinforcement learning and uses two types of agents: a User Agent that gives the necessary information, and a Parameter Agent that adjusts the parameters of each operator. Due to the nature of reinforcement learning, the results depend not only on the system characteristics but also on the user's preferred choices.
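
    The sketch below reduces the setup to a bandit-style simplification of reinforcement learning: a hypothetical Parameter Agent adjusts one discretized parameter and is rewarded by an evaluation score, a stand-in for the User Agent's feedback.

        import random

        values = [1, 3, 5, 7, 9]          # hypothetical discretized parameter values
        Q = {v: 0.0 for v in values}
        alpha, epsilon = 0.1, 0.2

        def evaluate(v):
            """Stand-in for the User Agent's quality feedback (the reward)."""
            return -abs(v - 5)            # pretend 5 is the best setting

        for step in range(200):
            if random.random() < epsilon:
                v = random.choice(values)          # explore
            else:
                v = max(Q, key=Q.get)              # exploit the best-known value
            Q[v] += alpha * (evaluate(v) - Q[v])   # bandit-style value update

        best_value = max(Q, key=Q.get)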

  16. Automatic teleaudiometry: a low cost method to auditory screening

    Directory of Open Access Journals (Sweden)

    Campelo, Victor Eulálio Sousa

    2010-03-01

    Full Text Available Introduction: The benefits of auditory screening have been demonstrated; however, these programs have been restricted to large centers. Objectives: (a) To develop a distance auditory screening method; (b) To test its accuracy in comparison with the screening audiometry test (AV). Method: Teleaudiometry (TA) consists of purpose-developed software installed on a computer with TDH39 headphones. A study was carried out on 73 individuals between 17 and 50 years of age, 57% of them female, randomly selected among patients and companions at the Hospital das Clínicas. After answering a symptom questionnaire and undergoing otoscopy, the individuals performed the TA and AV tests, with scanning at 20 dB at frequencies of 1, 2 and 4 kHz following the ASHA (1997) protocol, and the gold-standard pure-tone audiometry test in a soundproof booth, in random order. Results: The TA lasted on average 125±11 s and the AV 65±18 s. 69 individuals (94.5%) found the TA easy or very easy to perform, and 61 (83.6%) considered the AV easy or very easy. The accuracy results of TA and AV were, respectively: sensitivity (86.7%/86.7%), specificity (75.9%/72.4%), negative predictive value (95.7%/95.5%) and positive predictive value (48.1%/55.2%). Conclusion: Teleaudiometry proved to be a good option as an auditory screening method, presenting accuracy close to that of screening audiometry. Compared with that method, teleaudiometry presented similar sensitivity, higher specificity and negative predictive value, a longer test time, and a lower positive predictive value.

  17. An Automatic Interference Recognition Method in Spread Spectrum Communication System

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-ming; TAO Ran

    2007-01-01

    An algorithm to detect and recognize interferences embedded in a direct-sequence spread spectrum (DSSS) communication system is proposed. Based on Welch's averaged modified periodogram method and the fractional Fourier transform (FRFT), the paper proposes a decision-tree-based algorithm in which a set of decision criteria for identifying different types of interference is developed. Simulation results demonstrate that the proposed algorithm provides a high recognition rate and is robust over various ISR and SNR values.
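
    One branch of such a detector can be sketched with Welch's method: estimate the power spectral density and flag narrowband interference when spectral lines stand well above the noise floor. The threshold factor below is illustrative, not taken from the paper.

        import numpy as np
        from scipy.signal import welch

        def detect_narrowband(x, fs, factor=10.0):
            """Return frequencies whose PSD exceeds `factor` times the floor."""
            f, pxx = welch(x, fs=fs, nperseg=min(1024, len(x)))
            noise_floor = np.median(pxx)          # robust DSSS noise-floor estimate
            return f[pxx > factor * noise_floor]  # non-empty -> tone-like interference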

  18. An automatic synthesis method of compact models of integrated circuit devices based on equivalent circuits

    Science.gov (United States)

    Abramov, I. I.

    2006-05-01

    An automatic synthesis method for equivalent circuits of integrated circuit devices is described in this paper. The method is based on a physical approach to constructing a finite-difference approximation to the basic equations of semiconductor device physics. It allows compact equivalent circuits of different devices to be synthesized automatically, as an alternative to, for example, the rather formal BSIM2 and BSIM3 models used in SPICE-type circuit simulation programs. The method is one possible variant of a general methodology for the automatic synthesis of compact equivalent circuits of almost arbitrary devices and circuit-type structures of micro- and nanoelectronics [1]. The method is easily extended, when necessary, to account for thermal effects in integrated circuits. Its application should be especially promising for the analysis of integrated circuit fragments as a whole and for the identification of significant collective physical effects, including parasitic effects in VLSI and ULSI. The paper considers examples illustrating the possibilities of the method for the automatic synthesis of compact equivalent circuits of some semiconductor devices and integrated circuit devices. Special attention is given to examples of integrated circuit devices on coarse spatial discretization grids (fewer than 10 nodes).

  19. Automatic parameter extraction for the 16,000 galaxies in the ESO/Uppsala catalogue

    NARCIS (Netherlands)

    Lauberts, A.; Valentijn, E. A.

    1983-01-01

    Under a restriction to angular diameters of not less than 1 arcmin, corresponding to the 15th magnitude and allowing morphological classification of structure, 16,000 galaxies have been brought together in the single volume of the ESO/Uppsala catalog (1982). Attention is given to the automat...

  20. Accuracy of structure-based sequence alignment of automatic methods

    Directory of Open Access Journals (Sweden)

    Lee Byungkook

    2007-09-01

    Full Text Available Abstract Background: Accurate sequence alignments are essential for homology searches and for building three-dimensional structural models of proteins. Since structure is better conserved than sequence, structure alignments have been used to guide sequence alignments and are commonly used as the gold standard for sequence alignment evaluation. Nonetheless, as far as we know, there is no report of a systematic evaluation of pairwise structure alignment programs in terms of sequence alignment accuracy. Results: In this study, we evaluate CE, DaliLite, FAST, LOCK2, MATRAS, SHEBA and VAST in terms of the accuracy of the sequence alignments they produce, using sequence alignments from NCBI's human-curated Conserved Domain Database (CDD) as the standard of truth. We find that 4 to 9% of the residues on average are either not aligned or aligned with more than 8 residues of shift error, and that an additional 6 to 14% of residues on average are misaligned by 1–8 residues, depending on the program and the data set used. The fraction of correctly aligned residues generally decreases as the sequence similarity decreases or as the RMSD between the Cα positions of the two structures increases. It varies significantly across CDD superfamilies, depending on whether shift error is allowed or not. Also, alignments with different shift errors occur between proteins within the same CDD superfamily, leading to inconsistent alignments between superfamily members. In general, residue pairs that are more than 3.0 Å apart in the reference alignment are heavily (>= 25% on average) misaligned in the test alignments. In addition, each method shows a different pattern of relative weaknesses for different SCOP classes. CE gives relatively poor results for β-sheet-containing structures (all-β, α/β, and α+β classes), DaliLite for the "others" class where all but the major four classes are combined, and LOCK2 and VAST for the all-β and "others" classes. Conclusion: When the sequence ...

  1. An unsupervised text mining method for relation extraction from biomedical literature.

    Science.gov (United States)

    Quan, Changqin; Wang, Meng; Ren, Fuji

    2014-01-01

    The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. The pattern clustering algorithm is based on the Polynomial Kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (the AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods: rule-based, SVM-based, and kernel-based. The proposed semi-supervised approach is superior to existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than the co-occurrence-based method.

  2. An unsupervised text mining method for relation extraction from biomedical literature.

    Directory of Open Access Journals (Sweden)

    Changqin Quan

    Full Text Available The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. The pattern clustering algorithm is based on the Polynomial Kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (the AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods: rule-based, SVM-based, and kernel-based. The proposed semi-supervised approach is superior to existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than the co-occurrence-based method.
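
    The semi-supervised KNN extension can be sketched as one round of self-training: label confident unlabeled pattern vectors with a KNN classifier trained on the labeled ones, then retrain. Feature extraction (pattern clustering and parsing) is assumed to have been done elsewhere; the confidence cut-off is illustrative.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def knn_self_training(X_lab, y_lab, X_unlab, conf=0.8, k=5):
            """Add confidently auto-labeled examples, then retrain (one round)."""
            knn = KNeighborsClassifier(n_neighbors=k).fit(X_lab, y_lab)
            proba = knn.predict_proba(X_unlab)
            confident = proba.max(axis=1) >= conf
            y_auto = knn.classes_[proba[confident].argmax(axis=1)]
            X_new = np.vstack([X_lab, X_unlab[confident]])
            y_new = np.concatenate([y_lab, y_auto])
            return KNeighborsClassifier(n_neighbors=k).fit(X_new, y_new)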

  3. Concept of automatic programming of NC machine for metal plate cutting by genetic algorithm method

    Directory of Open Access Journals (Sweden)

    B. Vaupotic

    2005-12-01

    Full Text Available Purpose: This paper presents a concept for the automatic programming of NC machines for metal plate cutting by the genetic algorithm method. Design/methodology/approach: The paper was limited to the automatic creation of NC programs for two-dimensional cutting of material by means of adaptive heuristic search algorithms. Findings: The automatic creation of NC programs for laser cutting of materials combines CAD concepts, feature recognition, and the creation and optimization of NC programs. The proposed intelligent system is capable of automatically recognizing the nesting of products in the layout and of determining the incisions and the sequence of cuts forming the laid-out products. The position of each incision is determined at the relevant place on the cut. The system is capable of finding the shortest path between individual cuts and of recording the NC program. Research limitations/implications: It would be appropriate to orient future research towards an improved system for three-dimensional cutting with optional determination of incision positions, with the capability to sense collisions and to optimize speed and acceleration during cutting. Practical implications: The proposed system assures automatic preparation of the NC program without an NC programmer. Originality/value: The proposed concept shows a high degree of universality, efficiency and reliability, and it can be simply adapted to other NC machines.
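
    A genetic algorithm for the shortest-path aspect can be sketched as an ordering problem: evolve the sequence of cuts so that the total travel distance shrinks. The cut positions below are random stand-ins, and mutation is omitted for brevity.

        import math
        import random

        cuts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

        def path_length(order):
            return sum(math.dist(cuts[a], cuts[b]) for a, b in zip(order, order[1:]))

        def crossover(p1, p2):
            """Order crossover (OX): copy a slice of p1, fill the rest from p2."""
            i, j = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[i:j] = p1[i:j]
            rest = [g for g in p2 if g not in child]
            for k in range(len(child)):
                if child[k] is None:
                    child[k] = rest.pop(0)
            return child

        pop = [random.sample(range(len(cuts)), len(cuts)) for _ in range(50)]
        for gen in range(200):
            pop.sort(key=path_length)
            survivors = pop[:25]                      # truncation selection
            pop = survivors + [crossover(*random.sample(survivors, 2))
                               for _ in range(25)]
        best_order = min(pop, key=path_length)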

  4. A simple multi-scale Gaussian smoothing-based strategy for automatic chromatographic peak extraction.

    Science.gov (United States)

    Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng

    2016-06-24

    Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consists of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving-window strategy. The new peak detection method is a variant of the approach used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found as local maximum values under various smoothing window scales. Therefore, peaks can be detected through the ridge lines of maximum values across these window scales, and signals that increase or decrease monotonically around the peak position can be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets: essential oil samples for quality control obtained from gas chromatography, and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. The results confirmed the soundness of the developed method.
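
    The multi-scale idea can be sketched as follows: smooth the chromatogram at several window scales and keep the local maxima that persist across scales. The scales and the persistence rule are illustrative choices, not the published parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def multiscale_peaks(signal, scales=(2, 4, 8, 16), min_scales=3):
            """Return indices where a local maximum persists across scales."""
            signal = np.asarray(signal, dtype=float)
            hits = np.zeros(len(signal), dtype=int)
            for s in scales:
                smooth = gaussian_filter1d(signal, sigma=s)
                is_max = (smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] > smooth[2:])
                for i in np.where(is_max)[0] + 1:
                    hits[max(0, i - 3):i + 4] += 1   # tolerate small positional drift
            return np.where(hits >= min_scales)[0]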

  5. Method and apparatus for automatic control of a humanoid robot

    Science.gov (United States)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Reiland, Matthew J (Inventor); Sanders, Adam M (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object-level, end-effector-level, and/or joint-space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal, e.g., a desired force, via the GUI, and then processing the input signal using a host machine to control the joints via the impedance-based control framework. The framework provides object-level, end-effector-level, and/or joint-space-level control of the robot, and the functional GUI simplifies the implementation of a myriad of operating modes.
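
    A generic single-joint impedance control law, sketched for illustration only (the patent's multi-level framework is not reproduced here): the commanded torque realizes a virtual stiffness and damping about a desired trajectory.

        def impedance_torque(q, qd, q_des, qd_des, k=50.0, d=5.0, tau_ff=0.0):
            """q: joint position, qd: joint velocity, tau_ff: feedforward torque;
            k and d are the virtual stiffness and damping (assumed gains)."""
            return k * (q_des - q) + d * (qd_des - qd) + tau_ff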

  6. Ceramography and segmentation of polycristalline ceramics: application to grain size analysis by automatic methods

    Energy Technology Data Exchange (ETDEWEB)

    Arnould, X.; Coster, M.; Chermant, J.L.; Chermant, L. [LERMAT, ISMRA, Caen (France); Chartier, T. [SPCTS, ENSCI, Limoges (France)

    2002-07-01

    Knowledge of the mean grain size of ceramics is a very important problem to solve in the ceramic industry. Specific segmentation methods are presented for analyzing, in an automatic way, the granulometry and morphological parameters of ceramic materials. The example presented concerns cerine materials. Such investigations yield important information on the sintering process. (orig.)

  7. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    CERN Document Server

    Jonnalagadda, Siddhartha

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shotgun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of PPI extraction: it improved the F-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  8. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background: For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be done in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to make otherwise. The characterization must also include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results: EnvMine is capable of retrieving the physicochemical variables cited in the text by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. Also, a Bayesian classifier was tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between individual locations. Conclusion: EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical ...
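
    The unit-anchored retrieval of physicochemical variables can be sketched with a regular expression that captures number-unit pairs; the unit list below is a tiny illustrative subset, not EnvMine's actual lexicon.

        import re

        UNITS = r"°C|g/L|mM|km|m|%"   # illustrative subset of a unit lexicon
        PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(" + UNITS + r")")

        text = "Samples were collected at 35 °C and a depth of 120 m with 3.5 % salinity."
        pairs = PATTERN.findall(text)  # [('35', '°C'), ('120', 'm'), ('3.5', '%')]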

  9. AUTOMATIC EXTRACTION OF BUILDING ROOF PLANES FROM AIRBORNE LIDAR DATA APPLYING AN EXTENDED 3D RANDOMIZED HOUGH TRANSFORM

    Directory of Open Access Journals (Sweden)

    E. Maltezos

    2016-06-01

    Full Text Available This study aims to extract building roof planes automatically from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection, and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection for each building is performed by applying extensions of the RHT, associated with additional constraint criteria during the random selection of the 3 points aiming at optimum adaptation to the building rooftops, and using a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of each point, and the use of additional information. An indicative experimental comparison is implemented to verify the advantages of the extended RHT compared with the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in view of quality and computational time compared with the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.

  10. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    Science.gov (United States)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to extract building roof planes automatically from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection, and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection for each building is performed by applying extensions of the RHT, associated with additional constraint criteria during the random selection of the 3 points aiming at optimum adaptation to the building rooftops, and using a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of each point, and the use of additional information. An indicative experimental comparison is implemented to verify the advantages of the extended RHT compared with the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in view of quality and computational time compared with the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
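
    A randomized-Hough-style core for plane detection can be sketched as follows: repeatedly fit a plane to three random points, vote in a quantized (normal, distance) accumulator, and report the strongest cell. The constraint criteria and refinement steps described above are omitted, and the quantization step is an illustrative choice.

        import numpy as np
        from collections import Counter

        def rht_dominant_plane(points, iters=5000, q=0.05):
            """points: (N, 3) array; q: accumulator quantization step (assumed)."""
            acc = Counter()
            rng = np.random.default_rng(0)
            for _ in range(iters):
                p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p2 - p1, p3 - p1)
                norm = np.linalg.norm(n)
                if norm < 1e-9:                    # degenerate (collinear) triple
                    continue
                n = n / norm
                if n[2] < 0:                       # canonical orientation
                    n = -n
                d = float(n @ p1)
                key = tuple(np.round(np.append(n, d) / q).astype(int))
                acc[key] += 1                      # vote for the quantized plane
            key, votes = acc.most_common(1)[0]
            return np.array(key) * q, votes        # approximate (nx, ny, nz, d)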

  11. Using Probe Vehicle Data for Automatic Extraction of Road Traffic Parameters

    Directory of Open Access Journals (Sweden)

    Roman Popescu Maria Alexandra

    2016-12-01

    Full Text Available Through this paper the author aims to study and find solutions for the automatic detection of traffic light positions and for the automatic calculation of the waiting time at traffic lights. The first objective mainly serves the road transportation field, chiefly because it removes the need for collaboration with local authorities to establish a national network of traffic lights. The second objective is important not only for companies providing navigation solutions, but especially for authorities, institutions, and companies operating road traffic management systems. Real-time, dynamic determination of traffic queue length and of waiting time at traffic lights allows the creation of dynamic, intelligent, and flexible systems, adapted to actual traffic conditions rather than to generic, theoretical models. Thus, cities can approach the Smart City concept by making road transport more efficient and environmentally friendly, as promoted in Europe through the Horizon 2020 Smart Cities and Urban Mobility initiative.

  12. Green technology approach towards herbal extraction method

    Science.gov (United States)

    Mutalib, Tengku Nur Atiqah Tengku Ab; Hamzah, Zainab; Hashim, Othman; Mat, Hishamudin Che

    2015-05-01

    The aim of the present study was to compare the maceration of selected herbs using green and non-green solvents. Water and d-limonene are green solvents, while chloroform and ethanol are non-green solvents. The selected herbs were Clinacanthus nutans leaf and stem, Orthosiphon stamineus leaf and stem, Sesbania grandiflora leaf, Pluchea indica leaf, Morinda citrifolia leaf and Citrus hystrix leaf. The extracts were compared by determination of total phenolic content. Total phenols were analyzed using a spectrophotometric technique based on the Folin-Ciocalteu reagent. Gallic acid was used as the standard compound and total phenols were expressed as mg/g gallic acid equivalent (GAE). The most suitable and effective solvent was water, which produced the highest total phenol content compared with the other solvents. Among the selected herbs, Orthosiphon stamineus leaves contained the highest total phenols, at 9.087 mg/g.

  13. On the method of the automatic modeling in hydraulic pipe networks

    Institute of Scientific and Technical Information of China (English)

    孙以泽; 徐本洲; 王祖温

    2003-01-01

    In this paper the dynamic characteristics in pipes are analyzed with a frequency-domain method, and a simple and practical description method is put forward. By establishing the model library beforehand, the modeling of the pipe network is completed automatically, and the impedance characteristics of the pipe network can be calculated accurately, achieving a reasonable configuration of the pipe network so as to decrease pressure pulsation.

  14. An Automatic Evaluation Method for Conversational Agents Based on Affect-as-Information Theory

    OpenAIRE

    Ptaszynski, Michal; Dybala, Pawel; Rzepka, Rafal; Araki, Kenji

    2010-01-01

    This paper presents a method for the automatic evaluation of conversational agents. The method consists of several steps. First, an affect analysis system is used to detect users' general emotional engagement in the conversation and classify their specific emotional states. Next, we interpret these data using reasoning based on Affect-as-Information Theory to obtain information about users' general attitudes toward the conversational agent and its performance. The affect analysis system was ...

  15. Extraction of human genomic DNA from whole blood using a magnetic microsphere method.

    Science.gov (United States)

    Gong, Rui; Li, Shengying

    2014-01-01

    With the rapid development of molecular biology and the life sciences, magnetic extraction has become a simple, automatic, and highly efficient method for separating biological molecules, performing immunoassays, and other applications. Human blood is an ideal source of human genomic DNA. Extracting genomic DNA by traditional methods is time-consuming, and phenol and chloroform are toxic reagents that endanger health. Therefore, it is necessary to find a more convenient and efficient method of obtaining human genomic DNA. In this study, we developed urea-formaldehyde resin magnetic microspheres and magnetic silica microspheres for the extraction of human genomic DNA. First, a magnetic microsphere suspension was prepared and used to extract genomic DNA from fresh whole blood, frozen blood, dried blood, and trace blood. Second, DNA content and purity were measured by agarose electrophoresis and ultraviolet spectrophotometry. The human genomic DNA extracted from whole blood was then subjected to polymerase chain reaction analysis to further confirm its quality. The results of this study lay a good foundation for future research and the development of a high-throughput, rapid method for extracting genomic DNA from various types of blood samples.

  16. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    Science.gov (United States)

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images are calculated, and using these local statistics the tumor objects are identified among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the parameters for different types of images are calculated automatically. A basic thresholding value is updated and adjusted automatically for each MR image and is used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of the method.
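
    A generic signed pressure function of the kind described above can be sketched from the mean intensities inside (c1) and outside (c2) the current contour, with phi the level set function; this is a common textbook form, not necessarily the authors' exact SPF.

        import numpy as np

        def spf(image, phi):
            """SPF in [-1, 1]: positive inside bright regions, ~0 near edges."""
            inside, outside = phi >= 0, phi < 0
            c1 = image[inside].mean() if inside.any() else 0.0
            c2 = image[outside].mean() if outside.any() else 0.0
            p = image - (c1 + c2) / 2.0
            return p / (np.abs(p).max() + 1e-12)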

  17. An atlas-based fuzzy connectedness method for automatic tissue classification in brain MRI

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yongxin; BAI Jing

    2006-01-01

    A framework incorporating a subject-registered atlas into the fuzzy connectedness (FC) method is proposed for the automatic tissue classification of 3D images of brain MRI. The pre-labeled atlas is first registered onto the subject to provide an initial approximate segmentation. The initial segmentation is used to estimate the intensity histograms of gray matter and white matter. Based on the estimated intensity histograms, multiple seed voxels are assigned to each tissue automatically. The normalized intensity histograms are utilized in the FC method as the intensity probability density function (PDF) directly. Relative fuzzy connectedness technique is adopted in the final classification of gray matter and white matter. Experimental results based on the 20 data sets from IBSR are included, as well as comparisons of the performance of our method with that of other published methods. This method is fully automatic and operator-independent. Therefore, it is expected to find wide applications, such as 3D visualization, radiation therapy planning, and medical database construction.

  18. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists), it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot: we first categorize the automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures, and then design three groups of features from the image data of the nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers were trained using the designed features: linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC); they were evaluated using leave-one-out cross validation. Results show that LR performs worst among the four classifiers while the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
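
    The box-whisker rule of the first method amounts to flagging values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]; a minimal sketch, with the feature values assumed to be computed elsewhere:

        import numpy as np

        def boxplot_outliers(values, whisker=1.5):
            """Return indices of values outside the box-whisker fences."""
            q1, q3 = np.percentile(values, [25, 75])
            iqr = q3 - q1
            lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
            return np.where((values < lo) | (values > hi))[0]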

  19. NeurphologyJ: An automatic neuronal morphology quantification method and its application in pharmacological discovery

    Directory of Open Access Journals (Sweden)

    Huang Hui-Ling

    2011-06-01

    Full Text Available Abstract Background: Automatic quantification of neuronal morphology from fluorescence microscopy images plays an increasingly important role in high-content screening. However, very few freeware tools and methods provide automatic neuronal morphology quantification for pharmacological discovery. Results: This study proposes an effective quantification method, called NeurphologyJ, capable of automatically quantifying neuronal morphologies such as soma number and size, neurite length, and neurite branching complexity (which is highly related to the numbers of attachment points and ending points). NeurphologyJ is implemented as a plugin to ImageJ, an open-source Java-based image processing and analysis platform. The high performance of NeurphologyJ arises mainly from an elegant image enhancement method, which allows some morphological image-processing operations to be applied efficiently. We evaluated NeurphologyJ by comparing it with both the computer-aided manual tracing method NeuronJ and an existing ImageJ-based plugin method, NeuriteTracer. Our results reveal that NeurphologyJ is comparable to NeuronJ, with a correlation coefficient between the estimated neurite lengths as high as 0.992. NeurphologyJ can accurately measure neurite length, soma number, neurite attachment points, and neurite ending points from a single image. Furthermore, the quantification result for nocodazole perturbation is consistent with its known inhibitory effect on neurite outgrowth, and we were able to calculate the IC50 of nocodazole using NeurphologyJ. This reveals that NeurphologyJ is effective enough to be utilized in pharmacological discovery applications. Conclusions: This study proposes an automatic and fast neuronal quantification method, NeurphologyJ. The ImageJ plugin, with support for batch processing, is easily customized for high-content screening applications. The source code of NeurphologyJ (interactive and high ...

  20. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. Translation errors may thus not be picked up, which could have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It also removes human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm automatically creates a verification log file which permanently records the name and value of each variable used, as well as the list of meanings of all possible values. This should greatly facilitate reactor licensing applications.

  1. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    OpenAIRE

    M.L. Khodra; D.H. Widyantoro; E.A. Aziz; B.R. Trilaksono

    2011-01-01

    This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentences of a paragraph. To find the optimal combination of features, corpus-based classification is used to construct a sentence classifier as the model. The sentence classifier is trained using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and the best perfor...
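
    A minimal sketch of such a corpus-based classifier: train an SVM on sentential feature vectors. The features shown (relative position, a meta-discourse cue flag, a conjunction flag) are hypothetical stand-ins for the paper's feature set.

        import numpy as np
        from sklearn.svm import SVC

        X = np.array([[0.0, 1, 0],        # [relative position, cue word, conjunction]
                      [0.5, 0, 1],
                      [1.0, 0, 0]])
        y = np.array([1, 0, 0])           # 1 = topic sentence
        clf = SVC(kernel="linear").fit(X, y)
        is_topic = clf.predict([[0.0, 1, 1]])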

  2. A method for extracting urban built-up area based on RS indexes

    Science.gov (United States)

    Qin, Ruijiao; Li, Jiansong; Tang, Huijun

    2016-10-01

    Within administrative regions, urban built-up areas are vast stretches of constructed land equipped with basic public facilities. Human activities take place most frequently within urban regions, and the dynamic evolution of urbanization has caused profound variations in urban spatial structures. Conventional boundary extraction methods are complicated and require human intervention. This article proposes a vector method that combines a data-dimension compression index, the Index-based Built-up Index (IBI), with aggregation analysis to extract the vector boundaries of urban built-up areas automatically, by setting a threshold value and the parameters for the aggregation analysis. Data-dimension compression technology is used to extract urban built-up areas using thematic bands (rather than original bands) to build the indexes, which improves the precision of extraction. The areas extracted by the methods above contain urban built-up areas, rural built-up areas, independent houses, and fully bare areas. Aggregation analysis aggregates non-adjacent plots within a certain range into a new polygon section. This method made it easy to analyze the spatial expansion of Wuhan city from 2003 to 2013. It avoids the cumbersome process of outlining vector boundaries by artificial visual interpretation, with better working efficiency and reduced costs than other methods, which cannot accurately determine vector boundaries by manual vector quantization without depending on other data or expert knowledge.
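
    The extraction core can be sketched as thresholding an index raster and then aggregating nearby patches; the file name, threshold value, and structuring element below are assumptions, not the article's parameters.

        import numpy as np
        from scipy import ndimage

        ibi = np.load("ibi_raster.npy")        # placeholder: precomputed IBI raster
        mask = ibi > 0.05                      # analyst-chosen threshold (assumption)
        # Aggregate built-up patches separated by small gaps, akin to the
        # article's aggregation of non-adjacent plots into new polygon sections
        aggregated = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
        labels, n_regions = ndimage.label(aggregated)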

  3. Development of Poliovirus Extraction Method from Stool Extracts by Using Magnetic Nanoparticles Sensitized with Soluble Poliovirus Receptor

    OpenAIRE

    Arita, Minetaro

    2013-01-01

    A method for extracting poliovirus (PV) from stool extracts was developed. Magnetic nanoparticles sensitized with soluble PV receptor efficiently extracted PV pseudovirus (>99% extraction) or endogenous infectious PVs (>90% extraction) from stool extracts. This method would be useful for extraction of PV from crude biological samples.

  4. A new quantitative automatic method for the measurement of non-rapid eye movement sleep electroencephalographic amplitude variability.

    Science.gov (United States)

    Ferri, Raffaele; Rundo, Francesco; Novelli, Luana; Terzano, Mario G; Parrino, Liborio; Bruni, Oliviero

    2012-04-01

    The aim of this study was to develop an automatic quantitative measure of electroencephalographic (EEG) signal amplitude variability during non-rapid eye movement (NREM) sleep, correlated with the visually scored cyclic alternating pattern (CAP) parameters. Ninety-eight polysomnographic EEG recordings of normal controls were used. A new algorithm based on the analysis of EEG amplitude variability during NREM sleep was designed and applied to all recordings, which were also scored visually for CAP. All measurements obtained with the new algorithm correlated positively with the corresponding CAP parameters. In particular, total CAP time correlated with total NREM variability time (r = 0.596; P < 1E-07), light sleep CAP time with light sleep variability time (r = 0.597; P < 1E-07), and slow-wave sleep CAP time with slow-wave sleep variability time (r = 0.809; P < 1E-07). Only the duration of CAP A phases showed a low correlation with the duration of variability events. Finally, the age-related modifications of CAP time and of NREM variability time were found to be very similar. The new method for the automatic analysis of NREM sleep amplitude variability presented here correlates significantly with visual CAP parameters; its application requires minimal work time compared with CAP analysis, and it might be used in large studies involving numerous recordings in which NREM sleep EEG amplitude variability needs to be assessed.

  5. Antioxidant and Antibacterial Assays on Polygonum minus Extracts: Different Extraction Methods

    OpenAIRE

    2015-01-01

    The effect of solvent type and extraction method was investigated to study the antioxidant and antibacterial activity of Polygonum minus. Two extraction methods were used: solvent extraction using a Soxhlet apparatus, and supercritical fluid extraction (SFE). The antioxidant capacity was evaluated using the ferric reducing antioxidant power (FRAP) assay and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical-scavenging assay. The highest polyphenol content was obtained from the m...

  6. Semi-automatic extraction of sectional view from point clouds - The case of Ottmarsheim's abbey-church

    Science.gov (United States)

    Landes, T.; Bidino, S.; Guild, R.

    2014-06-01

    Today, elevations or sectional views of buildings are often produced from terrestrial laser scanning. However, due to the amount of data to process, and because customers usually require 2D maps, the 3D point cloud is often degraded into 2D slices. In a sectional view, not only the portions of the object intersected by the cutting plane but also the edges and contours of other parts of the object visible behind the cutting plane are represented. To avoid tedious manual drawing, the aim of this work is to propose a semi-automatic approach for creating sectional views by point cloud processing. The extraction of sectional views first requires the segmentation of the point cloud into planar and non-planar entities. Since arches, vaults and columns can be found in cultural heritage buildings, the position and the direction of the sectional view must be taken into account before contour extraction. Indeed, the edges of surfaces of revolution depend on the chosen view. The developed extraction approach is detailed based on point clouds acquired inside and outside churches. The resulting sectional view has been evaluated qualitatively and quantitatively by comparing it with a reference sectional view made by hand. A mean deviation of 3 cm between both sections proves that the proposed approach is promising. Regarding processing time, despite a few manual corrections, it saved 40% of the time required for manual drawing.

  7. An automatic segmentation method for building facades from vehicle-borne LiDAR point cloud data based on fundamental geographical data

    Science.gov (United States)

    Li, Yongqiang; Mao, Jie; Cai, Lailiang; Zhang, Xitong; Li, Lixue

    2016-03-01

    In this paper, the authors propose a segmentation method based on fundamental geographic data. The algorithm is as follows: First, convert the coordinate system of the fundamental geographic data to that of the vehicle-borne LiDAR point cloud through some data preprocessing, aligning the two coordinate systems. Second, simplify the features of the fundamental geographic data, extract effective contour information of the buildings, set a suitable buffer threshold value for the building contours, and segment out the point cloud data of building facades automatically. Third, adopt a reasonable quality assessment mechanism to check and evaluate the segmentation results and control their quality. Experiments show that the proposed method is simple and effective. The method also has reference value for the automatic segmentation of surface features in other types of point cloud.

  8. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    Science.gov (United States)

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren

    2008-12-01

    Laser scanning is an effective way to acquire geometry data of cultural heritage with complex architecture. After generating the 3D model of the object, it is difficult to achieve exact texture mapping for the real object, so we endeavor to create seamless texture maps for a virtual heritage model of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions kept as uniform as possible. After preprocessing, images are registered on the 3D mesh in a semi-automatic way. We then divide the mesh into patches that overlap each other according to the valid texture area of each image. An optimal correspondence between mesh patches and sections of the acquired images is built. A smoothing approach based on texture blending is then proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.

  9. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    Science.gov (United States)

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author

  10. NLP techniques associated with the OpenGALEN ontology for semi-automatic textual extraction of medical knowledge: abstracting and mapping equivalent linguistic and logical constructs.

    Science.gov (United States)

    do Amaral, M B; Roberts, A; Rector, A L

    2000-01-01

    This research project presents methodological and theoretical issues related to the inter-relationship between linguistic and conceptual semantics, analysing the results obtained by applying an NLP parser to a set of radiology reports. Our objective is to define a technique for associating linguistic methods with domain-specific ontologies for semi-automatic extraction of intermediate representation (IR) information formats and medical ontological knowledge from clinical texts. We applied the Edinburgh LTG natural language parser to 2810 clinical narratives describing radiology procedures. In a second step, we used medical expertise and ontology formalism for identification of semantic structures and abstraction of IR schemas related to the processed texts. These IR schemas are an association of linguistic and conceptual knowledge, based on their semantic contents. This methodology aims to contribute to the elaboration of models relating linguistic and logical constructs based on empirical data analysis. Advances in this field might lead to the development of computational techniques for automatic enrichment of medical ontologies from real clinical environments, using descriptive knowledge implicit in large text corpora.

  11. An efficient method of key-frame extraction based on a cluster algorithm.

    Science.gov (United States)

    Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng

    2013-12-18

    This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data.
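
    The cluster-then-pick step above can be made concrete with a short sketch; note that plain k-means with a fixed k stands in here for the paper's ISODATA algorithm (which adapts the number of clusters on its own), and the frame-feature layout and the value of k are assumptions:

    ```python
    # Cluster motion-capture frames and pick the frame nearest each cluster
    # center as a key-frame. k-means is a stand-in for ISODATA.
    import numpy as np
    from sklearn.cluster import KMeans

    def extract_keyframes(frames: np.ndarray, k: int = 8) -> list[int]:
        """frames: (n_frames, n_features) pose vectors from motion capture."""
        km = KMeans(n_clusters=k, n_init=10).fit(frames)
        keyframes = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            dists = np.linalg.norm(frames[members] - km.cluster_centers_[c], axis=1)
            keyframes.append(int(members[np.argmin(dists)]))  # frame nearest the center
        return sorted(keyframes)
    ```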

  12. Automatic Extraction of Three Dimensional Prismatic Machining Features from CAD Model

    Directory of Open Access Journals (Sweden)

    B.V. Sudheer Kumar

    2011-12-01

    Full Text Available Machining feature recognition provides the necessary platform for computer aided process planning (CAPP) and plays a key role in the integration of computer aided design (CAD) and computer aided manufacturing (CAM). This paper presents a new methodology for extracting features from the geometrical data of a CAD model stored in Virtual Reality Modeling Language (VRML) files. First, the point cloud is separated into the available number of horizontal cross sections. Each cross section consists of a 2D point cloud. Then, a collection of points represented by a set of feature points is derived for each slice, describing the cross section accurately and providing the basis for feature extraction. These extracted manufacturing features give the necessary information regarding the activities required to manufacture the part. Software is developed in the Microsoft Visual C++ environment to recognize the features, with the geometric information of the part extracted from the CAD model. Using these data, an output text file is generated that lists all the machinable features present in the part. The process has been tested on various parts and successfully extracted all the features.

  13. Brazil nut sorting for aflatoxin prevention: a comparison between automatic and manual shelling methods

    Directory of Open Access Journals (Sweden)

    Ariane Mendonça Pacheco

    2013-06-01

    Full Text Available The impact of automatic and manual shelling methods during manual/visual sorting of different batches of Brazil nuts from the 2010 and 2011 harvests was evaluated in order to investigate aflatoxin prevention. The samples were tested as follows: in-shell, shell, shelled, and pieces, in order to evaluate the moisture content (mc), water activity (Aw), and total aflatoxin (LOD = 0.3 µg/kg and LOQ = 0.85 µg/kg) at the Brazil nut processing plant. The aflatoxin results obtained for the manually shelled nut samples ranged from 3.0 to 60.3 µg/g and from 2.0 to 31.0 µg/g for the automatically shelled samples. All samples showed mc levels below the limit of 15%; on the other hand, shelled samples from both harvests showed Aw levels above the limit. There were no significant differences between the manual and automatic shelling results during the sorting stages. On the other hand, visual sorting was effective in decreasing the aflatoxin contamination in both methods.

  14. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a
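
    As an illustration of this kind of affiliation parsing, the heuristic sketch below pulls institution, country, and e-mail out of a PubMed-style affiliation string; the regex, the tiny country list, and the assumed field order are illustrative stand-ins, not the authors' actual rules:

    ```python
    # Heuristic affiliation-string parsing in the spirit of the paper.
    import re

    COUNTRIES = {"USA", "United States", "Canada", "China", "Germany", "Japan"}

    def parse_affiliation(affil: str) -> dict:
        # PubMed affiliations are typically comma-separated, ending in a country
        parts = [p.strip().rstrip(".") for p in affil.split(",")]
        country = next((p for p in reversed(parts) if p in COUNTRIES), None)
        institution = parts[0] if parts else None  # first segment usually names the institution
        m = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", affil)
        return {"institution": institution, "country": country,
                "email": m.group(0) if m else None}
    ```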

  15. Automatic knowledge extraction in sequencing analysis with multiagent system and grid computing.

    Science.gov (United States)

    González, Roberto; Zato, Carolina; Benito, Rocío; Bajo, Javier; Hernández, Jesús M; De Paz, Juan F; Vera, Vicente; Corchado, Juan M

    2012-07-24

    Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated to each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.

  16. Automatic Estimation of Artemia Hatching Rate Using an Object Discrimination Method

    Directory of Open Access Journals (Sweden)

    Sung Kim

    2013-09-01

    Full Text Available Digital image processing analyzes large volumes of information in digital images. In this study, the Artemia hatching rate was measured by automatically classifying and counting cysts and larvae, based on color imaging data from cyst hatching experiments, using an image processing technique. The Artemia hatching rate estimation consists of a series of processes: a step to convert the scanned image data to binary image data, a step to detect objects and extract their shape information from the converted image data, an analysis step to choose an optimal discriminant function, and a step to recognize and classify the objects using that function. The function to classify Artemia cysts and larvae is optimally estimated based on classification performance, using the areas and the plan-form factors of the detected objects. The hatching rate estimated from image data obtained under different experimental conditions was in the range of 34-48%. Comparing automatic counting (this study) with manual counting, the maximum difference was about 19.7% and the average root-mean-squared difference was about 10.9%. This technique can be applied to biological specimen analysis using similar imaging information.
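
    A condensed sketch of such a counting pipeline is shown below: binarise the image, detect objects, and separate round cysts from elongated larvae by area and plan-form factor. The Otsu threshold, noise cut-off, and the 0.8 decision boundary are assumed values, not the discriminant the study estimated:

    ```python
    # Threshold, detect contours, classify by shape, and compute hatching rate.
    import cv2
    import numpy as np

    def hatching_rate(image_path: str) -> float:
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cysts = larvae = 0
        for c in contours:
            area = cv2.contourArea(c)
            perim = cv2.arcLength(c, True)
            if area < 10 or perim == 0:
                continue                                   # discard noise
            form_factor = 4 * np.pi * area / perim ** 2    # ~1 for circular cysts
            if form_factor > 0.8:                          # assumed decision boundary
                cysts += 1
            else:
                larvae += 1                                # elongated swimming larvae
        return larvae / (cysts + larvae)
    ```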

  17. Automatic segmentation and 3D feature extraction of protein aggregates in Caenorhabditis elegans

    Science.gov (United States)

    Rodrigues, Pedro L.; Moreira, António H. J.; Teixeira-Castro, Andreia; Oliveira, João; Dias, Nuno; Rodrigues, Nuno F.; Vilaça, João L.

    2012-03-01

    In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a disease progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The median of the resulting image histogram was used to dynamically determine a thresholding level, which allows a smoothed exterior contour of the worm to be determined and the medial axis of the worm body to be obtained by thinning its skeleton. Based on the exterior contour diameter and the medial animal axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing non-biased, reliable and high throughput quantification of protein aggregates. This may lead to a significant improvement in treatment planning and intervention for neurodegenerative diseases.
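
    The diffusion step lends itself to a compact sketch: gradient-driven smoothing with Tukey's biweight as the edge-stopping function, in the spirit described above. The iteration count, time step and scale sigma are assumed values, and the wrap-around border handling via np.roll is a simplification:

    ```python
    # Perona-Malik style anisotropic diffusion with a Tukey biweight stopper.
    import numpy as np

    def tukey_g(grad: np.ndarray, sigma: float) -> np.ndarray:
        # Tukey's biweight: zero conduction across strong edges (|grad| > sigma)
        return np.where(np.abs(grad) <= sigma, (1.0 - (grad / sigma) ** 2) ** 2, 0.0)

    def anisotropic_diffusion(img: np.ndarray, n_iter=20, dt=0.15, sigma=30.0):
        u = img.astype(float)
        for _ in range(n_iter):
            # differences toward the four neighbours
            dN = np.roll(u, -1, axis=0) - u
            dS = np.roll(u, 1, axis=0) - u
            dE = np.roll(u, -1, axis=1) - u
            dW = np.roll(u, 1, axis=1) - u
            u += dt * (tukey_g(dN, sigma) * dN + tukey_g(dS, sigma) * dS +
                       tukey_g(dE, sigma) * dE + tukey_g(dW, sigma) * dW)
        return u
    ```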

  18. Different methods to select the best extraction system for solid-phase extraction.

    Science.gov (United States)

    Bielicka-Daszkiewicz, Katarzyna

    2015-02-01

    The optimization methods for planning a solid-phase extraction experiment are presented. These methods are based on a study of interactions between different parts of an extraction system. Determination of the type and strength of interaction depends on the physicochemical properties of the individual components of the system. The main parameters that determine the extraction properties are described in this work. The influence of sorbents' and solvents' polarity on extraction efficiency, Hansen solubility parameters and breakthrough volume determination on sorption and desorption extraction step are discussed.

  19. An Automatic Cycle-Slip Processing Method and Its Precision Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHENG Zuoya; LU Xiushan

    2006-01-01

    On the basis of analyzing and researching current algorithms for cycle-slip detection and correction, a new method is put forward in this paper: a reasonable cycle-slip detection condition and algorithm, with the corresponding program COMPRE (COMpass PRE-processing), to detect and correct cycle slips automatically. Comparison with the GIPSY and GAMIT software shows that this method is effective and credible for cycle-slip detection and correction in GPS data pre-processing.

  20. Texture Analysis and Modified Level Set Method for Automatic Detection of Bone Boundaries in Hand Radiographs

    Directory of Open Access Journals (Sweden)

    Syaiful Anam

    2014-10-01

    Full Text Available Rheumatoid Arthritis (RA) is a chronic inflammatory joint disease characterized by a distinctive pattern of bone and joint destruction. To make an RA diagnosis, hand bone radiographs are taken and analyzed. Hand bone radiograph analysis starts with bone boundary detection, which is an extremely exhausting and time consuming task for radiologists. Automatic bone boundary detection in hand radiographs is thus strongly required. Garcia et al. proposed a method for automatic bone boundary detection in hand radiographs using an adaptive snake method, but it does not work for radiographs affected by RA. The level set method has advantages over the snake method; however, it often leads to either a complete breakdown or a premature termination of the curve evolution process, resulting in unsatisfactory results. For those reasons, we propose a modified level set method for detecting bone boundaries in hand radiographs affected by RA. Texture analysis is also applied to distinguish the hand bones from other areas. Experiments on a particular set of hand bone radiographs demonstrate the effectiveness of the proposed method.

  1. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start–stop function can be realized by means of the electric oil pump; thus, fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss is converted to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is a best combination of the sizes of the electric oil pump and the mechanical oil pump with respect to optimal energy conservation. Besides, the two-pump system can also satisfy the requirement of the start–stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start–stop function.

  2. An automatic procedure to extract galaxy clusters from CRoNaRio catalogues

    CERN Document Server

    Puddu, E; Longo, G; Paolillo, M; Scaramella, R; Testa, V; Gal, R R; De Carvalho, R R; Djorgovski, S G

    1999-01-01

    We present preliminary results of a simple peak finding algorithm applied to catalogues of galaxies, extracted from the Second Palomar Sky Survey in the framework of the CRoNaRio project. All previously known Abell and Zwicky clusters in a test region of 5x5 sq. deg. are recovered and new candidate clusters are also detected. This algorithm represents an alternative way of searching for galaxy clusters with respect to that implemented and tested at Caltech on the same type of data (Gal et al. 1998).

  3. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    Science.gov (United States)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of ∼ ± 0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were therefore compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional

  4. Automatic Building Extraction and Roof Reconstruction in 3k Imagery Based on Line Segments

    Science.gov (United States)

    Köhn, A.; Tian, J.; Kurz, F.

    2016-06-01

    We propose an image processing workflow to extract rectangular building footprints using georeferenced stereo-imagery and a derivative digital surface model (DSM) product. The approach applies a line segment detection procedure to the imagery and subsequently verifies identified line segments individually to create a footprint on the basis of the DSM. The footprint is further optimized by morphological filtering. Towards the realization of 3D models, we decompose the produced footprint and generate a 3D point cloud from DSM height information. By utilizing the robust RANSAC plane fitting algorithm, the roof structure can be correctly reconstructed. In the experimental part, the proposed approach was applied to 3K aerial imagery.

  5. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  6. Design of a Direction-of-Arrival Estimation Method Used for an Automatic Bearing Tracking System

    Science.gov (United States)

    Guo, Feng; Liu, Huawei; Huang, Jingchang; Zhang, Xin; Zu, Xingshui; Li, Baoqing; Yuan, Xiaobing

    2016-01-01

    In this paper, we introduce a sub-band direction-of-arrival (DOA) estimation method suitable for employment within an automatic bearing tracking system. Inspired by the magnitude-squared coherence (MSC), we extend the MSC to the sub-band and propose the sub-band magnitude-squared coherence (SMSC) to measure the coherence between the frequency sub-bands of wideband signals. Then, we design a sub-band DOA estimation method which chooses a sub-band from the wideband signals by SMSC for the bearing tracking system. The simulations demonstrate that the sub-band method has a good tradeoff between the wideband methods and narrowband methods in terms of the estimation accuracy, spatial resolution, and computational cost. The proposed method was also tested in the field environment with the bearing tracking system, which also showed a good performance. PMID:27455267
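
    To make the coherence-driven band choice concrete, the sketch below scores frequency sub-bands by their average magnitude-squared coherence between two channels; SciPy's coherence estimate stands in for the paper's SMSC statistic, and the band count and FFT parameters are assumptions:

    ```python
    # Pick the sub-band where two sensor channels are most coherent.
    import numpy as np
    from scipy.signal import coherence

    def best_subband(x, y, fs, n_bands=8):
        f, cxy = coherence(x, y, fs=fs, nperseg=1024)   # magnitude-squared coherence
        edges = np.linspace(0, len(f), n_bands + 1, dtype=int)
        scores = [cxy[a:b].mean() for a, b in zip(edges[:-1], edges[1:])]
        best = int(np.argmax(scores))
        # return the band's frequency range and its mean coherence score
        return (f[edges[best]], f[edges[best + 1] - 1]), scores[best]
    ```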

  7. Evaluation of extraction and non-extraction treatment effects by two different superimposition methods.

    Science.gov (United States)

    Türköz, Çağrı; İşcan, Hakan Necip

    2011-12-01

    The aim of this study was to determine whether different evaluation methods may be the cause of the varied outcomes of research that have evaluated the effects of extraction and non-extraction therapy on jaw rotation. This retrospective study consisted of the pre- (T1) and post- (T2) treatment lateral cephalograms of 70 skeletal Class I subjects with an optimal vertical mandibular plane angle, who had undergone fixed orthodontic treatment. Thirty-five of the subjects (20 females and 15 males, mean age: 14.7 years) were treated with four first premolar extractions and 35 (22 females and 13 males, mean age: 15 years) without extractions. T1 and T2 radiographs were superimposed using Björk's structural method and Steiner's method of sella-nasion line registered at sella. A Wilcoxon test was used to evaluate the changes between T1 and T2 and the Mann-Whitney U-test to determine differences between the extraction and non-extraction and Björk and Steiner groups. No significant difference was found between the methods of Steiner and Björk according to the spatial changes of the cephalometric points in the extraction and non-extraction groups. The maxilla showed forward rotation in the extraction group and backward rotation in the non-extraction group with both superimposition methods, but the differences were not significant in either inter- or intraclass comparisons. The mandible showed forward rotation in the extraction group with both superimposition methods but, in the non-extraction group, forward rotation was recorded with Björk's method and backward rotation with Steiner's method. These findings were not significant in either inter- or intraclass evaluations. No significant difference was found between the groups or methods.

  8. Study of Automatic Extraction, Classification, and Ranking of Product Aspects Based on Sentiment Analysis of Reviews

    Directory of Open Access Journals (Sweden)

    Muhammad Rafi

    2015-10-01

    Full Text Available It is very common for a customer to read reviews about a product before making a final decision to buy it. Customers are always eager to get the best and most objective information about the product they wish to purchase, and reviews are the major source of this information. Although reviews are easily accessible from the web, most of them carry ambiguous opinions and differing structures, so it is often very difficult for a customer to filter the information he actually needs. This paper suggests a framework which provides a single user-interface solution to this problem based on sentiment analysis of reviews. First, it extracts all the reviews from different websites with varying structures and gathers information about the relevant aspects of the product. Next, it performs sentiment analysis around those aspects and gives them sentiment scores. Finally, it ranks all extracted aspects and clusters them into positive and negative classes. The final output is a graphical visualization of all positive and negative aspects, which provides the customer with easy, comparable, and visual information about the important aspects of the product. Experimental results on five different products carrying 5000 reviews show 78% accuracy. Moreover, the paper also explains the effect of negation, valence shifters, and diminishers with a sentiment lexicon on sentiment analysis, and concludes that they are all independent of the case problem and have no effect on the accuracy of sentiment analysis.
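
    A toy version of the negation/valence-shifter handling discussed above might look like the following; the word lists are tiny stand-ins for a real sentiment lexicon, and the one-shot scope of the shifters is an assumed simplification:

    ```python
    # Lexicon-based scoring with negation and diminisher handling.
    LEXICON = {"good": 1.0, "great": 2.0, "poor": -1.0, "terrible": -2.0}
    NEGATIONS = {"not", "never", "no"}
    DIMINISHERS = {"slightly": 0.5, "somewhat": 0.7}

    def score_sentence(tokens: list[str]) -> float:
        score, flip, weight = 0.0, 1.0, 1.0
        for tok in tokens:
            t = tok.lower()
            if t in NEGATIONS:
                flip = -1.0                  # negation inverts the next polar word
            elif t in DIMINISHERS:
                weight = DIMINISHERS[t]      # diminisher scales the next polar word
            elif t in LEXICON:
                score += flip * weight * LEXICON[t]
                flip, weight = 1.0, 1.0      # shifters apply only once
        return score

    print(score_sentence("battery life is not good".split()))  # -1.0
    ```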

  9. Design of automatic control system for the precipitation of bromelain from the extract of pineapple wastes

    Directory of Open Access Journals (Sweden)

    Flavio Vasconcelos da Silva

    2010-12-01

    Full Text Available In this work, bromelain was recovered from ground pineapple stem and rind by means of precipitation with alcohol at low temperature. Bromelain is the name of a group of powerful protein-digesting, or proteolytic, enzymes that are particularly useful for reducing muscle and tissue inflammation and as a digestive aid. Temperature control is crucial to avoid irreversible protein denaturation and consequently to improve the quality of the enzyme recovered. The process was carried out alternately in two fed-batch pilot tanks: a glass tank and a stainless steel tank. Aliquots containing 100 mL of pineapple aqueous extract were fed into the tank. Inside the jacketed tank, the protein was exposed to unsteady operating conditions during the addition of the precipitating agent (ethanol 99.5%), because the dilution ratio of aqueous extract to ethanol and the heat transfer area changed. The coolant flow rate was manipulated through a variable speed pump. Fine-tuned conventional and adaptive PID controllers were implemented on-line using a fieldbus digital control system. The processing performance was enhanced, and so was the quality (enzyme activity) of the product.
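
    A minimal discrete PID loop of the kind described, driving a coolant pump to hold the tank temperature, could be sketched as below; the gains, output limits, 1 s sample time and 5 °C setpoint are illustrative assumptions, not the tuned values from the study:

    ```python
    # Discrete PID controller with clamped output for a coolant-flow loop.
    class PID:
        def __init__(self, kp, ki, kd, dt=1.0, out_min=0.0, out_max=100.0):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0
            self.out_min, self.out_max = out_min, out_max

        def update(self, setpoint: float, measured: float) -> float:
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            out = self.kp * err + self.ki * self.integral + self.kd * deriv
            return max(self.out_min, min(self.out_max, out))  # clamp pump speed (%)

    # e.g. hold the precipitation tank at 5 degC by driving the coolant pump
    pid = PID(kp=8.0, ki=0.4, kd=1.0)
    pump_speed = pid.update(setpoint=5.0, measured=7.2)
    ```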

  10. Histogram of Intensity Feature Extraction for Automatic Plastic Bottle Recycling System Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Suzaimah Ramli

    2008-01-01

    Full Text Available Currently, many recycling activities adopt manual sorting for plastic recycling, relying on plant personnel who visually identify and pick plastic bottles as they travel along the conveyor belt. These bottles are then sorted into the respective containers. Manual sorting may not be a suitable option for recycling facilities of high throughput. It has also been noted that high turnover among sorting line workers causes difficulties in achieving consistency in the plastic separation process. As a result, an intelligent system for automated sorting is greatly needed to replace manual sorting. The core components of machine vision for this intelligent sorting system are image recognition and classification. In this research, the overall plastic bottle sorting system is described. Additionally, the feature extraction algorithm used is discussed in detail, since it is the core component of the overall system that determines the success rate. The performance of the proposed feature extraction was evaluated in terms of classification accuracy, and the results obtained showed an accuracy of more than 80%.

  11. Review of automatic segmentation methods of multiple sclerosis white matter lesions on conventional magnetic resonance imaging.

    Science.gov (United States)

    García-Lorenzo, Daniel; Francis, Simon; Narayanan, Sridar; Arnold, Douglas L; Collins, D Louis

    2013-01-01

    Magnetic resonance (MR) imaging is often used to characterize and quantify multiple sclerosis (MS) lesions in the brain and spinal cord. The number and volume of lesions have been used to evaluate MS disease burden, to track the progression of the disease and to evaluate the effect of new pharmaceuticals in clinical trials. Accurate identification of MS lesions in MR images is extremely difficult due to variability in lesion location, size and shape in addition to anatomical variability between subjects. Since manual segmentation requires expert knowledge, is time consuming and is subject to intra- and inter-expert variability, many methods have been proposed to automatically segment lesions. The objective of this study was to carry out a systematic review of the literature to evaluate the state of the art in automated multiple sclerosis lesion segmentation. From 1240 hits found initially with PubMed and Google scholar, our selection criteria identified 80 papers that described an automatic lesion segmentation procedure applied to MS. Only 47 of these included quantitative validation with at least one realistic image. In this paper, we describe the complexity of lesion segmentation, classify the automatic MS lesion segmentation methods found, and review the validation methods applied in each of the papers reviewed. Although many segmentation solutions have been proposed, including some with promising results using MRI data obtained on small groups of patients, no single method is widely employed due to performance issues related to the high variability of MS lesion appearance and differences in image acquisition. The challenge remains to provide segmentation techniques that work in all cases regardless of the type of MS, duration of the disease, or MRI protocol, and this within a comprehensive, standardized validation framework. MS lesion segmentation remains an open problem.

  12. Methods for microbial DNA extraction from soil for PCR amplification

    Directory of Open Access Journals (Sweden)

    Yeates C

    1998-01-01

    Full Text Available Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). The DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100 g of soil using direct lysis with glass beads and SDS, followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol precipitation. This method was compared to other DNA extraction methods with regard to DNA purity and size.

  13. Spectrophotometric validation of assay method for selected medicinal plant extracts

    OpenAIRE

    Matthew Arhewoh; Augustine O. Okhamafe

    2014-01-01

    Objective: To develop UV spectrophotometric assay validation methods for some selected medicinal plant extracts. Methods: Dried, powdered leaves of Annona muricata (AM) and Andrographis paniculata (AP) as well as seeds of Garcinia kola (GK) and Hunteria umbellata (HU) were separately subjected to maceration using distilled water. Different concentrations of the extracts were scanned spectrophotometrically to obtain wavelengths of maximum absorbance. The different extracts were then subjected t...

  14. Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images

    Science.gov (United States)

    Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana

    2015-03-01

    Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.
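
    The Dice coefficients quoted above compare a predicted tissue mask against a reference mask; a two-line NumPy computation makes the metric concrete:

    ```python
    # Dice coefficient: 2|A intersect B| / (|A| + |B|), for binary label masks.
    import numpy as np

    def dice(pred: np.ndarray, ref: np.ndarray) -> float:
        pred, ref = pred.astype(bool), ref.astype(bool)
        return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())
    ```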

  15. Automatic selection of preprocessing methods for improving predictions on mass spectrometry protein profiles.

    Science.gov (United States)

    Pelikan, Richard C; Hauskrecht, Milos

    2010-11-13

    Mass spectrometry proteomic profiling has potential to be a useful clinical screening tool. One obstacle is providing a standardized method for preprocessing the noisy raw data. We have developed a system for automatically determining a set of preprocessing methods among several candidates. Our system's automated nature relieves the analyst of the need to be knowledgeable about which methods to use on any given dataset. Each stage of preprocessing is approached with many competing methods. We introduce metrics which are used to balance each method's attempts to correct noise versus preserving valuable discriminative information. We demonstrate the benefit of our preprocessing system on several SELDI and MALDI mass spectrometry datasets. Downstream classification is improved when using our system to preprocess the data.

  16. An automatic multigrid method for the solution of sparse linear systems

    Science.gov (United States)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDEs is presented. This version is based solely on the structure of the algebraic system, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method equals that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found to be better than known strategies.

  17. Study on Rear-end Real-time Data Quality Control Method of Regional Automatic Weather Station

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The aim was to study the rear-end real-time data quality control method for regional automatic weather stations. [Method] The basic content and steps of rear-end real-time data quality control for regional automatic weather stations were introduced. Each element was treated with a systematic quality control procedure. The status of rear-end real-time data from regional meteorological stations in Guangxi was expounded. Combining with relevant elements and linear changes, improvement based on traditiona...

  18. 7 CFR 51.1179 - Method of juice extraction.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of juice extraction. 51.1179 Section 51.1179 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... of Common Sweet Oranges (citrus Sinensis (l) Osbeck) § 51.1179 Method of juice extraction. The...

  19. COMPARISON OF RNA EXTRACTION METHODS FOR Passiflora edulis SIMS LEAVES

    Directory of Open Access Journals (Sweden)

    ANNY CAROLYNE DA LUZ

    2016-02-01

    Full Text Available ABSTRACT Functional genomic analyses require intact RNA; however, Passiflora edulis leaves are rich in secondary metabolites that interfere with RNA extraction, primarily by promoting oxidative processes and by precipitating with nucleic acids. This study aimed to analyse three RNA extraction methods, Concert™ Plant RNA Reagent (Invitrogen, Carlsbad, CA, USA), TRIzol® Reagent (Invitrogen) and TRIzol® Reagent (Invitrogen)/ice, commercial products specifically designed to extract RNA, and to determine which method is the most effective for extracting RNA from the leaves of passion fruit plants. In contrast to the RNA extracted using the other 2 methods, the RNA extracted using TRIzol® Reagent (Invitrogen) did not have acceptable A260/A280 and A260/A230 ratios and did not have ideal concentrations. Agarose gel electrophoresis showed a strong DNA band for all of the Concert™ method extractions but not for the TRIzol® and TRIzol®/ice methods. The TRIzol® method resulted in smears during electrophoresis. Due to its low levels of DNA contamination, ideal A260/A280 and A260/A230 ratios and superior sample integrity, RNA from the TRIzol®/ice method was used for reverse transcription-polymerase chain reaction (RT-PCR), and the resulting amplicons were highly similar. We conclude that TRIzol®/ice is the preferred method of RNA extraction for P. edulis leaves.

  20. Extraction of Roots of Quintics by Division Method

    Science.gov (United States)

    Kulkarni, Raghavendra G.

    2009-01-01

    We describe a method to extract roots of a reducible quintic over the real field, which makes use of a simple division. A procedure to synthesize such quintics is given and a numerical example is solved to extract the roots of quintic with the proposed method.
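
    The division idea can be illustrated numerically: once one real root r of a reducible quintic is known, dividing by (x - r) leaves a quartic that standard routines can solve. The quintic below (roots 3, ±1, ±2) is made up for demonstration and is not the paper's example:

    ```python
    # Extract roots of a reducible quintic by dividing out a known root.
    import numpy as np

    quintic = np.array([1, -3, -5, 15, 4, -12], dtype=float)  # x^5 - 3x^4 - 5x^3 + 15x^2 + 4x - 12
    r = 3.0                                    # a known root of this quintic
    quartic, remainder = np.polydiv(quintic, np.array([1.0, -r]))
    assert np.allclose(remainder, 0.0)         # (x - r) really divides the quintic
    roots = np.concatenate(([r], np.roots(quartic)))  # -> 3, 2, -2, 1, -1
    ```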

  1. Automatic collocation extraction based on quintuple

    Institute of Scientific and Technical Information of China (English)

    孙婷婷

    2015-01-01

    Collocation plays an important role in the field of linguistics and has in recent years become one of the major research directions in natural language processing. In order to realize the automatic extraction of collocations, this paper gives definitions of semantic collocation and syntactic collocation. For these two types of collocation, a quintuple-based collocation extraction method is presented. Experiments with statistics-based extraction show that this method is well suited to automatic collocation extraction; among the statistics, mutual information performs best, with an accuracy of up to 80%.
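
    A bare-bones sketch of mutual-information scoring for candidate bigrams, the statistic the experiment found most accurate, is given below; the minimum count and the positive-PMI cut-off are assumptions, and the paper's quintuple structure is not modelled:

    ```python
    # Score adjacent word pairs by pointwise mutual information (PMI).
    import math
    from collections import Counter
    from itertools import tee

    def pmi_collocations(tokens: list[str], min_count: int = 2):
        unigrams = Counter(tokens)
        a, b = tee(tokens)
        next(b, None)
        bigrams = Counter(zip(a, b))
        n = len(tokens)
        scored = []
        for (w1, w2), c in bigrams.items():
            if c < min_count:
                continue                      # ignore rare, unreliable pairs
            pmi = math.log2((c / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
            if pmi > 0.0:                     # assumed acceptance threshold
                scored.append(((w1, w2), pmi))
        return sorted(scored, key=lambda kv: -kv[1])
    ```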

  2. A method of automatic recognition of airport in complex environment from remote sensing image

    Science.gov (United States)

    Hao, Qiwei; Ni, Guoqiang; Guo, Pan; Chen, Xiaomei; Tang, Yi

    2009-11-01

    In this paper, a new method is proposed for airport recognition in complex environments. The algorithm takes full advantage of the essential characteristics of the airport target. Structural characteristics of the airport are used to establish the hypothesis-generation process. An improved Hough transform (HT) is used to detect the straight lines that represent the actual position and direction of runways. Morphological processing is used to remove road segments and isolated points. Finally, we combine these segments carefully to describe the whole airport area, realizing automatic recognition of the airport target.
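
    The straight-line step could be prototyped with OpenCV's probabilistic Hough transform, which finds long segments as runway candidates; the Canny and Hough thresholds and the length cut-off below are assumed values, and this is not the paper's improved HT:

    ```python
    # Detect long straight segments as runway candidates with Canny + HoughLinesP.
    import cv2
    import numpy as np

    def runway_candidates(image_path: str):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=200, maxLineGap=10)
        if lines is None:
            return []
        # keep only very long segments; runways appear as the longest straight features
        segs = [l[0] for l in lines]
        return [s for s in segs if np.hypot(s[2] - s[0], s[3] - s[1]) > 300]
    ```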

  3. Method of Measuring Fixture Automatic Design and Assembly for Auto-Body Part

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A method of 3-D measuring fixture automatic assembly for auto-body parts is presented. A locating constraint mapping technique and assembly rule-based reasoning are applied. A calculating algorithm for the position and pose of the part model, fixture configuration and fixture elements in virtual auto-body assembly space is given. Transformation of fixture elements from their own coordinate system space to assembly space with a homogeneous transformation matrix is realized. Based on the secondary development technique of Unigraphics (UG), the automated assembly is implemented with application program interface (API) functions. Lastly, the automated assembly of a measuring fixture for a rear longeron is implemented as a case study.
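
    The coordinate mapping mentioned above reduces to a 4×4 homogeneous transformation; the sketch below builds one from a rotation about z plus a translation and applies it to a fixture-element point (the numbers are made up for illustration):

    ```python
    # Build a 4x4 homogeneous transform and map a point between coordinate spaces.
    import numpy as np

    def homogeneous(theta_z: float, t: np.ndarray) -> np.ndarray:
        c, s = np.cos(theta_z), np.sin(theta_z)
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]  # rotation about z
        T[:3, 3] = t                                     # translation
        return T

    T = homogeneous(np.pi / 2, np.array([100.0, 50.0, 0.0]))
    p_fixture = np.array([10.0, 0.0, 5.0, 1.0])   # point in element coordinates
    p_assembly = T @ p_fixture                     # same point in assembly space
    ```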

  4. Automatic extraction of nanoparticle properties using natural language processing: NanoSifter an application to acquire PAMAM dendrimer properties.

    Science.gov (United States)

    Jones, David E; Igo, Sean; Hurdle, John; Facelli, Julio C

    2014-01-01

    In this study, we demonstrate the use of natural language processing methods to extract, from nanomedicine literature, numeric values of biomedical property terms of poly(amidoamine) dendrimers. We have developed a method for extracting these values for properties taken from the NanoParticle Ontology, using the General Architecture for Text Engineering and a Nearly-New Information Extraction System. We also created a method for associating the identified numeric values with their corresponding dendrimer properties, called NanoSifter. We demonstrate that our system can correctly extract numeric values of dendrimer properties reported in the cancer treatment literature with high recall, precision, and f-measure. The micro-averaged recall was 0.99, precision was 0.84, and f-measure was 0.91. Similarly, the macro-averaged recall was 0.99, precision was 0.87, and f-measure was 0.92. To our knowledge, these results are the first application of text mining to extract and associate dendrimer property terms and their corresponding numeric values.

  5. Amniotic Membrane Extract Preparation: What is the Best Method?

    Directory of Open Access Journals (Sweden)

    Mirgholamreza Mahbod

    2014-01-01

    Full Text Available Purpose: To compare different preparation methods for a suitable amniotic membrane (AM) extract containing a given amount of growth factors. Methods: In this interventional case series, we dissected the AM from eight placentas within 24 hours after delivery, under clean conditions. After washing and mixing, AM extracts (AMEs) were prepared using pulverization and homogenization methods, and different processing and storing conditions. Main outcome measures were the amount of added protease inhibitor (PI), the relative centrifugal force (g), in-process temperature, repeated extraction times, drying percentage, repeated pulverization times, and the effect of filtering with 0.2 μm filters. Extract samples were preserved at different temperature and time parameters, and analyzed for hepatocyte growth factor (HGF) and total protein using ELISA and colorimetric methods, respectively. Results: The extracted HGF was 20% higher with pulverization as compared to homogenization, and increased by increasing the PI to 5.0 μl/g of dried AM. Repeating centrifugation up to 3 times almost doubled the extracted HGF and protein. Storing the AME at −170 °C for 6 months caused a 50% drop in the level of HGF and protein. Other studied parameters showed no significant effect on the extracted amount of HGF or total protein. Conclusion: Appropriate extraction methods with an adequate amount of PI increase the level of extractable components from harvested AMs. To achieve the maximal therapeutic effects of AMEs, it is necessary to consider the half-life of their bioactive components.

  6. Accurate and robust fully-automatic QCA: method and numerical validation.

    Science.gov (United States)

    Hernández-Vela, Antonio; Gatta, Carlo; Escalera, Sergio; Igual, Laura; Martin-Yuste, Victoria; Radeva, Petia

    2011-01-01

    The Quantitative Coronary Angiography (QCA) is a methodology used to evaluate the arterial diseases and, in particular, the degree of stenosis. In this paper we propose AQCA, a fully automatic method for vessel segmentation based on graph cut theory. Vesselness, geodesic paths and a new multi-scale edgeness map are used to compute a globally optimal artery segmentation. We evaluate the method performance in a rigorous numerical way on two datasets. The method can detect an artery with precision 92.9 +/- 5% and sensitivity 94.2 +/- 6%. The average absolute distance error between detected and ground truth centerline is 1.13 +/- 0.11 pixels (about 0.27 +/- 0.025 mm) and the absolute relative error in the vessel caliber estimation is 2.93% with almost no bias. Moreover, the method can discriminate between arteries and catheter with an accuracy of 96.4%.

  7. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    Energy Technology Data Exchange (ETDEWEB)

    Mazzurana, M [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Sandrini, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Vaccari, A [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Malacarne, C [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Cristoforetti, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Pontalti, R [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy)

    2003-10-07

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire the images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions were used to correlate image intensity with complex permittivity. The proposed method provides frequency-dependent models in which permittivity and conductivity vary continuously, even within the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  8. An automatic fractional coefficient setting method of FODPSO for hyperspectral image segmentation

    Science.gov (United States)

    Xie, Weiying; Li, Yunsong

    2015-05-01

    In this paper, an automatic fractional coefficient setting method for fractional-order Darwinian particle swarm optimization (FODPSO) is proposed for hyperspectral image segmentation. First, the spectral dimension is taken into consideration by integrating various types of band selection algorithms: we provide a short overview of the hyperspectral image and select an appropriate set of bands by combining supervised, semi-supervised and unsupervised band selection algorithms. Some approaches are not limited with regard to their spectral dimension, but are limited with respect to their spatial dimension owing to low spatial resolution. The addition of spatial information is therefore focused on improving the performance of hyperspectral image segmentation for later fusion or classification. Many researchers have advocated that a large fractional coefficient should be used in the exploration state while a small fractional coefficient should be used in the exploitation state, which does not mean the coefficient should simply decrease with time. For these reasons, we propose an adaptive FODPSO that sets the fractional coefficient adaptively for the final hyperspectral image segmentation. Specifically, the paper introduces an evolutionary factor to automatically control the fractional coefficient by using a sigmoid function. A fractional coefficient with a large value benefits the global search in the exploration state; conversely, when the fractional coefficient has a small value, the exploitation state is favored. Hence, the optimization process can avoid getting trapped in local optima. Ultimately, the experimental segmentation results prove the validity and efficiency of our proposed automatic fractional coefficient setting method for FODPSO compared with traditional PSO, DPSO and FODPSO.
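
    The sigmoid mapping can be sketched in a few lines: an evolutionary factor f in [0, 1] is mapped to a fractional coefficient that is large during exploration and small during exploitation; the coefficient bounds and the sigmoid steepness here are assumptions:

    ```python
    # Map an evolutionary factor to a fractional coefficient via a sigmoid.
    import math

    def fractional_coefficient(f: float, lo=0.1, hi=0.9, steepness=10.0) -> float:
        s = 1.0 / (1.0 + math.exp(-steepness * (f - 0.5)))  # sigmoid in (0, 1)
        return lo + (hi - lo) * s

    print(fractional_coefficient(0.9))  # exploration  -> close to the upper bound
    print(fractional_coefficient(0.1))  # exploitation -> close to the lower bound
    ```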

  9. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    Science.gov (United States)

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

    The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct high dynamic-range images of far-field focal spots and to improve reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was cut out of the main lobe image, and the relative position of this cut within the main lobe image was varied over a 100×100 pixel region. The position giving the largest correlation coefficient between the side lobe image and the circle-cut main lobe image was identified as the best matching point. Finally, the least squares method was used to fit the center of the side lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than by traditional reconstruction based on manual splicing, the method improves the efficiency of focal-spot reconstruction and offers better experimental precision. PMID:28207758
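
    The matching step can be approximated with normalised cross-correlation, OpenCV's matchTemplate standing in for the paper's self-correlation template matching; the choice of correlation measure is an assumption:

    ```python
    # Locate the best overlap between the side-lobe patch and the main-lobe image.
    import cv2

    def best_match(main_lobe, side_lobe_patch):
        res = cv2.matchTemplate(main_lobe, side_lobe_patch, cv2.TM_CCORR_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        return max_loc, max_val  # top-left corner of the best match, correlation score
    ```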

  10. A novel figure panel classification and extraction method for document image understanding.

    Science.gov (United States)

    Yuan, Xiaohui; Ang, Dongyu

    2014-01-01

    With the availability of full-text documents in many online databases, the paradigm of biomedical literature mining and document understanding has shifted to analysis of both text and figures to derive implicit messages that are unforeseen with text mining only. To enable automatic, large-scale processing, a key step is to extract and parse figures embedded in papers. In this paper, we present a novel model-driven, hierarchical method to classify and extract panels from figures in scientific papers. Our method consists of two integrated components: figure (or panel) classification and panel segmentation. Figure classification evaluates each panel and decides the existence of photographs and drawings. Mixtures of photographs and non-photographs are divided into subfigures. The splitting process repeats until no further panel collage can be identified. Detection of highlighted views is addressed with Hough space analysis. Using reconstruction from Hough peaks, enclosed panels are retrieved and saved into separate files. Experiments were conducted with a total of 360 figures extracted from two sets of papers that were retrieved with different sets of keywords. Experimental results demonstrated that our method successfully segmented figures and extracted photographs and non-photographs with high accuracy and robustness. In addition, our method was able to identify zoom-in views that are superimposed on the original photographs. The efficiency of our method allows online implementation.

  11. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H. [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21202 (United States); Siemens Healthcare XP Division, Erlangen 91052 (Germany); Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21202 (United States)

    2012-10-15

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers both in x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between the forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ≈200 mm of the C-arm isocenter. Marker localization in projection data was robust across all

  12. Initiating GrabCut by Color Difference for Automatic Foreground Extraction of Passport Imagery

    DEFF Research Database (Denmark)

    Sangüesa, Adriá Arbués; Jørgensen, Nicolai Krogh; Larsen, Christian Aagaard;

    2016-01-01

    GrabCut, an iterative algorithm based on Graph Cut, is a popular foreground segmentation method. However, it suffers from a main drawback: manual interaction is required to start segmenting the image. In this paper, four different methods based on image pairs are used to obtain an init...... faced, such as the segmentation in hair regions or tests in a non-uniform background scenario....
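
    Although the record is truncated, the core idea of replacing GrabCut's manual rectangle with a mask initialized from the colour difference of an image pair (a background-only shot versus a shot with the subject) can be sketched with OpenCV. The threshold value and helper name below are hypothetical, not taken from the paper.

```python
import cv2
import numpy as np

def grabcut_from_pair(img_fg, img_bg, diff_thresh=30, iters=5):
    """Initialize GrabCut from the per-pixel colour difference between a shot
    with the subject (img_fg) and a background-only shot (img_bg); both are
    8-bit BGR images of identical size."""
    diff = cv2.absdiff(img_fg, img_bg).max(axis=2)        # strongest channel change
    mask = np.where(diff > diff_thresh,
                    cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd = np.zeros((1, 65), np.float64)                   # GMM scratch buffers
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_fg, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```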

  13. Mining Patent Knowledge for Automatic Keyword Extraction

    Institute of Scientific and Technical Information of China (English)

    陈忆群; 周如旗; 朱蔚恒; 李梦婷; 印鉴

    2016-01-01

    This paper uses a patent data set as an external knowledge repository for keyword extraction. An algorithm is designed to construct a background knowledge repository based on the patent data set, and a method for automatic keyword extraction with novel word features is provided. The paper discusses the characteristics of patent data and mines the relations between different patent files to construct a background knowledge repository for the target document, finally achieving keyword extraction. The patent files related to the target document are used to construct the background knowledge repository; the information on patent inventors, assignees, citations and classifications is used to mine the hidden knowledge and relationships between different patent files, and the related knowledge is imported to extend the background knowledge repository. Novel word features are derived according to the different background knowledge supplied by the patent data. The word features reflecting the document's background knowledge offer valuable indications of individual words' importance in the target document. The keyword extraction problem can then be regarded as a classification problem, and a support vector machine (SVM) is used to extract the keywords. Experiments have been conducted using a patent data set and an open data set, and the results show that with these novel word features the approach achieves performance in keyword extraction superior to other state-of-the-art approaches.
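
    The classification step described at the end of the abstract can be pictured as follows. The four-column feature rows (say, a TF-IDF score plus patent-derived inventor, assignee and citation association scores) are invented purely for illustration and are not the paper's actual feature set.

```python
from sklearn.svm import SVC

# One row per candidate word of the target document (hypothetical features).
X_train = [[0.42, 0.10, 0.8, 0.3],
           [0.05, 0.90, 0.1, 0.0],
           [0.31, 0.25, 0.6, 0.7]]
y_train = [1, 0, 1]                      # 1 = keyword, 0 = not a keyword

clf = SVC(kernel="rbf").fit(X_train, y_train)
candidate = [[0.38, 0.20, 0.7, 0.5]]
print(clf.predict(candidate))            # class label for the new candidate
print(clf.decision_function(candidate))  # signed margin; > 0 leans "keyword"
```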

  14. Effects of Different Extraction Methods and Conditions on the Phenolic Composition of Mate Tea Extracts

    Directory of Open Access Journals (Sweden)

    Jelena Vladic

    2012-03-01

    Full Text Available A simple and rapid HPLC method for determination of chlorogenic acid (5-O-caffeoylquinic acid) in mate tea extracts was developed and validated. The chromatography used isocratic elution with a mobile phase of aqueous 1.5% acetic acid-methanol (85:15, v/v). The flow rate was 0.8 mL/min and detection by UV at 325 nm. The method showed good selectivity, accuracy, repeatability and robustness, with a detection limit of 0.26 mg/L and recovery of 97.76%. The developed method was applied for the determination of chlorogenic acid in mate tea extracts obtained by ethanol extraction and liquid carbon dioxide extraction with ethanol as co-solvent. Different ethanol concentrations were used (40, 50 and 60%, v/v) and liquid CO2 extraction was performed at different pressures (50 and 100 bar) and constant temperature (27 ± 1 °C). Significant influence of extraction methods, conditions and solvent polarity on chlorogenic acid content, antioxidant activity and total phenolic and flavonoid content of mate tea extracts was established. The most efficient extraction solvent was liquid CO2 with aqueous ethanol (40%) as co-solvent using an extraction pressure of 100 bar.

  15. Comparison of Methods for Protein Extraction from Pine Needles

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Extraction of proteins from pine needles for proteomic analysis has long been a challenge for scientists. We compared three different protein extraction methods, using sucrose, Tris-HCl and trichloroacetic acid (TCA)/acetone (the TCA method), to determine their efficiency in separating pine needle proteins by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and two-dimensional PAGE (2D-PAGE). Among the three methods, the sucrose extraction buffer showed the highest efficiency and quality in separating proteins, and clearer, more stable bands were detected by SDS-PAGE. When the proteins extracted using sucrose extraction buffer were separated by 2D-PAGE, more than 300 protein spots, with isoelectric points (pI) ranging from 4.0 to 7.0 and molecular weights (MW) from 6.5 to 97.4 kD, were observed. This confirmed that the sucrose extraction buffer method is an efficient and reliable method for extracting proteins from pine needles.

  16. Improved method for the feature extraction of laser scanner using genetic clustering

    Institute of Scientific and Technical Information of China (English)

    Yu Jinxia; Cai Zixing; Duan Zhuohua

    2008-01-01

    Feature extraction from the range images provided by a ranging sensor is a key issue in pattern recognition. To automatically extract the environmental features sensed by a 2D laser scanner, an improved method based on genetic clustering, VGA-clustering, is presented. By integrating the spatial neighbourhood information of the range data into the fuzzy clustering algorithm, a weighted fuzzy clustering algorithm (WFCA) is introduced in place of the standard clustering algorithm to realize feature extraction for the laser scanner. Because the number of clusters is unknown in advance, several validation index functions are used to estimate the validity of different clusterings, and one validation index is selected as the fitness function of the genetic algorithm so as to determine the correct number of clusters automatically. At the same time, an improved genetic algorithm, IVGA, is proposed on the basis of VGA to avoid the local optima of the clustering algorithm; it increases the population diversity and improves the elitist genetic operators, enhancing the local search capacity and quickening convergence. Comparison with other algorithms demonstrates the effectiveness of the introduced method.

  17. Automatic extraction of shallow landslides based on SPOT-5 remote sensing images

    Institute of Scientific and Technical Information of China (English)

    杨树文; 谢飞; 韩惠; 冯光胜

    2012-01-01

    This paper presents an improved extraction method for shallow landslides, building on previous research. First, bare-land information is extracted from SPOT-5 images with a modified MSAVI (CMSAVI) method. The extraction results are then further processed: shadows are removed, pixels are screened by slope, morphological filtering is applied, the raster image is converted to vector, and the candidates are further screened by area and downslope direction. Finally, an improved multi-peak histogram thresholding method with automatic threshold selection is used to extract the landslide information. Experiments show that the improved method not only removes non-landslide and other interfering information but also realizes automatic extraction of landslide information, greatly increasing the efficiency and accuracy of identifying and extracting past landslides.
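
    The final thresholding step can be approximated by a generic multi-peak histogram rule: smooth the grey-level histogram, find its dominant peaks, and place the threshold at the deepest valley between them. This is a stand-in sketch for the paper's improved automatic selection, with the smoothing and prominence parameters chosen arbitrarily.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import uniform_filter1d

def multipeak_threshold(gray_values, bins=256, smooth=7):
    """Threshold at the deepest valley between the two highest histogram peaks."""
    hist, edges = np.histogram(gray_values, bins=bins)
    hist = uniform_filter1d(hist.astype(float), smooth)   # suppress spurious peaks
    peaks, _ = find_peaks(hist, prominence=hist.max() * 0.05)
    if len(peaks) < 2:                                    # degenerate histogram
        return float(np.median(gray_values))
    lo, hi = sorted(peaks[np.argsort(hist[peaks])[-2:]])
    valley = lo + int(np.argmin(hist[lo:hi + 1]))
    return float(edges[valley])
```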

  18. Linking attentional processes and conceptual problem solving: Visual cues facilitate the automaticity of extracting relevant information from diagrams

    Directory of Open Access Journals (Sweden)

    Amy eRouinfar

    2014-09-01

    Full Text Available This study investigated links between lower-level visual attention processes and higher-level problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. The study produced two major findings. First, short-duration visual cues can improve problem solving performance on a variety of insight physics problems, including transfer problems not sharing the surface features of the training problems, but instead sharing the underlying solution path. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem. Instead, the cueing effects were caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, these short-duration visual cues, when administered repeatedly over multiple training problems, resulted in participants becoming more efficient at extracting the relevant information on the transfer problem, showing that such cues can improve the automaticity with which solvers extract relevant information from a problem. Both of these results converge on the conclusion that lower-order visual processes driven by attentional cues can influence higher-order cognitive processes.

  19. A New Automatic Method to Identify Galaxy Mergers I. Description and Application to the STAGES Survey

    CERN Document Server

    Hoyos, Carlos; Gray, Meghan E; Maltby, David T; Bell, Eric F; Barazza, Fabio D; Boehm, Asmus; Haussler, Boris; Jahnke, Knud; Jogee, Sharda; Lane, Kyle P; McIntosh, Daniel H; Wolf, Christian

    2011-01-01

    We present an automatic method to identify galaxy mergers using the morphological information contained in the residual images of galaxies after the subtraction of a Sersic model. The removal of the bulk signal from the host galaxy light is done with the aim of detecting the fainter minor mergers. The specific morphological parameters that are used in the merger diagnostic suggested here are the Residual Flux Fraction and the asymmetry of the residuals. The new diagnostic has been calibrated and optimized so that the resulting merger sample is very complete. However, the contamination by non-mergers is also high. If the same optimization method is adopted for combinations of other structural parameters such as the CAS system, the merger indicator we introduce yields merger samples of equal or higher statistical quality than the samples obtained through the use of other structural parameters. We explore the ability of the method presented here to select minor mergers by identifying a sample of visually classif...

  20. Methods for testing automatic mode switching in patients implanted with DDD(R) pacemakers.

    Science.gov (United States)

    Lau, Chu-Pak; Mascia, Franco; Corbucci, Giorgio; Padeletti, Luigi

    2004-01-01

    The assessment of automatic mode switching (AMS) algorithms is impossible in vivo, due to the low chance of seeing the patient at the onset of a spontaneous episode of atrial fibrillation (AF). As inducing AF to test AMS raises clinical concerns, three alternative and non-invasive techniques may be proposed for this purpose: myopotentials, chest wall stimulation, and an external supraventricular arrhythmia simulator. The first method is simple and does not require additional equipment, even though in some patients adequate signals cannot be generated with a soft effort such as handgrip or hand compression. The main advantage of the chest wall stimulation method is that it can be performed in every implanting center, since it is based on the use of standard devices for cardiac stimulation. The method based on the external supraventricular arrhythmia simulator allows the most detailed analysis of the ECG traces, but it needs a dedicated electronic device.

  1. Method for Automatic Fingerprint Recognition

    Institute of Scientific and Technical Information of China (English)

    叶四民; 邹奉庭; 陈福祥

    2001-01-01

    After analyzing automatic fingerprint recognition based on structural features, we improved it and put forward a step-by-step structural matching method in which the recognition proceeds in two stages. In addition, a method for constructing the feature vector is presented. This matching method performs well in fingerprint matching.

  2. Methods of automatic ontology building

    Institute of Scientific and Technical Information of China (English)

    解峥; 王盼卿; 彭成

    2015-01-01

    Ontology-based information integration is the most effective way to resolve semantic heterogeneity, but traditional ontology construction requires a great deal of manpower and material resources. Realizing automatic ontology building with the help of artificial intelligence techniques and knowledge bases such as WordNet would save considerable cost, and it is the focus of present and future ontology construction research. This paper summarizes the mainstream methods of automatic ontology building and draws conclusions about the main future directions of automatic ontology building technology.

  3. Methods for microbial DNA extraction from soil for PCR amplification

    OpenAIRE

    Yeates C; Gillings, MR; Davison AD; Altavilla N; Veal DA

    1998-01-01

    Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). The DNA is suitable for PCR amplification using various DNA targets. DNA was extracted from 100 g of soil using direct lysis with glass beads and SDS, followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol pr...

  4. Effect of Extraction Methods on Polysaccharide of Clitocybe maxima Stipe

    OpenAIRE

    Junchen Chen; Pufu Lai; Hengsheng Shen; Hengguang Zhen; Rutao Fang

    2013-01-01

    Clitocybe maxima (Gartn. ex Mey. Fr.) Quél. is a favored edible fungus. Its stipe accounts for about 45% of the entire fruit-body biomass but is a low-value byproduct. To increase its value-added utilization, three extraction methods (hot water, microwave-assisted, and complex-enzyme-hydrolysis-assisted) were conducted. Their effects on the polysaccharide of the Clitocybe maxima stipe were compared and the processing conditions were optimized. The content o...

  5. COMPARISON OF RNA EXTRACTION METHODS FOR Passiflora edulis SIMS LEAVES

    OpenAIRE

    2016-01-01

    ABSTRACT Functional genomic analyses require intact RNA; however, Passiflora edulis leaves are rich in secondary metabolites that interfere with RNA extraction primarily by promoting oxidative processes and by precipitating with nucleic acids. This study aimed to analyse three RNA extraction methods, Concert™ Plant RNA Reagent (Invitrogen, Carlsbad, CA, USA), TRIzol® Reagent (Invitrogen) and TRIzol® Reagent (Invitrogen)/ice -commercial products specifically designed to extract RNA, and...

  6. Design of Automatic Monitoring System in Fish Oil Extraction Process

    Institute of Scientific and Technical Information of China (English)

    郑业双; 刘飞; 王志国

    2013-01-01

    The extraction method offers continuous operation, a short production cycle, little damage to heat-sensitive substances, and good fish oil quality, but the control systems currently used for fish oil extraction are relatively backward. Based on an analysis of the fish oil extraction process, this article designs an automatic monitoring system according to the process requirements. The monitoring system consists of two parts, a host computer and a lower machine: the host computer provides remote process monitoring, alarm display, data recording and other functions, while the lower machine, with a PLC as its core, performs local control, data acquisition and conversion, loop regulation and other functions. The system has a high degree of automation, is easy to operate, and can save businesses substantial labor costs.

  7. A novel automatic method for monitoring Tourette motor tics through a wearable device.

    Science.gov (United States)

    Bernabei, Michel; Preatoni, Ezio; Mendez, Martin; Piccini, Luca; Porta, Mauro; Andreoni, Giuseppe

    2010-09-15

    The aim of this study was to propose a novel automatic method for quantifying motor tics caused by Tourette syndrome (TS). In this preliminary report, the feasibility of the monitoring process was tested in a series of standard clinical trials in a population of 12 subjects affected by TS. A wearable instrument with an embedded three-axial accelerometer was used to detect and classify motor tics during standing and walking activities. An algorithm was devised to analyze the acceleration data by eliminating noise, detecting peaks connected to pathological events, and classifying the intensity and frequency of motor tics into quantitative scores. These indexes were compared with the video-based ones provided by expert clinicians, which were taken as the gold standard. Sensitivity, specificity, and accuracy of tic detection were estimated, and an agreement analysis was performed through least-squares regression and the Bland-Altman test. The tic recognition algorithm showed sensitivity = 80.8% ± 8.5% (mean ± SD), specificity = 75.8% ± 17.3%, and accuracy = 80.5% ± 12.2%. The agreement study showed that automatic detection tended to overestimate the number of tics that occurred, although this appeared to be a systematic error due to the different recognition principles of the wearable and video-based systems. Furthermore, there was substantial concurrency with the gold standard in estimating the severity indexes. The proposed methodology gave promising performance in terms of automatic motor-tic detection and classification in a standard clinical context. The system may provide physicians with a quantitative aid for TS assessment. Further developments will focus on extending its application to everyday long-term monitoring outside clinical environments.
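
    The detection pipeline implied by the abstract (de-noise the three-axis signal, then pick out peaks tied to pathological events) might look like the following; the band edges, amplitude threshold and refractory gap are illustrative placeholders, not the study's tuned values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_tics(acc, fs=100.0, band=(0.5, 20.0), height=0.3, min_gap_s=0.25):
    """acc: (n_samples, 3) accelerometer trace in g. Returns sample indices
    of candidate motor-tic events."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filt = filtfilt(b, a, acc, axis=0)        # remove drift and high-freq noise
    mag = np.linalg.norm(filt, axis=1)        # orientation-independent magnitude
    peaks, _ = find_peaks(mag, height=height, distance=int(min_gap_s * fs))
    return peaks
```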

  8. A realistic assessment of methods for extracting gene/protein interactions from free text

    Directory of Open Access Journals (Sweden)

    Shepherd Adrian J

    2009-07-01

    Full Text Available Abstract Background The automated extraction of gene and/or protein interactions from the literature is one of the most important targets of biomedical text mining research. In this paper we present a realistic evaluation of gene/protein interaction mining relevant to potential non-specialist users. Hence we have specifically avoided methods that are complex to install or require reimplementation, and we coupled our chosen extraction methods with a state-of-the-art biomedical named entity tagger. Results Our results show: that performance across different evaluation corpora is extremely variable; that the use of tagged (as opposed to gold standard) gene and protein names has a significant impact on performance, with a drop in F-score of over 20 percentage points being commonplace; and that a simple keyword-based benchmark algorithm, when coupled with a named entity tagger, outperforms two of the tools most widely used to extract gene/protein interactions. Conclusion In terms of availability, ease of use and performance, the potential non-specialist user community interested in automatically extracting gene and/or protein interactions from free text is poorly served by current tools and systems. The public release of extraction tools that are easy to install and use, and that achieve state-of-the-art levels of performance, should be treated as a high priority by the biomedical text mining community.

  9. Comparison of DNA extraction methods for meat analysis.

    Science.gov (United States)

    Yalçınkaya, Burhanettin; Yumbul, Eylem; Mozioğlu, Erkan; Akgoz, Muslum

    2017-04-15

    Preventing adulteration of meat and meat products with less desirable or objectionable meat species is important not only for economic, religious and health reasons but also for fair trade practices; therefore, several methods for identification of meat and meat products have been developed. In the present study, ten different DNA extraction methods, including the Tris-EDTA method, a modified cetyltrimethylammonium bromide (CTAB) method, the alkaline method, the urea method, the salt method, the guanidinium isothiocyanate (GuSCN) method, the Wizard method, the Qiagen method, the Zymogen method and the Genespin method, were examined to determine their relative effectiveness for extracting DNA from meat samples. The results show that the salt method is easy to perform, inexpensive and environmentally friendly, and it has the highest yield among all the isolation methods tested. We suggest this method as an alternative method for DNA isolation from meat and meat products.

  10. The effect of extraction method on antioxidant activity of Atractylis babelii Hochr. leaves and flowers extracts

    Directory of Open Access Journals (Sweden)

    Khadidja Boudebaz

    2015-04-01

    Full Text Available In this study, leaves and flowers of Atractylis babelii were chosen to investigate their antioxidant activities. A comparison between the antioxidant properties of ethanolic crude extracts obtained by two extraction methods, maceration and Soxhlet extraction, was performed using two different tests, the DPPH and ABTS radical assays. In addition, total polyphenol, flavonoid and condensed tannin contents were determined in leaves and flowers of Atractylis babelii by colorimetric methods. The results showed no correlation between the phenolic contents of the plant parts and their antioxidant activity: leaves and flowers had almost similar phenolic contents, while their antioxidant activity depended on the plant part. Furthermore, the antioxidant activity also depended on the extraction method. This result may be ascribed to the variety of chemical compositions found in Atractylis babelii extracts that have been related to its antioxidant properties.

  11. Semi-automatic template matching based extraction of hyperbolic signatures in ground-penetrating radar images

    Science.gov (United States)

    Sagnard, Florence; Tarel, Jean-Philippe

    2015-04-01

    In civil engineering applications, ground-penetrating radar (GPR) is one of the main non-destructive techniques, based on the refraction and reflection of electromagnetic waves, used to probe the underground and in particular to detect damage (cracks, delaminations, texture changes…) and buried objects (utilities, rebars…). A UWB ground-coupled radar operating in the frequency band [0.46; 4] GHz and made of bowtie slot antennas was used because, compared to an air-launched radar, it increases the energy transfer of electromagnetic radiation into the sub-surface and the penetration depth. This paper proposes an original adaptation of the generic template matching algorithm to GPR images to recognize, localize and parametrically characterize a specific pattern associated with a hyperbola signature in the two main polarizations. The processing of a radargram (B-scan) is based on four main steps. The first step consists of pre-processing and scaling. The second step uses template matching to isolate and localize individual hyperbola signatures in an environment containing unwanted reflections, noise and overlapping signatures. The algorithm generates and collects a set of reference hyperbola templates made of a small reflection pattern in the vicinity of the apex in order to analyze the time signals of embedded targets in an image. The standard Euclidean distance between the shifted template and a local zone of the radargram yields a map of distances, and a user-defined threshold selects a reduced number of zones with a high similarity measure. In the third step, each zone is analyzed to detect the minimum or maximum discrete amplitudes belonging to the first arrival times of a hyperbola signature. In the fourth step, the extracted discrete data (i, j) are fitted by a parametric hyperbola model based on the straight-ray-path hypothesis, using a constrained least-squares criterion with parameter ranges, that are the position, the
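
    The distance-map stage of the second step can be sketched directly from the description: slide the apex template over the radargram and record the standard Euclidean distance at every offset. The code below is a naive, unoptimized version with a hypothetical threshold rule; the hyperbola fitting of step four is omitted.

```python
import numpy as np

def distance_map(bscan, template):
    """Euclidean distance between the apex template and every same-sized
    window of the radargram; low values mark likely hyperbola apexes."""
    th, tw = template.shape
    H, W = bscan.shape
    dmap = np.empty((H - th + 1, W - tw + 1))
    for i in range(dmap.shape[0]):
        for j in range(dmap.shape[1]):
            dmap[i, j] = np.linalg.norm(bscan[i:i+th, j:j+tw] - template)
    return dmap

# Candidate apex zones via a user-defined similarity threshold (illustrative):
# cands = np.argwhere(dmap < 1.5 * dmap.min())
```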

  12. An efficient method for DNA extraction from Cladosporioid fungi

    NARCIS (Netherlands)

    Moslem, M.A.; Bahkali, A.H.; Abd-Elsalam, K.A.; Wit, de P.J.G.M.

    2010-01-01

    We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on

  13. An Improved Method for Extraction and Separation of Photosynthetic Pigments

    Science.gov (United States)

    Katayama, Nobuyasu; Kanaizuka, Yasuhiro; Sudarmi, Rini; Yokohama, Yasutsugu

    2003-01-01

    The method for extracting and separating hydrophobic photosynthetic pigments proposed by Katayama "et al." ("Japanese Journal of Phycology," 42, 71-77, 1994) has been improved to introduce it to student laboratories at the senior high school level. Silica gel powder was used for removing water from fresh materials prior to extracting pigments by a…

  14. Methods for Evaluating Text Extraction Toolkits: An Exploratory Investigation

    Science.gov (United States)

    2015-01-22

    MITRE Technical Report MTR 140443 R2 (January 2015): Methods for Evaluating Text Extraction Toolkits: An Exploratory Investigation. … contributes to closing this gap. We discuss an exploratory investigation into a method and a set of tools for evaluating a text extraction toolkit.

  15. A RAPID PCR-QUALITY DNA EXTRACTION METHOD IN FISH

    Institute of Scientific and Technical Information of China (English)

    LI Zhong; LIANG Hong-Wei; ZOU Gui-Wei

    2012-01-01

    PCR has been the generally preferred method for biological research in fish, and previous research has enabled us to extract and purify PCR-quality DNA templates in laboratories [1-4]. A problem common to these procedures is the long wait for tissue digestion: the excessive time spent on PCR-quality DNA extraction restricts the efficiency of PCR assays, especially in large-scale PCR amplification such as SSR-based genetic-map construction [5,6], identification of germplasm resources [7,8] and evolution research [9,10]. In this study, a stable and rapid PCR-quality DNA extraction method using a modified alkaline lysis protocol was explored; extracting DNA for PCR takes only approximately 25 minutes. This stable and rapid DNA extraction method can save much laboratory time.

  16. An adaptive spatial clustering method for automatic brain MR image segmentation

    Institute of Scientific and Technical Information of China (English)

    Jingdan Zhang; Daoqing Dai

    2009-01-01

    In this paper, an adaptive spatial clustering method is presented for automatic brain MR image segmentation, based on a competitive learning algorithm, the self-organizing map (SOM). We use a pattern recognition approach in terms of feature generation and classifier design. First, a multi-dimensional feature vector is constructed using local spatial information. Then, an adaptive spatial growing hierarchical SOM (ASGHSOM) is proposed as the classifier; it is an extension of SOM that fuses multi-scale segmentation with the competitive learning clustering algorithm to overcome the problem of overlapping grey-scale intensities in boundary regions. Furthermore, an adaptive spatial distance is integrated with ASGHSOM, in which local spatial information is considered in the clustering process to reduce the noise effect and the classification ambiguity. Our proposed method is validated by extensive experiments using both simulated and real MR data with varying noise levels, and is compared with state-of-the-art algorithms.

  17. Quantitative Study on Nonmetallic Inclusion Particles in Steels by Automatic Image Analysis With Extreme Values Method

    Institute of Scientific and Technical Information of China (English)

    Cássio Barbosa; José Brant de Campos; J(ǒ)neo Lopes do Nascimento; Iêda Maria Vieira Caminha

    2009-01-01

    The nonmetallic inclusion particles that appear during the steelmaking process are harmful to the properties of steels, mainly as a function of the size, volume fraction, shape, and distribution of these particles. Automatic image analysis is one of the most important tools for the quantitative determination of these parameters. The classical Student approach and the Extreme Values Method (EVM) were used to determine inclusion size and shape and to evaluate the distance between the inclusion particles. The results thus obtained indicated significant differences in the characteristics of the inclusion particles in the analyzed products. The two methods achieved results with some differences, indicating that EVM could be used as a faster and more reliable statistical methodology.

  18. Out-of-Bounds Array Access Fault Model and Automatic Testing Method Study

    Institute of Scientific and Technical Information of China (English)

    GAO Chuanping; DUAN Miyi; TAN Liqun; GONG Yunzhan

    2007-01-01

    Out-of-bounds array access (OOB) is one of the fault models commonly encountered in object-oriented programming languages. At present, code insertion and optimization technology is widely used to detect and fix this kind of fault. Although this method can detect some of the faults in OOB programs, it can neither test programs thoroughly nor locate the faults correctly, and code insertion makes the test procedures so inefficient that testing becomes costly and time-consuming. This paper uses a special static testing technique to detect faults in OOB programs: we first establish the fault models of OOB programs and then develop an automatic testing tool to detect the faults. Experiments have been carried out, and the results show that the method proposed in the paper is efficient and feasible in practical applications.

  19. A method for automatic liver segmentation from multi-phase contrast-enhanced CT images

    Science.gov (United States)

    Yuan, Rong; Luo, Ming; Wang, Shaofa; Wang, Luyao; Xie, Qingguo

    2014-03-01

    Liver segmentation is a basic and indispensable function in computer-aided liver surgery systems for volume calculation, operation design and risk evaluation. Traditional manual segmentation is very time consuming because of the complicated contours of the liver and the large number of images. To increase the efficiency of clinical work, in this paper a fully automatic method is proposed to segment the liver from multi-phase contrast-enhanced computed tomography (CT) images. As an advanced region-growing method, it applies various pre- and post-processing steps to get better segmentations from the different phases. Fifteen sets of clinical abdominal CT images from five patients were segmented by our algorithm, and the results were judged acceptable by an experienced surgeon. The running time is about 30 seconds for a single-phase data set that includes more than 200 slices.
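
    The region-growing core (the paper's specific pre- and post-processing steps are not spelled out in the record) can be illustrated by a plain 6-connected grower that accepts voxels staying within a tolerance of the running region mean; the seed choice and tolerance are hypothetical.

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, tol=25.0):
    """Grow from `seed` (z, y, x) over 6-connected voxels whose intensity
    stays within `tol` of the running region mean."""
    seg = np.zeros(vol.shape, dtype=bool)
    seg[seed] = True
    mean, count = float(vol[seed]), 1
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < vol.shape[i] for i in range(3)) and not seg[p] \
               and abs(float(vol[p]) - mean) <= tol:
                seg[p] = True
                mean = (mean * count + float(vol[p])) / (count + 1)
                count += 1
                q.append(p)
    return seg
```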

  20. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    Directory of Open Access Journals (Sweden)

    Zečević, N.

    2008-12-01

    Full Text Available There are numerous inlet process factors in carbon black oil furnaces that must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil furnaces are:
    1. volume flow of process air for combustion;
    2. temperature of process air for combustion;
    3. volume flow of natural gas providing the heat necessary for the thermal conversion of the hydrocarbon oil feedstock into oil-furnace carbon black;
    4. mass flow rate of the hydrocarbon oil feedstock;
    5. type and quantity of additive for adjusting the structure of the oil-furnace carbon black;
    6. quantity and position of the quench water for cooling the oil-furnace carbon black reaction.
    The adsorption capacity of oil-furnace carbon black is controlled through the mass flow rate of the hydrocarbon feedstock, which is the most important inlet process factor. In the industrial process, the adsorption capacity of oil-furnace carbon black is determined by laboratory analysis of the iodine adsorption number. A continuous and automatic method for controlling the iodine adsorption number in carbon black oil furnaces is presented, in order to control the adsorption capacity as efficiently as possible. The proposed method exposes the correlation between the qualitative and quantitative composition of the process tail gases in the production of oil-furnace carbon black and the ratio between air for combustion and hydrocarbon feedstock. It is shown that the ratio between air for combustion and hydrocarbon oil feedstock depends on the adsorption capacity, summarized by the iodine adsorption number, with respect to the BMCI index of the hydrocarbon oil feedstock. The mentioned correlation can be seen in figures 1 to 4. Of the whole composition of the process tail gases, the best correlation for continuous and automatic control of the iodine adsorption number is shown by the volume fraction of methane. The volume fraction of methane in the

  1. The development of automatic diagnosis method of impact signal point at LPMS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Soo; Hwang, In Koo; Song, Sun Ja; Kim, Tae Wan; Ham, Chang Shik [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-11-01

    The primary function of a loose part monitoring system (LPMS) is to detect the occurrence of any loose part in the primary coolant system that has parted or loosened from the mechanical structure during normal operation and refueling. Existing LPMSs generate an alarm when the signal detected by the accelerometer sensors attached to the surface of the primary pressure boundary exceeds the alarm threshold value. The automatic diagnosis algorithm was developed in two parts: estimation of the impact position and estimation of the mass. For the impact position, we automated the triangulation method; for the mass estimation, we developed a way to automatically find the parameters needed to estimate the mass. To validate the developed algorithm, impact test data from YGN 3, UCN 3 and 4, and Kori 4 were used. In the analysis, the error rate for the impact position was 10%, and the mass estimation error was within 20%. In the future, this algorithm will be integrated into the new NIMS system developed by Woojin Co., which will replace or upgrade the existing NIMS system at YGN 3. 28 refs., 34 figs., 22 tabs. (Author)
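
    The automated triangulation can be viewed as a small least-squares problem: given sensor positions, measured burst arrival times and an assumed wave speed, solve for the impact point and the unknown impact instant. The sketch below is a generic formulation, not the report's actual implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_impact(sensors, arrivals, v):
    """sensors: (n, 2) accelerometer positions (surface unrolled to 2D);
    arrivals: (n,) burst arrival times; v: assumed structural wave speed.
    Returns the fitted (x, y, t0) of the impact."""
    def residual(p):
        x, y, t0 = p
        dist = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
        return dist - v * (arrivals - t0)   # range vs. time-of-flight mismatch
    x0 = [sensors[:, 0].mean(), sensors[:, 1].mean(), arrivals.min()]
    return least_squares(residual, x0).x
```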

  2. A method for extracting $\cos\alpha$

    CERN Document Server

    Grinstein, B; Rothstein, I Z; Grinstein, Benjamin; Nolte, Detlef R.; Rothstein, Ira Z.

    2000-01-01

    We show that it is possible to extract the weak mixing angle alpha via a measurement of the rate for B^+(-) -> \pi^+(-) e^+e^-. The sensitivity to cos(alpha) results from the interference between the long- and short-distance contributions. The short-distance contribution can be computed, using heavy quark symmetry, in terms of semi-leptonic form factors. More importantly, we show that, using Ward identities and a short-distance operator product expansion, the long-distance contribution can be calculated without recourse to light-cone wave functions when the invariant mass of the lepton pair, q^2, is much larger than Lambda_QCD^2. We find that for q^2 > 2 GeV^2 the branching fraction is approximately 1 x 10^{-8} |V_{td}/0.008|^2. The shape of the differential rate is very sensitive to the value of cos(alpha) at small values of q^2, with dGamma/dq^2 varying by up to 50% over the interval -1 < cos(alpha) < 1 at q^2 = 2 GeV^2. The size of the variation depends upon the ratio V_{ub}/V_{td}.

  3. Automatic Extraction and Post-coordination of Spatial Relations in Consumer Language.

    Science.gov (United States)

    Roberts, Kirk; Rodriguez, Laritza; Shooshan, Sonya E; Demner-Fushman, Dina

    2015-01-01

    To incorporate ontological concepts in natural language processing (NLP) it is often necessary to combine simple concepts into complex concepts (post-coordination). This is especially true in consumer language, where a more limited vocabulary forces consumers to utilize highly productive language that is almost impossible to pre-coordinate in an ontology. Our work focuses on recognizing an important case for post-coordination in natural language: spatial relations between disorders and anatomical structures. Consumers typically utilize such spatial relations when describing symptoms. We describe an annotated corpus of 2,000 sentences with 1,300 spatial relations, and a second corpus of 500 of these relations manually normalized to UMLS concepts. We use machine learning techniques to recognize these relations, obtaining good performance. Further, we experiment with methods to normalize the relations to an existing ontology. This two-step process is analogous to the combination of concept recognition and normalization, and achieves comparable results.

  4. Towards Automatic Extraction of Social Networks of Organizations in PubMed Abstracts

    CERN Document Server

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-01-01

    Social Network Analysis (SNA) of organizations can attract great interest from government agencies and scientists for its ability to boost translational research and accelerate the process of converting research to care. For SNA of a particular disease area, we need to identify the key research groups in that area by mining the affiliation information from PubMed. This involves not only recognizing the organization names in the affiliation string, but also resolving ambiguities to link each article to a unique organization. We present a normalization process that involves clustering based on local sequence alignment metrics and local learning based on finding connected components. We demonstrate the application of the method by analyzing organizations involved in angiogenesis treatment, and demonstrate the utility of the results for researchers in the pharmaceutical and biotechnology industries and national funding agencies.

  5. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduced WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relied on contrast: non-linear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH were then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion loads. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN; 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.

  6. ISS Contingency Attitude Control Recovery Method for Loss of Automatic Thruster Control

    Science.gov (United States)

    Bedrossian, Nazareth; Bhatt, Sagar; Alaniz, Abran; McCants, Edward; Nguyen, Louis; Chamitoff, Greg

    2008-01-01

    In this paper, the attitude control issues associated with International Space Station (ISS) loss of automatic thruster control capability are discussed and methods for attitude control recovery are presented. This scenario was experienced recently during Shuttle mission STS-117 and ISS Stage 13A in June 2007 when the Russian GN&C computers, which command the ISS thrusters, failed. Without automatic propulsive attitude control, the ISS would not be able to regain attitude control after the Orbiter undocked. The core issues associated with recovering long-term attitude control using CMGs are described as well as the systems engineering analysis to identify recovery options. It is shown that the recovery method can be separated into a procedure for rate damping to a safe harbor gravity gradient stable orientation and a capability to maneuver the vehicle to the necessary initial conditions for long term attitude hold. A manual control option using Soyuz and Progress vehicle thrusters is investigated for rate damping and maneuvers. The issues with implementing such an option are presented and the key issue of closed-loop stability is addressed. A new non-propulsive alternative to thruster control, Zero Propellant Maneuver (ZPM) attitude control method is introduced and its rate damping and maneuver performance evaluated. It is shown that ZPM can meet the tight attitude and rate error tolerances needed for long term attitude control. A combination of manual thruster rate damping to a safe harbor attitude followed by a ZPM to Stage long term attitude control orientation was selected by the Anomaly Resolution Team as the alternate attitude control method for such a contingency.

  7. An interactive tool for semi-automatic feature extraction of hyperspectral data

    Directory of Open Access Journals (Sweden)

    Kovács Zoltán

    2016-09-01

    Full Text Available The spectral reflectance of the surface provides valuable information about the environment, which can be used to identify objects (e.g. land cover classification) or to estimate quantities of substances (e.g. biomass). We aimed to develop an MS Excel add-in – Hyperspectral Data Analyst (HypDA) – for a multipurpose quantitative analysis of spectral data in VBA programming language. HypDA was designed to calculate spectral indices from spectral data with user defined formulas (in all possible combinations involving a maximum of 4 bands) and to find the best correlations between the quantitative attribute data of the same object. Different types of regression models reveal the relationships, and the best results are saved in a worksheet. Qualitative variables can also be involved in the analysis carried out with separability and hypothesis testing; i.e. to find the wavelengths responsible for separating data into predefined groups. HypDA can be used both with hyperspectral imagery and spectrometer measurements. This bivariate approach requires significantly fewer observations than popular multivariate methods; it can therefore be applied to a wide range of research areas.

  8. An interactive tool for semi-automatic feature extraction of hyperspectral data

    Science.gov (United States)

    Kovács, Zoltán; Szabó, Szilárd

    2016-09-01

    The spectral reflectance of the surface provides valuable information about the environment, which can be used to identify objects (e.g. land cover classification) or to estimate quantities of substances (e.g. biomass). We aimed to develop an MS Excel add-in - Hyperspectral Data Analyst (HypDA) - for a multipurpose quantitative analysis of spectral data in VBA programming language. HypDA was designed to calculate spectral indices from spectral data with user defined formulas (in all possible combinations involving a maximum of 4 bands) and to find the best correlations between the quantitative attribute data of the same object. Different types of regression models reveal the relationships, and the best results are saved in a worksheet. Qualitative variables can also be involved in the analysis carried out with separability and hypothesis testing; i.e. to find the wavelengths responsible for separating data into predefined groups. HypDA can be used both with hyperspectral imagery and spectrometer measurements. This bivariate approach requires significantly fewer observations than popular multivariate methods; it can therefore be applied to a wide range of research areas.
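
    HypDA itself is an Excel/VBA add-in, but the brute-force search it performs can be mimicked in a few lines. The sketch below scans only two-band normalized-difference indices (HypDA goes up to four-band formulas) and scores each pair against the attribute with a linear fit; all names are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy.stats import linregress

def best_band_pair(spectra, target):
    """spectra: (n_samples, n_bands) reflectances; target: (n_samples,)
    attribute (e.g. biomass). Returns the best band pair and its R^2."""
    best_pair, best_r2 = None, -1.0
    for i, j in combinations(range(spectra.shape[1]), 2):
        nd = (spectra[:, i] - spectra[:, j]) / (spectra[:, i] + spectra[:, j] + 1e-12)
        r2 = linregress(nd, target).rvalue ** 2   # goodness of the linear fit
        if r2 > best_r2:
            best_pair, best_r2 = (i, j), r2
    return best_pair, best_r2
```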

  9. (226)Ra dynamic lixiviation from phosphogypsum samples by an automatic flow-through system with integrated renewable solid-phase extraction.

    Science.gov (United States)

    Ceballos, Melisa Rodas; Borràs, Antoni; García-Tenorio, Rafael; Rodríguez, Rogelio; Estela, José Manuel; Cerdà, Víctor; Ferrer, Laura

    2017-05-15

    The release of (226)Ra from phosphogypsum (PG) was evaluated by developing a novel tool for fully automated (226)Ra lixiviation from PG, integrating extraction/pre-concentration in a renewable sorbent format. Eight leached fractions (30 mL each) and a residual fraction were analyzed, allowing the evaluation of the dynamic lixiviation of (226)Ra. An automatic system enables this approach by coupling a homemade cell with a (226)Ra extraction/pre-concentration method, carried out by combining two procedures: Ra adsorption on MnO2 and its posterior co-precipitation with BaSO4. Detection was carried out with a low-background proportional counter, obtaining a minimum detectable activity of 7 Bq kg(-1). The method was validated by analysis of a PG reference material (MatControl CSN-CIEMAT 2008), comparing the content found in the fractions (sum of leached fractions + residual fraction) to the reference value. PG samples from Huelva (Spain) were studied. The (226)Ra average activity concentration of the sum of fractions leached with artificial rainwater at pH 5.4±0.2 was 105±3 Bq kg(-1) d.w., representing a (226)Ra lixiviation of 37%; at pH 2.0±0.2 it was 168±3 Bq kg(-1) d.w., representing 50%. Static lixiviation under the same experimental conditions was also carried out, indicating that, for both pH values considered, the (226)Ra release from PG is up to 50% higher in dynamic leaching than in static leaching, which may have both environmental and reutilization implications.

  10. String Variant Alias Extraction Method using Ensemble Learner

    Directory of Open Access Journals (Sweden)

    P.Selvaperumal

    2016-02-01

    Full Text Available String variant alias names are surnames that are string-variant forms of the primary name. Extracting string variant aliases is important in tasks such as information retrieval, information extraction, and name resolution. String variant alias extraction involves candidate alias name extraction and string variant alias validation. In this paper, string variant aliases are first extracted from the web; then, using seven different string similarity metrics as features, candidate aliases are validated using the ensemble classifier random forest. Experiments were conducted using a string variant name-alias dataset containing name-alias data for 15 persons with 30 name-alias pairs. Experimental results show that the proposed method outperforms other similar methods in terms of accuracy.
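
    A minimal version of the validation step, using three stand-in similarity features instead of the paper's seven metrics and toy training pairs invented for illustration:

```python
from difflib import SequenceMatcher
from sklearn.ensemble import RandomForestClassifier

def bigrams(s):
    return {s[i:i + 2] for i in range(len(s) - 1)}

def features(name, alias):
    union = bigrams(name) | bigrams(alias) or {""}        # guard: empty bigram sets
    return [
        SequenceMatcher(None, name, alias).ratio(),        # edit-based similarity
        len(bigrams(name) & bigrams(alias)) / len(union),  # char-bigram Jaccard
        abs(len(name) - len(alias)) / max(len(name), len(alias), 1),
    ]

pairs = [("maddona", "madonna", 1), ("madonna", "bob dylan", 0),
         ("beckam", "beckham", 1), ("beckham", "elvis presley", 0)]
X = [features(n, a) for n, a, _ in pairs]
y = [label for _, _, label in pairs]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([features("madona", "madonna")]))   # expect [1]
```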

  11. Using Nanoinformatics Methods for Automatically Identifying Relevant Nanotoxicology Entities from the Literature

    Directory of Open Access Journals (Sweden)

    Miguel García-Remesal

    2013-01-01

    Full Text Available Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from the scientific literature. The desired entities belong to four different categories: nanoparticles, routes of exposure, toxic effects, and targets. The entity recognizer was trained using a corpus that we specifically created for this purpose and was validated by two nanomedicine/nanotoxicology experts. We evaluated the performance of our entity recognizer using 10-fold cross-validation. The precisions range from 87.6% (targets) to 93.0% (routes of exposure), while recall values range from 82.6% (routes of exposure) to 87.4% (toxic effects). These results prove the feasibility of using computational approaches to reliably perform different named entity recognition (NER)-dependent tasks, such as for instance augmented reading or semantic searches. This research is a “proof of concept” that can be expanded to stimulate further developments that could assist researchers in managing data, information, and knowledge at the nanolevel, thus accelerating research in nanomedicine.

  12. Comparison of four methods of DNA extraction from rice

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Polyphenols, terpenes, and resins make it difficult to obtain high-quality genomic DNA from rice. Four extraction methods were compared in our study, and CTAB precipitation was the most practical one.

  13. Automatic segmentation method of striatum regions in quantitative susceptibility mapping images

    Science.gov (United States)

    Murakawa, Saki; Uchiyama, Yoshikazu; Hirai, Toshinori

    2015-03-01

    Abnormal accumulation of brain iron has been detected in various neurodegenerative diseases. Quantitative susceptibility mapping (QSM) is a novel contrast mechanism in magnetic resonance (MR) imaging and enables quantitative analysis of local tissue susceptibility. Therefore, automatic segmentation tools for brain regions on QSM images would be helpful for radiologists' quantitative analysis of various neurodegenerative diseases. The purpose of this study was to develop an automatic segmentation and classification method for striatum regions on QSM images. Our image database consisted of 22 QSM images obtained from healthy volunteers. These images were acquired on a 3.0 T MR scanner. The voxel size was 0.9×0.9×2 mm and the matrix size of each slice image was 256×256 pixels. In our computerized method, a template matching technique is first used to detect the slice images containing striatum regions. An image registration technique is subsequently employed to classify the striatum regions in consideration of anatomical knowledge. After the image registration, the voxels in the target image that correspond to striatum regions in the reference image are classified into three striatum regions, i.e., the head of the caudate nucleus, the putamen, and the globus pallidus. The experimental results indicated that 100% (21/21) of the slice images containing striatum regions were detected accurately. The subjective evaluation of the classification results indicated that 20 of 21 (95.2%) showed good or adequate quality. Our computerized method should be useful for the quantitative analysis of Parkinson's disease in QSM images.
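
    The slice-detection step is classical template matching. A minimal sketch with OpenCV, using synthetic arrays in place of real QSM data and an illustrative matching threshold:

```python
# Flag an axial slice as containing the striatum when the normalized
# cross-correlation against a striatum template exceeds a threshold.
import cv2
import numpy as np

def contains_striatum(slice_img, template, threshold=0.7):
    result = cv2.matchTemplate(slice_img, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val >= threshold, max_loc

# Toy data standing in for a QSM slice and a striatum template
slice_img = np.random.rand(256, 256).astype(np.float32)
template = slice_img[100:140, 110:150].copy()      # guaranteed match here
print(contains_striatum(slice_img, template))
```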

  14. Automatic Short Essay Scoring Using Natural Language Processing to Extract Semantic Information in the Form of Propositions. CRESST Report 831

    Science.gov (United States)

    Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.

    2013-01-01

    The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques…

  15. Forward gated-diode method for parameter extraction of MOSFETs*

    Institute of Scientific and Technical Information of China (English)

    Zhang Chenfei; Ma Chenyue; Guo Xinjie; Zhang Xiufang; He Jin; Wang Guozeng; Yang Zhang; Liu Zhiwei

    2011-01-01

    The forward gated-diode method is used to extract the dielectric oxide thickness and body doping concentration of MOSFETs, especially when both variables are previously unknown. First, the dielectric oxide thickness and the body doping concentration are derived, from device physics, as functions of the forward gated-diode peak recombination-generation (R-G) current. Then the peak R-G current characteristics of MOSFETs with different dielectric oxide thicknesses and body doping concentrations are simulated with ISE-Dessis for parameter extraction. The results from the simulation data demonstrate excellent agreement with those extracted by the forward gated-diode method.

  16. A Robust Digital Watermark Extracting Method Based on Neural Network

    Institute of Scientific and Technical Information of China (English)

    GUO Lihua; YANG Shutang; LI Jianhua

    2003-01-01

    Since watermark removal software such as StirMark has succeeded in washing watermarks away for most of the known watermarking systems, it is necessary to improve the robustness of watermarking systems. A watermark extracting method based on the error back-propagation (BP) neural network is presented in this paper, which can efficiently improve the robustness of watermarking systems. Experiments show that even if a watermarking system is attacked by the StirMark software, the neural network based extracting method can still efficiently extract the whole watermark information.

  17. A PCR amplification method without DNA extraction.

    Science.gov (United States)

    Li, Hongwei; Xu, Haiyue; Zhao, Chunjiang; Sulaiman, Yiming; Wu, Changxin

    2011-02-01

    To develop a simple and inexpensive method for direct PCR amplification of animal DNA from tissues, we optimized the components and their concentrations in lysis buffer systems. We arrived at an optimized buffer system composed of 10 mmol tris(hydroxymethyl)aminomethane (Tris)-Cl (pH 8.0), 2 mmol ethylenediaminetetraacetic acid (EDTA) (pH 8.0), 0.2 mol NaCl and 200 μg/mL Proteinase K. Interestingly, the optimized buffer is also very effective with common human sample types, including blood, buccal cells and hair. The direct PCR method requires fewer reagents (Tris-Cl, EDTA, Proteinase K and NaCl) and less incubation time (only 35 min). The cost of treating each sample is less than $0.02, and all steps can be completed on a thermal cycler in a 96-well format. The proposed method should therefore significantly improve high-throughput PCR-based molecular assays in animal systems and with common human sample types.

  18. Comparison of manual and semi-automatic DNA extraction protocols for the barcoding characterization of hematophagous louse flies (Diptera: Hippoboscidae).

    Science.gov (United States)

    Gutiérrez-López, Rafael; Martínez-de la Puente, Josué; Gangoso, Laura; Soriguer, Ramón C; Figuerola, Jordi

    2015-06-01

    The barcoding of life initiative provides a universal molecular tool to distinguish animal species based on the amplification and sequencing of a fragment of subunit 1 of the cytochrome oxidase (COI) gene. Obtaining good quality DNA for barcoding purposes is a limiting factor, especially in studies conducted on small-sized samples or those requiring maintenance of the organism as a voucher. In this study, we compared the number of positive amplifications and the quality of the sequences obtained using DNA extraction methods that also differ in economic cost and time requirements, and we applied them to the genetic characterization of louse flies. Four DNA extraction methods were studied: chloroform/isoamyl alcohol, the HotShot procedure, the Qiagen DNeasy(®) Tissue and Blood Kit and the DNA Kit Maxwell(®) 16LEV. All the louse flies were morphologically identified as Ornithophila gestroi, and a single COI-based haplotype was identified. The number of positive amplifications did not differ significantly among DNA extraction procedures. However, the quality of the sequences was significantly lower for the chloroform/isoamyl alcohol procedure than for the other methods tested. These results may be useful for the genetic characterization of louse flies, leaving most of the insect intact as a voucher.

  19. Evaluating current automatic de-identification methods with Veteran’s health administration clinical documents

    Directory of Open Access Journals (Sweden)

    Ferrández Oscar

    2012-07-01

    Full Text Available Abstract Background The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act “Safe Harbor” method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes, and where new methods are needed to improve performance. Methods We installed and evaluated five text de-identification systems “out-of-the-box” using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora and evaluated with our VHA corpus, and were also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique ‘PHI’ category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F2-measure. Results Overall, systems based on rules and pattern matching achieved better recall, and precision was always better with systems based on machine learning approaches. The highest “out-of-the-box” F2-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross-validation experiment allowed for an increase of the F2-measure to 79% with partial matches.
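
    The F2-measure used above is the general F-beta score with beta = 2, weighting recall twice as heavily as precision. A small helper, fed with illustrative counts:

```python
# F-beta combines precision and recall; beta = 2 favours recall, which suits
# de-identification, where missed PHI is costlier than a false positive.
def f_beta(tp, fp, fn, beta=2.0):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # equivalent to sensitivity
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative counts giving 95% precision and 78% recall
print(round(f_beta(tp=780, fp=41, fn=220), 3))
```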

  20. Automatic method for building indoor boundary models from dense point clouds collected by laser scanners.

    Science.gov (United States)

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-11-22

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoor environments after processing dense point clouds collected by laser scanners from key locations throughout an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios, such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoor environments in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners, yielding promising results. We have evaluated the results in depth, analyzing how reliably these elements can be detected and how accurately they are modeled.

  1. An improved automatic detection method for earthquake-collapsed buildings from ADS40 image

    Institute of Scientific and Technical Information of China (English)

    GUO HuaDong; LU LinLin; MA JianWen; PESARESI Martino; YUAN FangYan

    2009-01-01

    Earthquake-collapsed building identification is important in earthquake damage assessment and is evidence for mapping seismic intensity. After the May 12th Wenchuan major earthquake occurred, experts from CEODE and IPSC collaborated on a rapid earthquake damage assessment. A crucial task was to identify collapsed buildings from ADS40 images of the earthquake region. The difficulty was differentiating collapsed buildings from concrete bridges, dry gravels, and landslide-induced rolling stones, since these have a similar gray-level range in the image. Based on the IPSC method, an improved automatic identification technique was developed and tested in the study area, a portion of Beichuan County. Final results showed that the technique's accuracy was over 95%. Procedures and results of this experiment are presented in this article. The theory underlying this technique indicates that it could be applied to the identification of buildings collapsed by other disasters.

  2. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    Science.gov (United States)

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  3. Evaluation of a Meta-1-based automatic indexing method for medical documents.

    Science.gov (United States)

    Wagner, M M; Cooper, G F

    1992-08-01

    This paper describes MetaIndex, an automatic indexing program that creates symbolic representations of documents for the purpose of document retrieval. MetaIndex uses a simple transition network parser to recognize a language that is derived from the set of main concepts in the Unified Medical Language System Metathesaurus (Meta-1). MetaIndex uses a hierarchy of medical concepts, also derived from Meta-1, to represent the content of documents. The goal of this approach is to improve document retrieval performance by better representation of documents. An evaluation method is described, and the performance of MetaIndex on the task of indexing the Slice of Life medical image collection is reported.

  4. miRFam: an effective automatic miRNA classification method based on n-grams and a multiclass SVM

    Directory of Open Access Journals (Sweden)

    Zhou Shuigeng

    2011-05-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are ~22 nt long integral elements responsible for post-transcriptional control of gene expression. After the identification of thousands of miRNAs, the challenge is now to explore their specific biological functions. To this end, it will be greatly helpful to construct a reasonable organization of these miRNAs according to their homologous relationships. Given an established miRNA family system (e.g. the miRBase family organization), this paper addresses the problem of automatically and accurately classifying newly found miRNAs into their corresponding families by supervised learning techniques. Concretely, we propose an effective method, miRFam, which uses only primary information of pre-miRNAs or mature miRNAs and a multiclass SVM to automatically classify miRNA genes. Results An existing miRNA family system prepared by miRBase was downloaded online. We first employed n-grams to extract features from known precursor sequences, and then trained a multiclass SVM classifier to classify new miRNAs (i.e. miRNAs whose families are unknown). Compared with miRBase's sequence alignment and manual modification, our study shows that the application of machine learning techniques to miRNA family classification is a general and more effective approach. When the testing dataset contains more than 300 families (each of which holds no fewer than 5 members), the classification accuracy is around 98%. Even with the entire miRBase15 (1056 families, more than 650 of which hold fewer than 5 samples), the accuracy surprisingly reaches 90%. Conclusions Based on experimental results, we argue that miRFam is suitable for application as an automated method of family classification, and it is an important supplementary tool to the existing alignment-based small non-coding RNA (sncRNA) classification methods, since it only requires primary sequence information. Availability The source code of miRFam, written in C++, is freely and publicly available.
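
    The core pipeline, character n-grams feeding a multiclass SVM, can be sketched with scikit-learn. The sequences, n-gram orders, and SVM settings below are illustrative, not miRFam's actual configuration:

```python
# n-gram features from (pre-)miRNA sequences classified by a multiclass SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

train_seqs = ["UGAGGUAGUAGGUUGUAUAGUU", "UGAGGUAGUAGGUUGUGUGGUU",
              "UAAAGCUAGAUAACCGAAAGU", "UAAGGCUAUAACCAAAGUU"]
train_fams = ["let-7", "let-7", "bantam", "bantam"]   # toy family labels

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 5)),  # character n-grams
    SVC(kernel="linear", decision_function_shape="ovr"),   # multiclass SVM
)
model.fit(train_seqs, train_fams)
print(model.predict(["UGAGGUAGUAGGUUGUAUGGUU"]))           # expect "let-7"
```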

  5. An automatic method to homogenize trends in long-term monthly precipitation series

    Science.gov (United States)

    Rustemeier, E.; Kapala, A.; Mächel, H.; Meyer-Christoffer, A.; Schneider, U.; Ziese, M.; Venema, V.; Becker, A.; Simmer, C.

    2012-04-01

    Lack of homogeneity in long-term series of in-situ precipitation observations is a known problem and requires time-consuming manual data correction in order to allow for a robust trend analysis. This work is focused on the development of an algorithm for automatic data correction of multiple stations. The algorithm relies on the similarity of climate signals between close stations. It consists of three steps: 1) construction of networks of comparable precipitation behaviour; 2) detection of breakpoints; 3) trend correction. Detection and correction are based on the homogenization software Prodige, adopted from Météo France (Caussinus and Mestre, 2004). The networks are constructed from monthly accumulated precipitation and several indices. For the classification, principal component analysis in S-mode is applied, followed by a VARIMAX rotation. Within each network, a segmentation method is used to detect the breaks. In order to obtain a fully automatic method, scaled time series are combined to create the reference series. The monthly correction applied is a multiple linear regression, as described in Mestre (2004), which also conserves the annual cycle. At present, the algorithm has been used to homogenize 100 years of precipitation records, without any missing values, from stations in Germany. The data were digitized recently by the Meteorological Institute of the University of Bonn and the Deutscher Wetterdienst. The resulting networks correspond well to the German geographical regions. The number of detected breaks varies between 0 and 7 per station. The majority of breaks are very small (below ±10 mm per year), although a few large ones (up to ±200 mm) occur. In future, the algorithm will be used to generate a homogeneous global precipitation data set (HOMPRA) for the period 1951-2005 using more than 16000 stations, in collaboration with the Global Precipitation Climatology Centre (GPCC, Becker et al., 2012).
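
    Prodige itself is not publicly packaged, so as a stand-in the breakpoint-detection step can be illustrated with the open-source ruptures library on a synthetic annual precipitation series:

```python
# Detect a single breakpoint in a synthetic series with an artificial jump,
# using binary segmentation with a least-squares cost (illustrative only).
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(42)
series = np.concatenate([rng.normal(800, 60, 50),    # mm/yr before a relocation
                         rng.normal(860, 60, 50)])   # inhomogeneous afterwards

algo = rpt.Binseg(model="l2").fit(series)
print(algo.predict(n_bkps=1))   # detected break index (plus the series end)
```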

  6. Method for 3D Airway Topology Extraction

    Directory of Open Access Journals (Sweden)

    Roman Grothausmann

    2015-01-01

    Full Text Available In lungs, the number of conducting airway generations as well as bifurcation patterns varies across species and shows specific characteristics related to illnesses or gene variations. A method to characterize the topology of the mouse airway tree using scanning laser optical tomography (SLOT) tomograms is presented in this paper. It is used to test discrimination between two types of mice based on detected differences in their conducting airway patterns. Based on segmentations of the airways in these tomograms, the main spanning tree of the volume skeleton is computed. The resulting graph structure is used to distinguish between wild type and surfactant protein (SP-D) deficient knock-out mice.

  7. Comparing extraction buffers to identify optimal method to extract somatic coliphages from sewage sludges.

    Science.gov (United States)

    Murthi, Poornima; Praveen, Chandni; Jesudhasan, Palmy R; Pillai, Suresh D

    2012-08-01

    Somatic coliphages are present in high numbers in sewage sludge. Since they are conservative indicators of viruses during wastewater treatment processes, they are being used to evaluate the effectiveness of sludge treatment processes. However, efficient methods to extract them from sludge are lacking. The objective was to compare different virus extraction procedures and develop a method to extract coliphages from sewage sludge. Twelve different extraction buffers and procedures, varying in composition, pH, and sonication, were compared in their ability to recover indigenous phages from sludges. The 3% buffered beef extract (BBE) (pH 9.0), the 10% BBE (pH 9.0), and the 10% BBE (pH 7.0) with sonication were short-listed, and their recovery efficiency was determined using coliphage-spiked samples. The highest recovery, 16%, was obtained with the extraction that used 10% BBE at pH 9.0. There is a need to develop methods to extract somatic phages from sludges for monitoring sludge treatment processes.

  8. A new automatic method for registering point clouds

    Institute of Scientific and Technical Information of China (English)

    周绍光; 田慧; 李浩

    2012-01-01

    Registration of point clouds plays an essential role in processing the data acquired with a 3D laser scanner. The traditional semi-automatic target-based registration scheme requires scanning each target separately. In this paper, we develop a new automatic registration method that converts the point cloud of each single station into a two-dimensional depth image by the central projection principle, utilizes digital image processing technology to extract the targets automatically, fits the coordinates of each target's center point, and then applies photogrammetric techniques to achieve automatic registration of the point clouds. Experimental results show the effectiveness and reliability of this method.

  9. A Karnaugh-Map based fingerprint minutiae extraction method

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Singla

    2010-07-01

    Full Text Available Fingerprint recognition is one of the most promising biometric techniques and has long been used for personal authentication because of its wide acceptance and reliability. Features (minutiae) are extracted from the fingerprint in question and are compared with the features already stored in the database for authentication. Crossing number (CN) is the most commonly used minutiae extraction method for fingerprints. In this paper, a new Karnaugh-Map based fingerprint minutiae extraction method is proposed and discussed. In the proposed algorithm, the 8 neighbors of a pixel in a 3×3 window are arranged as the 8 bits of a byte and the corresponding hexadecimal (hex) value is calculated. These hex values are simplified using the standard Karnaugh-Map (K-map) technique to obtain a minimized logical expression. Experiments conducted on the FVC2002/Db1_a database reveal that the developed method is better than the crossing number (CN) method.
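
    For contrast with the K-map formulation, the classical crossing-number rule it is compared against is easy to sketch: the 8 neighbors of each ridge pixel are traversed cyclically, and CN = 1 marks a ridge ending while CN = 3 marks a bifurcation. A toy-array illustration:

```python
# Crossing-number minutiae detection on a thinned 0/1 fingerprint image.
import numpy as np

def crossing_number_minutiae(skel):
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]          # 8 neighbors, clockwise
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skel[r, c]:
                continue
            ring = [int(skel[r + dr, c + dc]) for dr, dc in offs]
            cn = sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations

skel = np.zeros((7, 7), dtype=np.uint8)
skel[3, 1:6] = 1                       # a single horizontal ridge segment
print(crossing_number_minutiae(skel))  # two ridge endings, no bifurcations
```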

  10. Liver segmentation in MRI: A fully automatic method based on stochastic partitions.

    Science.gov (United States)

    López-Mir, F; Naranjo, V; Angulo, J; Alcañiz, M; Luna, L

    2014-04-01

    There are few fully automated methods for liver segmentation in magnetic resonance images (MRI), despite the benefits of this type of acquisition in comparison to other radiological techniques such as computed tomography (CT). Motivated by medical requirements, liver segmentation in MRI has been carried out. For this purpose, we present a new method for liver segmentation based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marker-controlled algorithm. To improve the accuracy of the selected contours, the gradient of the original image is successfully enhanced by applying a new variant of the stochastic watershed. Moreover, a final classifier is applied in order to obtain the final liver mask. Optimal parameters of the method are tuned using a training dataset and are then applied to the rest of the studies (17 datasets). The obtained results (a Jaccard coefficient of 0.91 ± 0.02), in comparison to other methods, demonstrate that the new variant of the stochastic watershed is a robust tool for automatic segmentation of the liver in MRI.

  11. Determination of dithiocarbamate pesticides in occupational hygiene sampling devices using the isooctane method and comparison with an automatic thermal desorption (ATD) method.

    Science.gov (United States)

    Coldwell, Matthew R; Pengelly, Ian; Rimmer, Duncan A

    2003-01-10

    Two new methods for the determination of dithiocarbamate pesticides in occupational hygiene sampling devices are described. Dithiocarbamate-spiked occupational hygiene sampling devices, consisting of glass fibre (GF/A) filters, cotton pads, cotton gloves and disposable overalls, were reduced under acidic conditions and the CS2 evolved as a decomposition product was extracted into isooctane. The isooctane was then analysed for CS2 using gas chromatography with mass spectrometry, which provided a quantitative result for dithiocarbamates. Recoveries obtained were generally within a 70-110% range and reproducibilities better than 15% RSD were typically achieved. The method has been successfully applied to samples collected during occupational exposure surveys. A second method employing automatic thermal desorption-gas chromatography-mass spectrometry (ATD-GC-MS) has also been developed and applied to the direct analysis of GF/A (airborne) samples. The method relies on the thermal degradation of dithiocarbamates to release CS2, which is used to quantify the analytes. Thiram-spiked GF/A filters gave an average recovery of 107% with an RSD of 4%. The performance of the two analytical methods was directly compared by analysing sub-portions of GF/A filters collected during a survey to evaluate occupational exposures to thiram during seed treatment operations. Both methods performed well for the analysis of airborne (GF/A) samples and produced results in good agreement. ATD-GC-MS is the preferred method for studies involving GF/A (airborne) samples only. Because of the wider applicability of the isooctane method to other sampling devices, it is the preferred choice when carrying out surveys which require a dermal as well as respirable exposure assessment.

  12. A Novel Neural Network Based Method Developed for Digit Recognition Applied to Automatic Speed Sign Recognition

    Directory of Open Access Journals (Sweden)

    Hanene Rouabeh

    2016-02-01

    Full Text Available This paper presents a new hybrid technique for digit recognition applied to the speed limit sign recognition task. The complete recognition system consists of the detection and recognition of speed signs in RGB images. A pretreatment is applied to extract the pictogram from a detected circular road sign, and the technique discussed in this work is then employed to recognize digit candidates. To achieve a compromise between performance, reduced execution time and optimized memory resources, the developed method is based on the joint use of a Neural Network and a Decision Tree. A simple network first classifies the extracted candidates into three classes, and a small decision tree then determines the exact information. This combination is used to reduce the size of the network as well as memory utilization. The evaluation of the technique and the comparison with existing methods show its effectiveness.

  13. Phase extraction based on sinusoidal extreme strip phase shifting method

    Science.gov (United States)

    Hui, Mei; Liu, Ming; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    Multiple synthetic aperture imaging can enlarge the pupil diameter of an optical system and increase its resolution. It has been a cutting-edge topic and research focus in recent years, with prospective wide application in fields such as astronomical observation and aerospace remote sensing. In order to achieve good imaging quality, a synthetic aperture imaging system requires phase extraction of each sub-aperture and co-phasing of the whole aperture. In this project, an in-depth study of the basic principles and methods of segment phase extraction was carried out. The study includes: application of the sinusoidal extreme strip light irradiation phase-shift method to extract the central dividing line and obtain segment phase extraction information; use of interference measurement to obtain the spherical-surface calibration coefficients for aperture phase extraction; and study of the influence of the sinusoidal extreme strip phase shift on phase extraction, based on the sinusoidal stripe phase shift of the image reflected from multiple linear light sources, in order to suppress the phase extraction error.

  14. A Method of Building Meteorological Ontology Automatically Based on Wikipedia

    Institute of Scientific and Technical Information of China (English)

    王磊; 顾大权; 侯太平; 代曦

    2014-01-01

    With the continuous development and application of semantic retrieval technology in many fields, the demand for domain ontologies keeps increasing, and manual construction alone cannot satisfy the requirements of ontology applications. Starting from existing methods of building ontologies automatically, this paper summarizes the general approach to automatic ontology construction and analyzes the possibility of building a meteorological ontology automatically from the structured data of Wikipedia. Effective subcategories are extracted according to a network-link interaction coefficient; a distance in hops is defined to extract effective entries; and finally a Wikipedia-based method for the automatic construction of a meteorological ontology is put forward. Experimental results show that the method meets the requirements of ontology construction with high speed and little manual intervention, and that it can promote the application of ontologies in the meteorological domain.

  15. A Novel Method of Genomic DNA Extraction for Cactaceae

    OpenAIRE

    Fehlberg, Shannon D.; Jessica M. Allen; Kathleen Church

    2013-01-01

    • Premise of the study: Genetic studies of Cactaceae can at times be impeded by difficult sampling logistics and/or high mucilage content in tissues. Simplifying sampling and DNA isolation through the use of cactus spines has not previously been investigated. • Methods and Results: Several protocols for extracting DNA from spines were tested and modified to maximize yield, amplification, and sequencing. Sampling of and extraction from spines resulted in a simplified protocol overall and compl...

  16. Airway Segmentation and Centerline Extraction from Thoracic CT - Comparison of a New Method to State of the Art Commercialized Methods.

    Directory of Open Access Journals (Sweden)

    Pall Jens Reynisson

    centerlines. Reference segmentation comparison averages and standard deviations for MPM and TSF correspond to the literature. The TSF is able to segment the airways and extract the centerlines in one single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data; the segmentation is often sparse, and extracting the centerline requires the use of another software system. Two of the software systems, the TSF method and the MPM, performed satisfactorily with respect to use in preprocessing CT images for navigated bronchoscopy. According to the reference segmentation, both TSF and MPM are comparable with other segmentation methods. The level of automaticity and the resulting high number of branches, plus the fact that both the centerline and the surface of the airways are extracted, are requirements we considered particularly important. The in-house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools for a navigation system.

  17. Application of Magnetic Bead-Based Nucleic Acid Automatic Extraction System in Molecular Biology

    Institute of Scientific and Technical Information of China (English)

    罗英

    2013-01-01

    The magnetic bead-based nucleic acid automatic extraction system can extract nucleic acids from all kinds of samples simply, rapidly, efficiently, economically, and automatically. This paper summarizes the principle and classification of automatic nucleic acid extraction systems; the principle, classification, and characteristics of magnetic bead-based nucleic acid automatic extraction systems; and their applications in the field of molecular biology.

  18. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    Science.gov (United States)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization is a system that can help someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but several problems remain. This final project proposes a summarization method using a document index graph. The method utilizes the PageRank and HITS formulas, originally used to score web pages, adapted to score the words in the sentences of a text document. The expected outcome of this final project is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to automatically improve the quality of the summary results.
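
    A minimal sketch of this graph-based scoring, assuming a simple word-overlap similarity: sentences become nodes and PageRank ranks them; networkx's HITS implementation could be swapped in the same way.

```python
# TextRank-style extractive summarization over a sentence-similarity graph.
import networkx as nx
from itertools import combinations

def summarize(sentences, k=2):
    words = [set(s.lower().split()) for s in sentences]
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in combinations(range(len(sentences)), 2):
        overlap = len(words[i] & words[j])   # crude similarity measure
        if overlap:
            g.add_edge(i, j, weight=overlap)
    scores = nx.pagerank(g, weight="weight")             # TextRank scoring
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]           # keep original order

doc = ["Automatic summarization shortens a long text.",
       "Graph methods score sentences by their connections.",
       "PageRank scores nodes in a graph.",
       "Lunch was good today."]
print(summarize(doc, k=2))
```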

  19. A new method for automatically constructing convexity-preserving interpolatory splines

    Institute of Scientific and Technical Information of China (English)

    PAN Yongjuan; WANG Guojin

    2004-01-01

    Constructing a convexity-preserving interpolating curve from given planar data points is a problem to be solved in computer aided geometric design (CAGD). So far, almost all methods must solve a system of equations or resort to a complicated iterative process, and most of them can only generate function-form convexity-preserving interpolating curves, which are incompatible with the parametric curves commonly used in CAGD systems. In order to overcome these drawbacks, this paper proposes a new method that can automatically generate parametric convexity-preserving polynomial interpolating curves while dispensing with solving any system of equations or performing any iterative computation. The main idea is to first construct a family of interpolating spline curves with the shape parameter a as its family parameter; then, using the positivity conditions of Bernstein polynomials, to find the ranges in which the shape parameter a may take its value for the two cases of globally convex data points and piecewise convex data points, so as to make the corresponding interpolating curves convexity-preserving and C2 (or G1) continuous. The method is simple and convenient, and the resulting interpolating curves possess a smooth distribution of curvature. Numerical examples illustrate the correctness and validity of the theoretical reasoning.

  20. Fully automatic lung segmentation and rib suppression methods to improve nodule detection in chest radiographs.

    Science.gov (United States)

    Soleymanpour, Elaheh; Pourreza, Hamid Reza; Ansaripour, Emad; Yazdi, Mehri Sadooghi

    2011-07-01

    Computer-aided diagnosis (CAD) systems can assist radiologists in several diagnostic tasks. Lung segmentation is one of the mandatory steps for initial detection of lung cancer in posterior-anterior chest radiographs. On the other hand, many CAD schemes in projection chest radiography may benefit from the suppression of the bony structures that overlay the lung fields, e.g. the ribs. The original images are enhanced by adaptive contrast equalization and non-linear filtering. An initial estimate of the lung area is then obtained using morphological operations and improved by region growing to find an accurate final contour; for rib suppression, an oriented spatial Gabor filter is used. The proposed method was tested on a publicly available database of 247 chest radiographs. Results show that the method performed very well, with an accuracy of 96.25% for lung segmentation; we also show improvement in the conspicuity of lung nodules after rib suppression, as measured by local nodule contrast. Because no additional radiation exposure or specialized equipment is required, the approach could also be applied to bedside portable chest X-rays. In addition to the simplicity of these fully automatic methods, the lung segmentation and rib suppression algorithms perform accurately, with low computation time and robustness to noise, thanks to the suitable enhancement procedure.
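
    The oriented Gabor filtering at the heart of the rib-suppression step can be sketched with OpenCV; the kernel size and parameters below are illustrative, and the full suppression pipeline is not reproduced:

```python
# Max response over a bank of oriented Gabor filters; elongated structures
# such as rib borders respond strongly at their dominant orientation.
import cv2
import numpy as np

def gabor_bank_response(img, n_orient=8):
    acc = np.zeros(img.shape, dtype=np.float32)
    for k in range(n_orient):
        theta = k * np.pi / n_orient                     # filter orientation
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        acc = np.maximum(acc, cv2.filter2D(img.astype(np.float32), -1, kern))
    return acc

# Toy chest-radiograph stand-in
img = np.random.rand(128, 128).astype(np.float32)
print(gabor_bank_response(img).shape)
```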

  1. UMLS-based automatic image indexing.

    Science.gov (United States)

    Sneiderman, Charles Alan; Demner-Fushman, Dina; Fung, Kin Wah; Bray, Bruce

    2008-01-01

    To date, most accurate image retrieval techniques rely on textual descriptions of images. Our goal is to automatically generate indexing terms for an image extracted from a biomedical article by identifying Unified Medical Language System (UMLS) concepts in the image caption and its discussion in the text. In a pilot evaluation of the suggested image indexing method by five physicians, a third of the automatically identified index terms were found suitable for indexing.

  2. A simplified method for extracting androgens from avian egg yolks

    Science.gov (United States)

    Kozlowski, C.P.; Bauman, J.E.; Hahn, D.C.

    2009-01-01

    Female birds deposit significant amounts of steroid hormones into the yolks of their eggs. Studies have demonstrated that these hormones, particularly androgens, affect nestling growth and development. In order to measure androgen concentrations in avian egg yolks, most authors follow the extraction methods outlined by Schwabl (1993. Proc. Nat. Acad. Sci. USA 90:11446-11450). We describe a simplified method for extracting androgens from avian egg yolks. Our method, which has been validated through recovery and linearity experiments, consists of a single ethanol precipitation that produces substantially higher recoveries than those reported by Schwabl.

  3. A New Method to Extract Text from Natural Scenes

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper presents a new method for text detection, location and binarization from natural scenes. Several morphological steps are used to detect the general position of the text, including English, Chinese and Japanese characters. Next, bounding boxes are processed by a new "Expand, Break and Merge" (EBM) method to get the precise text areas. Finally, text is binarized by a hybrid method based on Otsu and Niblack. This new approach can extract different kinds of text from complicated natural scenes. It is insensitive to noise, distortion, and text orientation, and it also performs well on extracting texts of various sizes.
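
    One plausible reading of the hybrid Otsu/Niblack binarization, sketched with OpenCV; the window size, the Niblack constant, and the AND-combination rule are assumptions, since the paper's exact combination is not given:

```python
# Global Otsu pass plus local Niblack pass, combined conservatively.
import cv2
import numpy as np

def hybrid_binarize(gray, win=25, k=-0.2):
    # Global pass: Otsu's threshold on the whole image (8-bit input)
    _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Local pass: Niblack threshold, mean + k * std over a win x win window
    g = gray.astype(np.float32)
    mean = cv2.boxFilter(g, -1, (win, win))
    sq = cv2.boxFilter(g * g, -1, (win, win))
    std = np.sqrt(np.maximum(sq - mean * mean, 0.0))
    niblack = ((g > mean + k * std) * 255).astype(np.uint8)
    # One plausible combination: keep pixels both passes call foreground
    return cv2.bitwise_and(otsu, niblack)

gray = (np.random.rand(64, 64) * 255).astype(np.uint8)   # toy scene image
print(hybrid_binarize(gray).shape)
```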

  4. Microbial protein in soil: influence of extraction method and C amendment on extraction and recovery.

    Science.gov (United States)

    Taylor, Erin B; Williams, Mark A

    2010-02-01

    The capacity to study the content and resolve the dynamics of the proteome of diverse microbial communities would help to revolutionize the way microbiologists study the function and activity of microorganisms in soil. To better understand the limitations of a proteomic approach to studying soil microbial communities, we characterized extractable soil microbial proteins using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Two methods were utilized to extract proteins from microorganisms residing in a Quitman and a Benfield soil: (1) direct extraction of bulk protein from soil and (2) separation of the microorganisms from soil using density gradient centrifugation and subsequent extraction (DGC-EXT) of microbial protein. In addition, glucose and toluene amendments to soil were used to stimulate the growth of a subset of the microbial community. A bacterial culture and bovine serum albumin (BSA) were added to the soil to qualitatively assess their recovery following extraction. Direct extraction and resolution of microbial proteins using SDS-PAGE generally resulted in smeared and unresolved banding patterns on gels. DGC-EXT of microbial protein from soil followed by separation using SDS-PAGE, however, did resolve six to 10 bands in the Benfield but not the Quitman soil. DGC-EXT of microbial protein, but not direct extraction, following the addition of glucose and toluene markedly increased the number of bands (approximately 40) on the gels in both Benfield and Quitman soils. Low recoveries of added culture and BSA proteins using the direct extraction method suggest that proteins either bind to soil organic matter and mineral particles or partially degrade during extraction. Interestingly, DGC may have preferentially selected for actively growing cells, as gauged by the 10-100x lower cy19:0/18:1omega7 ratio of the fatty acid methyl esters in the isolated community compared to that of the whole soil. DGC can be used to

  5. A silica gel based method for extracting insect surface hydrocarbons.

    Science.gov (United States)

    Choe, Dong-Hwan; Ramírez, Santiago R; Tsutsui, Neil D

    2012-02-01

    Here, we describe a novel method for the extraction of insect cuticular hydrocarbons using silica gel, herein referred to as "silica-rubbing". This method permits the selective sampling of external hydrocarbons from insect cuticle surfaces for subsequent analysis using gas chromatography-mass spectrometry (GC-MS). The cuticular hydrocarbons are first adsorbed to silica gel particles by rubbing the cuticle of insect specimens with the materials, and then are subsequently eluted using organic solvents. We compared the cuticular hydrocarbon profiles that resulted from extractions using silica-rubbing and solvent-soaking methods in four ant and one bee species: Linepithema humile, Azteca instabilis, Camponotus floridanus, Pogonomyrmex barbatus (Hymenoptera: Formicidae), and Euglossa dilemma (Hymenoptera: Apidae). We also compared the hydrocarbon profiles of Euglossa dilemma obtained via silica-rubbing and solid phase microextraction (SPME). Comparison of hydrocarbon profiles obtained by different extraction methods indicates that silica rubbing selectively extracts the hydrocarbons that are present on the surface of the cuticular wax layer, without extracting hydrocarbons from internal glands and tissues. Due to its surface specificity, efficiency, and low cost, this new method may be useful for studying the biology of insect cuticular hydrocarbons.

  6. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    Science.gov (United States)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate quantification of ventricular function is important to support evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically of the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle, locally searching for a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method in terms of both accuracy and computational load.

  7. Study on the extraction method of tidal flat area in northern Jiangsu Province based on remote sensing waterlines

    Science.gov (United States)

    Zhang, Yuanyuan; Gao, Zhiqiang; Liu, Xiangyang; Xu, Ning; Liu, Chaoshun; Gao, Wei

    2016-09-01

    Reclamation has caused significant dynamic change in the coastal zone, and the tidal flat zone is an unstable reserve land resource whose study has important significance. In order to efficiently extract tidal flat area information, this paper takes Rudong County in Jiangsu Province as the research area, using HJ1A/1B images as the data source. On the basis of previous research experience and a literature review, object-oriented classification was chosen as a semi-automatic extraction method to generate waterlines. The waterlines were then analyzed with the DSAS software to obtain tide points, and the outer boundary points were extracted automatically using Python to determine the extent of the tidal flats of Rudong County in 2014; the extracted area was 55,182 hm2. A confusion matrix was used to verify the accuracy, and the results show a kappa coefficient of 0.945. The method improves on the deficiencies of previous studies, and the free availability of the data on the Internet makes the approach easy to generalize.

  8. Comparison of function approximation, heuristic, and derivative-based methods for automatic calibration of computationally expensive groundwater bioremediation models

    Science.gov (United States)

    Mugunthan, Pradeep; Shoemaker, Christine A.; Regis, Rommel G.

    2005-11-01

    The performance of function approximation (FA) methods is compared to heuristic and derivative-based nonlinear optimization methods for automatic calibration of the biokinetic parameters of a groundwater bioremediation model of chlorinated ethenes, on a hypothetical case and a real field case. For the hypothetical case, on the basis of 10 trials on two different objective functions, the FA methods had the lowest mean and the smallest deviation of the objective function among all algorithms for a combined Nash-Sutcliffe objective, and among all but the derivative-based algorithm for a total squared error objective. The best algorithms from the hypothetical case were applied to calibrate eight parameters to data obtained from a site in California. In three trials, the FA methods outperformed the heuristic and derivative-based methods for both objective functions. This study indicates that function approximation methods can be a more efficient alternative to heuristic and derivative-based methods for automatic calibration of computationally expensive bioremediation models.
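
    A function approximation method in this spirit can be sketched with a radial-basis-function surrogate: sample the expensive model a few times, fit a cheap interpolant, and let the interpolant propose the next evaluation point. This generic sketch, using scipy's RBFInterpolator and a stand-in objective, is not the authors' specific algorithm:

```python
# Surrogate-guided calibration loop with an RBF interpolant (illustrative).
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_objective(p):       # stand-in for a full bioremediation model run
    return float(np.sum((p - 0.3) ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(20, 2))                 # initial parameter samples
y = np.array([expensive_objective(p) for p in X])

for _ in range(30):                                 # surrogate-guided search
    surrogate = RBFInterpolator(X, y)               # cheap approximation
    cand = rng.uniform(0, 1, size=(500, 2))         # cheap candidate pool
    best = cand[np.argmin(surrogate(cand))]         # minimize the surrogate
    X = np.vstack([X, best])
    y = np.append(y, expensive_objective(best))     # one expensive evaluation

print(X[np.argmin(y)], y.min())                     # best calibrated parameters
```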

  9. A Novel OD Estimation Method Based on Automatic Vehicle Identification Data

    Science.gov (United States)

    Sun, Jian; Feng, Yu

    With the development and application of Automatic Vehicle Identification (AVI) technologies, a novel high-resolution OD estimation method is proposed based on AVI detector information. Four detected categories (Ox + Dy, Ox/Dy + Path(s), Ox/Dy, Path(s)) are distinguished in the first step. The initial OD matrix is then updated using the Ox + Dy sample information, taking AVI detector errors into account. With reference to particle filtering, the link-path relationship data are revised using the information of the last three categories based on Bayesian inference, and the possible trajectories and OD are finally determined using a Monte Carlo random process. According to the current deployment of video detectors in Shanghai, the North-South Expressway, including 17 OD pairs and 9 AVI detectors, was selected as the testbed. The results show that the calculated average relative error is 12.09% under the constraints that the simulation error is under 15% and the detector error is about 10%. They also show that this method is highly efficient and can fully exploit partial vehicle trajectories, which satisfies dynamic traffic management applications in practice.

  10. Automatic Method for Identifying Photospheric Bright Points and Granules Observed by Sunrise

    CERN Document Server

    Javaherian, Mohsen; Amiri, Ali; Ziaei, Shervin

    2014-01-01

    In this study, we propose methods for the automatic detection of photospheric features (bright points and granules) from ultraviolet (UV) radiation, using a feature-based classifier. The methods use quiet-Sun observations in 214 nm and 525 nm images taken by Sunrise on 9 June 2009. A region-growing function and the mean-shift procedure are applied to segment the bright points (BPs) and granules, respectively. Zernike moments of each region are computed. The Zernike moments of BPs, granules, and other features are distinctive enough to be separated using a support vector machine (SVM) classifier. The size distribution of BPs can be fitted with a power law of slope -1.5. The peak of the granule size distribution is found at about 0.5 arcsec^2. The mean filling factor of BPs is 0.01, and for granules it is 0.51. There is a critical scale for granules: small granules with sizes below 2.5 arcsec^2 cover a wide range of brightness, while the brightness of large granules approaches unity. The mean...
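
    The Zernike-moment feature step can be sketched with the mahotas library; the blob, radius, and degree below are illustrative, and in the paper the resulting vectors feed an SVM:

```python
# Zernike moments of a segmented region give a rotation-invariant descriptor
# that a support vector machine can separate into BPs, granules, and the rest.
import numpy as np
import mahotas

region = np.zeros((64, 64), dtype=np.uint8)
region[20:44, 24:40] = 1                   # toy stand-in for a segmented blob
descriptor = mahotas.features.zernike_moments(region, radius=32, degree=8)
print(descriptor.shape)                    # fixed-length vector, ready for an SVM
```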

  11. Automatic Calibration Method of Voxel Size for Cone-beam 3D-CT Scanning System

    CERN Document Server

    Yang, Min; Liu, Yipeng; Men, Fanyong; Li, Xingdong; Liu, Wenli; Wei, Dongbo

    2013-01-01

    For a cone-beam three-dimensional computed tomography (3D-CT) scanning system, voxel size is an important indicator guaranteeing the accuracy of data analysis and feature measurement based on 3D-CT images. Meanwhile, the voxel size changes with the movement of the rotary table along the X-ray direction. In order to realize automatic calibration of the voxel size, a new easily implemented method is proposed. According to this method, several projections of a spherical phantom are captured at different imaging positions and the corresponding voxel size values are calculated by non-linear least-squares fitting. Through these interpolated values, a linear equation is obtained which reflects the relationship between the rotary table's displacement from its nominal zero position and the voxel size. Finally, the linear equation is imported into the calibration module of the 3D-CT scanning system, and as the rotary table moves along the X-ray direction, the accurate value of the voxel size is dynamically exported.
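
    The final calibration step reduces to a straight-line fit. A minimal sketch with made-up displacement/voxel-size pairs:

```python
# Fit the linear displacement-to-voxel-size relationship and evaluate it
# dynamically; the numbers below are illustrative, not measured values.
import numpy as np

displacement = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # mm from nominal zero
voxel_size = np.array([50.1, 52.6, 55.0, 57.6, 60.1])    # micrometres

slope, intercept = np.polyfit(displacement, voxel_size, 1)

def voxel_size_at(d_mm):
    """Voxel size for a given rotary-table displacement along the X-ray axis."""
    return slope * d_mm + intercept

print(round(voxel_size_at(25.0), 2))
```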

  12. BMAA extraction of cyanobacteria samples: which method to choose?

    Science.gov (United States)

    Lage, Sandra; Burian, Alfred; Rasmussen, Ulla; Costa, Pedro Reis; Annadotter, Heléne; Godhe, Anna; Rydberg, Sara

    2016-01-01

    β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the BMAA concentration discrepancies in previous publications. In addition and according to the method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from cyanobacteria matrix. BMAA biological variability was also tested through the extraction of diatom and cyanobacteria species, revealing a high variance in BMAA levels (0.0080-2.5797 μg g(-1) DW).

  13. Influence of Extraction Methods on the Yield of Steviol Glycosides and Antioxidants in Stevia rebaudiana Extracts.

    Science.gov (United States)

    Periche, Angela; Castelló, Maria Luisa; Heredia, Ana; Escriche, Isabel

    2015-06-01

    This study evaluated the application of ultrasound techniques and microwave energy, compared to conventional extraction methods (high temperatures at atmospheric pressure), for the solid-liquid extraction of steviol glycosides (sweeteners) and antioxidants (total phenols, flavonoids and antioxidant capacity) from dehydrated Stevia leaves. Different temperatures (from 50 to 100 °C), times (from 1 to 40 min) and microwave powers (1.98 and 3.30 W/g extract) were used. There was a great difference in the resulting yields according to the treatments applied. Steviol glycosides and antioxidants were negatively correlated; therefore, there is no single treatment suitable for obtaining the highest yield of both groups of compounds simultaneously. The greatest yield of steviol glycosides was obtained with microwave energy (3.30 W/g extract, 2 min), whereas the conventional method (90 °C, 1 min) was the most suitable for antioxidant extraction. Consequently, the best process depends on the subsequent use (sweetener or antioxidant) of the aqueous extract of Stevia leaves.

  14. [Searching for WDMS Candidates In SDSS-DR10 With Automatic Method].

    Science.gov (United States)

    Jiang, Bin; Wang, Cheng-you; Wang, Wen-yu; Wang, Wei

    2015-05-01

    The Sloan Digital Sky Survey (SDSS) has released its latest data (DR10), which includes the first APOGEE spectra. These massive spectra can be used for large-sample research, including the structure and evolution of the Galaxy and multi-waveband identification. In addition, the spectra are ideal for searching for rare and special objects like white dwarf main-sequence (WDMS) binaries. A WDMS binary consists of a white dwarf primary and a low-mass main-sequence (MS) companion, and such systems have positive significance for the study of the evolution and parameters of close binaries. WDMS are generally discovered by repeated imaging of the same area of sky, by measuring light curves for objects, or through photometric selection with follow-up observations. These methods require significant manual processing time, have low accuracy, and cannot satisfy real-time processing requirements. In this paper, an automatic and efficient method for searching for WDMS candidates is presented. The method, a Genetic Algorithm (GA), is applied to the newly released SDSS-DR10 spectra. A total of 4,140 WDMS candidates were selected by the method, and 24 of them are new discoveries, which proves that our approach of finding special celestial bodies in massive spectral data is feasible. In addition, this method is also applicable to mining other special celestial objects in sky-survey telescope data. We report the identification of 24 new WDMS with spectra. A compendium of the positions, mjd, plate and fiberid of these new discoveries is presented, which enriches the spectral library and will be useful for research on binary evolution models.

  15. Microscale extraction method for HPLC carotenoid analysis in vegetable matrices

    Directory of Open Access Journals (Sweden)

    Sidney Pacheco

    2014-10-01

    Full Text Available In order to generate simple, efficient analytical methods that are also fast, clean, and economical, and capable of producing reliable results for a large number of samples, a microscale extraction method for the analysis of carotenoids in vegetable matrices was developed. The efficiency of this adapted method was checked by comparing the results obtained from vegetable matrices in terms of extraction equivalence, time required and reagents. Six matrices were used: tomato (Solanum lycopersicum L.), carrot (Daucus carota L.), sweet potato with orange pulp (Ipomoea batatas (L.) Lam.), pumpkin (Cucurbita moschata Duch.), watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai) and sweet potato (Ipomoea batatas (L.) Lam.) flour. Total carotenoids were quantified by spectrophotometry. Quantification and determination of carotenoid profiles were performed by High Performance Liquid Chromatography with photodiode array detection. Microscale extraction was faster, cheaper and cleaner than the commonly used method, and advantageous for analytical laboratories.

  16. Extraction method for parasitic capacitances and inductances of HEMT models

    Science.gov (United States)

    Zhang, HengShuang; Ma, PeiJun; Lu, Yang; Zhao, BoChao; Zheng, JiaXin; Ma, XiaoHua; Hao, Yue

    2017-03-01

    A new method to extract parasitic capacitances and inductances for high electron-mobility transistors (HEMTs) is proposed in this paper. Compared with the conventional extraction method, the depletion layer is modeled as a physically meaningful capacitance model and the extrinsic values obtained are much closer to the actual results. In order to simulate the high-frequency behaviour with higher precision, series parasitic inductances are introduced into the cold pinch-off model, which is used to extract capacitances at low frequency, and the reactive elements can be determined simultaneously over the measured frequency range. The values obtained by this method can be used to establish a 16-element small-signal equivalent circuit model under different bias conditions. The results show good agreement between the simulated and measured scattering parameters up to 30 GHz.

  17. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) images were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another algorithm of decision-tree classification based on both spectral characteristics and auxiliary information from DEM and slope (DTDS) was also designed for water body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of the interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy varied with respect to the various techniques applied. It was low using ISODATA, very high using the DT algorithm, and even higher using both DTDS and MLC.
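
    A decision-tree classifier of this kind reduces to a cascade of band tests. The sketch below is a hypothetical example on SPOT-like reflectance bands, with placeholder thresholds rather than the rules developed in the paper; the optional slope test stands in for the DTDS variant's auxiliary terrain information.

    ```python
    import numpy as np

    def classify_water(b_green, b_red, b_nir, b_swir, slope=None):
        """Boolean water mask from reflectance bands; thresholds are placeholders."""
        water = (b_nir < 0.08) & (b_swir < 0.05)   # water absorbs strongly in the IR
        water &= b_green > b_nir                   # visible reflectance exceeds NIR
        if slope is not None:                      # DTDS-style terrain constraint
            water &= slope < 5.0                   # water surfaces are nearly flat
        return water
    ```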

  18. METHOD TO EXTRACT BLEND SURFACE FEATURE IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    Lü Zhen; Ke Yinglin; Sun Qing; Kelvin W; Huang Xiaoping

    2003-01-01

    A new method for extracting the blend surface feature is presented. It contains two steps: segmentation and recovery of a parametric representation of the blend. The segmentation separates the points in the blend region from the rest of the input point cloud through the processes of sampling point data, estimating local surface curvature properties, and comparing maximum curvature values. The recovery of the parametric representation generates a set of profile curves by marching throughout the blend and fitting cylinders. Compared with existing approaches to blend surface feature extraction, the proposed method reduces the requirement for user interaction and is capable of extracting blend surfaces with either constant or variable radius. Application examples are presented to verify the proposed method.

  19. Spectrophotometric validation of assay method for selected medicinal plant extracts

    Directory of Open Access Journals (Sweden)

    Matthew Arhewoh

    2014-09-01

    Full Text Available Objective: To develop UV spectrophotometric assay validation methods for some selected medicinal plant extracts. Methods: Dried, powdered leaves of Annona muricata (AM) and Andrographis paniculata (AP), as well as seeds of Garcinia kola (GK) and Hunteria umbellata (HU), were separately subjected to maceration using distilled water. Different concentrations of the extracts were scanned spectrophotometrically to obtain wavelengths of maximum absorbance. The different extracts were then subjected to validation studies following international guidelines at the respective wavelengths obtained. Results: The results showed linearity at peak wavelengths of maximum absorbance of 292, 280, 274 and 230 nm for GK, HU, AM and AP, respectively. The calibration curves for the different concentrations of the extracts gave R2 values ranging from 0.9831 for AM to 0.9996 for AP. The inter-day and intra-day precision study showed that the relative standard deviation (%) was ≤ 10% for all the extracts. Conclusion: The aqueous extracts and isolates of these plants can be assayed and monitored using these wavelengths.
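
    The two headline statistics of such a validation, calibration linearity (R2) and inter-/intra-day precision (RSD), are straightforward to compute. The sketch below assumes paired concentration/absorbance arrays and replicate absorbance readings; it is a generic illustration, not the authors' protocol.

    ```python
    import numpy as np

    def r_squared(conc, absorbance):
        """Coefficient of determination of the linear calibration curve."""
        slope, intercept = np.polyfit(conc, absorbance, 1)
        resid = absorbance - (slope * conc + intercept)
        return 1.0 - np.sum(resid**2) / np.sum((absorbance - absorbance.mean())**2)

    def rsd_percent(replicates):
        """Percent relative standard deviation of replicate readings."""
        return 100.0 * np.std(replicates, ddof=1) / np.mean(replicates)
    ```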

  20. A novel method of genomic DNA extraction for Cactaceae1

    Science.gov (United States)

    Fehlberg, Shannon D.; Allen, Jessica M.; Church, Kathleen

    2013-01-01

    • Premise of the study: Genetic studies of Cactaceae can at times be impeded by difficult sampling logistics and/or high mucilage content in tissues. Simplifying sampling and DNA isolation through the use of cactus spines has not previously been investigated. • Methods and Results: Several protocols for extracting DNA from spines were tested and modified to maximize yield, amplification, and sequencing. Sampling of and extraction from spines resulted in a simplified protocol overall and complete avoidance of mucilage as compared to typical tissue extractions. Sequences from one nuclear and three plastid regions were obtained across eight genera and 20 species of cacti using DNA extracted from spines. • Conclusions: Genomic DNA useful for amplification and sequencing can be obtained from cactus spines. The protocols described here are valuable for any cactus species, but are particularly useful for investigators interested in sampling living collections, extensive field sampling, and/or conservation genetic studies. PMID:25202521

  1. A Novel Method of Genomic DNA Extraction for Cactaceae

    Directory of Open Access Journals (Sweden)

    Shannon D. Fehlberg

    2013-03-01

    Full Text Available Premise of the study: Genetic studies of Cactaceae can at times be impeded by difficult sampling logistics and/or high mucilage content in tissues. Simplifying sampling and DNA isolation through the use of cactus spines has not previously been investigated. Methods and Results: Several protocols for extracting DNA from spines were tested and modified to maximize yield, amplification, and sequencing. Sampling of and extraction from spines resulted in a simplified protocol overall and complete avoidance of mucilage as compared to typical tissue extractions. Sequences from one nuclear and three plastid regions were obtained across eight genera and 20 species of cacti using DNA extracted from spines. Conclusions: Genomic DNA useful for amplification and sequencing can be obtained from cactus spines. The protocols described here are valuable for any cactus species, but are particularly useful for investigators interested in sampling living collections, extensive field sampling, and/or conservation genetic studies.

  2. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning

    DEFF Research Database (Denmark)

    Olesen, Alexander Neergaard; Christensen, Julie Anja Engelhard; Sørensen, Helge Bjarup Dissing;

    2016-01-01

    (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen’s kappa of 0.74 indicating substantial agreement between...

  3. Optimization of the Phenol -Chloroform Silica DNA Extraction Method in Ancient Bones DNA Extraction

    Directory of Open Access Journals (Sweden)

    Morteza Sadeghi

    2014-04-01

    Full Text Available Introduction: DNA extraction from ancient bone tissue is currently very difficult. The phenol-chloroform silica method is one of the methods currently used for this purpose. The purpose of this study was to optimize this method. Methods: DNA from 62 bone tissues (average 3-11 years) was first extracted with the phenol-chloroform silica method; then, after changing some parameters of the method, the extracted DNA was amplified in eight polymorphism regions including FES, F13, D13S317, D16, D5S818, vWA and CD4. Results from samples obtained by the two methods were compared on acrylamide gel. Results: The average PCR yield for the new method and the common method in the eight polymorphism regions was 75%, 78%, 81%, 76%, 85%, 71%, 89%, 86% and 64%, 39%, 70%, 49%, 68%, 76%, 71% and 28%, respectively. The average amount of DNA in the optimized method (at 35 µl silica density) and the common method was 267.5 µg/ml with 1.12 purity and 192.76 µg/ml with 0.84 purity, respectively. Conclusions: According to the findings of this study, it is estimated that longer EDTA exposure is efficient in removing calcium, and that an adequate density of silica particles can be efficient in removing PCR inhibitors.

  4. Automatic Method for Synchronizing Workpiece Frames in Twin-robot Nondestructive Testing System

    Institute of Scientific and Technical Information of China (English)

    LU Zongxing; XU Chunguang; PAN Qinxue; MENG Fanwu; LI Xinliang

    2015-01-01

    The workpiece frames relative to each robot base frame should be known in advance for the proper operation of a twin-robot nondestructive testing system. However, when the two robots are separated from the workpieces, they cannot reach the same point to complete the process of workpiece frame positioning. Thus, a new method is proposed to solve the problem of coincidence between workpiece frames. The transformation between the two robot base frames is initiated by measuring the coordinate values of three non-collinear calibration points. The relationship between the workpiece frame and the slave robot base frame is then determined according to the known transformation between the two robot base frames, as well as the relationship between the workpiece frame and the master robot base frame. Only one robot is required to actually measure the coordinate values of the calibration points on the workpiece. This is beneficial when one of the robots cannot reach and measure the calibration points. The coordinate values of the calibration points are obtained by driving the robot hand to the points and recording the tool center point (TCP) coordinates. The translation and rotation matrices relating either the two robot base frames, or the workpiece and the master robot, are solved using the measured values of the calibration points according to the Cartesian transformation principle. An optimization method based on the exponential mapping of a Lie algebra is developed to ensure that the rotation matrix is orthogonal. Experimental results show that this method involves fewer steps and offers significant advantages in terms of operation and time saving. An automatic method for synchronizing workpiece frames in a twin-robot system is thus presented.
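
    The core computation, recovering a rigid transform between two frames from three non-collinear points measured in both, can be sketched as follows. The paper enforces orthogonality of the rotation via an exponential map on the Lie algebra; this sketch substitutes the common SVD-based (Kabsch-style) nearest-rotation projection, so it illustrates the geometry rather than the paper's optimization.

    ```python
    import numpy as np

    def rigid_transform(p_master, p_slave):
        """R, t with p_slave ~ R @ p_master + t; inputs are (3, 3), one point per row."""
        cm, cs = p_master.mean(axis=0), p_slave.mean(axis=0)
        H = (p_master - cm).T @ (p_slave - cs)          # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cs - R @ cm
        return R, t
    ```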

  5. A HYBRID METHOD FOR AUTOMATIC SPEECH RECOGNITION PERFORMANCE IMPROVEMENT IN REAL WORLD NOISY ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Urmila Shrawankar

    2013-01-01

    Full Text Available It is a well-known fact that speech recognition systems perform well when used in conditions similar to those used to train the acoustic models. However, mismatches degrade the performance. In an adverse environment, it is very difficult to predict the category of noise in advance for real-world environmental noise, and difficult to achieve environmental robustness. After a rigorous experimental study, it was observed that no single method is available that cleans noisy speech corrupted by real, natural environmental (mixed) noise while preserving its quality. It was also observed that back-end techniques alone are not sufficient to improve the performance of a speech recognition system. It is necessary to implement performance-improvement techniques at every step of the back-end as well as the front-end of the Automatic Speech Recognition (ASR) model. Current recognition systems address this problem using a technique called adaptation. This study presents an experimental study with two aims. The first is to implement a hybrid method that cleans the speech signal as much as possible using combinations of filters and enhancement techniques. The second is to develop a method for training on all categories of noise that can adapt the acoustic models to a new environment, helping to improve the performance of the speech recognizer under real-world, mismatched environmental conditions. This experiment confirms that hybrid adaptation methods improve ASR performance on both levels: signal-to-noise ratio (SNR) as well as word recognition accuracy in real-world noisy environments.

  6. Automatic method for synchronizing workpiece frames in twin-robot nondestructive testing system

    Science.gov (United States)

    Lu, Zongxing; Xu, Chunguang; Pan, Qinxue; Meng, Fanwu; Li, Xinliang

    2015-07-01

    The workpiece frames relative to each robot base frame should be known in advance for the proper operation of a twin-robot nondestructive testing system. However, when the two robots are separated from the workpieces, they cannot reach the same point to complete the process of workpiece frame positioning. Thus, a new method is proposed to solve the problem of coincidence between workpiece frames. The transformation between the two robot base frames is initiated by measuring the coordinate values of three non-collinear calibration points. The relationship between the workpiece frame and the slave robot base frame is then determined according to the known transformation between the two robot base frames, as well as the relationship between the workpiece frame and the master robot base frame. Only one robot is required to actually measure the coordinate values of the calibration points on the workpiece. This is beneficial when one of the robots cannot reach and measure the calibration points. The coordinate values of the calibration points are obtained by driving the robot hand to the points and recording the tool center point (TCP) coordinates. The translation and rotation matrices relating either the two robot base frames, or the workpiece and the master robot, are solved using the measured values of the calibration points according to the Cartesian transformation principle. An optimization method based on the exponential mapping of a Lie algebra is developed to ensure that the rotation matrix is orthogonal. Experimental results show that this method involves fewer steps and offers significant advantages in terms of operation and time saving. An automatic method for synchronizing workpiece frames in a twin-robot system is thus presented.

  7. Application of Feedback Linearization Method in Airplane Automatic Landing Control System

    Institute of Scientific and Technical Information of China (English)

    Wang Xiaoyan; Feng Jiang; Feng Xiujuan; Wu Junqin

    2004-01-01

    This paper summarizes I/O feedback linearization for MIMO systems and applies it to the nonlinear control equations of an airplane. Tracking control laws are also designed for the airplane's longitudinal automatic landing control system.

  8. Comparison of DNA and RNA extraction methods for mummified tissues.

    Science.gov (United States)

    Konomi, Nami; Lebwohl, Eve; Zhang, David

    2002-12-01

    Nucleic acids extracted from mummified tissues are valuable materials for the study of ancient human beings. Significant difficulty in extracting nucleic acids from mummified tissues has been reported due to chemical modification and degradation. The goal of this study was to determine a method that is more efficient for DNA and RNA extraction from mummified tissues. Twelve mummy specimens were analyzed with 9 different nucleic acid extraction methods, including guanidium thiocyanate (GTC) and proteinase K/detergent based methods prepared in our laboratory or purchased. Glyceraldehyde 3-phosphate dehydrogenase DNA and beta-actin RNA were used as markers for the presence of DNA and RNA, respectively, adequate for PCR and RT-PCR amplification. Our results show that 5 M GTC is more efficient at releasing nucleic acids from mummified tissue than proteinase K/detergent, and that phenol/chloroform extraction with an additional chloroform step is more efficient than phenol/chloroform alone. We were able to isolate DNA from all 12 specimens and RNA from 8 of 12 specimens, and the nucleic acids were sufficient for PCR and RT-PCR analysis. We further tested for hepatitis viruses, including hepatitis B virus, hepatitis C virus, hepatitis G virus, and TT virus DNA, and failed to detect these viruses in all 12 specimens.

  9. Correction method for line extraction in vision measurement.

    Directory of Open Access Journals (Sweden)

    Mingwei Shao

    Full Text Available Over-exposure and perspective distortion are two of the main factors underlying inaccurate feature extraction. First, based on Steger's method, we propose a method for correcting curvilinear structures (lines) extracted from over-exposed images. A new line model based on the Gaussian line profile is developed, and its description in scale space is provided. The line position is analytically determined by the zero crossing of its first-order derivative, and the bias due to convolution with the normal Gaussian kernel function is eliminated on the basis of the related description. The model considers over-exposure features and is capable of detecting the line position in an over-exposed image. Simulations and experiments show that the proposed method is not significantly affected by the exposure level and is suitable for correcting lines extracted from an over-exposed image. In our experiments, the corrected result is more precise than the uncorrected result by around 45.5%. Second, we analyze perspective distortion, which is inevitable during line extraction owing to the projective camera model. The perspective distortion can be rectified on the basis of the bias introduced as a function of the related parameters. The properties of the proposed model and its application to vision measurement are discussed. In practice, the proposed model can be adopted to correct line extraction according to specific requirements by employing suitable parameters.
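
    For intuition, Steger-style localization on a single 1-D intensity profile amounts to finding the zero crossing of the first Gaussian derivative where the second derivative indicates a ridge. The sketch below shows that baseline step only; the over-exposure bias correction that is this paper's actual contribution is not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def line_center(profile, sigma=2.0):
        """Sub-pixel center of a bright line in a 1-D intensity profile."""
        p = np.asarray(profile, dtype=float)
        d1 = gaussian_filter1d(p, sigma, order=1)
        d2 = gaussian_filter1d(p, sigma, order=2)
        # zero crossings of d1 where the curvature marks a ridge (d2 < 0)
        idx = np.where((np.sign(d1[:-1]) != np.sign(d1[1:])) & (d2[:-1] < 0))[0]
        if idx.size == 0:
            return None
        i = idx[np.argmax(np.abs(d2[idx]))]            # strongest ridge response
        return i + d1[i] / (d1[i] - d1[i + 1])         # linear sub-pixel refinement
    ```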

  10. Channel selection for automatic seizure detection

    DEFF Research Database (Denmark)

    Duun-Henriksen, Jonas; Kjaer, Troels Wesenberg; Madsen, Rasmus Elsborg

    2012-01-01

    of an automatic channel selection method. The characteristics of the seizures are extracted by the use of a wavelet analysis and classified by a support vector machine. The best channel selection method is based upon maximum variance during the seizure. Results: Using only three channels, a seizure detection...
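
    The channel selection rule reported as best, keeping the channels with maximum variance during the seizure, is simple to state in code. This sketch assumes an EEG array of shape (channels, samples) and a boolean seizure mask; the choice of three channels follows the record above, everything else is illustrative.

    ```python
    import numpy as np

    def select_channels(eeg, seizure_mask, n_keep=3):
        """Indices of the channels with the largest variance during the seizure."""
        variances = eeg[:, seizure_mask].var(axis=1)
        return np.argsort(variances)[-n_keep:]
    ```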

  11. An automatic method for fast and accurate liver segmentation in CT images using a shape detection level set method

    Science.gov (United States)

    Lee, Jeongjin; Kim, Namkug; Lee, Ho; Seo, Joon Beom; Won, Hyung Jin; Shin, Yong Moon; Shin, Yeong Gil

    2007-03-01

    Automatic liver segmentation is still a challenging task due to the ambiguity of the liver boundary and the complex context of nearby organs. In this paper, we propose a faster and more accurate way of segmenting the liver in CT images with an enhanced level set method. The speed image for level-set propagation is smoothly generated by increasing the number of iterations in anisotropic diffusion filtering. This prevents the level-set propagation from stopping in front of local minima, which prevail in liver CT images due to irregular intensity distributions of the interior liver region. The curvature term of the shape-modeling level-set method captures well the shape variations of the liver along the slice. Finally, a rolling ball algorithm is applied to include enhanced vessels near the liver boundary. Our approach is tested and compared to manual segmentation results of eight CT scans with 5 mm slice distance using the average distance and volume error. The average distance error between corresponding liver boundaries is 1.58 mm and the average volume error is 2.2%. The average processing time for the segmentation of each slice is 5.2 seconds, which is much faster than conventional methods. The accurate and fast results of our method will expedite the next stage of liver volume quantification for liver transplantations.

  12. Automatic method for thalamus parcellation using multi-modal feature classification.

    Science.gov (United States)

    Stough, Joshua V; Glaister, Jeffrey; Ye, Chuyang; Ying, Sarah H; Prince, Jerry L; Carass, Aaron

    2014-01-01

    Segmentation and parcellation of the thalamus is an important step in providing volumetric assessment of the impact of disease on brain structures. Conventionally, segmentation is carried out on T1-weighted magnetic resonance (MR) images, and nuclear parcellation uses diffusion-weighted MR images. We present the first fully automatic method that incorporates both tissue contrasts and several derived features: fractional anisotropy, fiber orientation from the 5D Knutsson representation of the principal eigenvectors, and connectivity between the thalamus and the cortical lobes. Combining these multiple information sources allows us to identify discriminating dimensions and thus parcellate the thalamic nuclei. A hierarchical random forest framework with a multidimensional feature per voxel first distinguishes thalamus from background, and then separates each group of thalamic nuclei. Using leave-one-out cross-validation on 12 subjects, we obtain a mean Dice score of 0.805 and 0.799 for the left and right thalami, respectively. We also report overlap for the thalamic nuclear groups.

  13. Automatic Detection Method of Behavior Change in Dam Monitor Instruments Cause by Earthquakes

    Directory of Open Access Journals (Sweden)

    Fernando Mucio Bando

    2016-02-01

    Full Text Available A hydroelectric power plant is a project of great relevance for the social and economic development of a country. However, this kind of construction demands extensive attention, because the occurrence of unusual behavior in its structure may result in undesirable consequences. Seismic waves are among the phenomena that demand the attention of those in charge of dam safety, because their occurrence can directly affect the structure's behavior. The aim of this work is to present a methodology to automatically detect which monitoring instruments have undergone a change in the pattern of their measurements after a seism. The proposed detection method is based on a neuro/fuzzy/Bayesian formulation divided into three steps. First, a clustering of points in a time series is developed from a self-organizing Kohonen map. Then a fuzzy set is built to transform the initial time series, with arbitrary distribution, into a new series with a beta probability distribution, enabling the detection of change points through Monte Carlo simulation via Markov chains. To demonstrate the efficiency of the proposal, the methodology was applied to time series generated by the measurement instruments of the Itaipu power plant structures, which showed little behavior change after the 2010 earthquake in Chile.

  14. Using automatic calibration method for optimizing the performance of Pedotransfer functions of saturated hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abdelbaki

    2016-06-01

    Full Text Available Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to auto-calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used for calibration. The results showed great improvement in the performance of the functions compared to the originally published ones. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of the error ratio (GSDER) were modified from the ranges (1.27–6.09) and (5.2–7.01) to (0.91–1.15) and (4.88–5.85), respectively. For group 2 PTFs, the GMER and GSDER values were modified from (0.3–1.55) and (5.9–12.38) to (1.00–1.03) and (5.5–5.9), respectively. For group 3 PTFs, the GMER and GSDER values were modified from (0.11–2.06) and (5.55–16.42) to (0.82–1.01) and (5.1–6.17), respectively. The results showed that automatic calibration is an efficient and accurate method to enhance the performance of PTFs.
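
    GMER and GSDER are standard log-ratio statistics for Ksat predictions: both equal 1 for a perfect, unbiased match. A minimal sketch, assuming paired arrays of predicted and measured conductivities:

    ```python
    import numpy as np

    def gmer_gsder(k_pred, k_meas):
        """Geometric mean and geometric standard deviation of the error ratio."""
        log_ratio = np.log(k_pred / k_meas)
        return np.exp(log_ratio.mean()), np.exp(log_ratio.std(ddof=1))
    ```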

  15. Seamless Ligation Cloning Extract (SLiCE) cloning method.

    Science.gov (United States)

    Zhang, Yongwei; Werling, Uwe; Edelmann, Winfried

    2014-01-01

    SLiCE (Seamless Ligation Cloning Extract) is a novel cloning method that utilizes easy-to-generate bacterial cell extracts to assemble multiple DNA fragments into recombinant DNA molecules in a single in vitro recombination reaction. SLiCE overcomes the sequence limitations of traditional cloning methods, facilitates seamless cloning by recombining short end homologies (15-52 bp) with or without flanking heterologous sequences, and provides an effective strategy for directional subcloning of DNA fragments from bacterial artificial chromosomes or other sources. SLiCE is highly cost-effective and versatile, as a number of standard laboratory bacterial strains can serve as sources for SLiCE extract. We established a DH10B-derived E. coli strain expressing an optimized λ prophage Red recombination system, termed PPY, which facilitates SLiCE with very high efficiencies.

  16. Comparison of RNA extraction methods in Thai aromatic coconut water

    Directory of Open Access Journals (Sweden)

    Nopporn Jaroonchon

    2015-10-01

    Full Text Available Many studies have reported that nucleic acid in coconut water is in free form and at very low yields, which makes it difficult to process in molecular studies. Our research compared two extraction methods to obtain a higher yield of total RNA from aromatic coconut water and to monitor its change at various fruit stages. The first method used ethanol and sodium acetate as reagents; the second method used lithium chloride. We found that extraction using only lithium chloride gave a higher total RNA yield than the method using ethanol to precipitate the nucleic acid. In addition, the total RNA from both methods could be used in the amplification of the betaine aldehyde dehydrogenase 2 (Badh2) gene, which is involved in coconut aroma biosynthesis, and could be used for further study as expected. From the molecular study, the nucleic acid found in coconut water increased with fruit age.

  17. Automatic diagnosis for prostate cancer using run-length matrix method

    Science.gov (United States)

    Sun, Xiaoyan; Chuang, Shao-Hui; Li, Jiang; McKenzie, Frederic

    2009-02-01

    Prostate cancer is the most common type of cancer and the second leading cause of cancer death among men in the US. Quantitative assessment of prostate histology provides potential automatic classification of prostate lesions and prediction of response to therapy. Traditionally, prostate cancer diagnosis is made by the analysis of prostate-specific antigen (PSA) levels and histopathological images of biopsy samples under microscopes. In this application, we utilize a texture analysis method based on the run-length matrix for identifying tissue abnormalities in prostate histology. A tissue sample was collected from a radical prostatectomy, H&E fixed, and assessed by a pathologist as normal tissue or prostatic carcinoma (PCa). The sample was then digitized at 50X magnification. We divided the digitized image into sub-regions of 20 × 20 pixels and classified each sub-region as normal or PCa by a texture analysis method. In the texture analysis, we computed texture features for each of the sub-regions based on the gray-level run-length matrix (GL-RLM). Those features include LGRE, HGRE and RPC from the run-length matrix, plus the mean and standard deviation of the pixel intensity. We utilized a feature selection algorithm to select a set of effective features and used a multi-layer perceptron (MLP) classifier to distinguish normal tissue from PCa. In total, the whole histological image was divided into 42 PCa and 6280 normal regions. Three-fold cross-validation results show that the proposed method achieves an average classification accuracy of 89.5%, with a sensitivity and specificity of 90.48% and 89.49%, respectively.
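
    As an illustration of the run-length features named above, the sketch below computes LGRE, HGRE, and RPC from horizontal runs on a small quantized patch. The gray-level offset and the single run direction are simplifying assumptions; a full GL-RLM implementation would accumulate runs over several directions.

    ```python
    import numpy as np

    def _runs(row):
        """Yield (gray_level, run_length) pairs for one image row."""
        start = 0
        for i in range(1, len(row) + 1):
            if i == len(row) or row[i] != row[start]:
                yield row[start], i - start
                start = i

    def rl_features(patch):
        """LGRE, HGRE and RPC from horizontal runs of a quantized 2-D patch."""
        levels = np.array([g for row in patch for g, _ in _runs(row)], dtype=float)
        levels += 1.0                          # shift so gray level 0 is usable
        n_runs = len(levels)
        lgre = np.sum(1.0 / levels**2) / n_runs    # low gray-level run emphasis
        hgre = np.sum(levels**2) / n_runs          # high gray-level run emphasis
        rpc = n_runs / patch.size                  # run percentage
        return lgre, hgre, rpc
    ```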

  18. Automatic Mapping Extraction from Multiecho T2-Star Weighted Magnetic Resonance Images for Improving Morphological Evaluations in Human Brain

    Directory of Open Access Journals (Sweden)

    Shaode Yu

    2013-01-01

    Full Text Available Mapping extraction is useful in medical image analysis. Similarity coefficient mapping (SCM) replaced the signal response to time course in tissue similarity mapping with the signal response to TE changes in multiecho T2-star weighted magnetic resonance imaging without contrast agent. Since different tissues have different sensitivities to reference signals, a new algorithm is proposed that adds a sensitivity index to SCM. It generates two mappings: one measures relative signal strength (SSM) and the other depicts fluctuation magnitude (FMM). Meanwhile, the new method adaptively generates a proper reference signal by maximizing the sum of the contrast index (CI) from SSM and FMM, without manual delineation. Based on four groups of images from multiecho T2-star weighted magnetic resonance imaging, the capacity of SSM and FMM to enhance image contrast and morphological evaluation is validated. The average contrast improvement index (CII) of SSM is 1.57, 1.38, 1.34, and 1.41; that of FMM is 2.42, 2.30, 2.24, and 2.35. Visual analysis of regions of interest demonstrates that SSM and FMM show better morphological structures than the original images, T2-star mapping and SCM. These extracted mappings can be further applied in information fusion, signal investigation, and tissue segmentation.

  19. Calculation of radon concentration in water by toluene extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Masaaki [Tokyo Metropolitan Isotope Research Center (Japan)]

    1997-02-01

    The Noguchi method and the Horiuchi method have been used to calculate the radon concentration in water. Both methods have two problems in their original form: the calculated concentration changes with the extraction temperature because of incorrect solubility data, and the calculated concentrations are smaller than the correct values because the radon calculation equation does not conform to gas-liquid equilibrium theory. However, both problems are solved by improving the radon equation. I present the Noguchi-Saito equation and the constant B of the Horiuchi-Saito equation. The results calculated by the improved method showed about 10% error. (S.Y.)

  20. A Robust Front-End Processor combining Mel Frequency Cepstral Coefficient and Sub-band Spectral Centroid Histogram methods for Automatic Speech Recognition

    Directory of Open Access Journals (Sweden)

    R. Thangarajan

    2009-06-01

    Full Text Available Environmental robustness is an important area of research in speech recognition. Mismatch between trained speech models and actual speech to be recognized is due to factors like background noise. It can cause severe degradation in the accuracy of recognizers which are based on commonly used features like mel-frequency cepstral coefficients (MFCC) and linear predictive coding (LPC). It is well understood that all previous auditory-based feature extraction methods perform extremely well in terms of robustness due to the dominant-frequency information present in them. But these methods suffer from high computational cost. Another method, called sub-band spectral centroid histograms (SSCH), integrates dominant-frequency information with sub-band power information. This method is based on sub-band spectral centroids (SSC), which are closely related to spectral peaks for both clean and noisy speech. Since SSC can be computed efficiently from a short-term speech power spectrum estimate, the SSCH method is quite robust to background additive noise at a lower computational cost. It has been noted that the MFCC method outperforms the SSCH method in the case of clean speech. However, in the case of speech with additive noise, the MFCC method degrades substantially. In this paper, both MFCC and SSCH feature extraction have been implemented in Carnegie Mellon University (CMU) Sphinx 4.0 and trained and tested on the AN4 database for clean and noisy speech. Finally, a robust speech recognizer which automatically employs either the MFCC or the SSCH feature extraction method, based on the variance of the short-term power of the input utterance, is suggested.
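
    The sub-band spectral centroids that SSCH histograms can be sketched directly from a frame's power spectrum: each band contributes the power-weighted mean frequency within its edges. Band count, band edges, and the power exponent gamma below are illustrative placeholders.

    ```python
    import numpy as np

    def subband_centroids(power_spectrum, sample_rate, n_bands=6, gamma=1.0):
        """Power-weighted mean frequency of each sub-band of one analysis frame."""
        freqs = np.linspace(0.0, sample_rate / 2.0, len(power_spectrum))
        edges = np.linspace(0.0, sample_rate / 2.0, n_bands + 1)
        centroids = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = (freqs >= lo) & (freqs < hi)
            p = power_spectrum[band] ** gamma
            centroids.append(np.sum(freqs[band] * p) / max(np.sum(p), 1e-12))
        return np.array(centroids)
    ```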

  1. Single corn kernel aflatoxin B1 extraction and analysis method

    Science.gov (United States)

    Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus. Aspergillus flavus is a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation method and analysis for the extraction of afla...

  2. Extraction Methods of Spanish Broom (Spartium Junceum L.)

    Directory of Open Access Journals (Sweden)

    Drago Katović

    2011-12-01

    Full Text Available Effects of different extraction methods of the Spanish Broom shoots were measured and compared with the purpose of obtaining composite material. The content of cellulose, lignin, pentosan and ash in the Spanish Broom fibers was determined. SEM analyses were performed.

  3. Antioxidant activity and total phenolic compounds of Dezful sesame cake extracts obtained by classical and ultrasound-assisted extraction methods

    OpenAIRE

    2014-01-01

    Sesame cake is a by-product of sesame oil industry. In this study, the effect of extraction methods (maceration and sonication) and solvents (ethanol, methanol, ethanol/water (50:50), methanol/water (50:50), and water) on the antioxidant properties of sesame cake extracts are evaluated to determine the most suitable extraction method for optimal use of this product. Total phenolic content is measured according to the Folin–Ciocalteu method and antioxidant activities of each extract are evalua...

  4. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with the problem, one after the other. The Nullspace Method is one of the most effective among them. The Nullspace Method tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance in statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  5. Iris Pattern Segmentation using Automatic Segmentation and Window Technique

    OpenAIRE

    Swati Pandey; Prof. Rajeev Gupta

    2013-01-01

    A biometric system performs automatic identification of an individual based on a unique feature or characteristic. Iris recognition has great advantages such as variability, stability and security. In this paper, two methods are used for iris segmentation: an automatic segmentation method and a window method. The window method is a novel approach comprising two steps: it first finds the pupil's center and then two radial coefficients, because sometimes the pupil is not a perfect circle. The second step extracts the i...

  6. Spindle extraction method for ISAR image based on Radon transform

    Science.gov (United States)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle of a target in inverse synthetic aperture radar (ISAR) images is proposed which depends on the Radon transform. First, the Radon transform is used to detect all straight lines that are collinear with the line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the two ends of that line segment, and the longest of all line segments is the spindle of the target. Using the proposed spindle extraction method, one hundred simulated ISAR images rotated counterclockwise by 0, 10, 20, 30 and 40 degrees, respectively, were used in experiments, and the detection results are closer to the real spindle of the target than those of the method based on the Hough transform.
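
    The first step, using the Radon transform to find the dominant straight-line orientation, can be sketched with scikit-image: the sinogram column containing the strongest peak corresponds to the dominant line, up to the transform's angle convention. The binary input image and 1-degree sampling are assumptions for illustration.

    ```python
    import numpy as np
    from skimage.transform import radon

    def dominant_line_angle(binary_image):
        """Angle (degrees) of the strongest straight line in a binary image."""
        theta = np.arange(180.0)
        sinogram = radon(binary_image.astype(float), theta=theta, circle=False)
        return theta[np.argmax(sinogram.max(axis=0))]
    ```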

  7. Sequential injection system incorporating a micro extraction column for automatic fractionation of metal ions in solid samples

    DEFF Research Database (Denmark)

    Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald

    2005-01-01

    as to the kinetics of the leaching processes and chemical associations in different soil geological phases. Special attention is also paid to the potentials of the microcolumn flowing technique for automatic processing of solid materials with variable homogeneity, as demonstrated with the sewage amended CRM483 soil...

  8. Status of the Reactive Extraction as a Method of Separation

    Directory of Open Access Journals (Sweden)

    Dipaloy Datta

    2015-01-01

    Full Text Available The prospective role of novel, energy-efficient fermentation technology has been receiving great attention over the past fifty years due to the rapid rise in petroleum costs. Fermentation chemicals are still limited in the modern market, in large part because of the difficulty of recovering carboxylic acids, so considerable development of the current recovery technology is needed. Carboxylic acids make up the majority of fermentation chemicals. This paper presents a state-of-the-art review of the reactive extraction of carboxylic acids from fermentation broths. It principally focuses on reactive extraction, which is found to be a promising alternative to conventional recovery methods.

  9. Spoken Language Identification Using Hybrid Feature Extraction Methods

    CERN Document Server

    Kumar, Pawan; Mishra, A N; Chandra, Mahesh

    2010-01-01

    This paper introduces and motivates the use of hybrid robust feature extraction techniques for spoken language identification (LID) systems. Speech recognizers use a parametric form of a signal to get the most important distinguishable features of the speech signal for the recognition task. In this paper, mel-frequency cepstral coefficients (MFCC) and perceptual linear prediction coefficients (PLP), along with two hybrid features, are used for language identification. The two hybrid features, Bark frequency cepstral coefficients (BFCC) and revised perceptual linear prediction coefficients (RPLP), were obtained from combinations of MFCC and PLP. Two different classifiers, vector quantization (VQ) with dynamic time warping (DTW) and a Gaussian mixture model (GMM), were used for classification. The experiments show a better identification rate using hybrid feature extraction techniques compared to conventional feature extraction methods. BFCC has shown better performance than MFCC with both classifiers. RPLP along with GMM has shown be...

  10. New Multipole Method for 3-D Capacitance Extraction

    Institute of Scientific and Technical Information of China (English)

    Zhao-Zhi Yang; Ze-Yi Wang

    2004-01-01

    This paper describes an efficient improvement of the multipole-accelerated boundary element method for 3-D capacitance extraction. The overall relations between the positions of 2-D boundary elements are considered instead of only the relations between the center points of the elements, and a new method of cube partitioning is introduced. Numerical results are presented to demonstrate that the method is accurate and has nearly linear computational growth, O(n), where n is the number of panels/boundary elements. The proposed method is more accurate and much faster than Fastcap.

  11. Automatic calibration device for volume tubes based on the volumetric method

    Institute of Scientific and Technical Information of China (English)

    周兵

    2014-01-01

    Using a standard metal gauge, a four-way commutator, a PC and a PLC as the main hardware, and the KingView software as the development platform, a volumetric-method automatic calibration device for volume tubes was built. According to the requirements of the verification regulation, the device controls the flow valve, automatically reads the volume value of the standard metal gauge, automatically collects temperature and pressure values, and automatically calculates the basic volume, standard error, repeatability, accuracy and reproducibility of the volume tube. It provides a visual operation interface and functions for saving verification data and printing calibration certificates and verification result notices.

  12. Application of a new feature extraction and optimization method to surface defect recognition of cold rolled strips

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Considering that the surface defects of cold rolled strips are hard to recognize with human eyes under high-speed circumstances, an automatic recognition technique was discussed. Spectrum images of defects are obtained by fast Fourier transform (FFT) and sum of valid pixels (SVP), and their optimized center regions, which concentrate nearly all the energy, are extracted as the original feature set. Using a genetic algorithm to optimize the feature set, an optimized feature set with 51 features can be achieved. Using the optimized feature set as the input vector of neural networks, the recognition performance of LVQ neural networks was studied. Experimental results show that the new method achieves a higher classification rate and solves the automatic recognition problem for surface defects on cold rolled strips.

  13. Rapid methods to extract DNA and RNA from Cryptococcus neoformans.

    Science.gov (United States)

    Bolano, A; Stinchi, S; Preziosi, R; Bistoni, F; Allegrucci, M; Baldelli, F; Martini, A; Cardinali, G

    2001-12-01

    Extraction of nucleic acids from the pathogenic yeast Cryptococcus neoformans is normally hampered by a thick and resistant capsule, accounting for at least 70% of the whole cellular volume. This paper presents procedures based on mechanical cell breakage to extract DNA and RNA from C. neoformans and other capsulated species. The proposed system for DNA extraction involves capsule relaxation by means of a short urea treatment and bead beating. These two steps allow consistent extraction even from strains resistant to other procedures. The yield and quality of DNA obtained with the proposed method were higher than those obtained with two earlier described methods. This protocol can be extended to every yeast species, and particularly to those that are difficult to handle because of the presence of a capsule. RNA purification is accomplished using an original lysing matrix and the FastPrep System (Bio101) after a preliminary bead-beating treatment. Yields are around 1 mg RNA from a 15 ml overnight culture (10⁹ cells); the RNA appears undegraded, making it suitable for molecular manipulations.

  14. A new method for stable lead isotope extraction from seawater

    Energy Technology Data Exchange (ETDEWEB)

    Zurbrick, Cheryl M., E-mail: CZurbric@ucsc.edu [WIGS, Department of Microbiology and Environmental Toxicology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Gallon, Céline [Institute of Marine Sciences, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Flegal, A. Russell [WIGS, Department of Microbiology and Environmental Toxicology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Institute of Marine Sciences, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States)

    2013-10-24

    Highlights: • We present a relatively fast (2.5–6.5 h), semi-automated system to extract Pb from seawater. • Extraction requires few chemicals and has a relatively low blank (0.7 pmol kg⁻¹). • We compare analyses of Pb isotopes by HR ICP-MS with those by MC-ICP-MS. Abstract: A new technique for stable lead (Pb) isotope extraction from seawater is established using Toyopearl AF-Chelate 650M® resin (Tosoh Bioscience LLC). This new method is advantageous because it is semi-automated and relatively fast; in addition, it introduces a relatively low blank by minimizing the volume of chemicals used in the extraction. Subsequent analyses by HR ICP-MS have a good relative external precision (2σ) of 3.5‰ for ²⁰⁶Pb/²⁰⁷Pb, while analyses by MC-ICP-MS have a better relative external precision of 0.6‰. However, Pb sample concentrations limit MC-ICP-MS analyses to ²⁰⁶Pb, ²⁰⁷Pb, and ²⁰⁸Pb. The method was validated by processing the common Pb isotope reference material NIST SRM-981 and several GEOTRACES intercalibration samples, followed by analyses by HR ICP-MS, all of which showed good agreement with previously reported values.

  15. Automatic summarization method based on thematic term set

    Institute of Scientific and Technical Information of China (English)

    刘兴林; 郑启伦; 马千里

    2011-01-01

    This paper proposes an automatic summarization method based on a thematic term set for automatically extracting abstracts from Chinese documents. According to the extracted thematic term set, the method calculates sentence weights from the weights of the thematic terms, obtains the corresponding total weight of each sentence, selects a percentage of the sentences with the highest weights, and finally outputs the summary sentences in their original order. Experiments were conducted on the HIT IR-lab text summarization corpus, and intrinsic automatic evaluation measures were used to evaluate the performance of the proposed method. Experimental results show that the proposed method achieves an F-measure of 66.07%, which suggests it can generate high-quality summaries close to the reference abstracts, achieving very good performance.
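
    The weighting scheme described above reduces to scoring each sentence by the summed weights of the thematic terms it contains and keeping the top fraction in original order. A minimal sketch, with a naive whitespace tokenizer standing in for proper Chinese word segmentation:

    ```python
    def summarize(sentences, term_weights, ratio=0.2):
        """Return the top-weighted fraction of sentences in original order."""
        scores = [sum(term_weights.get(tok, 0.0) for tok in s.split())
                  for s in sentences]
        n_keep = max(1, int(len(sentences) * ratio))
        ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
        return [sentences[i] for i in sorted(ranked[:n_keep])]
    ```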

  16. Review on oil extraction methods from microalgae

    Institute of Scientific and Technical Information of China (English)

    贺赐安; 余旭亚; 赵鹏; 王琳

    2012-01-01

    Fast and effective oil extraction from microalgae is a key process constraining the development of microalgae as a source of biomass energy. The application of microalgae oils in developing biodiesel and screening bioactive compounds is introduced. The choice of organic solvent and the cell disruption treatments in the oil extraction process from microalgae are reviewed. The research status and advances of extraction methods such as ultrasonic- or microwave-assisted extraction, pressurized liquid extraction, automatic acid hydrolysis extraction and enzymatic hydrolysis extraction are summarized.

  17. Road Extraction from High-Resolution SAR Images via Automatic Local Detecting and Human-Guided Global Tracking

    Directory of Open Access Journals (Sweden)

    Jianghua Cheng

    2012-01-01

    Full Text Available Because of the existence of various kinds of disturbances, layover effects, and shadowing, it is difficult to extract roads from high-resolution SAR images. A new road center-point searching method is proposed, with two alternating steps: local detection and global tracking. In the local detection step, a double-window model is set up, consisting of an outer fixed square window and an inner rotating rectangular one. The outer window is used to obtain the local road direction from an orientation histogram, based on the fact that surrounding objects tend to align along roads. The inner window rotates its orientation in accordance with the local road direction and searches for the center points of a road segment. In the global tracking step, a variable-step particle filter is used to deal with tracking that is frequently broken by shelter along the roadside and obstacles on the road. Finally, the center points are linked by quadratic curve fitting. In experiments on 1 m high-resolution airborne SAR images, the results show that this method is effective.

  18. Evaluation of in vitro antioxidant potential of different polarities stem crude extracts by different extraction methods of Adenium obesum

    Directory of Open Access Journals (Sweden)

    Mohammad Amzad Hossain

    2014-09-01

    Full Text Available Objective: To select the best extraction method for isolating antioxidant compounds from the stems of Adenium obesum. Methods: The two methods used for the extraction were the Soxhlet and maceration methods. Methanol was used as the solvent for both extraction methods. The methanol crude extract was defatted with water and extracted successively with hexane, chloroform, ethyl acetate and butanol solvents. The antioxidant potential of all crude extracts was determined using the 1,1-diphenyl-2-picrylhydrazyl method. Results: The percentage extraction yield by the Soxhlet method is higher compared to the maceration method. The antioxidant potential of methanol and its derived fractions by the Soxhlet extraction method was highest in ethyl acetate and lowest in hexane crude extracts, in the order ethyl acetate > butanol > water > chloroform > methanol > hexane. However, the antioxidant potential of methanol and its derived fractions by the maceration method was highest in butanol and lowest in hexane, in the order butanol > methanol > chloroform > water > ethyl acetate > hexane. Conclusions: The results showed that the isolated antioxidant compounds were affected by the extraction method and the conditions of extraction.

  19. Evaluation of in vitro antioxidant potential of different polarities stem crude extracts by different extraction methods of Adenium obesum

    Institute of Scientific and Technical Information of China (English)

    Mohammad Amzad Hossain; Tahiya Hilal Ali Alabri; Amira Hamood Salim Al Musalami; Md. Sohail Akhtar; Sadri Said

    2014-01-01

    Objective: To select the best extraction method for isolating antioxidant compounds from the stems of Adenium obesum. Methods: The two methods used for the extraction were the Soxhlet and maceration methods. Methanol was used as the solvent for both extraction methods. The methanol crude extract was defatted with water and extracted successively with hexane, chloroform, ethyl acetate and butanol solvents. The antioxidant potential of all crude extracts was determined using the 1,1-diphenyl-2-picrylhydrazyl method. Results: The percentage extraction yield by the Soxhlet method is higher compared to the maceration method. The antioxidant potential of methanol and its derived fractions by the Soxhlet extraction method was highest in ethyl acetate and lowest in hexane crude extracts, in the order ethyl acetate > butanol > water > chloroform > methanol > hexane. However, the antioxidant potential of methanol and its derived fractions by the maceration method was highest in butanol and lowest in hexane, in the order butanol > methanol > chloroform > water > ethyl acetate > hexane. Conclusions: The results showed that the isolated antioxidant compounds were affected by the extraction method and the conditions of extraction.

  20. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    DU Jin-kang; FENG Xue-zhi; et al.

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) images were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another algorithm of decision-tree classification based on both spectral characteristics and auxiliary information of DEM and slope (DTDS) was also designed for water body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of the interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy varied with respect to the various techniques applied. It was low using ISODATA, very high using the DT algorithm, and even higher using both DTDS and MLC.

  1. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To deal with the difficulty of precisely extracting target outlines due to the neglect of target scattering characteristic variation during the processing of high-resolution spaceborne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. First, several important aspects that affect target feature extraction and SAR image quality are analyzed, including the curved orbit, the stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency variation of the target scattering characteristic. Moreover, a fusion imaging strategy and method for high-resolution, ultra-large observation angle range conditions is put forward to improve SAR quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  2. Development of an Analytical Method Based on Temperature Controlled Solid-Liquid Extraction Using an Ionic Liquid as Solid Solvent

    Directory of Open Access Journals (Sweden)

    Zhongwei Pan

    2015-12-01

    Full Text Available In the present paper, an analytical method based on temperature-controlled solid-liquid extraction (TC-SLE) utilizing a synthesized ionic liquid, N-butylpyridinium hexafluorophosphate ([BPy]PF6), as solid solvent and phenanthroline (PT) as an extractant was developed to determine micro levels of Fe2+ in tea by PT spectrophotometry. TC-SLE was carried out in two continuous steps: Fe2+ can be completely extracted by PT-[BPy]PF6 or back-extracted at 80 °C, and the two phases are separated automatically by cooling to room temperature. Fe2+, after back-extraction, needs 2 mol/L HNO3 as stripping agent, and the whole process was monitored by PT spectrophotometry at room temperature. The extracted species was neutral Fe(PT)mCl2 (m = 1) according to slope analysis in the Fe2+-[BPy]PF6-PT TC-SLE system. The calibration curve was Y = 0.20856X − 0.000775 (correlation coefficient = 0.99991). The linear calibration range was 0.10–4.50 μg/mL and the limit of detection for Fe2+ is 7.0 × 10⁻² μg/mL. With this method, the contents of Fe2+ in Tieguanyin tea were determined, with RSDs (n = 5) of 3.05% and recoveries in the range of 90.6%–108.6%.

  3. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    Science.gov (United States)

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation however is not easy: The automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO® 150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch® Forensic DNA Purification Kit (Invitrogen), the PrepFiler™ Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™ 96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination.

  4. Ab initio solution of misfit layer structures by automatic Patterson and direct methods

    NARCIS (Netherlands)

    BEURSKENS, PT; BEURSKENS, G; LAM, EJW; VANSMAALEN, S; FAN, HF

    1994-01-01

    A procedure is presented for the automatic solution of composite (misfit) layer compounds, for the case when the composite crystal structure consists of two types of layer, each of which can be approximately described as a three-dimensional periodic structure with, however, mutually incommensurate l

  5. A method of automatically registering point cloud data based on range images

    Institute of Scientific and Technical Information of China (English)

    田慧; 周绍光; 李浩

    2012-01-01

    Point cloud registration is an essential step in processing the data acquired with a 3D laser scanner. One traditional registration scheme relies on targets that are scanned separately at each station, on the basis of which registration proceeds semi-automatically. This paper presents a registration strategy that converts single-station point clouds into range images by the central projection principle, uses digital image processing to extract the targets automatically, fits the coordinates of their center points, and draws on photogrammetry to register the point clouds automatically. Experimental results demonstrate the effectiveness of the method.
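
    The range-image conversion at the heart of this strategy can be sketched directly. The Python code below assumes the scanner sits at the origin and that a simple spherical central projection with fixed angular bin size suffices; the function name and resolutions are illustrative, not from the paper:

    ```python
    import numpy as np

    def pointcloud_to_range_image(xyz: np.ndarray, az_res=0.2, el_res=0.2):
        """Project an N x 3 single-station point cloud (scanner at origin)
        into a 2D range image via central (spherical) projection.
        az_res/el_res are angular bin sizes in degrees (illustrative)."""
        x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        r = np.sqrt(x**2 + y**2 + z**2)
        az = np.degrees(np.arctan2(y, x)) + 180.0                    # 0..360
        el = np.degrees(np.arcsin(z / np.maximum(r, 1e-9))) + 90.0   # 0..180
        rows = (el / el_res).astype(int)
        cols = (az / az_res).astype(int)
        img = np.full((int(180 / el_res) + 1, int(360 / az_res) + 1), np.inf)
        # Keep the nearest return per pixel, as a physical scanner would.
        np.minimum.at(img, (rows, cols), r)
        img[np.isinf(img)] = 0.0   # empty pixels
        return img

    demo = pointcloud_to_range_image(
        np.random.default_rng(0).uniform(-10, 10, (1000, 3)))
    ```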

  6. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    Energy Technology Data Exchange (ETDEWEB)

    Felfer, P., E-mail: peter.felfer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ceguerra, A.V., E-mail: anna.ceguerra@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ringer, S.P., E-mail: simon.ringer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Cairney, J.M., E-mail: julie.cairney@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia)

    2015-03-15

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation, to test for spatial/chemical randomness of the solid solution as well as to extract the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration-based methods such as iso-surfaces.

    Highlights:
    • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution.
    • Concentration fields are defined on a single-atom basis using Voronoi cells.
    • All parameters for the analysis are determined by optimizing the separation probability of bulk versus clustered atoms.
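
    The core test, computing a Voronoi cell volume for each solute atom and flagging unusually small (locally dense) cells, maps onto standard computational-geometry tools. A minimal sketch with SciPy (the volume cutoff and the uniform random stand-in data are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_volumes(solute_xyz: np.ndarray) -> np.ndarray:
        """Volume of each solute atom's Voronoi cell (np.inf for open cells
        on the dataset boundary, which should be excluded from analysis)."""
        vor = Voronoi(solute_xyz)
        vols = np.full(len(solute_xyz), np.inf)
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if -1 not in region and len(region) > 3:
                vols[i] = ConvexHull(vor.vertices[region]).volume
        return vols

    rng = np.random.default_rng(0)
    atoms = rng.uniform(0, 50, size=(2000, 3))   # stand-in solute positions (nm)
    vols = voronoi_volumes(atoms)
    threshold = 5.0                              # nm^3, illustrative cutoff
    clustered = np.isfinite(vols) & (vols < threshold)  # small cell = locally dense
    # For uniform random data, few or no atoms should be flagged, which is
    # exactly the randomness test the paper describes.
    print(f"{clustered.sum()} atoms flagged as clustered")
    ```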

  7. Automatic identification for standing tree limb pruning

    Institute of Scientific and Technical Information of China (English)

    Sun Renshan; Li Wenbin; Tian Yongchen; Hua Li

    2006-01-01

    To meet the demands of automatic pruning machines, this paper presents a new method for the dynamic automatic identification of standing tree limbs and the capture of digital images of Platycladus orientalis. Computer vision, image processing and wavelet analysis techniques were used to compress, filter and segment the images, suppress noise, and capture the outline of the picture. We then present an algorithm for the dynamic automatic identification of standing tree limbs that extracts basic growth characteristics of the standing trees, such as form, size, degree of bending and relative spatial position. Pattern recognition is used to confirm the proportional relationships by matching against the database, thus achieving the goal of dynamic automatic identification of standing tree limbs.
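
    As an illustration of the segment-and-outline stage described above, the sketch below uses generic Gaussian filtering and Otsu thresholding as stand-ins for the paper's wavelet-based pipeline; the function name and parameter values are illustrative:

    ```python
    import numpy as np
    from skimage import filters, measure

    def extract_limb_outlines(gray: np.ndarray):
        """Denoise, threshold and trace object outlines in a grayscale image.
        A generic stand-in for the paper's wavelet-based pipeline."""
        smoothed = filters.gaussian(gray, sigma=2)           # suppress noise
        mask = smoothed > filters.threshold_otsu(smoothed)   # segment foreground
        return measure.find_contours(mask.astype(float), 0.5)  # outline curves

    contours = extract_limb_outlines(np.random.rand(128, 128))  # stand-in image
    print(f"{len(contours)} contours found")
    ```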

  8. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. The transformations are based on principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space to represent phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated on isolated-word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
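
    The projection step, mapping a log-mel feature vector onto a learned linear subspace, can be sketched with scikit-learn. Note that this shows a single PCA subspace only; the paper's integrated phoneme subspace combines correlation information across per-phoneme subspaces, which is not reproduced here. Dimensions and data are illustrative:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in training data: rows are log-mel filter-bank frames (23 bands).
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(5000, 23))

    # Learn a linear subspace from training frames, then project new frames.
    pca = PCA(n_components=12)          # subspace dimension is illustrative
    pca.fit(frames)
    features = pca.transform(rng.normal(size=(10, 23)))  # new feature vectors
    print(features.shape)               # (10, 12)
    ```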

  9. Curvelet Transform-Based Denoising Method for Doppler Frequency Extraction

    Institute of Scientific and Technical Information of China (English)

    HOU Shu-juan; WU Si-liang

    2007-01-01

    A novel image denoising method based on the curvelet transform is proposed in order to improve the performance of Doppler frequency extraction in low signal-to-noise ratio (SNR) environments. By time-frequency transform, the echo is represented as a gray-scale image whose gray values are the spectral intensities, and the curvelet coefficients of the image are computed. An adaptive soft-threshold scheme based on a dual-median operation is then applied in the curvelet domain. After that, the image is reconstructed by the inverse curvelet transform and the Doppler curve is extracted by a curve detection scheme. Experimental results show that the proposed method improves the detection of Doppler frequency in low-SNR environments.
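
    Transform-domain soft-thresholding of this kind is easy to prototype. The sketch below substitutes a wavelet transform (PyWavelets) for the curvelet transform, since curvelet implementations are less standardized, and uses a common median-based threshold estimate rather than the paper's dual-median scheme:

    ```python
    import numpy as np
    import pywt

    def soft_threshold_denoise(img: np.ndarray, wavelet="db4", level=3):
        """Denoise a time-frequency image by soft-thresholding its detail
        coefficients. Wavelets stand in for the paper's curvelet transform."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        denoised = [coeffs[0]]  # keep the approximation band untouched
        for detail_bands in coeffs[1:]:
            # Median-absolute-deviation noise estimate (common heuristic).
            sigma = np.median(np.abs(detail_bands[-1])) / 0.6745
            t = sigma * np.sqrt(2 * np.log(img.size))
            denoised.append(tuple(pywt.threshold(b, t, mode="soft")
                                  for b in detail_bands))
        return pywt.waverec2(denoised, wavelet)

    noisy = np.random.default_rng(0).normal(size=(128, 128))  # stand-in image
    clean = soft_threshold_denoise(noisy)
    ```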

  10. Method of extracting coal from a coal refuse pile

    Science.gov (United States)

    Yavorsky, Paul M.

    1991-01-01

    A method of extracting coal from a coal refuse pile comprises soaking the pile with an aqueous alkali solution and distributing an oxygen-containing gas throughout it for a period sufficient to effect oxidation of the coal it contains. The method further comprises leaching the pile with an aqueous alkali solution to solubilize and extract the oxidized coal as alkali salts of humic acids, and collecting the resulting solution. Calcium hydroxide may be added to the solution of alkali salts of humic acids to form precipitated humates usable as a low-ash, low-sulfur solid fuel.

  11. Establishing Criteria for a Method to Automatically Detect the Onset of Parturition and Dystocia in Breeding Pigs

    OpenAIRE

    Gutierrez, Winson-Montanez; Kim, Dae-Geun; Kim, Dong-Hyeok; Kim, Suk; Le, Seung-Joo; Kim, Byeong-Woo; Hong, Jong-Tae; Yu, Byeong-Ki; Kim, Hyuck-Joo; Oh, Taek-Keun

    2011-01-01

    The aims of the present study were to characterize the farrowing process in gilts and multiparous sows, in terms of duration of farrowing, birth intervals, birth weight, piglets born alive, stillbirths, mummified piglets and dystocia, by comparing means across parities, and to establish criteria for a method to automatically detect the first birth and dystocia in breeding pigs on a selected farm in South Korea. One hundred nine Yorkshire × Landrace (YL) and Landrace × Yorkshire (LY), which were mainly r...

  12. Automatic barcode recognition method based on adaptive edge detection and a mapping model

    Science.gov (United States)

    Yang, Hua; Chen, Lianzheng; Chen, Yifan; Lee, Yong; Yin, Zhouping

    2016-09-01

    An adaptive edge detection and mapping (AEDM) algorithm is presented to address the challenging one-dimensional barcode recognition task in the presence of both image degradation and barcode shape deformation. AEDM is an edge-detection-based method with three consecutive phases. The first phase extracts the scan lines from a cropped image. The second phase detects the edge points along a scan line; the edge positions are taken to be the intersection points between a scan line and a corresponding well-designed reference line. The third phase adjusts the preliminary edge positions to more reasonable positions by employing prior information from the coding rules. A universal edge mapping model is thus established to obtain the coding position of each edge, followed by a decoding procedure. The Levenberg-Marquardt method is utilized to solve this nonlinear model. The computational complexity and convergence analysis of AEDM are also provided. Several experiments were carried out to evaluate the performance of the AEDM algorithm. The results indicate that the efficient AEDM algorithm outperforms state-of-the-art methods and adequately addresses multiple issues, such as out-of-focus blur, nonlinear distortion, noise, nonlinear optical illumination, and combinations of these issues.
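
    Phase two, locating edge points along a scan line, can be prototyped as a gradient-extremum detector. The sketch below is a generic stand-in: it uses a fixed relative threshold and parabolic sub-pixel interpolation rather than the paper's adaptive reference-line construction and mapping model:

    ```python
    import numpy as np

    def scanline_edges(intensity: np.ndarray, min_strength=0.1):
        """Return sub-pixel positions of bar/space transitions along one scan
        line, found as strong local maxima of the gradient magnitude."""
        a = np.abs(np.gradient(intensity.astype(float)))
        thresh = min_strength * a.max()
        edges = []
        for i in range(1, len(a) - 1):
            if a[i] >= a[i - 1] and a[i] > a[i + 1] and a[i] > thresh:
                denom = a[i - 1] - 2 * a[i] + a[i + 1]
                # Parabolic interpolation around the peak for sub-pixel accuracy.
                offset = 0.5 * (a[i - 1] - a[i + 1]) / denom if denom else 0.0
                edges.append(i + offset)
        return edges

    # Synthetic scan line: dark bars on a light background.
    line = np.array([255]*8 + [0]*4 + [255]*6 + [0]*3 + [255]*7, dtype=float)
    print(scanline_edges(line))  # approx [7.5, 11.5, 17.5, 20.5]
    ```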

  13. One-step column chromatographic extraction with gradient elution followed by automatic separation of volatiles, flavonoids and polysaccharides from Citrus grandis.

    Science.gov (United States)

    Han, Han-Bing; Li, Hui; Hao, Rui-Lin; Chen, Ya-Fei; Ni, He; Li, Hai-Hang

    2014-02-15

    Citrus grandis Tomentosa is widely used in traditional Chinese medicine and health foods. Its functional components include volatiles, flavonoids and polysaccharides, which cannot be effectively extracted by traditional methods. A column chromatographic extraction with gradient elution was developed for one-step extraction of all bioactive substances from C. grandis. Dried material was loaded into a column with petroleum ether:ethanol (8:2, PE) and sequentially eluted with 2-fold PE, 3-fold ethanol:water (6:4) and 8-fold water. The eluates were separated into an ether fraction containing the volatiles and an ethanol-water fraction containing the flavonoids and polysaccharides; the latter was further separated by precipitating the polysaccharides with 80% ethanol. Through this procedure, the volatiles, flavonoids and polysaccharides in C. grandis were simultaneously extracted at 98% extraction rates and separated with recovery rates above 95%. The method provides a simple and highly efficient extraction and separation of a wide range of bioactive substances.

  14. Mechanomyographic Parameter Extraction Methods: An Appraisal for Clinical Applications

    Directory of Open Access Journals (Sweden)

    Morufu Olusola Ibitoye

    2014-12-01

    Full Text Available Research conducted over the last three decades has collectively demonstrated that skeletal muscle performance can be assessed through mechanomyographic signal (MMG) parameters. Indices of muscle performance, including but not limited to force, power, work and endurance, and the related physiological processes underlying muscle activity during contraction, have been evaluated in light of these signal features. MMG is a non-stationary signal that reflects several distinctive patterns of muscle action, and the evidence in the literature supports its reliability for analyzing muscles under voluntary and stimulus-evoked contractions. An appraisal of standard practice, including the measurement theories behind the methods used to extract parameters from the signal, is vital to its application in experimental and clinical practice, especially where electromyograms are contraindicated or of limited use. As we highlight the underpinning technical guidelines and the domains where each method is well suited, the limitations of the methods are also presented to position the state of the art in MMG parameter extraction, thus providing a theoretical framework for improving current practices and widening the opportunity for new insights and discoveries. Since the signal modality has not been widely deployed, due partly to the limited information extractable from the signals compared with other classical techniques used to assess muscle performance, this survey is particularly relevant to the projected future of MMG applications in musculoskeletal assessment and in the real-time detection of muscle activity.
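
    Typical parameters of the kind surveyed here include time-domain amplitude measures and frequency-domain spectral measures. A minimal sketch of two common ones, RMS amplitude and median frequency from a Welch PSD estimate (the sampling rate, window length and synthetic test signal are illustrative assumptions, not from the survey):

    ```python
    import numpy as np
    from scipy.signal import welch

    def mmg_parameters(signal: np.ndarray, fs: float = 1000.0):
        """Two widely used MMG features: RMS amplitude (time domain)
        and median frequency (frequency domain, Welch PSD estimate)."""
        rms = np.sqrt(np.mean(signal**2))
        freqs, psd = welch(signal, fs=fs, nperseg=256)
        cum = np.cumsum(psd)
        mdf = freqs[np.searchsorted(cum, cum[-1] / 2)]
        return rms, mdf

    rng = np.random.default_rng(0)
    t = np.arange(0, 2, 1 / 1000)
    mmg = np.sin(2 * np.pi * 25 * t) + 0.5 * rng.normal(size=t.size)  # stand-in
    print(mmg_parameters(mmg))
    ```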

  15. Mechanomyographic parameter extraction methods: an appraisal for clinical applications.

    Science.gov (United States)

    Ibitoye, Morufu Olusola; Hamzaid, Nur Azah; Zuniga, Jorge M; Hasnan, Nazirah; Wahab, Ahmad Khairi Abdul

    2014-12-03

    Research conducted over the last three decades has collectively demonstrated that skeletal muscle performance can be assessed through mechanomyographic signal (MMG) parameters. Indices of muscle performance, including but not limited to force, power, work and endurance, and the related physiological processes underlying muscle activity during contraction, have been evaluated in light of these signal features. MMG is a non-stationary signal that reflects several distinctive patterns of muscle action, and the evidence in the literature supports its reliability for analyzing muscles under voluntary and stimulus-evoked contractions. An appraisal of standard practice, including the measurement theories behind the methods used to extract parameters from the signal, is vital to its application in experimental and clinical practice, especially where electromyograms are contraindicated or of limited use. As we highlight the underpinning technical guidelines and the domains where each method is well suited, the limitations of the methods are also presented to position the state of the art in MMG parameter extraction, thus providing a theoretical framework for improving current practices and widening the opportunity for new insights and discoveries. Since the signal modality has not been widely deployed, due partly to the limited information extractable from the signals compared with other classical techniques used to assess muscle performance, this survey is particularly relevant to the projected future of MMG applications in musculoskeletal assessment and in the real-time detection of muscle activity.

  16. Preparing silica aerogel monoliths via a rapid supercritical extraction method.

    Science.gov (United States)

    Carroll, Mary K; Anderson, Ann M; Gorka, Caroline A

    2014-02-28

    A procedure for the fabrication of monolithic silica aerogels in eight hours or less via a rapid supercritical extraction process is described. The procedure requires 15-20 min of preparation time, during which a liquid precursor mixture is prepared and poured into the wells of a metal mold that is placed between the platens of a hydraulic hot press, followed by several hours of processing within the hot press. The precursor solution consists of a 1.0:12.0:3.6:3.5 × 10−3 molar ratio of tetramethylorthosilicate (TMOS):methanol:water:ammonia. In each well of the mold, a porous silica sol-gel matrix forms. As the temperature of the mold and its contents is increased, the pressure within the mold rises. After the temperature and pressure surpass the supercritical point of the solvent within the pores of the matrix (in this case, a methanol/water mixture), the supercritical fluid is released, and a monolithic aerogel remains within each well of the mold. With the mold used in this procedure, cylindrical monoliths of 2.2 cm diameter and 1.9 cm height are produced. Aerogels formed by this rapid method have properties (low bulk and skeletal density, high surface area, mesoporous morphology) comparable to those prepared by methods that involve additional reaction steps or solvent extractions, lengthier processes that generate more chemical waste. The rapid supercritical extraction method can also be applied to the fabrication of aerogels based on other precursor recipes.

  17. Evaluation of DNA extraction methods for freshwater eukaryotic microalgae.

    Science.gov (United States)

    Eland, Lucy E; Davenport, Russell; Mota, Cesar R

    2012-10-15

    The use of molecular methods to investigate the microalgal communities of natural and engineered freshwater resources is in its infancy, the majority of previous studies having been carried out by microscopy. Inefficient or differential DNA extraction of microalgal community members can bias downstream community analysis. Three commercially available DNA extraction kits were tested on a range of pure-culture freshwater algal species with diverse cell walls, and on mixed algal cultures taken from eutrophic waste stabilization ponds (WSP). DNA yield and quality were evaluated, along with the suitability of the DNA for amplification of 18S rRNA gene fragments by polymerase chain reaction (PCR). The Qiagen DNeasy(®) Blood and Tissue kit (QBT) was found to give the highest DNA yields and quality. Denaturing gradient gel electrophoresis (DGGE) was used to assess the diversity of the communities from which DNA was extracted; no significant differences were found among the kits. QBT is recommended for use with WSP samples, a conclusion confirmed by further testing on communities from two tropical WSP systems. Fixing microalgal samples with ethanol prior to DNA extraction was found to reduce both yields and diversity, and is not recommended.

  18. A hybrid method for pancreas extraction from CT image based on level set methods.

    Science.gov (United States)

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region-growing methods, which require the initial contour to be placed near the final object boundary, suffer from leakage into the tissues neighboring the pancreas. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region and thereby addresses the sensitivity of level set methods to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of the method lies in the proper selection and combination of level set methods; in addition, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, the method overcomes over-segmentation at weak boundaries and can accurately extract the pancreas from CT images. It is compared with five state-of-the-art medical image segmentation methods on a CT dataset containing abdominal images from 10 patients. The results demonstrate that the method outperforms the others, achieving higher accuracy with less false segmentation in pancreas extraction.
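
    A two-stage pipeline of this general shape, a rough seeded region followed by level set refinement, can be prototyped with scikit-image. The sketch below is a stand-in, not the authors' method: flood filling replaces their customized fast-marching initialization and a morphological geodesic active contour replaces their modified distance-regularized level set; the seed point, tolerance and iteration count are illustrative:

    ```python
    import numpy as np
    from skimage.segmentation import (morphological_geodesic_active_contour,
                                      inverse_gaussian_gradient, flood)

    def extract_region(ct_slice: np.ndarray, seed: tuple, tolerance=40.0):
        """Rough region from a seeded flood fill, refined by a morphological
        geodesic active contour (stand-in for the paper's level set pair)."""
        rough = flood(ct_slice, seed, tolerance=tolerance)   # initial region
        edge_map = inverse_gaussian_gradient(ct_slice.astype(float))
        refined = morphological_geodesic_active_contour(
            edge_map, 100, init_level_set=rough, smoothing=2, balloon=0)
        return refined.astype(bool)

    # Stand-in "CT slice": a bright blob on a noisy background.
    img = np.zeros((128, 128)); img[40:80, 50:90] = 200
    img += np.random.default_rng(0).normal(0, 10, img.shape)
    mask = extract_region(img, seed=(60, 70))
    ```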

  19. Displacement fields denoising and strains extraction by finite element method

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Optical full-field measurement methods are now widely applied in various domains. In general, displacement fields can be obtained directly from the measurement, whereas in mechanical analysis strain fields are preferred. Extracting strain fields from noisy displacement fields has always been a challenging topic. In this study, a finite element method for smoothing displacement fields and calculating strain fields is proposed. An experimental test case on a holed aluminum specimen under tension is applied to vali...
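
    Whatever smoothing scheme is used, the strain extraction itself amounts to differentiating the displacement field. A minimal sketch for small strains on a regular grid (finite differences stand in for the paper's finite element smoothing; the grid spacing and synthetic displacement field are illustrative):

    ```python
    import numpy as np

    def small_strains(u: np.ndarray, v: np.ndarray, dx=1.0, dy=1.0):
        """Small-strain components from displacement fields u (x-direction)
        and v (y-direction) sampled on a regular grid:
        eps_xx = du/dx, eps_yy = dv/dy, eps_xy = (du/dy + dv/dx) / 2."""
        du_dy, du_dx = np.gradient(u, dy, dx)
        dv_dy, dv_dx = np.gradient(v, dy, dx)
        return du_dx, dv_dy, 0.5 * (du_dy + dv_dx)

    # Stand-in: uniform 1% tension along x with Poisson contraction along y.
    y, x = np.mgrid[0:50, 0:50].astype(float)
    eps_xx, eps_yy, eps_xy = small_strains(0.01 * x, -0.003 * y)
    print(eps_xx.mean(), eps_yy.mean(), eps_xy.mean())  # ~0.01, ~-0.003, ~0
    ```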

  20. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    of plutonium and neptunium associated with organic compounds in real urine assays. In this work, different protocols for decomposing organic matter in urine were investigated, of which potassium persulfate (K2S2O8) treatment provided the highest chemical yield of neptunium in the iron hydroxide co...